Feature: Standardised interface for entailment rules #26
I don't know enough about entailment rules myself, but perhaps @phochste and @joachimvh might be interested in this.
My only experience is with N3 reasoning in the past. When I made a reasoner I used the terms premise and conclusion instead of antecedents and consequences. Specifically, in N3 a rule is also just a triple with a specific predicate. But I haven't worked with this for some time, so I don't really have an opinion on what typings should be added here.
In the N3 inference engine I'm trying to build there is a similar definition of a rule, but with different names. I've actually tried to search for better names than my implicator/implications; your and @joachimvh's naming conventions are better and I'll copy them. From my viewpoint, the definition of the rule as stated above is not enough for N3 reasoning (at least not how I implement it). A rule with just antecedents and consequences leaves open the question of what to do with the quantifiers in the quads, plus there is the lack of nesting. See for example Dörthe Arndt's article about this issue: https://www.sciencedirect.com/science/article/abs/pii/S1570826819300241
Thanks for the input @joachimvh and @phochste. I also agree that my proposal above is insufficient for N3 reasoning; I had built it only with basic RDFS-style rules in mind. @joachimvh I'm inclined to prefer your naming convention, as the names are shorter and less prone to being misspelled by developers. @phochste I have read the article you linked and am starting to get my head around the problems with rule interpretation in this context. In terms of introducing the ability to quantify variables and nest rules, I suppose one could do something like what I have below, though a major potential problem is that it doesn't allow you to express the order of quantification without doing some nasty nesting of rules. @phochste do you have something better, since it sounds like you have begun to work on this?

```ts
interface Rule {
  /**
   * Existentially quantified variables/blank nodes
   */
  exists?: (RDF.BlankNode | RDF.Variable)[];
  /**
   * Universally quantified variables/blank nodes
   */
  forAll?: (RDF.BlankNode | RDF.Variable)[];
  /**
   * Premises of the rule
   */
  premise: (RDF.Quad | Rule)[]; // Note: order doesn't theoretically matter, so this could also be a set
  /**
   * Conclusions of the rule
   */
  conclusion: (RDF.Quad | Rule)[] | boolean;
  /**
   * @param other The rule to compare with.
   * @return True if and only if other has the same sets of antecedents and consequences
   */
  equals(other: Rule | null | undefined): boolean;
}
```

I'm also conscious that it might be worth keeping the interface generic enough that it is compatible with flavours of Description Logic that aren't necessarily expressible in N3Logic. In the same vein, it may be worth having a key in the interface that tracks the flavour of DL required to express the rule, e.g. @rubensworks if this has enough interest, should I open a draft PR to make it easier to discuss and add suggestions?
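To make the proposal concrete, here is a minimal sketch of how a basic RDFS entailment rule (rdfs9, subclass inheritance) might be expressed in this shape. The term interfaces below are local stand-ins for the RDF/JS types, not the real `@rdfjs/types` definitions, and `equals` is omitted for brevity:

```typescript
// Minimal local stand-ins for RDF/JS terms (illustrative only, not @rdfjs/types).
interface Variable { termType: 'Variable'; value: string }
interface Quad {
  subject: Variable | string;
  predicate: string;
  object: Variable | string;
}

// The proposed Rule shape from the comment above (equals() omitted).
interface Rule {
  forAll?: Variable[];                  // universally quantified variables
  premise: (Quad | Rule)[];
  conclusion: (Quad | Rule)[] | boolean;
}

const v = (value: string): Variable => ({ termType: 'Variable', value });

// rdfs9: { ?c1 rdfs:subClassOf ?c2 . ?x rdf:type ?c1 } => { ?x rdf:type ?c2 }
const rdfs9: Rule = {
  forAll: [v('x'), v('c1'), v('c2')],
  premise: [
    { subject: v('c1'), predicate: 'rdfs:subClassOf', object: v('c2') },
    { subject: v('x'), predicate: 'rdf:type', object: v('c1') },
  ],
  conclusion: [{ subject: v('x'), predicate: 'rdf:type', object: v('c2') }],
};

console.log(rdfs9.premise.length); // 2
```

Rules of this simple universally quantified form are all that RDFS entailment needs, which is why the question of quantifier ordering only surfaces for richer logics.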
@jeswr Up until now, all RDF/JS typings that were defined, were defined in a spec first. Similarly, we're defining some new query-related interfaces. Once there is consensus around the spec, we plan on defining TS interfaces for them. I'm not married to this process myself, but if we follow a different process, perhaps we should check with the rest of the RDF/JS community first. |
My Rule looks more like this (using your template):

```ts
interface Rule {
  premise: {
    quantifiers: Map<RDF.BlankNode | RDF.Variable, RDF.Variable>;
    quads: RDF.Quad[];
  };
  conclusion: {
    quantifiers: Map<RDF.BlankNode | RDF.Variable, RDF.BlankNode>;
    quads: RDF.Quad[];
  };
  equals(other: Rule | null | undefined): boolean;
}
```

For each of the premise and the conclusion there can be a different mapping of quantifiers. There is a complication in N3, as expressed in Dörthe's article, about what the scope of these quantifiers is; different implementations can have different opinions about this. I must also confess that I have a vested interest in not using nesting of Rules in my own interface definition: my hope is to create more implementations of Notation3 without the complications of rules that generate rules. But this luxury of pragmatic choices in my own implementation could differ from what @rdfjs/types wants to do for more generic cases. As suggested, it would be better to define the scope of what these rules try to cover (according to which spec) than to tie oneself to one interpretation of how rules should work.
@phochste - with the map for quantifiers above, I'm assuming the key is the term that you wish to quantify. Why is the object an RDF.Variable, considering this appears to be your choice of quantification?
Indeed, this is the way N3 is interpreted. The mapping has a different purpose in the premise and in the conclusion.

Premise: Given a formula like:

The mapping is something like:

You are not searching for exactly those blank nodes.

Conclusion: Here the mapping is about the production of blank nodes (I made a mistake in my interface Rule, which I corrected above). It tells the reasoning engine what blank nodes need to be created in the output data if the premise holds. Given the formula:

When the premise is true you don't want to say that

Alas, this mapping is dynamic. When this output data is again used as input data for the rules, and there is a premise match on the skolemized blank node, it can again create a new blank node. With the mapping:

One needs to keep track of all these mappings to create the correct output. Well, at least that is what I conclude from my N3 experiments in https://github.com/MellonScholarlyCommunication/NO3 . I still need to discuss with Dörthe whether my interpretation is correct. This is still at an early stage of coding (you're welcome to search for JS implementations of N3 :). The mapping in the premise looks like something that is required in a generic interface: it defines how quantifiers are interpreted and their scoping. As for the mapping in the conclusion, I'm not sure whether it is just an artefact of my way of coding or something that is mandatory.
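The dynamic nature of the conclusion mapping can be sketched with a counter-based factory (a hypothetical helper, not the NO3 implementation): each time the premise matches, the existentials in the conclusion must be instantiated with fresh blank nodes, and the mapping from template node to generated node must be recorded per rule application:

```typescript
// Hypothetical sketch: fresh blank-node generation per rule application.
// Each premise match gets its own mapping from the conclusion's template
// blank nodes to newly minted ones.
let counter = 0;
const fresh = (hint: string): string => `_:${hint}_${counter++}`;

// Build one template-to-fresh-node mapping per rule application.
const applyConclusion = (templateNodes: string[]): Map<string, string> =>
  new Map(templateNodes.map((n) => [n, fresh(n.replace('_:', ''))]));

const firstApplication = applyConclusion(['_:x']);
const secondApplication = applyConclusion(['_:x']);

// The same template node yields a different blank node in each application.
console.log(firstApplication.get('_:x') !== secondApplication.get('_:x')); // true
```

This is why a static `Map` in the rule itself can only describe *which* conclusion terms are existential; the actual node generation has to happen at application time.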
In my initial comment I actually assumed that the plan was not to cover N3 rules with this interface, because there are several things you can express there that are not possible in other RDF formats. I agree with @phochste that it would probably be better to first define the scope of what this interface intends to cover. RDF.js does not really have support for N3 anyway, making it hard to then add an interface specifically for its rules.
IMO the interface should be able to handle rules for the OWL2 reasoning profiles as well as RDFS, since AFAIK these are the rules used in most current reasoning engines. It would be nice if this were designed in a way such that an N3Rule interface could
@ignazio1977 - I've been looking into the OWLAPI definitions for OwlAxiom and Reasoners to see whether they would provide a good basis for this, especially given that it has different concepts for reasoning profiles built in. Given that you have done a lot of work on this API and on reasoners that use it, I was wondering whether you had any insights as to a good structure for Rules (and interfaces for Reasoners, for that matter) in TypeScript.
@jeswr in the OWLAPI and the reasoners I've dealt with, the closest concept to rules as discussed here is SWRL rules, but they're limited in many respects when compared with the discussion so far, and I'd say they're subsumed in the proposed designs as is, in that I think they can be fully represented with the approaches described. Reasoners themselves don't use a rule-based formalization of the OWL inference rules (speaking from my experience), Apache Jena being an exception, as it has a rule-based reasoner that could be put into service for this purpose. However, many existing optimizations for tableaux and hypertableaux aren't amenable to this formalization, so I guess this reduces the relevance to this thread. I'm afraid I don't have insights to add that could improve the discussion; there seems to be plenty of expertise here already, and I could probably learn from it rather than the other way around.
@jeswr as already explained above, I think it really depends on what kind of reasoning you want to support. Indeed, many of the OWL2 profiles' axioms cannot be translated directly into rules, except for the OWL2 RL profile, which is specifically designed to be executable on top of rule engines. Many of the RDF rule engines out there support some form of Datalog, e.g. RDFox. These kinds of rules are less expressive than N3Logic in the sense that they do not allow any existentials in the head of the rule, have no support for negation, no nesting of rules, etc. I'm not completely sure I understand @phochste's approach where the mapping of the quantifiers is already in the rules, but I think the skolemization process should be done by the reasoner under the hood and should not be part of the interface you want to expose. It probably makes sense to survey all the rule languages out there related to the Semantic Web and see if you can create a generic rule interface that covers them all. It's up to the rule engines themselves to check whether they can support the rules. (This is also what happens with most OWL reasoners when you feed them an ontology construct that they can't support.)
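The Datalog restriction mentioned above could be checked mechanically against a generic rule interface. Below is a hypothetical sketch (the helper name and the minimal term stand-ins are illustrative, not from any spec) that rejects rules with existentials, boolean conclusions, or nested rules:

```typescript
// Minimal local stand-ins for RDF/JS terms (illustrative only).
interface Variable { termType: 'Variable'; value: string }
interface BlankNode { termType: 'BlankNode'; value: string }
interface Quad { subject: string; predicate: string; object: string }

interface Rule {
  exists?: (BlankNode | Variable)[];
  premise: (Quad | Rule)[];
  conclusion: (Quad | Rule)[] | boolean;
}

// A nested Rule has a 'premise' key; a Quad does not.
const isQuad = (x: Quad | Rule): x is Quad => !('premise' in x);

// A rule is Datalog-like here if it has no existentials, a non-boolean
// conclusion, and no nested rules on either side.
const isDatalogLike = (rule: Rule): boolean =>
  (rule.exists ?? []).length === 0 &&
  Array.isArray(rule.conclusion) &&
  rule.premise.every(isQuad) &&
  rule.conclusion.every(isQuad);

const flat: Rule = {
  premise: [{ subject: '?x', predicate: ':p', object: '?y' }],
  conclusion: [{ subject: '?y', predicate: ':q', object: '?x' }],
};
const withExistential: Rule = {
  ...flat,
  exists: [{ termType: 'BlankNode', value: 'b' }],
};

console.log(isDatalogLike(flat), isDatalogLike(withExistential)); // true false
```

An engine receiving rules through a generic interface could run a check like this up front and reject the fragment it does not support, analogous to what OWL reasoners do with unsupported ontology constructs.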
I forgot to mention this at the time of writing the previous comment, but did you check the Rule Interchange Format (RIF) yet? https://www.w3.org/TR/rif-overview/
I'm currently working on inferencing engines for the web and noticed that there seems to be a lack of standardisation for the representation of entailment rules. I was wondering whether it would be appropriate to add an interface to the @rdfjs/types package along the lines of: