Three questions crop up when working with inexact rule-based models:
  • Are the answers given trustworthy/valid?
  • Why is the model giving a certain answer - what is the detailed reasoning behind it?
  • How can the certainty rules (the certainty rule network - the model) be improved?

Tracking the use of inexact rules

Pro/3 stores the actual results of using inexact rules as sentences in the KB. Two sentence types are generated: inexact rule evaluation is recorded and inexact evaluation tree is recorded. The generation of these sentences is optional - refer to processing options.

Inexact rule reasoning trees

Inexact rule reasoning trees are based on the inexact evaluation tree is recorded-sentences. One tree is generated for each successful interpretation of a root-rule, i.e. an inexact rule called from a sentence rule.

The sentences are used by the inexact rule reasoning network drawings generated by Pro/3. (They can also be listed or viewed like any other sentence; however, this is not a practical way of evaluating the model.)

A rule reasoning network drawing shows the actual results of rule evaluations for a given set of parameters. The purpose is to show how a given certainty or membership grade was derived. The reasoning network is drawn as a tree, and the drawing is selected in the Inexact rule reasoning-window: 

[Figure: inexact rule reasoning tree drawn in the Inexact rule reasoning-window]

RULE TREE FORMAT
  • sub-trees of failed rules are not shown
  • the evaluated value of each rule is shown
  • min/max and increment/decrement values are not shown
  • query rules that failed are marked with +
  • rules out-of-context are marked with *
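
As a sketch, the format above can be mimicked by a small tree renderer. Everything below is illustrative: Pro/3 does not expose its drawing internals, so the node structure and names are assumptions; only the five formatting rules are taken from the list above.

```python
class RuleNode:
    """Hypothetical node in an evaluated rule tree (illustrative only;
    Pro/3's internal representation is not exposed)."""
    def __init__(self, name, value, failed=False, out_of_context=False,
                 failed_query=False, children=None):
        self.name = name
        self.value = value                # evaluated certainty/membership grade
        self.failed = failed
        self.out_of_context = out_of_context
        self.failed_query = failed_query
        self.children = children or []

def draw(node, depth=0, lines=None):
    """Render the tree per the format above: evaluated values are shown,
    '+' marks failed query rules, '*' marks out-of-context rules, and
    the sub-trees of failed rules are omitted."""
    if lines is None:
        lines = []
    mark = "+" if node.failed_query else ("*" if node.out_of_context else "")
    lines.append(f"{'  ' * depth}{node.name}{mark} = {node.value}")
    if not node.failed:                   # hide sub-trees of failed rules
        for child in node.children:
            draw(child, depth + 1, lines)
    return lines
```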

INEXACT RULE REASONING REPORT

The inexact rule reasoning report gives detailed textual documentation of the reasoning carried out with the inexact rule network. An example from the MC model shows the general contents of the report.

The following mouse operations are available in certainty rule reasoning trees:
  • left button single click - highlights the clicked node
  • left button double click - toggles the clicked node's sub-tree open/closed
  • right button single click (on a highlighted certainty rule node) - makes the highlighted certainty rule current in the annotation part of the KE Assistant window
  • right button double click - opens the certainty rule editor with the highlighted certainty rule selected; highlights the clicked node

Evaluation Sub-Models

One inexact rule evaluation is recorded-sentence is generated for each interpretation of a certainty rule:

[Figure: example of an inexact rule evaluation is recorded-sentence]
The context is the interpretation of a sentence rule which concludes the sentence type manufacturer makes the most suitable bike. The inexact rule bike is suitable is assigned to the certainty data element type in the has suitable predicate in the sentence rule's conclusion.

One of the actual parameters used in calling the bike is suitable-rule (the root-rule) was "Bonneville America" (a bike model). The interpretation of the bike is suitable inexact rule involved the interpretation of other inexact rules, i.e. rules directly or indirectly called by the root-rule.

One of these called rules was bikes that meet type related preferences (a fuzzy set), which, unsurprisingly, was also called with the actual parameter "Bonneville America" (actual parameters are typically passed down the rule network, although this depends on how the rules are defined). The bikes that meet type related preferences rule returned the value 0.5842.., which is the membership grade of the given model in this fuzzy set.
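
As an illustration, a fuzzy set of this kind behaves like a mapping from a parameter value to a membership grade in [0, 1]. The grades below are invented for the sketch, except the 0.5842 quoted above; the function name only mirrors the rule name from the example.

```python
# Illustrative only: a fuzzy set modelled as a mapping from a bike
# model to its membership grade in [0, 1].
type_preference_grades = {
    "Bonneville America": 0.5842,   # value quoted in the example
    "Speed Triple": 0.31,           # invented for illustration
}

def bikes_that_meet_type_related_preferences(model):
    # A model that is not in the fuzzy set at all has membership grade 0.
    return type_preference_grades.get(model, 0.0)
```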

[Figure: the recorded evaluation sentences from the example]

Three evaluation-types are indicated:
  • OUT OF CONTEXT - the rule was out of context and the corresponding out-of-context value defined for the rule was returned
  • DEFAULT VALUE -  the "normal" interpretation of the rule failed and the default value defined for the rule was returned
  • EVALUATED - all other cases
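
The three outcomes can be sketched as a simple dispatch. The rule representation and the helper functions below are hypothetical stand-ins, not Pro/3 API; only the three-way case split follows the list above.

```python
def evaluate_rule(rule, params, in_context, evaluator):
    """Return (evaluation_type, value) mirroring the three cases above.

    rule       -- dict holding the rule's out-of-context and default values
    in_context -- callable deciding whether the rule applies (stand-in)
    evaluator  -- callable for the "normal" interpretation; returning
                  None models a failed interpretation (stand-in)
    """
    if not in_context(rule, params):
        return "OUT OF CONTEXT", rule["out_of_context_value"]
    value = evaluator(rule, params)
    if value is None:
        return "DEFAULT VALUE", rule["default_value"]
    return "EVALUATED", value
```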

USING THE RULE EVALUATION SENTENCES

The generated evaluation sentences can be listed and manually reviewed like any other Pro/3 sentences. For this purpose it is in most cases practical to output the sentences to e.g. MS Excel via the 3DL-output format.

More sophisticated techniques for evaluating inexact rule models involve the creation of evaluation sub-models. These involve sentence rule(s) for deriving new sentence types from the inexact rule evaluation is recorded-sentences. The following example illustrates the technique further.

The example is from an expert system which suggests the "best" stocks to buy, "best" meaning the stocks which are anticipated to give the highest return over the next several months. The system attempts to validate its suggestion logic by generating similar advice as of one or more historical dates (based on the information available on those dates), and subsequently correlating this advice with the actual returns for a set of periods following these historical dates (called reference points in the model).

In principle, certainty factors suggest a ranking of certainty rather than an absolute measure of certainty, and it is thus better to use the ranking of the certainty factors rather than the factors themselves. The system also uses the ranking of the returns rather than the actual returns, to minimize distortion from exceptionally high or low returns.

[Figure: sentence rule deriving the rule result is evaluated-sentences]

The above rule unpacks the actual parameters in the inexact rule evaluation is recorded-sentences with the list element no built-in function, for the purpose of (i) selecting only some of the evaluation sentences, by comparing the first actual parameter with the from day data element in the week is reference point for stock evaluation-sentences; and (ii) deriving a sentence type where day and ticker (corresponding to the first and second actual parameters in the rule call) are explicit and separate data element types. (A "ticker" is an identifier for a stock.)
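
The effect of the unpacking can be sketched with plain data structures. The sentence shapes, field names, dates and tickers below are invented stand-ins for the Pro/3 sentences, not their real form; only the two steps (i) and (ii) are taken from the text.

```python
# Invented stand-ins for the recorded evaluation sentences.
evaluation_sentences = [
    {"rule": "stock is promising",
     "actual_parameters": ["2003-05-12", "XOM"], "value": 0.71},
    {"rule": "stock is promising",
     "actual_parameters": ["2003-06-30", "IBM"], "value": 0.44},
]
# Stand-in for the from day values of the reference-point sentences.
reference_days = {"2003-05-12"}

# (i) keep only sentences whose first actual parameter is a reference
# day; (ii) make day and ticker explicit, separate fields (the role the
# "list element no" built-in plays in the sentence rule).
rule_result_is_evaluated = [
    {"day": s["actual_parameters"][0],
     "ticker": s["actual_parameters"][1],
     "rule": s["rule"],
     "value": s["value"]}
    for s in evaluation_sentences
    if s["actual_parameters"][0] in reference_days
]
```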

The rule result is evaluated-sentences can now be ranked, such that each rule call is ranked according to its returned value on each evaluation day.

By correlating the ranking of the rule calls with the ranking of the actual return for each stock, the relative success of the rules in predicting the best stocks can to some degree be measured.
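
The correlation step can be sketched in a few lines. Spearman rank correlation is one reasonable choice here (the text does not name a specific correlation measure, so this is an assumption), and all data values below are invented; ties are ignored for brevity.

```python
def ranks(values):
    """Rank positions (1 = highest value), assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation: 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

rule_values = [0.71, 0.44, 0.90]   # certainty per ticker (invented)
returns     = [0.08, 0.01, 0.12]   # actual return per ticker (invented)
correlation = spearman(rule_values, returns)   # identical rankings -> 1.0
```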

[Figure: evaluation sub-model]

The evaluation model in this example shows the general idea. Actual evaluation models can be made much more complex by more detailed analysis of the rules, possibly also employing inexact reasoning rules to determine the performance of the model! 

INEXACT RULE ASSOCIATIONS

It is sometimes useful to draw an inexact rule tree where each rule might have an associated value found in the knowledge base. This could typically be a value derived in an evaluation sub-model which tells something about each rule (e.g. its effectiveness or relative success).

The inexact rule association is in the KB-sentence type can be used for this purpose:

Example:

the inexact rule association with rule bike is suitable, associated value 3.2, domain_name NUMBER, output format ".1"  and associated parameter 52 is in the KB!

The bike is suitable rule can now be drawn with its associated value 3.2 shown (the domain name and output format data elements are used to obtain the correct display of the value in the tree). The associated parameter is given by the KE as an input to the rule drawing; this makes it possible to keep several associated values for each rule.
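
As a sketch, the display step might look as follows. Reading the output format ".1" as "one decimal place" is an assumption made for the example; Pro/3's actual format codes and domain handling may differ.

```python
def render_associated_value(value, domain_name, output_format):
    """Format an associated value for display in the rule tree
    (hypothetical reading of the domain name / output format fields)."""
    if domain_name == "NUMBER":
        decimals = int(output_format.lstrip("."))   # ".1" -> 1 decimal place
        return f"{value:.{decimals}f}"
    return str(value)

label = render_associated_value(3.2, "NUMBER", ".1")   # "3.2"
```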