Is the expert system or reasoning model you have created performing correctly and according to expectations? Are its answers credible?
This crucial question is often
difficult to answer in applications where the knowledge derived by the
system cannot be validated against empirical data or competently judged by
a human expert.
Validating the model involves validating its integrity and consistency (a largely mechanical process supported by various Pro/3 reports and other functions) and validating the semantics of the rules. Knowledge in Pro/3 is derived in two different ways: via sentence rules, an essentially exact reasoning concept, and via inexact reasoning rules. Sentence rules should in principle pose no problems, because the knowledge they derive is predictable, as long as the knowledge engineer understands the different rule types and the individual rules are correctly specified. However, as with any specification language, mistakes are inevitably made. See validation of sentence rules for more details.
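The predictability of exact reasoning can be illustrated with a minimal, generic forward-chaining sketch. This is not Pro/3 syntax; the rule format and the `forward_chain` function are hypothetical, chosen only to show why deterministic rule firing makes derived knowledge easy to check against the rule specifications.

```python
def forward_chain(facts, rules):
    """Fire every rule whose conditions all hold, repeatedly,
    until no new conclusion can be derived (exact reasoning:
    the same facts and rules always yield the same result)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in derived and all(c in derived for c in conditions):
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical example rules: (conditions, conclusion)
rules = [
    ({"bird", "healthy"}, "can_fly"),
    ({"can_fly"}, "can_migrate"),
]
print(forward_chain({"bird", "healthy"}, rules))
# The derivation is deterministic, so validating it reduces to
# validating the individual rule specifications.
```

Because the output is fully determined by the inputs, a mis-specified rule shows up as a reproducible wrong conclusion, which is what makes sentence-rule validation tractable.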
Inexact rule-based (sub-)models
Certainty rule-based models are more difficult to evaluate. Pro/3 provides special features for evaluating the performance of certainty rules, and these can form the basis of sophisticated evaluation approaches. See evaluation of inexact rules for more details.
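To see why certainty rules resist simple validation, consider how evidence combines in a classic MYCIN-style certainty-factor calculus. This is a generic illustration of inexact reasoning, not Pro/3's actual calculus, which may differ; the function name and formulas are assumptions for the sketch.

```python
def combine_cf(cf1, cf2):
    """Combine two certainty factors (each in [-1, 1]) supporting
    the same conclusion, using the classic MYCIN combination rule.
    Shown only to illustrate inexact reasoning in general."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)          # both confirm
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)          # both disconfirm
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # conflicting

# Two rules each lend 0.6 certainty to the same conclusion:
print(combine_cf(0.6, 0.6))  # 0.84 -- stronger than either rule alone
```

The final certainty of a conclusion depends on the interaction of all contributing rules, not on any single rule in isolation, so semantic errors are far harder to spot by inspection than in the exact case; this is what the dedicated evaluation features are for.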