When Do Assertions Become Facts?
In his post, Semantic Wave: When Do Assertions Become Facts?, Jamie Pitts wrestles with some of the same issues I have been struggling with recently.
Jamie notes:
As time passes, the latest assertions about role will inevitably contradict previous assertions.
For some relationships, such as an individual’s role, the addition of a property indicating “effective dates” is an appropriate way of avoiding contradiction.
All individuals fulfill a role for a limited time; that’s why we put dates on resumes. It’s entirely possible and desirable to reflect this in RDF.
As with a database, the “current” understanding is really an implicit query looking for statements that are effective today. Thankfully, it’s much easier to model statements about statements in RDF than in relational models.
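Something like this, maybe (rdflib here; the ex:person / ex:role / ex:startDate / ex:endDate vocabulary is just made up for illustration, not anything standard):

```python
# A minimal sketch of the "effective dates" idea: two role assertions about
# the same person coexist because each is scoped to a date range, and the
# "current" view is just a query for the range that covers today.
from datetime import date
from rdflib import Graph, Literal
from rdflib.namespace import XSD

g = Graph()
g.parse(data="""
    @prefix ex:  <http://example.org/> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

    ex:roleAssertion1 ex:person ex:jamie ;
        ex:role      ex:developer ;
        ex:startDate "2001-06-01"^^xsd:date ;
        ex:endDate   "2004-03-31"^^xsd:date .

    ex:roleAssertion2 ex:person ex:jamie ;
        ex:role      ex:architect ;
        ex:startDate "2004-04-01"^^xsd:date .
""", format="turtle")

q = """
    PREFIX ex: <http://example.org/>
    SELECT ?person ?role WHERE {
        ?a ex:person ?person ;
           ex:role ?role ;
           ex:startDate ?start .
        OPTIONAL { ?a ex:endDate ?end }
        # ISO 8601 date strings sort lexically, so string comparison is safe.
        FILTER (STR(?start) <= STR(?today) &&
                (!BOUND(?end) || STR(?end) >= STR(?today)))
    }
"""
today = Literal(date.today().isoformat(), datatype=XSD.date)
for row in g.query(q, initBindings={"today": today}):
    print(row.person, "currently holds role", row.role)
```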
It goes without saying that interpretations of reality are formed in the mind through an ongoing process of re-assessment. We qualitatively compare present impressions with recent impressions. Enough contradiction, and we form a new working state of understanding.
Well, this is really AI territory, isn’t it? An AI needs to recognize assertions that contradict its current knowledge base, decide whether to resolve the contradiction by throwing out the new or old assertion, and re-initialize its deduction/inference engine to recreate all derived knowledge.
Hmmm…let’s run with that for a minute (thinking out loud here).
Here’s an AI; it manages one or more knowledge bases in RDF/OWL and serves up answers to SPARQL queries, possibly formulated by some natural-language parser.
It gathers information by spidering or exchanging RDF with other AIs. It reasons by tapping external deduction/inference engines.
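The outer loop might look roughly like this; the class, the staging graph, and the commit step are pure hand-waving on my part, not a design anyone has built:

```python
# A rough skeleton of the agent described above: it keeps an RDF knowledge
# base, folds in RDF it fetches from elsewhere, and answers SPARQL queries.
from rdflib import Graph

class KnowledgeAgent:
    def __init__(self):
        self.kb = Graph()          # the working knowledge base
        self.incoming = Graph()    # staging area for newly gathered RDF

    def gather(self, url: str):
        """Spider or exchange: pull RDF from another agent or site."""
        self.incoming.parse(url)   # rdflib guesses the serialization

    def answer(self, sparql: str):
        """Serve up answers, e.g. on behalf of a natural-language front end."""
        return list(self.kb.query(sparql))

    def commit(self):
        """Naive merge; the consistency and trust checks sketched below
        would run here before anything is accepted."""
        for triple in self.incoming:
            self.kb.add(triple)
        self.incoming = Graph()
```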
One of the knowledge bases contains trust metrics, used to weight the new RDF statements.
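As RDF, that trust knowledge base could be as plain as a score per source (ex:trustScore is an invented property and the numbers are arbitrary):

```python
# Trust metrics stored as ordinary triples, consulted when weighing new statements.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/")

trust = Graph()
trust.add((EX.semanticWaveBlog, EX.trustScore, Literal(0.8, datatype=XSD.decimal)))
trust.add((EX.randomSpideredSite, EX.trustScore, Literal(0.3, datatype=XSD.decimal)))

def trust_of(source) -> float:
    """Look up a source's score, defaulting to a low value for strangers."""
    score = trust.value(subject=source, predicate=EX.trustScore)
    return float(score) if score is not None else 0.1
```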
A reasoner is set up to check the new information for internal consistency; if the information contradicts itself, the source is presented with a proof and its trust metrics may be adjusted.
A reasoner then compares the new information against the existing knowledge bases, looking for contradictions. Multiple reasoners should be consulted.
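I’m not going to write an OWL reasoner in a blog post; a real deployment would hand this step to Pellet, HermiT, or the like. But here’s a toy stand-in that catches one kind of clash (two different values for a property declared functional), just to make the step concrete:

```python
# Toy consistency check: find functional-property violations across the union
# of the new information and the existing knowledge base. Strictly speaking
# this is only a contradiction if the two values are known to be distinct;
# a real reasoner handles that properly.
from rdflib import Graph
from rdflib.namespace import OWL, RDF

def functional_property_clashes(new: Graph, existing: Graph):
    """Yield (subject, property, value_a, value_b) tuples that cannot coexist."""
    merged = new + existing          # rdflib supports graph union with '+'
    for prop in merged.subjects(RDF.type, OWL.FunctionalProperty):
        for subj in set(merged.subjects(prop, None)):
            values = set(merged.objects(subj, prop))
            if len(values) > 1:
                a, b = list(values)[:2]
                # This pair of statements is the small "proof" we can show the source.
                yield (subj, prop, a, b)
```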
If there are no contradictions, the new information is added. If contradictions are found, the stronger assertion persists; strength depends on many metrics, including the trustworthiness of the original sources, the degree of corroboration from other sources, the number of other assertions affected, and so forth.
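For the “stronger assertion persists” part, something along these lines, with completely made-up weights and thresholds (the trust score could come from the trust_of lookup sketched earlier):

```python
# Sketch of a strength score combining source trust, corroboration, and how
# much of the knowledge base leans on the assertion. The weights are guesses.
def strength(trust_score: float, corroborating_sources: int,
             dependent_assertions: int) -> float:
    corroboration = min(corroborating_sources / 3.0, 1.0)   # saturate at 3 sources
    entrenchment = min(dependent_assertions / 10.0, 1.0)    # derived facts at stake
    return 0.5 * trust_score + 0.3 * corroboration + 0.2 * entrenchment

def resolve(old_assertion, new_assertion, old_strength: float, new_strength: float):
    """Keep whichever side is stronger; the loser is dropped (or, better,
    kept but demoted, so it can be revisited if more evidence arrives)."""
    return new_assertion if new_strength > old_strength else old_assertion
```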
One or more reasoning daemons then take the new knowledge base and deduce as many new facts as possible. The strength of the underlying premises determines the strength of the new assertions.
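One rule of such a daemon might look like this: a single RDFS-style inference (type propagation along rdfs:subClassOf), with each derived fact inheriting the strength of its weakest premise. The min() choice and the strength dictionary are my guesses, not doctrine; a real daemon would use an actual rule engine or OWL reasoner.

```python
# Toy forward-chaining step with strength propagation.
from rdflib import Graph
from rdflib.namespace import RDF, RDFS

def derive_types(g: Graph, strength: dict) -> dict:
    """Return {new_triple: strength} for facts this one rule can add."""
    derived = {}
    for cls, supercls in g.subject_objects(RDFS.subClassOf):
        for individual in g.subjects(RDF.type, cls):
            fact = (individual, RDF.type, supercls)
            if fact not in g:
                premise_a = strength.get((individual, RDF.type, cls), 1.0)
                premise_b = strength.get((cls, RDFS.subClassOf, supercls), 1.0)
                derived[fact] = min(premise_a, premise_b)   # weakest link
    return derived
```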
A couple of things we can add. One is an imagination: create random triples, then use a reasoner to look for contradictions or proofs, and add the surviving assertions to the knowledge base. We can also try less random things, such as making assertions about superclasses or removing restrictions.
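Very roughly, and reusing the toy clash-detector from above in place of a real reasoner:

```python
# "Imagination": guess triples by recombining terms the knowledge base already
# knows about, and keep only the guesses that survive the contradiction check.
import random
from rdflib import Graph

def imagine(kb: Graph, attempts: int = 100):
    subjects = list(set(kb.subjects()))
    predicates = list(set(kb.predicates()))
    objects = list(set(kb.objects()))
    survivors = []
    for _ in range(attempts):
        guess = (random.choice(subjects),
                 random.choice(predicates),
                 random.choice(objects))
        if guess in kb:
            continue                      # nothing new
        candidate = Graph()
        candidate.add(guess)
        if not any(functional_property_clashes(candidate, kb)):
            survivors.append(guess)       # consistent, so worth keeping (weakly)
    return survivors
```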
How about experimentation? Some sensors and motors, and now we can start adding experimentally derived facts to the knowledge base.
Of course, Jamie can accomplish what he wants in his example simply by adding new statements. All that’s needed is a shift in thinking: you’re not making assertions about the current state of things, you’re making assertions about a date range that happens to include today (and possibly not making any assertions, yet, about the end of the date range). Now facts are not changing; you’re merely acquiring new facts.
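In triples, using the same invented ex: vocabulary as before, nothing gets retracted when a role changes; we just learn the end of one range and the start of the next:

```python
# Nothing changes, we only acquire new facts: close the old range, open a new one.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/")
g = Graph()

# What we knew yesterday: an open-ended role assertion.
g.add((EX.roleAssertion2, EX.person, EX.jamie))
g.add((EX.roleAssertion2, EX.role, EX.architect))
g.add((EX.roleAssertion2, EX.startDate, Literal("2004-04-01", datatype=XSD.date)))

# What we learn today: the old range has closed, and a new one has opened.
g.add((EX.roleAssertion2, EX.endDate, Literal("2007-08-31", datatype=XSD.date)))
g.add((EX.roleAssertion3, EX.person, EX.jamie))
g.add((EX.roleAssertion3, EX.role, EX.cto))
g.add((EX.roleAssertion3, EX.startDate, Literal("2007-09-01", datatype=XSD.date)))

# The "effective today" query from the earlier sketch now simply returns the new role.
```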