Most data is constrained - it has rules for what is and is not allowed. These rules can govern things like cardinality (is it required? is it repeatable?), value types (date, currency, string, IRI), and data relationships (if A, then not B; either A or B+C). This controlling aspect of data is what many data stores are built around; a bank, a warehouse, or even a library manages its activities through controlled data.
RDF has a different logical basis. RDF allows you to draw conclusions from the data (called "inferencing"), but it has no mechanism of control of the kind we are accustomed to from our current business rules. This seems like such an obvious lack that you might wonder just how the developers of RDF thought it would be used. The answer is that they were not thinking about banking or company databases. The main use case for RDF development was applying artificial intelligence-like axioms to data on the web. That's a very different use case from the kind of data work that most of us engage in.
RDF is characterized by what is called the "open world assumption," which says that:
- at any moment a set of data may be incomplete; that does not make it illegitimate
- anyone can say anything about anything; like the web in general there are no controls over what can and cannot be stated and who can participate
However, RDF is being used in areas where controlled data was once employed; where data is validated for quality and rejected if it doesn't meet certain criteria; where operating on the data is limited to approved actors. This means that we have a mismatch between our data model and some of the uses of that data model.
This mismatch was evident to people using RDF in their business operations. W3C held a preliminary meeting on "Validation of Data Shapes" in which there were presentations over two days demonstrating some of the solutions that people had developed. This then led to the Data Shapes Working Group in 2014, which produced the shape validation language SHACL (SHApes Constraint Language) in 2017. Of the interesting ways that people had developed to validate their RDF data, the use of SPARQL queries to determine whether expected patterns were met became the basis for SHACL. Another RDF validation language, ShEx (Shape Expressions), is independent of SPARQL but has essentially the same functionality as SHACL. There are other languages as well (SPIN, Stardog, etc.), and they all assume a closed world rather than the open world of RDF.
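To make that concrete, here is a minimal sketch of SHACL validation in Python using rdflib and the pySHACL library (one implementation among several); the shape, data, and namespaces are invented for illustration. The shape requires that every ex:Book have at least one dct:title, and the report should flag the record that lacks one.

```python
# Minimal SHACL validation sketch using rdflib and pySHACL.
# The shape and data below are invented examples.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

ex:BookShape a sh:NodeShape ;
    sh:targetClass ex:Book ;
    sh:property [
        sh:path dct:title ;
        sh:minCount 1 ;          # every ex:Book must have at least one title
        sh:datatype xsd:string ;
    ] .
"""

data_ttl = """
@prefix ex: <http://example.org/> .

ex:book1 a ex:Book .             # no dct:title, so validation should fail
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")

conforms, _report_graph, report_text = validate(data, shacl_graph=shapes)
print(conforms)      # expected: False
print(report_text)   # human-readable list of violations
```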
My point in all this is to note that we now have a way to validate RDF instance data but no standard way(s) to define our metadata schema, with constraints, that we can use to produce that data. It's kind of a "tail wagging the dog" situation. There have been musings that the validation languages could also be used for metadata definition, but we don't have a proof of concept and I'm a bit skeptical. The reason I'm skeptical is that there's a certain human-facing element in data design and creation that doesn't need to be there in the validation phase. While there is no reason why the validation languages cannot also contain or link to term definitions, cataloging rules, etc., these would be add-ons. The validation languages also do most of their work at the detailed data level, while some guidance for humans happens at the macro definition of a data model - What is this data for? Who is the audience? What should the data creator know or research before beginning? What are the reference texts that one should have access to? While admittedly the RDA Toolkit used in library data creation is an extreme form of the genre, you can see how much more there is beyond defining specific data elements and their valid values. Using a metadata schema in concert with RDF validation - yes! That's a winning combination, but I think we need both.
Note that there are also efforts to use the validation languages to analyze existing graphs (PDF). These could be a quick way to get an overview of data for which you have no description, but the limitations of this technique are easy to spot. They have basically the same problem that AI training datasets do: you only learn what is in that dataset, not the full range of possible graphs and values that can be produced. If your data is very regular then this analysis can be quite helpful; if your data has a lot of variation (as, for example, bibliographic data does) then the analysis of a single file of data may not be terribly helpful. At the same time, exercising the validation languages in this way is one way to discover how we can use algorithms to "look at" RDF data.
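For a sense of what such an analysis involves, here is a rough, hand-rolled sketch (not one of the published tools) that profiles a single RDF file with rdflib: for each class it counts which predicates appear on its instances. The file name is hypothetical, and, as noted above, the result only describes this one file, not the full range of data that could be produced.

```python
# Rough profile of one RDF file: for each class, count how often each
# predicate is used on its instances. Illustration only; the file is hypothetical.
from collections import Counter
from rdflib import Graph, RDF

g = Graph().parse("sample-data.ttl", format="turtle")

profile = {}  # class -> Counter of predicates seen on its instances
for subject, klass in g.subject_objects(RDF.type):
    counts = profile.setdefault(klass, Counter())
    for predicate in g.predicates(subject=subject):
        if predicate != RDF.type:
            counts[predicate] += 1

for klass, counts in profile.items():
    print(klass)
    for predicate, n in counts.most_common():
        print("   ", predicate, n)
```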
Another thing to note is that there's also quite a bit of "validation" that the validation languages do not handle, such as the reconciliation work that is often done in OpenRefine. The validation languages take an atomistic view of the data, not an overall one. I don't see a way to ask the question "Is this entry compatible with all of the other entries in this file?" That the validation languages don't cover this is not a fault, but it must be noted that there is other validation that may need to be done.
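As an illustration of that kind of whole-file question, here is a sketch that asks it directly with a SPARQL query over the graph rather than through a per-node shape: it looks for an identifier value shared by more than one record. The property and data are invented for the example.

```python
# Sketch of a whole-file check: find identifier values shared by more than
# one resource. The ex:isbn property and the records are invented examples.
from rdflib import Graph

g = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:rec1 ex:isbn "9780000000001" .
ex:rec2 ex:isbn "9780000000001" .
ex:rec3 ex:isbn "9780000000002" .
""", format="turtle")

duplicates = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?isbn (COUNT(?rec) AS ?uses)
    WHERE { ?rec ex:isbn ?isbn }
    GROUP BY ?isbn
    HAVING (COUNT(?rec) > 1)
""")

for row in duplicates:
    print(row.isbn, row.uses)   # the identifier that appears on two records
```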
WOL, meet WVL
We need a data modeling language that is suitable for RDF data but that provides actual constraints, not just inferences. It also needs to allow one to choose a closed world rule. The RDF suite of standards has provided the Web Ontology Language, which should be WOL but has been given the almost-acronym name of OWL. OWL does define "constraints," but they aren't constraints in the way we need for data creation. OWL constrains the axioms of inference. That means that it gives you rules to use when operating over a graph of data, and it still works in the open world. The use of the term "ontology" also implies that this is a language for the creation of new terms in a single namespace. That isn't required, but it is becoming common practice.
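A small sketch of that difference, using rdflib and the owlrl OWL RL reasoner (the data is an invented example): declaring a property functional does not cause a second value to be rejected; under OWL semantics the reasoner simply concludes that the two values name the same individual.

```python
# Sketch of OWL "constraints" in the open world, using rdflib and owlrl.
# The data is an invented example.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL
from owlrl import DeductiveClosure, OWLRL_Semantics

EX = Namespace("http://example.org/")

g = Graph().parse(data="""
@prefix ex:  <http://example.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

ex:spouse a owl:FunctionalProperty .   # "at most one value" stated as an axiom
ex:alice ex:spouse ex:bob .
ex:alice ex:spouse ex:robert .         # accepted without complaint; nothing is rejected
""", format="turtle")

# Run an OWL RL reasoner over the graph.
DeductiveClosure(OWLRL_Semantics).expand(g)

# Instead of reporting a violation, the reasoner infers that the two values co-refer.
print((EX.bob, OWL.sameAs, EX.robert) in g)   # expected: True
```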
What we need is a web vocabulary language. WVL. But using the liberty that went from WOL to OWL, we can go from WVL to VWL, and that can be nicely pronounced as VOWEL. VOWEL (I'm going to write it like that because it isn't familiar to readers yet) can supply the constrained world that we need for data creation. It is not necessarily an RDF-based language, but it will use HTTP identifiers for things. It could function as linked data but it also can be entirely in a closed world. Here's what it needs to do:
- describe the things of the metadata
- describe the statements about those things and the values that are valid for those statements
- give cardinality rules for things and statements
- constrain values by type
- give a wide range of possibilities for defining values, such as lists, lists of namespaces, ranges of computable values, classes, etc.
- for each thing and statement have the ability to carry definitions and rules for input and decision-making about the value
- can be serialized in any language that can handle key/value pairs or triples
- can (hopefully easily) be translatable to a validation language or program, as sketched below
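Nothing like VOWEL exists yet, so the following is purely hypothetical: a sketch of how a single element description, kept as plain key/value data with its human-facing definition and input guidance alongside the constraints, might be mechanically translated into a SHACL property shape. All of the field names here are made up for illustration.

```python
# Hypothetical sketch only: a VOWEL-style element description as key/value data,
# plus a mechanical translation of its constraints into a SHACL property shape.
vowel_element = {
    "property": "http://purl.org/dc/terms/title",
    "label": "Title",
    "definition": "A name given to the resource.",
    "guidance": "Transcribe the title as it appears on the title page.",
    "min": 1,
    "max": 1,
    "valueType": "http://www.w3.org/2001/XMLSchema#string",
}

def to_shacl_property(elem: dict) -> str:
    """Emit a SHACL property shape (as a Turtle fragment) from the description."""
    return "\n".join([
        "sh:property [",
        f"    sh:path <{elem['property']}> ;",
        f"    sh:name \"{elem['label']}\" ;",
        f"    sh:description \"{elem['definition']}\" ;",
        f"    sh:minCount {elem['min']} ;",
        f"    sh:maxCount {elem['max']} ;",
        f"    sh:datatype <{elem['valueType']}> ;",
        "] ;",
    ])

# The cataloger-facing "guidance" note deliberately stays in the schema;
# only the machine-checkable constraints are handed off to validation.
print(to_shacl_property(vowel_element))
```

The output is a Turtle fragment that could be dropped into a node shape, while the definition and guidance remain available to the people creating the data.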
One of the ironies of this project is that VOWEL needs to be expressed as a VOWEL. Presumably one could develop an all-new ontology for this, but the fact is that most of what is needed exists already. So this gets meta right off the bat, which makes it a bit harder to think about but easier to produce.
There will be a group starting up in the Dublin Core space to continue development of this idea. I will announce that widely when it happens. I think we have some real possibilities here to make VOWEL a reality. One of my goals will be to follow the general principle of the original Dublin Core metadata, which is that simple wins out over complex, and it's easier to complex-ify simple than to simplify complex.