Monday, May 25, 2020

1982

I've been trying to capture what I remember about the early days of library automation. Mostly my memory is about fun discoveries in my particular area (processing MARC records into the online catalog). I did run into an offprint of some articles in ITAL from 1982 (*) which provide very specific information about the technical environment, and I thought some folks might find that interesting. This refers to the University of California MELVYL union catalog, which at the time had about 800,000 records.

Hardware: IBM 360/370 mainframe
Programming language: PL/I
Memory: 24 megabytes
Storage: 22 disk drives, ~ 10 gigabytes
DBMS: ADABAS

The disk drives were each about the size of an industrial washing machine. In fact, we referred to the room that held them as "the laundromat."

Telecommunications was a big deal because there was no telecommunications network linking the libraries of the University of California. There wasn't even one connecting the campuses at all. The article talks about the various possibilities, from an X.25 network to the new TCP/IP protocol that allows "internetwork communication." The first network was a set of dedicated lines leased from the phone company that could transmit 120 characters per second (character = byte) to about 8 ASCII terminals at each campus over a 9600 baud line. There was a hope to be able to double the number of terminals.
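Those line-speed figures are internally consistent, as a quick sketch shows. (The 10-bits-per-character framing is my assumption, typical for async terminal lines of that era, not something stated in the article.)

```python
# Back-of-the-envelope check of the leased-line capacity described above.
# Assumption: 10 bits per transmitted character (start bit + 8 data bits +
# stop bit), the common framing for async ASCII terminals.

BAUD = 9600          # line speed in bits per second
BITS_PER_CHAR = 10   # assumed async framing overhead included
TERMINALS = 8        # ASCII terminals sharing the line at each campus

chars_per_second = BAUD // BITS_PER_CHAR        # whole-line throughput
per_terminal = chars_per_second // TERMINALS    # share per terminal

print(chars_per_second, per_terminal)  # 960 120
```

So the 120 characters per second quoted in the article is exactly one eighth of a fully framed 9600 baud line, which also explains why doubling the number of terminals was a "hope" rather than a given.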

In the speculation about the future, there was doubt that it would be possible to open up the library system to folks outside of the UC campuses, much less internationally. (MELVYL was one of the early libraries to be open access worldwide over the Internet, just a few years later.) It was also thought that libraries would charge other libraries to view their catalogs, kind of like an inter-library loan.

And for anyone who has an interest in Z39.50, one section of the article by David Shaughnessy and Clifford Lynch on telecommunications outlines a need for catalog-to-catalog communication which sounds very much like the first glimmer of that protocol.

-----

(*) Various authors. (1982). "In-Depth: University of California MELVYL" [special section]. Information Technology and Libraries, 1(4).

I wish I could give a better citation but my offprint does not have page numbers and I can't find this indexed anywhere. (Cue here the usual irony that libraries are terrible at preserving their own story.)

Monday, April 27, 2020

Ceci n'est pas une Bibliothèque

On March 24, 2020, the Internet Archive announced that it would "suspend waitlists for the 1.4 million (and growing) books in our lending library," a service they then named The National Emergency Library. These books were previously available for lending on a one-to-one basis with the physical book owned by the Archive, and as with physical books users would have to wait for the book to be returned before they could borrow it. Worded as a suspension of waitlists due to the closure of schools and libraries caused by the COVID-19 pandemic, this announcement essentially eliminated the one-to-one nature of the Archive's Controlled Digital Lending program. Publishers were already making threatening noises about the digital lending even when it adhered to lending limitations, and surely will be even more incensed about this unrestricted lending.

I am not going to comment on the legality of the Internet Archive's lending practices. Legal minds, perhaps motivated by future lawsuits, will weigh in on that. I do, however, have much to say on the use of the term "library" for this set of books. It's a topic worthy of a lengthy treatment, but I'll give only a brief account here.

LIBRARY … BIBLIOTHÈQUE … BIBLIOTEK


The roots “LIBR…” and “BIBLIO…” both come down to us from ancient words for trees and tree bark. It is presumed that said bark was the surface for early writings. “LIBR…” comes from the Latin word liber, meaning “book”; in many languages it is the root of the word for a bookseller’s shop, while in English it has come to mean a collection of books and, from that, the room or building where books are kept. “BIBLIO…” derives instead from the Greek biblion (one book) and biblia (books, plural). We get the word Bible through the Greek root, which leaked into old Latin and meant The Book.

Therefore it is no wonder that in the minds of many people, books = library.  In fact, most libraries are large collections of books, but that does not mean that every large collection of books is a library. Amazon has a large number of books, but is not a library; it is a store where books are sold. Google has quite a few books in its "book search" and even allows you to view portions of the books without payment, but it is also not a library, it's a search engine. The Internet Archive, Amazon, and Google all have catalogs of metadata for the books they are offering, some of it taken from actual library catalogs, but a catalog does not make a quantity of books into a library. After all, Home Depot has a catalog, Walmart has a catalog; in essence, any business with an inventory has a catalog.
"...most libraries are large collections of books, but that does not mean that every large collection of books is a library."

The Library Test

First, I want to note that the Internet Archive has met the State of California test to be defined as a library, and this has made it possible for the Archive to apply for library-related grants for some of its projects. That is a Good Thing because it has surely strengthened the Archive and its activities. However, it must be said that the State of California requirements are pretty minimal, and seem to be limited to a non-profit organization making materials available to the general public without discrimination. There doesn't seem to be a distinction between "library" and "archive" in the state legal code, although librarians and archivists would not generally consider them easily lumped together as equivalent services.

The Collection

The Archive's blog post says "the Internet Archive currently lends about as many as a US library that serves a population of about 30,000." As a comparison, in the statistics gathered by the California State Library I found those for the Benicia Public Library in Benicia, California. Benicia is a city with a population of 31,000; the library has about 88,000 books. Well, you might say, that's not as good as over one million books at the Internet Archive. But, here's the thing: those are not 88,000 random books, they are books chosen to be, as far as the librarians could know, the best books for that small city. If Benicia residents were, for example, primarily Chinese-speaking, the library would surely have many books in Chinese. If the city had a large number of young families then the children's section would get particular attention. The users of the Internet Archive's books are a self-selected (and currently un-defined) set of Internet users. Equally difficult to define is the collection that is available to them:
This library brings together all the books from Phillips Academy Andover and Marygrove College, and much of Trent University’s collections, along with over a million other books donated from other libraries to readers worldwide that are locked out of their libraries.
Each of these is (or was, in the case of Marygrove, which has closed) a collection tailored to the didactic needs of that institution. How one translates that, if one can, to the larger Internet population is unknown. That a collection has served a specific set of users does not mean that it can serve all users equally well. Then there is that other million books, which are a complete black box.

Library science

I've argued before against dumping a large and undistinguished set of books on a populace, regardless of the good intentions of those doing so. Why not give the library users of a small city these one million books? The main reason is the ability of the library to fulfill the 5 Laws of Library Science:
  1. Books are for use.
  2. Every reader his or her book.
  3. Every book its reader.
  4. Save the time of the reader.
  5. The library is a growing organism. [0]
The online collection of the Internet Archive nicely fulfills laws 1 and 5: the digital books are designed for use, and the library can grow somewhat indefinitely. The other three laws are unfortunately hindered by the somewhat haphazard nature of the set of books, combined with the lack of user services.

Of the goals of librarianship, matching readers to books is the most difficult. Let's start with law 3, "every book its reader." When you follow the URL to the National Emergency Library, you see something like this:
The lack of cover art is not the problem here. Look at what books you find: two meeting reports, one journal publication, and a book about hand surgery, all from 1925. Scroll down for a bit and you will find it hard to locate items that are less obscure than this, although undoubtedly there are some good reads in this collection. These are not the books whose readers will likely be found in our hypothetical small city. These are books that even some higher education institutions would probably choose not to have in their collections. While these make the total number of available books large, they may not make the total number of useful books large. Winnowing this set to one or more (probably more) wheat-filled collections could greatly increase the usability of this set of books.

"While these make the total number of available books large, they may not make the total number of useful books large."

A large "anything goes" set of documents is a real challenge for laws 2 and 4: every reader his or her book, and save the time of the reader. The more chaff you have the harder it is for a library user to find the wheat they are seeking. The larger the collection the more of the burden is placed on the user to formulate a targeted search query and to have the background to know which items to skip over. The larger the retrieved set, the less likely that any user will scroll through the entire display to find the best book for their purposes. This is the case for any large library catalog, but these libraries have built their collection around a particular set of goals. Those goals matter. Goals are developed to address a number of factors, like:
  • What are the topics of interest to my readers and my institution?
  • How representative must my collection be in each topic area?
  • What are the essential works in each topic area?
  • What depth of coverage is needed for each topic? [1]
If we assume (and we absolutely must assume this) that the user entering the library is seeking information that he or she lacks, then we cannot expect users to approach the library as an expert in the topic being researched. Although anyone can type in a simple query, fewer can assess the validity and the scope of the results. A search on "California history" in the National Emergency Library yields some interesting-looking books, but are these the best books on the topic? Are any key titles missing? These are the questions that librarians answer when developing collections.

The creation of a well-rounded collection is a difficult task. There are actual measurements that can be run against library collections to determine if they have the coverage that can be expected compared to similar libraries. I don't know if any such statistical packages can look beyond quantitative measures to judge the quality of the collection; the ones I'm aware of look at call number ranges, not individual titles.

Library Service


The Archive's own documentation states that "The Internet Archive focuses on preservation and providing access to digital cultural artifacts. For assistance with research or appraisal, you are bound to find the information you seek elsewhere on the internet." After which it advises people to get help through their local public library. Helping users find materials suited to their need is a key service provided by libraries. When I began working in libraries in the dark ages of the 1960's, users generally entered the library and went directly to the reference desk to state the question that brought them to the institution. This changed when catalogs went online and were searchable by keyword, but prior to then the catalog in a public library was primarily a tool for librarians to use when helping patrons. Still, libraries have real or virtual reference desks because users are not expected to have the knowledge of libraries or of topics that would allow them to function entirely on their own. And while this is true for libraries it is also true, perhaps even more so, for archives whose collections can be difficult to navigate without specialized information. Admitting that you give no help to users seeking materials makes the use of the term "library" ... unfortunate.

What is to be done?


There are undoubtedly a lot of useful materials among the digital books at the Internet Archive. However, someone needing materials has no idea whether they can expect to find what they need in this amalgamation. The burden of determining whether the Archive's collection might suit their needs is left entirely up to the members of this very fuzzy set called "Internet users." That the collection lends at the rate of a public library serving a population of 30,000 shows that it is most likely under-utilized. Because the nature of the collection is unknown one can't approach, say, a teacher of middle-school biology and say: "they've got what you need." Yet the Archive cannot implement a policy to complete areas of the collection unless it knows what it has as compared to known needs.

"... these warehouses of potentially readable text will remain under-utilized until we can discover a way to make them useful in the ways that libraries have proved to be useful."

I wish I could say that a solution would be simple - but it would not. For example, it would be great to extract from this collection works that are commonly held in specific topic areas in small, medium and large libraries. The statistical packages that analyze library holdings are all, AFAIK, proprietary. (If anyone knows of an open source package that does this, please shout it out!) It would also be great to be able to connect library collections of analog books to their digital equivalents. That too is more complex than one would expect, and would have to be much simpler to be offered openly. [2]

While some organizations move forward with digitizing books and other hard copy materials, these warehouses of potentially readable text will remain under-utilized until we can discover a way to make them useful in the ways that libraries have proved to be useful. This will mean taking seriously what modern librarianship has developed over its circa 2 centuries, and in particular those 5 laws that give us a philosophy to guide our vision of service to the users of libraries.

-----

[0] Even if you are familiar with the 5 laws you may not know that Ranganathan was not as succinct as this short list may imply. The book in which he introduces these concepts is over 450 pages long, with extended definitions and many homey anecdotes and stories.

[1] A search on "collection development policy" will yield many pages of policies that you can peruse. To make this a "one click" here are a few *non-representative* policies that you can take a peek at:
[2] Dan Scott and I did a project of this nature with a Bay Area public library and it took a huge amount of human intervention to determine whether the items matched were really "equivalent". That's a discussion for another time, but, man, books are more complicated than they appear.

Monday, February 03, 2020

Use the Leader, Luke!

If you learned the MARC format "on the job" or in some other library context you may have learned that the record is structured as fields with 3-digit tags, each with two numeric indicators, and that subfields have a subfield indicator (often shown as "$" because it is a non-printable character) and a single character subfield code (a-z, 0-9). That is all true for the MARC records that libraries create and process, but the MAchine Readable Cataloging standard (Z39.2 or ISO 2709) has other possibilities that we are not using. Our "MARC" (currently MARC21) is a single selection from among those possibilities, in essence an application profile of the MARC standard. The key to the possibilities afforded by MARC is in the MARC Leader, and in particular in two positions that our systems generally ignore because they always contain the same values in our data:
Leader byte 10 -- Indicator count
Leader byte 11 -- Subfield code length
In MARC21 records, Leader byte 10 is always "2" meaning that fields have 2-byte indicators, and Leader byte 11 is always 2 because the subfield code is always two characters in length. That was a decision made early on in the life of MARC records in libraries, and it's easy to forget that there were other options that were not taken. Let's take a short look at the possibilities the record format affords beyond our choice.
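Pulling these two positions out of a raw record is trivial, which makes it all the more striking that systems simply assume the values. A minimal sketch (the sample Leader below is fabricated for illustration, not taken from a real record):

```python
def leader_profile(record: bytes) -> dict:
    """Read the two Leader positions discussed above from a raw ISO 2709 record.

    The Leader is the first 24 bytes of the record; position 10 is the
    indicator count and position 11 is the subfield code length (a value
    that includes the delimiter character itself).
    """
    leader = record[:24].decode("ascii")
    return {
        "indicator_count": int(leader[10]),
        "subfield_code_length": int(leader[11]),
    }

# A MARC21 record always reports 2 and 2 (fabricated sample Leader):
fake_record = b"00120nam a2200061 a 4500" + b"..."
print(leader_profile(fake_record))
# {'indicator_count': 2, 'subfield_code_length': 2}
```

An application using a different profile of the standard would simply report different values here, and a reader that honored the Leader rather than hard-coding "2" and "2" could process both.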

Both of these Leader positions are single bytes that can take values from 0 to 9. An application could use the MARC record format and have zero indicators. It isn't hard to imagine an application that has no need of indicators or that has determined to make use of subfields in their stead. As an example, the provenance of vocabulary data for thesauri like LCSH or the Art and Architecture Thesaurus could always be coded in a subfield rather than in an indicator:
650 $a Religion and science $2 LCSH
Another common use of indicators in MARC21 is to give a byte count for the non-filing initial articles on title strings. Instead of using an indicator value for this, some libraries outside of the US developed non-printing codes to mark the beginning and end of the non-filing portion. I'll use backslashes to represent these codes in this example:
245 $a \The \Birds of North America
I am not saying that all indicators in MARC21 should or even could be eliminated, but that we shouldn't assume that our current practice is the only way to code data.

In the other direction, what if you could have more than two indicators? The MARC record would allow you to have as many as nine. In addition, there is nothing to say that each byte in the indicator has to be a separate data element; you could have nine indicator positions that were defined as two data elements (4 + 5), or some other combination (1 + 2 + 6). Expanding the number of indicators, or beginning with a larger number, could have prevented the split in provenance codes for subject vocabularies between one indicator value and the overflow subfield, $2, when the number exceeded the capability of a single numerical byte. Having three or four bytes for those codes in the indicator and expanding the values to include a-z would have been enough to include the full list of authorities for the data in the indicators. (Although I would still prefer putting them all in $2 using the mnemonic codes for ease of input.)

In the first University of California union catalog in the early 1980's we expanded the MARC indicators to hold an additional two bytes (or was it four?) so that we could record, for each MARC field, which library had contributed it. Our union catalog record was a composite MARC record with fields from any and all of the over 300 libraries across the University of California system that contributed to the union catalog as a dozen or so separate record feeds from OCLC and RLIN. We treated the added indicator bytes as sets of bits, turning on bits to represent the catalog feeds from the libraries. If two or more libraries submitted exactly the same MARC field we stored the field once and turned on a bit for each separate library feed. If a library submitted a field that was new to the record, we added the field and turned on the appropriate bit. When we created a user display we selected fields from only one of the libraries. (The rules for that selection process were something of a secret so as not to hurt anyone's feelings, but there was a "best" record for display.) It was a multi-library MARC record, made possible by the ability to use more than two indicators.
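That bit-per-feed bookkeeping can be sketched in a few lines. The feed names below are hypothetical stand-ins, not the actual UC feed identifiers, and the field-keyed dictionary is an illustrative structure rather than the catalog's real storage layout:

```python
# Sketch of the bit-flag scheme described above: each contributing catalog
# feed gets one bit position, and a field stored once can be "owned" by
# several feeds at the same time.

FEEDS = {"berkeley_oclc": 0, "ucla_rlin": 1, "davis_oclc": 2}  # hypothetical names

def add_field(store: dict, field: str, feed: str) -> None:
    """Store a field once; OR in the bit for each feed that contributed it."""
    store[field] = store.get(field, 0) | (1 << FEEDS[feed])

def contributed(store: dict, field: str, feed: str) -> bool:
    """Did this feed contribute this field?"""
    return bool(store.get(field, 0) & (1 << FEEDS[feed]))

record = {}
add_field(record, "245 $a Birds of North America", "berkeley_oclc")
add_field(record, "245 $a Birds of North America", "ucla_rlin")  # same field: one copy, two bits

print(contributed(record, "245 $a Birds of North America", "ucla_rlin"))   # True
print(contributed(record, "245 $a Birds of North America", "davis_oclc"))  # False
```

The appeal of the design is that deduplication and attribution come for free: identical fields collapse to one stored copy, while the bitmask still records every library that supplied it.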

Now on to the subfield code. The rule for MARC21 is that there is a single-character subfield code, a lower case a-z or 0-9. The numeric codes have special meanings and do not vary by field; the alphabetic codes are a bit more flexible. That gives us 26 possible subfields per tag, plus the 10 pre-defined numeric ones. The MARC21 standard has chosen to limit the alphabetic subfield codes to lower case characters. As fields reached the limits of the available subfield codes (and many did over time) you might think that the easiest solution would be to allow upper case letters as subfield codes. Although the subfield code limitation was reached decades ago for some fields, I can personally attest to the fact that suggesting the expansion of subfield codes to upper case letters was met with horrified glares at the MARC standards meeting. While clearly in 1968 the range of a-z seemed ample, that has not been the case for nearly half of the life-span of MARC.

The MARC Leader allows one to define up to 9 characters total for subfield codes. The value in this Leader position includes the subfield delimiter, so this means that you can have a subfield delimiter and up to 8 characters to encode a subfield. Even expanding from a-z to aa-zz provides vastly more possibilities, and allowing upper case as well gives you a dizzying array of choices.
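A quick count makes the difference concrete (delimiter excluded, and under the simplifying assumption that every position in a multi-character code may take any allowed character):

```python
# Size of the subfield code space under each choice discussed above.
single_lower = 26 + 10              # MARC21 today: a-z plus 0-9
single_with_upper = 26 + 26 + 10    # add A-Z to one-character codes
two_char_lower = (26 + 10) ** 2     # two-character codes from the same set

print(single_lower, single_with_upper, two_char_lower)  # 36 62 1296
```

Merely admitting upper case not quite doubles the space; going to two characters multiplies it thirty-six-fold, which is why the Leader's allowance of longer codes matters.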

The other thing to mention is that there is no prescription that field tags must be numeric. They are limited to three characters in the MARC standard, but those could be a-z, A-Z, 0-9, not just 0-9, greatly expanding the possibilities for adding new tags. In fact, if you have been in the position to view internal systems records in your vendor system you may have been able to see that non-numeric tags have been used for internal system purposes, like noting who made each edit, whether functions like automated authority control have been performed on the record, etc. Many of the "violations" of the MARC21 rules listed here have been exploited internally -- and since early days of library systems.

There are other modifiable Leader values, in particular the one that determines the maximum length of a field, Leader 20. MARC21 sets Leader 20 at "4", meaning that field lengths are expressed in four digits and so cannot exceed 9999. That could be longer, although the record length itself is expressed in only 5 digits, so a record cannot be longer than 99999 bytes. Conversely, one could limit fields to 999 (Leader 20 set at "3") for an application that does less pre-composing of data compared to MARC21 and therefore comfortably fits within a shorter field length.

The reason that has been given, over time, why none of these changes were made was always: it's too late, we can't change our systems now. This is, as Caesar might have said, cacas tauri. Systems have been able to absorb some pretty intense changes to the record format and its contents, and a change like adding more subfield codes would not be impossible. The problem is not really with the MARC21 record but with our inability (or refusal) to plan and execute the changes needed to evolve our systems. We could sit down today and develop a plan and a timeline. If you are skeptical, here's an example of how one could manage a change in length to the subfield codes:

a MARC21 record is retrieved for editing
  1. read Leader position 11 (the subfield code length) of the MARC21 record
  2. if the value is "2" and you need to add a new subfield that uses the subfield delimiter plus two characters, convert all of the subfield codes in the record:
    • $a becomes $aa, $b becomes $ba, etc.
    • $0 becomes $01, $1 becomes $11, etc.
    • Leader position 11 is changed to "3"
  3. (alternatively, convert all records opened for editing)

a MARC21 record is retrieved for display
  1. read Leader position 11 of the MARC21 record
  2. if the value is "2" use the internal table of subfield codes for records with the value "2"
  3. if the value is "3" use the internal table of subfield codes for records with the value "3"
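The edit-time conversion can be sketched in a few lines, assuming records are held as a Leader string plus (code, value) subfield pairs; that structure is illustrative, not any vendor's internal format. The check is on Leader position 11, the subfield code length:

```python
# Sketch of the subfield-code widening described above: alphabetic codes
# get an 'a' appended, numeric codes get a '1', and the subfield code
# length recorded in the Leader is bumped from "2" to "3".

def widen(record: dict) -> dict:
    """Widen one-character subfield codes to two characters, once."""
    leader = record["leader"]
    if leader[11] != "2":
        return record  # already uses two-character codes; nothing to do
    widened = [(code + ("1" if code.isdigit() else "a"), value)
               for code, value in record["subfields"]]
    return {"leader": leader[:11] + "3" + leader[12:], "subfields": widened}

rec = {"leader": "00120nam a2200061 a 4500",   # fabricated sample Leader
       "subfields": [("a", "Religion and science"), ("2", "LCSH")]}
out = widen(rec)
print(out["leader"][11], out["subfields"])
# 3 [('aa', 'Religion and science'), ('21', 'LCSH')]
```

Because the routine tests the Leader before touching anything, converted and unconverted records can coexist in the same file indefinitely, which is exactly the point of the proposal.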

Sounds impossible? We moved from AACR to AACR2, and now from AACR2 to RDA without going back and converting all of our records to the new content.  We have added new fields to our records, such as the 336, 337, 338 for RDA values, without converting all of the earlier records in our files to have these fields. The same with new subfields, like $0, which has only been added in recent years. Our files have been using mixed record types for at least a couple of generations -- generations of systems and generations of catalogers.

Alas, the time to make these kinds of changes this was many years ago. Would it be worth doing today? That depends on whether we anticipate a change to BIBFRAME (or some other data format) in the near future. Changes do continue to be made to the MARC21 record; perhaps it would have a longer future if we could broach the subject of fixing some of the errors that were introduced in the past, in particular those that arose because of the limitations of MARC21 that could be rectified with an expansion of that record standard. That may also help us not carry over some of the problems in MARC21 that are caused by these limitations to a new record format that does not need to be limited in these ways.

Epilogue


Although the MARC record was incredibly advanced compared to other data formats of its time (the mid-1960's), it has some limitations that cannot be overcome within the standard itself. One obvious one is the limitation of the record length to five digits. Another is the fact that there are only two levels of nesting of data: the field and the subfield. There are times when a sub-subfield would be useful, such as when adding information that relates to only one subfield, not the entire field (provenance, external URL link). I can't advocate for continuing the data format that is often called "binary MARC" simply because it has limitations that require work-arounds. MARCXML, as defined as a standard, gets around the field and record length limitations, but it is not allowed to vary from the MARC21 limitations on field and subfield coding. It would be incredibly logical to move to a "non-binary" record format (XML, JSON, etc.) beginning with the existing MARC21 and allowing expansions where needed. It is the stubborn adherence to the ISO 2709 format that has really limited library data, and it is all the more puzzling because other solutions that can keep the data itself intact have been available for many decades.

Tuesday, January 28, 2020

Pamflets

I was always a bit confused about the inclusion of "pamflets" in the subtitle of the Decimal System, such as this title page from the 1922 edition:


Did libraries at the time collect numerous pamphlets? For them to be the second-named type of material after books was especially puzzling.

I may have discovered an answer to my puzzlement, if not THE answer, in Andrea Costadoro's 1856 work:
A "pamphlet" in 1856 was not (necessarily) what I had in mind, which was a flimsy publication of the type given out by businesses, tourist destinations, or public health offices. In the 1800's it appears that a pamphlet was a literary type, not a physical format. Costadoro says:
"It has been a matter of discussion what books should be considered pamphlets and what not. If this appellation is intended merely to refer to the SIZE of the book, the question can be scarcely worth considering ; but if it is meant to refer to the NATURE of a work, it may be considered to be of the same class and to stand in the same connexion with the word Treatise as the words Tract ; Hints ; Remarks ; &c, when these terms are descriptive of the nature of the books to which they are affixed." (p. 42)
To be on the shelves of libraries, and cataloged, it is possible that these pamphlets were indeed bound, perhaps by the library itself. 

The Library of Congress genre list today has a cross-reference from "pamphlet" to "Tract (ephemera)". While Costadoro's definition doesn't give any particular subject content to the type of work, LC's definition says that these are often issued by religious or political groups for proselytizing. So these are pamphlets in the sense of the political pamphlets of our revolutionary war. Today they would be blog posts, or articles in Buzzfeed or Slate or any one of hundreds of online sites that post such content.

Churches I have visited often have short publications available near the entrance, and there is always the Watchtower, distributed by Jehovah's Witnesses at key locations throughout the world, and which is something between a pamphlet (in the modern sense) and a journal issue. These are probably not gathered in most libraries today. In Dewey's time the printing (and collecting by libraries) of sermons was quite common. In a world where many people either were not literate or did not have access to much reading material, the Sunday sermon was a "long form" work, read by a pastor who was probably not as eloquent as the published "stars" of the Sunday gatherings. Some sermons were brought together into collections and published, others were published (and seemingly bound) on their own.  Dewey is often criticized for the bias in his classification, but what you find in the early editions serves as a brief overview of the printed materials that the US (and mostly East Coast) culture of that time valued. 

What now puzzles me is what took the place of these tracts between the time of Dewey and the Web. I can find archives of political and cultural pamphlets in various countries and they all seem to end around the 1920's-30's, although some specific collections, such as the Samizdat publications in the Soviet Union, exist in other time periods.

Of course the other question now is: how many of today's tracts and treatises will survive if they are not published in book form?

Saturday, November 23, 2019

The Work

The word "work" generally means something brought about by human effort, and at times implies that this effort involves some level of creativity. We talk about "works of art" referring to paintings hanging on walls. The "works" of Beethoven are a large number of musical pieces that we may have heard. The "works" of Shakespeare are plays, in printed form but also performed. In these statements the "work" encompasses the whole of the thing referred to, from the intellectual content to the final presentation.

This is not the same use of the term as is found in the Library Reference Model (LRM). If you are unfamiliar with the LRM, it is the successor to FRBR (which I am assuming you have heard of) and it includes the basic concepts of work, expression, manifestation and item that were first introduced in that previous study. "Work," as used in the LRM is a concept designed for use in library cataloging data. It is narrower than the common use of the term illustrated in the previous paragraph and is defined thus:
Class: Work
Definition: An abstract notion of an artistic or intellectual creation.
In this definition the term only includes the idea of a non-corporeal conceptual entity, not the totality that would be implied in the phrase "the works of Shakespeare." That totality is described when the work is realized through an LRM-defined "expression" which in turn is produced in an LRM-defined "manifestation" with an LRM-defined "item" as its instance.* These four entities are generally referred to as a group with the acronym WEMI.

Because many in the library world are very familiar with the LRM definition of work, we have to use caution when using the word outside the specific LRM environment. In particular, we must not impose the LRM definition on uses of the word that do not intend that meaning. One should expect the LRM definition of work to be rarely found in any conversation that is not about the library cataloging model for which it was defined. However, it is harder to distinguish uses within the library world, where one might expect usage to adhere to the LRM.

To show this, I want to propose a particular use case. Let's say that a very large bibliographic database has many records of bibliographic description. The use case is that it is deemed to be easier for users to navigate that large database if they could get search results that cluster works rather than getting long lists of similar or nearly identical bibliographic items. Logically the cluster looks like this:


In data design, it will have a form something like this:


This is a great idea, and it does appear to have a similarity to the LRM definition of work: it is gathering those bibliographic entries that are judged to represent the same intellectual content. However, there are reasons why the LRM-defined work could not be used in this instance.
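As a sketch of the data design described above (all names here are illustrative, not from any actual catalog system), a work cluster is simply a shared work identifier that gathers the bibliographic record identifiers judged to represent the same content:

```python
from collections import defaultdict

# Hypothetical bibliographic records. "work_key" stands in for whatever
# matching algorithm (title/author normalization, etc.) assigns records
# to the same work cluster.
records = [
    {"id": "bib001", "title": "Moby Dick", "work_key": "melville-moby-dick"},
    {"id": "bib002", "title": "Moby Dick, or, The Whale", "work_key": "melville-moby-dick"},
    {"id": "bib003", "title": "Billy Budd", "work_key": "melville-billy-budd"},
]

# The cluster: one work identifier pointing to all records for that work.
clusters = defaultdict(list)
for rec in records:
    clusters[rec["work_key"]].append(rec["id"])

# A search result can now show one entry per work instead of one entry
# per nearly identical bibliographic record.
for work, bibs in sorted(clusters.items()):
    print(work, "->", bibs)
```

Note that the cluster links to whole bibliographic records, not to any single WEMI entity, which is exactly the modeling problem taken up next.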

The first is that there is only one WEMI relationship for work, and that is from LRM work to LRM expression. Clearly the bibliographic records in this large library catalog are not LRM expressions; they are full bibliographic descriptions including, potentially, all of the entities defined in the LRM.

To this you might say: but there is expression data in the bibliographic record, so we can think of this work as linking to the expression data in that record. That leads us to the second reason: the entities of WEMI are defined as being disjoint. That means that no single "thing" can be more than one of those entities; nothing can be simultaneously a work and an expression, or any other combination of WEMI entities. So if the only link we have available in the model is from work to expression, then unless we can somehow convince ourselves that the bibliographic record ONLY represents the expression (which it clearly does not, since it has data elements from at least three of the LRM entities), any such link will violate the rule of disjointness.
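The disjointness rule can be stated mechanically. This is a toy sketch (plain Python, not LRM software) of the check, showing why a full bibliographic record, which carries work, expression, and manifestation data at once, cannot be typed as any single WEMI entity:

```python
# The four disjoint WEMI classes of the LRM.
WEMI = {"Work", "Expression", "Manifestation", "Item"}

def wemi_types(resource_types):
    """Return the WEMI classes claimed by a resource's list of types."""
    return WEMI & set(resource_types)

def violates_disjointness(resource_types):
    """Disjointness: a resource may be an instance of at most one WEMI class."""
    return len(wemi_types(resource_types)) > 1

# A bibliographic record describes content, carrier, and copy together,
# so treating it as simultaneously several WEMI entities fails the rule:
bib_record_types = ["Work", "Expression", "Manifestation"]
print(violates_disjointness(bib_record_types))  # True

# A resource typed as only one WEMI entity is fine:
print(violates_disjointness(["Expression"]))  # False
```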

Therefore, the work in our library system can have much in common with the conceptual definition of the LRM work, but it is not the same work entity as is defined in that model.

This brings me back to my earlier blog post with a proposal for a generalized definition of WEMI-like entities for created works. The WEMI concepts are useful in practice, but the LRM model has some constraints that prevent some desirable uses of those entities. Providing unconstrained entities would expand the utility of the WEMI concepts both within the library community, as evidenced by the use case here, and in the non-library communities that I highlight in that previous blog post and in a slide presentation.

To be clear, "unconstrained" refers not only to the removal of the disjointness between entities, but also to allowing the creation of links between the WEMI entities and non-WEMI entities, something that is not anticipated in the LRM. The work cluster of bibliographic records would need a more general relationship; perhaps, as in the case of VIAF, the records could be linked through a shared cluster identifier and an entity type identifying the cluster as representing an unconstrained work.

----
* The other terms are defined in the LRM as:

Class: Expression
Definition: A realization of a single work usually in a physical form.

Class: Manifestation
Definition: The physical embodiment of one or more expressions.

Class: Item
Definition: An exemplar of a single manifestation.

Monday, April 08, 2019

I, too, want answers

Around 1966-67 I worked on the reference desk at my local public library. For those too young to remember, this was a time when all information was in paper form, and much of that paper was available only at the library. The Internet was just a twinkle in the eye of some scientists at DARPA, and none of us had any idea what kind of information environment was in our future.* The library had a card catalog and the latest thing was that check-outs were somehow recorded on microfilm, as I recall.

As you entered the library the reference desk was directly in front of you, in the prime location in the middle of the main room. A large number of library users went directly to the desk upon entering. Some of these users had a particular search in mind: a topic, an author, or a title. They came to the reference desk to find the quickest route to what they sought. The librarian would take them to the card catalog, would look up the entry, and perhaps even go to the shelf with the user to look for the item.**

There was another type of reference request: a request for facts, not resources. If one wanted to know what was the population of Milwaukee, or how many slot machines there were in Saudi Arabia***, one turned to the library for answers. At the reference desk we had a variety of reference materials: encyclopedias, almanacs, dictionaries, atlases. The questions that we could answer quickly were called "ready reference." These responses were generally factual.

Because the ready reference service didn't require anything of the user except to ask the question, we also provided this service over the phone to anyone who called in. We considered ourselves at the forefront of modern information services when someone would call and ask us: "Who won best actor in 1937?" OK, it probably was a bar bet or a crossword puzzle clue but we answered, proud of ourselves.

I was reminded of all this by a recent article in Wired magazine, "Alexa, I Want Answers."[1] The argument as presented in the article is that what people REALLY want is an answer; they don't want to dig through books and journals at the library; they don't even want an online search that returns a page of results; what they want is to ask a question and get an answer, a single answer. What they want is "ready reference" by voice, in their own home, without having to engage with a human being. The article is about the development of the virtual, voice-first, answer machine: Alexa.

There are some obvious observations to be made about this. The glaringly obvious one is that not all questions lend themselves to a single, one sentence answer. Even a question that can be asked concisely may not have a concise answer. One that I recall from those long-ago days on the reference desk was the question: "When did the Vietnam War begin?" To answer this you would need to clarify a number of things: on whose part? US? France? Exactly what do you mean by begin? First personnel? First troops? Even with these details in hand experts would differ in their answers.

Another observation is that in the question/answer method over a voice device like Alexa, replying with a lengthy answer is not foreseen. Voice-first systems are backed by databases of facts, not explanatory texts. Like a GPS system they take facts and render them in a way that seems conversational. Your GPS doesn't reply with the numbers of longitude and latitude, and your weather app wraps the weather data in phrases like: "It's 63 degrees outside and might rain later today." It doesn't, however, offer a lengthy discourse on the topic. Just the facts, ma'am.[3]
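The fact-wrapping that the paragraph above describes can be as simple as templating. A toy illustration (not any vendor's actual code): a "conversational" answer is often just a database value dressed in a sentence.

```python
# Hypothetical weather data, as a fact database might hold it.
weather = {"temp_f": 63, "rain_chance": 0.4}

def render(data):
    """Wrap raw facts in a conversational-sounding sentence."""
    sentence = f"It's {data['temp_f']} degrees outside"
    if data["rain_chance"] > 0.3:
        sentence += " and might rain later today."
    else:
        sentence += "."
    return sentence

print(render(weather))  # It's 63 degrees outside and might rain later today.
```

The system has no discourse to offer beyond the template; the apparent conversation is a veneer over a single data lookup.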

It is very troubling that we have no measure of the accuracy of these answers. There are quite a few anecdotes about wrong answers (especially amusing ones) from voice assistants, but I haven't seen any concerted studies of the overall accuracy rate. Studies of this nature were done in the 1970's and 1980's on library reference services, and the results were shocking. Even though library reference was done by human beings who presumably would be capable of detecting wrong answers, the accuracy of answers hovered around 50-60%.[2] Repeated studies came up with similar results, and library journals were filled with articles about this problem. The solution offered was to increase training of reference staff. Before the problem could be resolved, however, users who previously had made use of "ready reference" had moved on to in-sourcing their own reference questions by using the new information system: the Internet. If there still is ready reference occurring in libraries, it is undoubtedly greatly reduced in the number of questions asked, and it doesn't appear that studying the accuracy is on our minds today.

I have one final observation, and that is that we do not know the source(s) of the information behind the answers given by voice assistants. The companies behind these products have developed databases that are not visible to us, and no source information is given for individual answers. The voice-activated machines themselves are not the main product: they are mere user interfaces, dressed up with design elements that make them appealing as home decor. The data behind the machines is what is being sold, and is what makes the machines useful. With all of the recent discussion of algorithmic bias in artificial intelligence we should be very concerned about where these answers come from, and we should seriously consider if "answers" to some questions are even appropriate or desirable.

Now, I have a question: how is it possible that so much of our new technology is based on so little intellectual depth? Is reductionism an essential element of technology, or could we do better? I'm not going to ask Alexa**** for an answer to that.

[1] Vlahos, James. “Alexa, I Want Answers.” Wired, vol. 27, no. 3, Mar. 2019, p. 58. (Try EBSCO)
[2] Weech, Terry L. “Review of The Accuracy of Telephone Reference/Information Services in Academic Libraries: Two Studies.” The Library Quarterly: Information, Community, Policy, vol. 54, no. 1, 1984, pp. 130–31.
[3] https://en.wikipedia.org/wiki/Joe_Friday


* The only computers we saw were the ones on Star Trek (1966), and those were clearly a fiction.
** This was also the era in which the gas station attendant pumped your gas, washed your windows, and checked your oil while you waited in your car.
*** The question about Saudi Arabia is one that I actually got. I also got the one about whether there were many "colored people" in Haiti. I don't remember how I answered the former, but I do remember that the user who asked the latter was quite disappointed with the answer. I think he decided not to go.
**** Which I do not have; I find it creepy even though I can imagine some things for which it could be useful.

Tuesday, March 12, 2019

I'd like to buy a VOWEL

One of the "defects" of RDF for data management is that it does not support business rules. That's a generality, so let me explain a bit.

Most data is constrained - it has rules for what is and what is not allowed. These rules can govern things like cardinality (is it required? is it repeatable?), value types (date, currency, string, IRI), and data relationships (If A, then not B; either A or B+C). This controlling aspect of data is what many data stores are built around; a bank, a warehouse, or even a library manage their activities through controlled data.

RDF has a different logical basis. RDF allows you to draw conclusions from the data (called "inferencing") but there is no mechanism of control that would do what we are accustomed to with our current business rules. This seems like such an obvious lack that you might wonder just how the developers of RDF thought it would be used. The answer is that they were not thinking about banking or company databases. The main use case for RDF development was using artificial intelligence-like axioms on the web. That's a very different use case from the kind of data work that most of us engage in.
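The contrast between inferencing and constraint is worth making concrete. This is a toy illustration in plain Python (not an RDF library): from "Novel subClassOf Book" and "x type Novel," an RDFS-style inference rule *adds* the conclusion "x type Book." Nothing is ever rejected; the data can only grow.

```python
# A tiny triple store as a set of (subject, predicate, object) tuples.
triples = {
    ("Novel", "subClassOf", "Book"),
    ("moby_dick", "type", "Novel"),
}

def infer_types(triples):
    """Apply the rdfs:subClassOf rule until no new triples appear.

    Inference only adds conclusions -- it never flags or removes data,
    which is exactly why it cannot serve as a business-rule mechanism.
    """
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        new = {
            (s, "type", superclass)
            for (s, p, c) in triples if p == "type"
            for (sub, q, superclass) in triples
            if q == "subClassOf" and sub == c
        }
        if not new <= triples:
            triples |= new
            changed = True
    return triples

inferred = infer_types(triples)
print(("moby_dick", "type", "Book") in inferred)  # True
```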

RDF is characterized by what is called the "open world assumption" which says that:

- at any moment a set of data may be incomplete; that does not make it illegitimate
- anyone can say anything about anything; like the web in general there are no controls over what can and cannot be stated and who can participate

However, RDF is being used in areas where data with controls was once employed; where data is validated for quality and rejected if it doesn't meet certain criteria; where operating on the data is limited to approved actors. This means that we have a mis-match between our data model and some of the uses of that data model.

This mis-match was evident to people using RDF in their business operations. W3C held a preliminary meeting on "Validation of Data Shapes" in which there were presentations over two days that demonstrated some of the solutions that people had developed. This then led to the Data Shapes working group in 2014, which produced the shapes validation language, SHACL (SHApes Constraint Language), in 2017. Of the interesting ways that people had developed to validate their RDF data, the use of SPARQL searches to determine if expected patterns were met became the basis for SHACL. Another RDF validation language, ShEx (Shape Expressions), is independent of SPARQL but has essentially the same functionality as SHACL. There are other languages as well (SPIN, StarDog, etc.) and they all assume a closed world rather than the open world of RDF.
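The pattern-checking idea behind these languages can be sketched without any of them. This is a closed-world check in the spirit of SHACL, written as illustrative plain Python rather than an actual shapes language: every resource typed as a Book must have exactly one title, and data that fails the pattern is reported rather than merely tolerated.

```python
# Instance data as (subject, predicate, object) triples.
triples = [
    ("b1", "type", "Book"), ("b1", "title", "Moby Dick"),
    ("b2", "type", "Book"),  # missing title: should fail validation
]

def validate_books(triples):
    """Closed-world check: each Book needs exactly one title."""
    books = sorted({s for (s, p, o) in triples if p == "type" and o == "Book"})
    report = []
    for b in books:
        titles = [o for (s, p, o) in triples if s == b and p == "title"]
        if len(titles) != 1:
            report.append((b, f"expected exactly 1 title, found {len(titles)}"))
    return report

# Unlike open-world inference, absent data is a reportable violation here.
print(validate_books(triples))
```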

My point in all this is to note that we now have a way to validate RDF instance data but no standard way(s) to define our metadata schemas, with constraints, that we can use to produce that data. It's kind of a "tail wagging the dog" situation. There have been musings that the validation languages could also be used for metadata definition, but we don't have a proof of concept and I'm a bit skeptical. The reason I'm skeptical is that there's a certain human-facing element in data design and creation that doesn't need to be there in the validation phase. While there is no reason why the validation languages cannot also contain or link to term definitions, cataloging rules, etc., these would be add-ons. The validation languages also do most of their work at the detailed data level, while some guidance for humans happens at the macro definition of a data model - What is this data for? Who is the audience? What should the data creator know or research before beginning? What are the reference texts that one should have access to? While admittedly the RDA Toolkit used in library data creation is an extreme form of the genre, you can see how much more there is beyond defining specific data elements and their valid values. Using a metadata schema in concert with RDF validation - yes! That's a winning combination, but I think we need both.

Note that there are also efforts to use the validation languages to analyze existing graphs (PDF). These could be a quick way to get an overview of data for which you have no description, but the limitations of this technique are easy to spot. They have basically the same problem that AI training datasets do: you only learn what is in that dataset, not the full range of possible graphs and values that can be produced. If your data is very regular then this analysis can be quite helpful; if your data has a lot of variation (as, for example, bibliographic data does) then the analysis of a single file of data may not be terribly helpful. At the same time, exercising the validation languages in this way is one way to discover how we can use algorithms to "look at" RDF data.

Another thing to note is that there's also quite a bit of "validation" that the validation languages do not handle, such as the reconciliation work that is often done in OpenRefine. The validation languages take an atomistic view of the data, not an overall one. I don't see a way to ask the question "Is this entry compatible with all of the other entries in this file?" That the validation languages don't cover this is not a fault, but it must be noted that there is other validation that may need to be done.

WOL, meet WVL

 

We need a data modeling language that is suitable to RDF data, but that provides actual constraints, not just inferences. It also needs to allow one to choose a closed world rule. The RDF suite of standards has provided the Web Ontology Language, which should be WOL but has been given the almost-acronym name of OWL. OWL does define "constraints", but they aren't constraints in the way we need for data creation. OWL constrains the axioms of inference. That means that it gives you rules to use when operating over a graph of data, and it still works in the open world. The use of the term "ontology" also implies that this is a language for the creation of new terms in a single namespace. That isn't required, but that is becoming a practice.

What we need is a web vocabulary language. WVL. But using the liberty that went from WOL to OWL, we can go from WVL to VWL, and that can be nicely pronounced as VOWEL. VOWEL (I'm going to write it like that because it isn't familiar to readers yet) can supply the constrained world that we need for data creation. It is not necessarily an RDF-based language, but it will use HTTP identifiers for things. It could function as linked data but it also can be entirely in a closed world. Here's what it needs to do:
  • describe the things of the metadata
  • describe the statements about those things and the values that are valid for those statements
  • give cardinality rules for things and statements
  • constrain values by type
  • give a wide range of possibilities for defining values, such as lists, lists of namespaces, ranges of computable values, classes, etc.
  • for each thing and statement have the ability to carry definitions and rules for input and decision-making about the value
  • can be serialized in any language that can handle key/value pairs or triples
  • can (hopefully easily) be translated to a validation language or program
Obviously there may be more. This is not fully-formed yet, just the beginning. I have defined some of it in a github repo. (Ignore the name of the repo - that came from an earlier but related project.) That site also has some other thoughts, such as design patterns, a requirements document, and some comparison between existing proposals, such as the Dublin Core community's Description Set Profile, BIBFRAME, and, soon, Stanford's profile generator, Sinopia.
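To make the requirements list above a little more concrete, here is a hypothetical sketch (every name and URI is illustrative, invented for this example, not part of any proposal) of what a VOWEL term definition might carry: machine-checkable constraints plus the human-facing guidance that the validation languages alone do not cover.

```python
# Hypothetical VOWEL-style profile: each term carries its constraints
# (cardinality, value type, value source) together with cataloger guidance.
profile = {
    "title": {
        "uri": "http://example.org/terms/title",       # illustrative URI
        "required": True,
        "repeatable": False,
        "value_type": "string",
        "guidance": "Transcribe the title from the title page.",
    },
    "subject": {
        "uri": "http://example.org/terms/subject",     # illustrative URI
        "required": False,
        "repeatable": True,
        "value_type": "IRI",
        "guidance": "Prefer terms from an established subject vocabulary.",
    },
}

def check(record, profile):
    """Closed-world check of a record (term -> list of values) against the profile."""
    errors = []
    for name, rule in profile.items():
        values = record.get(name, [])
        if rule["required"] and not values:
            errors.append(f"{name}: required but missing")
        if not rule["repeatable"] and len(values) > 1:
            errors.append(f"{name}: not repeatable")
    return errors

print(check({"subject": ["http://example.org/subj/whaling"]}, profile))
# ['title: required but missing']
```

Because the same structure holds both constraints and guidance, it could drive a data-entry interface and then be translated into a validation language for checking the resulting records.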

One of the ironies of this project is that VOWEL needs to be expressed as a VOWEL. Presumably one could develop an all-new ontology for this, but the fact is that most of what is needed exists already. So this gets meta right off the bat, which makes it a bit harder to think about but easier to produce.

There will be a group starting up in the Dublin Core space to continue development of this idea. I will announce that widely when it happens. I think we have some real possibilities here, to make VOWEL a reality. One of my goals will be to follow the general principles of the original Dublin Core metadata, which is that simple wins out over complex, and it's easier to complex-ify simple than to simplify complex.