Friday, May 25, 2007

The DMCA's Silver Lining

I've just returned from the UMUC conference on copyright, this year with a theme of Copyright Utopia. This is the seventh year that UMUC has held this conference and the first that I have attended, and it was an excellent two days with many interesting talks. I'll mention two in particular: that of Fred von Lohmann of the Electronic Frontier Foundation, who talked about the "mashup culture," and that of William "Terry" Fisher, director of the Berkman Center for Internet and Society at Harvard Law School, who proposed a solution to the file-sharing hoopla.

Von Lohmann showed a number of highly entertaining videos from YouTube, all of which had some level of potential copyright infringement. As he explained, content of this nature would never appear through traditional media channels such as television, or even theaters or bookstores. The reason? Carrier liability; that is, a TV station or bookstore can be held liable for the content it makes available, even if it didn't create that content. Thanks to the DMCA and its "safe harbor" provision for internet service providers (ISPs), organizations like YouTube cannot be held responsible for the content that flows across the portion of the internet that they control, as long as they have a "take-down" procedure in place to respond to complaints of copyright or trademark violation.

The restrictions on liability in the DMCA were the result of heavy lobbying by ISPs interested in preserving their own bottom line. This has had the unintended effect of creating free speech zones on the net that we don't have in other media. The result is that we are now seeing a huge amount of creative re-use of copyrighted material, and even of material that is owned by some of the more powerful and more assertive of copyright holders. A prime example is a video explaining copyright that is constructed entirely of snippets from Disney films. It opens with a "parody" of the FBI warning that reads:
WARNING. Federal law allows citizens to reproduce, distribute, or exhibit portions of copyright motion pictures, video tapes, or video discs under certain circumstances without authorization of the copyright holder. This infringement of copyright is called "Fair use" and is allowed for purposes of criticism, news reporting, teaching, and parody.


Whether or not such works are infringing is open to interpretation, but, as von Lohmann explains, you can't even wonder about infringement if the works do not get distributed in the first place. This is a brave new world.

Terry Fisher ran through a wide swath of possible solutions to today's copyright problems, ending with an interesting proposal that amounts to a kind of ASCAP for all intellectual property.
In brief, here's how the system works: In each country, copyright owners (record companies, music publishers, film studios, etc.) authorize Noank to distribute digital copies of their works. Noank, in turn, enters into contracts with major network service providers: broadband consumer ISPs; mobile phone providers; and universities. Noank provides the service providers' end-users with unlimited downloading, streaming, and copying licenses. In return, each access provider pays Noank a fee on behalf of each of its end-users (consumers, students, employees). 85% of the money collected from these content fees is distributed to content copyright owners. A small software program on the users' device counts the content use. That information (automatically aggregated to protect users' privacy) is used to determine the amount of money paid to each copyright owner.

This system, called Noank Media, is operating today primarily in China and Canada. In China they are charging $20 per year per user. Twenty bucks doesn't seem like much, but that is probably a significant fee in China, which is definitely a country where you can make it up on volume. The main thing is that Noank is an "all you can eat" model rather than a "pay per view" one. The copyright holders get paid in proportion to the relative use of their content. It's a slice of the pie, not whatever the market can bear. However, if copyright holders go for it, their content will get the kind of exposure it gets today on peer-to-peer networks, but as a revenue stream.
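The payout arithmetic behind that "slice of the pie" model is simple to sketch. Here is a minimal illustration in Python, with invented numbers; Noank's actual accounting is not public, so the 85% figure is taken from the description quoted above and everything else is an assumption:

```python
# Hypothetical sketch of Noank-style proportional payouts.
# The 85% owner share comes from Noank's own description;
# all rights holders and usage counts below are invented.

def noank_payouts(total_fees, usage_counts, owner_share=0.85):
    """Split owner_share of total_fees among copyright owners
    in proportion to their measured content use."""
    pool = total_fees * owner_share
    total_use = sum(usage_counts.values())
    return {owner: pool * count / total_use
            for owner, count in usage_counts.items()}

# 1,000 users at $20/year; three imaginary rights holders.
fees = 1000 * 20
use = {"Label A": 600_000, "Label B": 300_000, "Studio C": 100_000}
print(noank_payouts(fees, use))
# Label A accounts for 60% of use, so it gets 60% of the $17,000 pool.
```

Note that under this model a rights holder's revenue depends entirely on relative use, so exposure on the network translates directly into income, which is the point the post is making.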

P2P was a big topic at the conference because of the RIAA's recent wave of letters to college students. The RIAA appears to be targeting about 400 students per month whom it identifies as having illegally downloaded music files. The letters are sent to the university to be distributed to the named students, in an obvious attempt to make the university a party to the action. The students are offered a "buy-out" of $3,000 to avoid an actual lawsuit. The role of the university is an interesting one: in this case, the files are on the students' computers, so the university can't act as an ISP with a "take down" policy. It's not at all clear whether the university has any responsibility for the actions of its students in relation to non-university activities, even if they are illegal. Many universities have set up or licensed music download services to try to offer students a legal alternative to P2P downloading, and some of the universities that have received the RIAA letters do offer such services. It's an uneasy role for the educational institutions to be in, and some of the conference participants felt strongly that the RIAA is attempting to use the universities to create a precedent that will undermine the DMCA's ISP immunity.

Tuesday, May 15, 2007

Books, books everywhere... but not libraries

A walk through Turin's downtown reveals a thriving book culture. There are many dozens of independent bookstores of all stripes, from those specializing in elegant tomes of art and architecture to one that carries books on the theme of "taste": books on wine, chocolate, cheese. There are bookstores housed in elegant shops with gorgeous wood shelves, and there are bookstores that pile their books on tables in the wide walkways of the downtown.




A book on Turin bookstores lists 69 stores. It also lists 26 libraries, but they are barely recognizable as libraries in the American sense. To begin with, they have few users, and not much space for people. Next, entering them is intimidating. Not only do you have to have a library card to check out books, in some cases you have to show your card just to enter the library. Many do not have open shelves, or, as I found in the gorgeous "royal library" in the center of town, the books are locked behind glass. The catalog of that library is not only on cards; the cards are written by hand. Admittedly, this is a library of historical interest, an archive. It holds the famous self-portrait of Leonardo da Vinci, and two of the three solid wood desks at the library are taken up with giant computer screens where you can look at a digitized image of the page, enlarge it, and even turn the image sideways. No, I don't know why you'd want to, but there it is.

Sunday, May 13, 2007

The Web DOES have borders

OK, call me naive, but this is the first time that I have spent considerable hours on the Internet in a non-English-speaking country. Before, I had hopped on to check email, but at an Internet café it's hard to linger for long. I am at the home of friends in Turin, Italy, a three-computer family with wireless in the home. And I have discovered that the Net looks very different from here. Not different in a bad way, but different.

It seems logical (although I had never thought about it) that a search on Google would provide a different ranking based on language and network boundaries. I was having problems hooking up with my friends' wireless setup and needed technical materials, and although I am fairly functional as a tourist in Italy, I prefer my support materials in English. I did a search using English terms, and up popped the Google entry for one of those Microsoft Knowledge Base articles in English. However, when I retrieved it, I was given the Italian translation. My Italian friends consider this a service, but it is yet more proof that we aren't on the net as individuals: I can't get MY language or MY preferences. I am a web address in Italy, and that's as far as it goes.

Yahoo, on the other hand, appears in English and gives me a choice of searching all of the web, or just Italy. The latter doesn't give me the results I expected because numerous sites in English and outside of the .it range appear. Google defaults to an Italian interface, making it once again preferable to most people as the friendliest search engine. And the results greatly favor materials in Italian. Altavista appears in Italian, defaults to searching in Italy (but with a choice to expand to the entire web) and allows you to select either all languages or an English+Italian combination. A search on a term that originated in English but is used here, such as "barcamp," in both Google and Altavista shows the entry from wikipedia.it as one of the first ranked pages.

I'm not at all sure of the significance of this, other than that once again Google excels at giving people what they want. I suspect that the ranking is based on links found in .it pages, with some weight for language. It happens seamlessly and appears to be magic. I gotta give it to them: they're good, they're very good.

Thursday, May 10, 2007

At the Turin Book Fair




For the past 20 years, Turin, Italy, has held a book fair that by now has become quite well known and well attended. In fact, today, the first day of the fair, there were times when the crowd was crushing. It is especially interesting to attend such a fair in a world where 1) there is still a strong reverence for the book and 2) there are probably as many authors as there are readers. I am astonished at the number of what appear to be small presses. There are four large exhibit halls, and from my first pass through it appears that everyone in Italy has written a book.

I came here to think about The Book (and to eat salame). Instead, I'm finding that my thoughts are not so much on the book itself but on the process of production. What strikes me here, and what struck me when I attended the Paris book fair some years ago, is the importance of the publishing house, rather than the individual title. A few months ago in Venice I went into a bookstore looking to find anything they had on the topic of the history of the book. The clerk had to take me all around the store to select a few titles because the store was organized not by topic but by publisher. However, even in a store where the books are shelved by topic, you can easily recognize the publishers because each one has a distinctive cover style.




It's not the individual book that counts, but the context of that book; the editorial context. Authors are discoveries, greatly prized and carefully taken care of, but not separate from the publisher who, in the great tradition of Pygmalion, has turned them into a refined cultural asset. And for sure, the selling point is not the flash of the cover, but the fact of belonging to a known intellectual tradition.

OK, I'm getting maudlin. Maybe it's the jet lag.

Thursday, May 03, 2007

Astonishing announcement: RDA goes 2.0

As a result of a meeting at the British Library on April 30 and May 1, it has been announced that the RDA and Dublin Core communities will work together in the following ways:

  • develop a formal RDA Element vocabulary (probably following the DC Abstract Model)
  • develop an application profile of RDA for Dublin Core using FRBR and FRAD
  • use RDF and SKOS to disclose RDA vocabularies (see Diane Hillmann's work on this)
The document lists the "benefits" as:
  • the library community gets a metadata standard that is compatible with the Web Architecture and that is fully interoperable with other Semantic Web initiatives
  • the DCMI community gets a libraries application profile firmly based on the DCAM and FRBR (which will be a high profile exemplar for others to follow)
  • the Semantic Web community gets a significant pool of well-thought-out metadata terms to re-use
  • there is wider uptake of RDA
So it appears that the call for a modernization of the library approach to metadata has been heard. What does all of this mean? Diane Hillmann was at the meeting, and I asked her the following questions:

KC: What does it mean that there will be a "formal RDA Element vocabulary?"

D: It will look something like the Dublin Core registered terms. They will be both human readable (as displayed in a browser) and machine readable (in a format like RDF). Try clicking on this link, and you can see on the right the different machine-readable formats.

KC: What happens now to the "tome" that has been developed through the RDA process?

D: The "instructions," as we see them in the RDA documentation, will not be affected. The element vocabulary, the formal vocabulary, will be separated out, and the documentation will point to the formal vocabulary terms. Many users of the documentation will not see the formal element vocabulary and may not know that it exists. The vocabulary, however, will be behind the online tools that are being created. This will make it easier to create a system that allows people to click on a term and get a definition or to see the related hierarchy.

Having the formal vocabulary means that there can be a testbed for the many and complex relationships that are being expressed in RDA, FRBR and FRAD.
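To make the idea of a term that is "both human readable and machine readable" concrete, here is a toy sketch in Python, modeled loosely on how Dublin Core's registered terms pair labels and definitions with RDF/SKOS properties. The URI, labels, and definition below are invented examples for illustration, not actual RDA terms:

```python
# A toy record for one entry in a hypothetical formal element
# vocabulary. All values are invented; real RDA terms would have
# their own URIs and definitions.

term = {
    "uri": "http://example.org/rda/elements/titleProper",
    "label": "title proper",                       # human-readable label
    "definition": "The chief name of a resource.", # human-readable text
    "broader": "http://example.org/rda/elements/title",
}

def to_triples(term):
    """Render the record as subject-predicate-object triples,
    the shape an RDF/SKOS serialization would take."""
    s = term["uri"]
    return [
        (s, "rdfs:label", term["label"]),
        (s, "skos:definition", term["definition"]),
        (s, "skos:broader", term["broader"]),
    ]

for triple in to_triples(term):
    print(triple)
```

The same record can drive both displays: a browser page shows the label and definition, while the triples feed tools that follow `skos:broader` links to show the related hierarchy that Diane describes.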

KC: How is this going to be accomplished? It looks like a lot of work.

D: There will be an effort to find funding to accomplish this, but the work itself will be done with the combined efforts of the RDA and Dublin Core communities. Some of the work, such as the RDA vocabularies, is already begun.

This is nothing short of revolutionary, at least in comparison to where we started in the late 1990s with a revision of AACR2. Imagine a library that is seamlessly integrated with the Semantic Web.... we seem to be on our way.