Friday, October 28, 2011

Open Access Doubts


Science embraces the concept of weakly held strong ideas. This was illustrated by the excited reaction of the High-Energy Physics (HEP) community to a recent experiment. ("Measurement of the neutrino velocity with the OPERA detector in the CNGS beam", arXiv:1109.4897v1) If confirmed, the result would cast doubt on the speed of light as an absolute limit. The relevant paper is available through arXiv, which started as a HEP preprint repository and blazed a trail for Open Access. In light of the origins of the Open Access Movement, let us again be inspired by the HEP community and its willingness to follow experiments, wherever they may lead. Assessing the ongoing Open Access experiment, where are our doubts? I have three.

Is Affordable Better than Free?

All else being equal, open is better than closed. But… all else is not equal. A robust and user-friendly network of open scholarly systems seems farther away than ever because of inexpertly formatted content and bad, incomplete, and non-public (!) metadata. While there is always room for improvement, pay-walled journals provide professionally formatted and organized content with excellent metadata and robust services. The problem is cost. Unfortunately, we did nothing to reduce cost. We only negotiated prices.

What if we could significantly reduce cost by implementing pay walls differently? The root of the problem is site licenses. For details, see “What if Libraries were the Problem?”, “Libraries: Paper Tigers in a Digital World”, “The Fourth Branch Library”, and “The Publisher’s Dilemma”. Site licenses are market-distorting products that preserve paper-era business processes of publishers, aggregators, and libraries.

Universities can cut the Gordian knot right now by replacing site licenses with direct subsidies to researchers. After a few months of chaos, consumer-oriented services with all kinds of pricing models would emerge. Researchers, empowered to make individual price-value judgments, would become consumers in a suddenly competitive market for content and information services. The inception of a vibrant marketplace is impossible as long as universities mindlessly renew site licenses.

What are the Goals of Institutional Repositories?

Open Access advocates have articulated at least five goals for institutional repositories: (1) release hidden information, (2) rein in journal prices, (3) archive an institution’s scholarly record, (4) enable fast research communication, and (5) provide free access to author-formatted articles.

Institutional repositories are ideal vehicles for releasing hidden information that, until recently, had no suitable distribution platform (1). For example, archives must protect original pieces, but they can distribute the digitized content.

The four remaining goals, all related to scholarly journals, are more problematic. Institutional repositories fall short as a mechanism to rein in journal prices (2), because they are not a credible alternative for the current archival scholarly record. Without (2), institutional repositories contribute little toward goals (3), (4), and (5). If we pay for journals anyway, we can achieve (3) by maintaining a database of links to the formal literature. Secure in the knowledge that their journals are not in jeopardy, publishers would be happy to provide (4) and (5).

A scenario consistent with this analysis is unfolding right now. The HEP community launched a rescue mission for HEP journals, which lost much of their role to arXiv. The SCOAP3 initiative pools funds currently spent on site-licensing HEP journals. This strikes me as a heavy-handed approach to protect existing revenue streams of established journals. On the other hand, SCOAP3 protects the quality of the HEP archival scholarly record and converts HEP journals to the open-access model.

Are Open-Access Journals a Form of Vanity Publishing?

If a journal’s scholarly discipline loses influence or if its editorial board lowers its standards, the journal’s standing diminishes and various quality assessments fall. In these circumstances, a pay-walled journal loses subscribers and, eventually, fails. An open-access journal, on the other hand, survives as long as it attracts a sufficient number of paying authors (perhaps by lowering standards even further). The financial viability of a pay wall is a crude measure of quality, but it is nonnegotiable and cannot be rationalized away: when subscribers walk away, the journal fails, its editorial board disbands, its scholarly discipline loses some of its stature, and its authors must publish elsewhere.

We should not overstate this particular advantage of the pay wall. Publishers have kept marginal pay-walled journals alive through bundling and consortium incentives, effectively using strong journals to shore up weak ones. Open-access journals may not be perfect, but we happily ignore some flaws in return for free access to the scholarly record. For now, open-access journals are managed by innovators out to prove a point. Can successive generations maintain quality despite a built-in incentive to the contrary?

Wednesday, October 19, 2011

The Birth of the Open Access Movement


Twelve years ago, on October 21, 1999, Clifford Lynch and Don Waters called to order a meeting in Santa Fe, New Mexico. The organizers, Paul Ginsparg, Rick Luce, and Herbert Van de Sompel, had a modest goal: generalize the High-Energy Physics preprint archive into a Universal Preprint Service available to any scholarly discipline. (Currently known as arXiv and hosted by Cornell University, the HEP preprint archive was then hosted at the Los Alamos National Laboratory.)

This meeting constructed the technical foundation for open access: the Open Archives Initiative and the OAI Protocol for Metadata Harvesting (OAI-PMH). It coined the term repository. (Yes, it was a compromise.) It inspired participants. Some went home and developed OAI-compliant repository software. Some built or expanded institutional and disciplinary repositories. Some started initiatives to raise awareness.
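
For readers who have never seen it in action, OAI-PMH is deliberately simple: plain HTTP requests that return XML, with resumption tokens for paging through large result sets. As a rough illustration only, here is a minimal harvesting sketch in Python using just the standard library; the function name is mine, error handling is omitted, and the arXiv endpoint in the closing comment is shown purely as an example.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest_titles(base_url, metadata_prefix="oai_dc"):
    # Issue ListRecords requests and follow resumption tokens until exhausted.
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        url = base_url + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as response:
            root = ET.fromstring(response.read())
        for record in root.iter(OAI + "record"):
            title = record.find(".//" + DC + "title")
            if title is not None and title.text:
                yield title.text
        token = root.find(".//" + OAI + "resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        # Per the protocol, a resumption request carries only the verb and the token.
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

# Illustrative use (any OAI-PMH base URL works; arXiv's is one example):
# for title in harvest_titles("http://export.arxiv.org/oai2"):
#     print(title)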

At the meeting, there were high-flying discussions on the merits of disciplinary versus institutional repositories. Some argued that disciplinary repositories would be better at attracting content. Others (including me) thought institutional repositories were easier to sustain for the long haul, because costs are distributed. In retrospect, both sides were right and wrong. In the years that followed, even arXiv, our inspirational model, had problems sustaining its funding, but the HEP community rallied to its support. Institutional repositories were funded relatively easily, but they never attracted a satisfactory percentage of research output. (It is too early to tell whether sufficiently strong mandates will be widely adopted.)

There were high hopes for universal free access to the scholarly literature, for open access journals, for lower-priced journals, for access to data, for better research infrastructure. Many of these goals remain high hopes. Yet, none of the unfulfilled dreams can detract from the many significant accomplishments of the Open Access Movement.

Happy Twelfth Birthday to the Open Access Movement!

Friday, September 23, 2011

Information Literacy, Libraries, and Schools

On September 14th, Los Angeles Times columnist Steve Lopez covered the closure and near-closure of libraries in elementary, middle, and high schools. In the best of times, school libraries play second fiddle to issues like improving the student-teacher ratio. In times of crisis like these, they do not stand a chance. A week later, he covered the parents’ reaction.

The parents’ efforts to rescue these libraries are laudable, but they lack vision and ambition. They are merely trying to retain a terrible status quo. A room of books is not the kind of library where primary literacy skills are learned. The school superintendent, John Deasy, has it basically right: primary literacy skills are learned in the classroom. Critical reading, identifying high-quality information, web-research techniques, and knowledge of the specific sources for particular subjects are skills that can be learned only if they are incorporated into every class, every day.

At every level in our society, the response to this terrible economic crisis has been one of incremental retrenchment instead of visionary reinvention. The phrase “don’t let a crisis go to waste” may have a bad image, but it applies in this case. California is the birthplace of information technology, and its schools and their infrastructure should reflect this.

Around the same time as the first column, rumors started circulating that Amazon is planning an electronic library available by monthly subscription. This is a technology and a business model that can provide every student with a custom digital library. It may even save money by eliminating the management and warehousing of print books (including textbooks).

School districts should put out requests for proposals to supply every student with an e-book reader, tablet, or notebook computer that has access to a digital library of books and other resources. Big-name enterprises, such as Amazon, Apple, Barnes & Noble, and Google, would be eager to capture this young demographic. Some philanthropic organizations might be willing to pitch in by buying the rights to some books and putting them in the public domain. A slice of public library funds should be allocated to this digital library.

Traditional school libraries are inadequate. It is time to shelve twentieth century infrastructure and fund the tools students need in the twenty-first century.

Tuesday, September 13, 2011

ETD 2011 and the Library of the Future

This week, the Networked Digital Library of Theses and Dissertations (NDLTD) holds its annual international conference in Cape Town, South Africa. Founded by Prof. Edward Fox of Virginia Tech, NDLTD is dedicated to making theses and dissertations available on the web. NDLTD is an organization where library and academic-computing professionals coordinate their activities and support each other as they develop programs to improve the quality of multimedia theses and the repositories that hold them.

The good news is that universities from across the globe are adopting electronic-theses mandates at an astonishing rate. Right now, over two million theses are available with a few mouse clicks. Check out the VTLS Visualizer or the SCIRUS ETD Search. By making their research available online, universities increase its impact. This is especially important for developing nations, which are in dire need of thinkers who solve local problems and contribute to global knowledge. That makes the location of this year’s NDLTD conference crucially important, from both a practical and a symbolic point of view.

The bad news is that thesis repositories are underfunded. Often, a thesis repository is thought of as just an affordable digital service with a fast payoff in research visibility. In fact, it is much more: it is a paradigm shift for the business of university libraries. Paper-era libraries collect information from around the world to be consumed by their communities. This paradigm is largely obsolete and must be turned upside down. As discussed in a previous blog post, “The Fourth Branch Library”, digital-era libraries should focus on the information produced by their communities, collect it, manage it, and make it widely available. Setting up an electronic thesis repository, helping students and faculty develop best practices, and helping universities through policy issues are exactly the kind of activities at the core of the digital library mission.

Repositories should be funded at a level commensurate with their importance to the future of libraries. We need to redouble our efforts to get out of PDF and into structured text, to enable full-text search, to improve reference linking, and to connect scientific formulas and equations to appropriate software for manipulation. We must capture all data underlying thesis research and make it available in raw form as well as through interactive visualizations. We must standardize when appropriate and allow maximum flexibility when feasible. A lot of work is ahead.
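
As a small illustration of the “out of PDF and into searchable text” goal, here is a rough sketch that extracts the full text of thesis PDFs and builds a searchable index. It is only a sketch: it assumes the third-party pdfminer.six and Whoosh packages, the directory and function names are made up, and a real repository would need far richer structure and metadata.

import os
from glob import glob

from pdfminer.high_level import extract_text  # pdfminer.six
from whoosh import index
from whoosh.fields import ID, TEXT, Schema
from whoosh.qparser import QueryParser

def build_index(pdf_dir="theses", index_dir="etd_index"):
    # One indexed document per PDF: store the path, index the extracted full text.
    os.makedirs(index_dir, exist_ok=True)
    schema = Schema(path=ID(stored=True, unique=True), content=TEXT)
    ix = index.create_in(index_dir, schema)
    writer = ix.writer()
    for pdf_path in glob(os.path.join(pdf_dir, "*.pdf")):
        writer.add_document(path=pdf_path, content=extract_text(pdf_path))
    writer.commit()
    return ix

def search_theses(ix, query_string, limit=10):
    # Return the stored paths of the best-matching theses.
    with ix.searcher() as searcher:
        query = QueryParser("content", ix.schema).parse(query_string)
        return [hit["path"] for hit in searcher.search(query, limit=limit)]

# Illustrative use:
# ix = build_index()
# print(search_theses(ix, "neutrino velocity"))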

I congratulate the organizers of ETD 2011 for putting together a fantastic program. I hope the attendees of ETD 2011 will be inspired to build the foundations for the library of the future.

Sunday, September 4, 2011

The Publisher’s Dilemma

The stinging critique of scholarly publishers by George Monbiot in The Guardian and on his blog describes the symptoms accurately, but misses the diagnosis of the problem. As commercial enterprises, publishers have a duty to their shareholders and to their employees to extract as much value as possible out of the information they own. If you think they should not own the scholarly record, blame the academics who signed over copyright. If you think site licenses for scholarly journals are too expensive, blame universities for continuing to buy into the system. Scholarly publishers are neither evil nor dishonest. They are capitalists exploiting a market they created with the eager participation of academia. Academics and librarians have been whining about the cost of scholarly journals for the last twenty years. One more yammering op-ed piece, or a thousand, will not change a dysfunctional scholarly-information market. Only economically meaningful actions can do that. Change the market, and the capitalists will follow.

By making buying decisions on behalf of a community, libraries eliminate competition between journals and create a distorted market. (See my previous blog post “What if Libraries were the Problem?”) The last twenty years were a chaotic period that included inflating and bursting economic bubbles, the worst financial crisis since the Great Depression, several wars, and unprecedented technological advances in the delivery of information. As one would expect under these conditions, most publishers faced an existential crisis. Amazingly, most scholarly publishers thrived. Is it just a coincidence that their main revenue source is libraries?

Researchers need access to scholarly research. This legitimate need is conflated with the necessity of buying site licenses. A site license merely extends a rigid paper-era business model that ignores the unlimited flexibility of digital information. As digital-music consumers, students and faculty will not even buy an album of ten songs if they are interested in only one or two. Yet, on behalf of this same community, their library subscribes to bundles of journals and joins consortia to buy even bigger bundles of journals. Pay-per-view systems are expensive and painfully slow, particularly when handled through interlibrary loan. This information-delivery system is out of step with current expectations. The recording industry serves as an example of what happens in these circumstances.

It’s time to face the music. (I could not resist.) For an author, the selection of an appropriate journal and/or publisher is crucially important. For a reader, citations and peer recommendations trump journals’ tables of contents, and book reviews trump publishers’ catalogs. I call on publishers to partner with Apple, Amazon, Thomson Reuters (Web of Knowledge), EBSCO, and others to develop convenient and affordable gateways that provide access to any scholarly article or book, from any publisher, whether open or paid access. Such an initiative might eat into site-license revenue, but it just might prevent the system from collapsing and provide a platform for sustainable reader-pays models or hybrid models. Publishers have already hedged their bets with sincere, but timid, open-access initiatives. This is just one additional hedge, just in case...

In fact, I suspect many publishers have mixed feelings about site licenses. They generate high revenue, but they also come with high fixed costs. An extensive sales staff keeps track of thousands of libraries and conducts endless negotiations. Middlemen take a bite out of most proceeds. Every special deal must pass through an internal approval process, taking executives’ time and energy. There are serious technical complications in controlling access to journals covered by site licenses, because publishers must cede authentication processes to libraries and because they have no direct relationship with their readership. Publishers are caught in a vicious circle of increasing costs, more difficult negotiations, more cancellations, and increasing prices. I suspect they want a better system, one in which they can offer more services to more users. Yet, they find it impossible to abandon their only significant business model, even one in danger of collapsing under its own weight.

Change will happen only if universities take economically meaningful actions. Stop buying site licenses, let students and faculty decide their personal information requirements, subsidize them where appropriate, and let the free market run its course. (See my previous blog post “Libraries: Paper Tigers in a Digital World”.) In future blog posts, I intend to discuss methods to subsidize information that are more effective than buying site licenses and gradual approaches to get us there. Just as a thought experiment, consider the following: Cancel all site licenses, and use the savings to lower student tuition and raise faculty salaries. How long would it take for alternative distribution channels to develop? How would prices evolve? How popular would open access be?

In a web-connected world, the role of libraries as intermediaries between information providers and readers is obsolete. As discussed in “The Fourth Branch Library”, libraries should increase their focus on collecting, managing, and broadcasting the information their communities generate. They should not be concerned with the information their communities consume.

Friday, August 26, 2011

Steve Jobs, An Appreciation

Steve Jobs changed our world, and his genius deserves a celebration. He gave us hardware that is a joy to hold in one’s hand, run by software that is a joy to use and wrapped in packaging that is a joy to open. Worldwide joy would be an appropriate expression of gratitude, were it not for his failing health.

As many serious analysts, Apple, and Steve Jobs himself have pointed out, Apple has a very deep bench and a long product pipeline. It has attracted great scientists, engineers, designers, and business people. Even its innovation pipeline is innovative: the App Store will bring innovation to Apple’s doorstep for years to come. For all these reasons, Apple seems prepared for business with Steve Jobs as a “mere” Chairman of the Board now and without Steve Jobs in the future.

All protestations aside, the character of Apple will change. Gradually. It could be for the better. Odds are it will be for the worse. In fact, we have seen this movie before. I remember unwrapping my first programmable calculator in the seventies. It was an HP 45C. You had to feel the perfection of the buttons. The oh-so-logical reverse Polish notation got the job done twice as fast. The quality was nothing short of amazing. My HP 45C was still working flawlessly thirty years later. Then, tucked away in the back of a desk drawer, unused but still loved, a battery leaked. Over time, HP became passionless, produced commodity items, and provided commodity services. Under the disastrous Carly Fiorina, “the HP way” became a four-letter word, product innovation was an empty marketing concept, and financial engineering took center stage.

Yesterday, I went to Best Buy to re-verify something. The bottoms of most Windows notebooks are plastered with stickers and printed text. The keyboards are sold-out advertising venues. Apple notebooks are pristine, except for four lines of barely distinguishable grey text on the aluminum body. How many lawyers, marketing people, committees, suppliers, and others did Steve Jobs have to overrule just for beauty? How come others did not adopt this simple idea?

Business leaders may point to Steve Jobs to justify any control-freak tendency. Without the kind of genius that it takes, they are bound to fail, and they are probably better off in the mediocrity-guaranteeing, committee-driven world of conventional business. I have a suggestion for them. In one fell swoop they can accomplish three things: They can, in their own minds, equate themselves with Steve Jobs. They can satisfy their cravings for control. And they can provide a valuable service: get rid of ugly stickers and ridiculous warning labels.

Monday, August 22, 2011

The Fourth Branch Library

Today’s library is the result of twenty years of incremental changes: an institution buying access to information wholesale and restricting that access retail. As discussed here and here, the wholesale market for information is distorted and creates artificially high site license prices. Another expense is the inordinate amount of time staff spend on usage studies and community outreach to gauge collective information needs, negotiations with consortia that pool resources to obtain imaginary discounts from inflated list prices, negotiations with publishers and their agents, and internal library discussions. After they are acquired, site licenses remain expensive. As protectors of publishers’ digital rights, libraries spend significant resources restricting access at the retail level.

The time for incremental change is over. We must rebuild the library from scratch on a foundation of traditional library values. Here is my attempt.

The mission of the library is to serve the members of its community by:
1. Helping them create high-quality information,
2. Collecting, organizing, and archiving that information, and
3. Making that information widely available, subject to legal and ethical constraints.

This mission is steeped in tradition. Libraries of antiquity were more about secrecy than openness, but their primary purpose was to archive locally produced information. The purest modern implementation of the vision I am proposing is the American presidential library, which collects, manages, and makes available the information from one administration. Public libraries routinely accept manuscripts and personal correspondence of authors and other luminaries. University archives preserve scholarly history. Many academic libraries have implemented various open-access initiatives and have set up databases containing publications of faculty and students (scholarly articles, books, theses, and dissertations).

This mission allows for specialization. Libraries are ideally positioned to add value to information produced by the communities they serve. A public library that serves a particular location may help its constituents with educational programs in information literacy. Other libraries may specialize in particular disciplines and serve communities that are dispersed worldwide. This is particularly the case for data archives, which require deep specialization.

This mission includes nonprofit and for-profit organizations. In this view, publishers are for-profit libraries. As such, they shoulder all the responsibilities of a library, including archiving the information under their purview.

This mission exploits the network effect. Through collaboration, libraries can create a worldwide network of high-quality information that is more than the sum of its parts.

This mission is critically important. We produce an exponentially rising amount of information that is poorly managed and in danger of being irretrievably lost.

For concrete examples, I could point to existing open-access initiatives; Peter Suber’s The Open Access Overview is a good place to start. Most of these initiatives focus on disseminating a community’s information to the world and let the web take care of bringing that information to individual readers. These are great initiatives, but I want to push the limits. I do not want to be boxed in by what is feasible today.

The largest producer of public-domain information is the government. Legal information, legislative records, and official government reports are readily available through established channels. Other government records, however, are more problematic. As a matter of expediency, officials tend to have a bias toward opacity. Impenetrable government records are managed by a hodgepodge of government agencies. The system hides problems ranging from bad judgment to corruption and complicates good governance.

What if we had an independent agency to manage the government’s records? This agency would create the systems to gather this information. It would decide the appropriate level of public access. By imposing standards, it would ensure that government records were machine-readable and discoverable. The infrastructure for such an independent agency is already in place: the public library system at the local, county, state, and federal levels. In its most extreme form, this independent agency could evolve into a fourth branch of government, one dedicated to transparency of the other branches.

As a practical matter, this may be an overreach, and more modest initiatives are more realistic starting points. However, considering the profound impact of digital information on our lives and considering that the information age is here to stay, we are forced to think big.

On the other hand, thinking small comes naturally. The latest innovation of the Los Angeles Public Library sets free the all-important Sony Music catalog, saving Los Angeles residents from the unspeakable burden of $1 song downloads. The Librarian in Black has a detailed critique.
