
Tuesday, June 27, 2023

The University Library: Closing the Book

At the memorial service, the eulogies expressed deep sadness at the loss of a great institution, once a cornerstone of academia. Everyone blamed The Shelfless Revolution for this sad death. In fact, The University Library had been weak for a long time, and it could not survive any shock.

The Transition from Print to Digital

The transition from print to digital was swift, particularly for scholarly journals. In the 1990s, The University Library, publishers, and the middlemen of the supply chain ramped up their IT infrastructure and adapted their business relationships. The switch to digital was achieved quickly and with little interruption.

Print vs. Digital Lending

Lending books and journals, whether print or digital, is a high overhead enterprise. Since print and digital lending involve different kinds of work, it is obvious that their overheads are quantitatively different. It is less obvious and easily ignored that they are qualitatively different: Print overhead is an investment. Digital overhead is waste.

Consider print lending. The overhead builds a valuable collection housed in community-owned real estate. Barring disasters, the value of the collection and the infrastructure increases over time. The cumulative effect is most obvious in old libraries, which are showcases of accumulated treasure.

Contrast this with digital lending. Digital overhead pays for short-term operational expenses to acquire site licenses whose value is zero when they expire. Even infrastructure spending has only short-term benefits. Computing and networking hardware must be replaced every few years. Site-licensed software to manage the digital lending library, like site licenses for content, has zero value upon expiration.

The digital lending library never accumulates value. It contributes nothing to future generations. It only provides services here and now, and it just needs to perform its current responsibilities in a cost-effective manner. Evaluating The University Library as a digital lender boiled down to a few simple questions: Was The University Library a cost-effective negotiator and content provider? Did it provide a user-friendly service? Could others do better?

The Ineffective Negotiator

While other publishers suffered years of disruption and catastrophic downsizing, scholarly publishers thrived throughout the digital revolution and afterwards. Their profit margins remained sky high. Their new business model was even better than the old. By selling site licenses, they retained control of the content forever. New content provided an immediate revenue stream, and accumulated old content ensured an ever increasing future revenue stream.

The University Library was a predictable customer with a budget that kept pace with inflation. Not satisfied with this, publishers increased their prices at a rate well above inflation. Every so often, The University Library and its funders pressed the panic button. This would start a round of negotiations. Librarians were caught between scholars who wanted to maximize content and administrators who wanted to reduce costs. They negotiated with publishers, each of whom had a monopoly over their island of the literature. Predictably, most negotiations ended with some performative cutbacks by The University Library and a few temporary price concessions by the publishers. Then, the cycle started all over again.

The Market Distorter

The University Library distorted the scholarly communication market, merely by being present in it. Normal economic forces did not apply.

To maintain quality, The University Library acquired content from publishers with a track record. The barriers against new unproven publishers created an oligarchy of publishers that kept prices artificially high.

The University Library also eliminated competition between established publishers. Imagine two competing journals, A and B. A survey among the relevant scholars reveals that 60% prefer A, 40% prefer B, and 20%, drawn from both camps, adamantly insist that they need both A and B. The University Library had no choice but to license both journals for all scholars. By erasing individual preferences, it eliminated competition.

For most textbooks, publishers knew well in advance how many copies The University Library would buy. Given this information, publishers inflated their textbook prices to a level where library sales covered their production costs. All other sales were pure profit from a riskless enterprise.

Providing Access

Under the terms of the site licenses, only authorized users were allowed access, and systematic downloading was prohibited. It was the responsibility of The University Library to protect the content against inappropriate users and use. This work on behalf of publishers was a significant part of digital overhead paid for by The University Library.

Aside from being costly, access controls inconvenienced users. Links to content might stop working without notice because of miscommunication between publishers and library systems. When visiting another campus or when changing jobs, scholars had to adapt to new user interfaces. What had been an asset in the print era, a library built for a local community, had become a liability in the digital era.

Personal Digital Libraries

In their personal lives, scholars subscribed to online newspapers and magazines, to movie and music streaming services, and to various social networks where they posted and consumed content. They easily managed these personal subscriptions. What was so different about scholarly subscriptions? What exactly did The University Library do that they could not do themselves faster and more efficiently?

The University Library no longer accumulated long-term value. It was an ineffective negotiator unable to control costs. It blocked competition from new publishers. It eliminated competition between established publishers. It spent considerable overhead to control access on behalf of publishers while inconveniencing users.

The Shelfless Revolution changed all that. Overnight, scholars were in charge of meeting their own information needs. During the initial period of chaos, scholars were forced to subscribe to each journal individually. Publishers quickly adapted by bundling books and journals into various packages. Third-party service providers, working with all publishers, offered custom personal libraries. The undergraduate pre-med student who loved mystery novels and the assistant professor in chemistry who hiked wilderness trails no longer shared the same library. Their competing interests no longer needed to be balanced.

Many journals did not survive the suddenly competitive market. With fewer journals, publishing a paper became more competitive. Over time, the typical scholar published fewer papers of higher quality. With fewer opportunities to publish in classical peer-reviewed journals, scholars had an incentive to create and/or try out new forms of scholarly communication.

Sticky Digital Lending

Looking back, it is difficult to grasp how controversial a step it was to switch to personal libraries.

Before The Shelfless Revolution, academic administrators would have committed career suicide if they proposed such an outrageous idea. The backlash would have been harsh and immediate. The opposition message would have written itself: They are outsourcing The University Library to the publishers who have been extorting the scholarly community for years. This slogan would have had the benefit of being true. The counterargument would have been the idea that publishers lose their price-setting power when scholars make their own individual purchasing decisions. While this is standard capitalist theory, the idea was untested in scholarly communication.

The unlikely university where the faculty approved the outrageous proposal would be mired in endless debate. How should the library subscription budget be divided? How much should go to undergraduate students? to graduate students? to postdocs? to faculty? Should they receive these funds in the form of tuition rebates and salary increases or in the form of university accounts? What would be allowable purchases on such accounts?

No single university could have implemented such a change on its own. Accreditation authorities would have expressed doubts or outright opposition. Publishers would not have changed their business models to accommodate one university. It would have required a large coalition of universities.

It took a catastrophic shock to the system, The Shelfless Revolution, to cut this Gordian knot.


Open Access

Many years before The Shelfless Revolution, a few academics started a project to kickstart a revolution in scholarly communication. As this grew into The Open Access Movement, The University Library was called upon to support some of the infrastructure. Many librarians considered this a promising opportunity for a digital future.

The Open Access Movement coalesced around three goals: provide free access to scholarly works, reduce the cost of scholarly communication, and create innovative forms of scholarly communication.

The first goal was quite successful. Three mechanisms were developed to provide free access to scholarly works: institutional repositories, disciplinary repositories, and open access journals. The University Library was primarily responsible for institutional repositories, which contained author-formatted versions of conventionally published papers, unpublished technical reports, theses and dissertations, data sets, and other scholarly material. Several groups of scholars developed disciplinary repositories to collect works in specific areas of research and make them freely available. Finally, various entities created open access journals, which relied on alternative funding mechanisms and did not charge subscription fees.

The second goal, reducing the cost of scholarly communication, was an utter failure. The Open Access Movement had assumed that making a large part of the scholarly literature available for free would put downward pressure on the price of subscription journals. This assumption was proved wrong. Scholars continued to publish in the same journals. The familiar cycle of site license price increases and performative negotiations continued. Repositories were never a threat. Open-access journals were never competition.

Institutional repositories were particularly valuable for scholarly works that were previously hard to find, such as theses, technical reports, and data sets. For author-formatted papers, they evolved into a costly backup for conventional scholarly publishing. They provided a valuable service for those without access to journals, but most scholars would not risk their research by relying on pre-published, unofficial versions; they required the version of record. Besides, repositories were too cumbersome to use.

Disciplinary repositories were more user friendly, but they needed outside funding. Occasionally, the priorities of the funders would change, and the repository would have to find a new source for funding. Each funding crisis was an opportunity for publishers to buy the repository. To keep the repository under scholars’ control, an interested government agency or philanthropic organization had to step forward every time. To control the repository, publishers had to be lucky just once.

Open access journals just increased the number of scholarly journals. Subscription journals did not suddenly fail because of competing open access journals. At most, subscription journals responded by introducing an open access option. Authors could choose to pay a fee to put their papers outside of the paywall. These authors just trusted publishers not to include these open access papers in the calculation of subscription prices. The publisher’s promise was impossible to verify. This was the level of dysfunction of the scholarly communication market at that time.

The University Library paid ever increasing prices for site licenses and their maintenance. It also paid for the maintenance of institutional repositories. Government and philanthropic funding agencies paid for disciplinary repositories. Scholars used a combination of library funding, research accounts, departmental accounts, and personal resources to pay for open access charges. The scholarly community was spending more than ever on scholarly communication, and no one knew how much.

The Open Access Movement also failed to deliver on its third goal, innovations in scholarly communication. Early stage ventures were too risky for responsible organizations like The University Library. Most ideas failed or remained unexecuted. The Shelfless Revolution changed the environment. Individual scholars in charge of their own budget and confronted with the actual costs of scholarly communication were willing to fund risky but promising experiments.


The Fallout

The Shelfless Revolution killed the digital lending library. This started a chain reaction that affected every service offered by The University Library.

It was immediately obvious that archives had to survive. The print archive was scanned and stored in repositories. In spite of their limitations, repositories became the primary portal into the print archive. Print volumes became museum artifacts virtually untouched by humans. The digital archive mostly contains university-owned scholarly material. Copyright issues created too many obstacles to archive publisher-owned content. New legislative proposals would put the burden on publishers to preserve digital collections of significant cultural, scientific, and/or historical value. This is similar to how we treat protected historical buildings. Publishers will have to store such digital collections in audited standardized archives with government-backed protections against all kinds of calamity.

Print lending died out when most books contained multimedia illustrations and interactive components. Print material of historical importance was moved from the lending library to the nonlending print archive. This killed interlibrary loan services of printed material. Digital interlibrary loans all but disappeared with custom personal libraries.

After losing collection development staff, the reference desk could no longer cover a broad cross-section of scholarly disciplines. It got caught in a downward spiral of decreasing usefulness and declining use.

Long ago, librarians controlled what information was readily available. As technology advanced, their gatekeeping power evaporated. They still nudged publishers towards quality using the power of the purse. This too is now gone. The battle against disinformation seems lost. The profound political differences on where fighting disinformation ends and censorship begins are nowhere near being resolved.

After censorship campaigns wreaked havoc on public school libraries, The University Library braced itself against similar attempts. Before it could engage in that fight, The Shelfless Revolution happened. The switch to personal digital libraries reduced the political heat as universities no longer directly paid for controversial content. Censorship lost the battle, but The University Library lost the war.

Thousands of library projects got caught in the turmoil. Some survived by being moved to other organizations. Most did not. We will never know how much destruction was caused by The Shelfless Revolution.

Conclusion

The University Library made all the right moves. It embraced new technology. It executed the transition from print to digital without major disruption. It was open to new opportunities.

Yet, things went wrong. Open access repositories were supposed to be subversive weapons. Open access journals were supposed to be deadly competitors. Instead, they turned out to be paper tigers, powerless against the oligarchy of the scholarly communication market.

Publishers of newspapers, magazines, music, and video barely survived the disruptive transition to digital. As they rebuilt their businesses from the ruins, they developed business models for the new reality. In contrast, the smooth transition of the scholarly communication market protected existing organizations. It also perpetuated the flaws of old business models, and it let the distorted market grow more dysfunctional every day.

With the benefit of hindsight, the necessary changes could have been implemented more humanely. This was never a realistic option, however. The chaotic and disruptive change of The Shelfless Revolution was inevitable.





#scholcomm #AcademicTwitter #ScienceTwitter #scicomm

Monday, March 31, 2014

Creative Problems

The open-access requirement for Electronic Theses and Dissertations (ETDs) should be a no-brainer. At virtually every university in the world, there is a centuries-old public component to the doctoral-degree requirement. With digital technology, that public component is implemented more efficiently and effectively. Yet, a small number of faculty fight the idea of Open Access for ETDs. The latest salvo came from Jennifer Sinor, an associate professor of English at Utah State University.
[One Size Doesn't Fit All, Jennifer Sinor, The Chronicle of Higher Education, March 24, 2014]

According to Sinor, Creative Writing departments are different and should be exempted from open-access requirements. She illustrates her objection to Open Access ETDs with an example of a student who submitted a novel as his master's thesis. He was shocked when he found out his work was being sold online by a third party. Furthermore, according to Sinor, the mere existence of the open-access thesis makes it impossible for that student to pursue a conventional publishing deal.


Sinor offers a solution to these problems, which she calls a middle path: Theses should continue to be printed, stored in libraries, accessible through interlibrary loan, and never digitized without the author's approval. Does anyone really think it is a common-sense middle path of moderation and reasonableness to pretend that the digital revolution never happened?

Our response could be brief. We could just observe that it does not matter whether or not Sinor's Luddite approach is tenable, and it does not matter whether or not her arguments hold water. Society will not stop changing because a small group of people pretend reality does not apply to them. Reality will, eventually, take over. Nevertheless, let us examine her arguments.

Multiyear embargoes are a routine part of Open Access policies for ETDs. I do not know of a single exception. After a web search that took less than a minute, I found the ETD policy of Sinor's own institution. The second and third sentences of USU's ETD policy read as follows [ETD Forms and Policy, DigitalCommons@usu.edu]:
“However, USU recognizes that in some rare situations, release of a dissertation/thesis may need to be delayed. For these situations, USU provides the option of embargoing (i.e. delaying release) of a dissertation or thesis for five years after graduation, with an option to extend indefinitely.”
How much clearer can this policy be?

The student in question expressly allowed third parties to sell his work by leaving a checkbox unchecked in a web form. Sinor excuses the student for his naïveté. However, anyone who hopes to make a living from creative writing in a web-connected world should have advanced knowledge of the business of selling one's works, of copyright law, and of publishing agreements. Does Sinor imply that a master's-level student in her department never had any exposure to these issues? If so, that is an inexcusable oversight in the department's curriculum.

This leads us to Sinor's final argument: that conventional publishers will not consider works that are also available as Open Access ETDs. This has been thoroughly studied and debunked. See:
"Do Open Access Electronic Theses and Dissertations Diminish Publishing Opportunities in the Social Sciences and Humanities?" Marisa L. Ramirez, Joan T. Dalton, Gail McMillan, Max Read, and Nan Seamans. College & Research Libraries, July 2013, 74:368-380.

This should put to rest the most pressing issues. Yet, for those who cannot shake the feeling that Open Access robs students of an opportunity to monetize their work, there is another way out of the quandary. It is within the power of any Creative Writing department to solve the issue once and for all.

All university departments have two distinct missions: to teach a craft and to advance scholarship in their discipline. As a rule of thumb, the teaching of craft dominates up to the masters-degree level. The advancement of scholarship, which goes beyond accepted craft and into the new and experimental, takes over at the doctoral level.

When submitting a novel (or a play, a script, or a collection of poetry) as a thesis, the student exhibits his or her mastery of craft. This is appropriate for a master's thesis. However, when Creative Writing departments accept novels as doctoral theses, they put craft ahead of scholarship. It is difficult to see how any novel by itself advances the scholarship of Creative Writing.

The writer of an experimental masterpiece should have some original insights into his or her craft. Isn't it the role of universities to reward those insights? Wouldn't it make sense to award the PhD, not based on a writing sample, but based on a companion work that advances the scholarship of Creative Writing? Such a thesis would fit naturally within the open-access ecosystem of other scholarly disciplines without compromising the work itself in any way.

This is analogous to any number of scientific disciplines, where students develop equipment or software or a new chemical compound. The thesis is a description of the work and the ideas behind it. After a reasonable embargo to allow for patent applications, any such thesis may be made Open Access without compromising the commercial value of the work at the heart of the research.

A policy that is successful for most may fail for some. Some disciplines may be so fundamentally different that they need special processes. Yet, Open Access is merely the logical extension of long-held traditional academic values. If this small step presents such a big problem for one department and not for others, it may be time to re-examine existing practices at that department. Perhaps, the Open Access challenge is an opportunity to change for the better.

Monday, March 17, 2014

Textbook Economics

“The impact of royalties on a book's price, and its sales, is greater than you think. Lower royalties often end up better for the author.” That was the publisher's pitch when I asked him about the details of the proposed publishing contract. Then, he explained how he prices textbooks.

It was the early 1990s. I had been teaching a course on Concurrent Scientific Computing, a hot topic then, and several publishers had approached me about writing a textbook. This was an opportunity to structure a pile of course notes. Eventually, I would sign on with a different publisher, a choice that had nothing to do with royalties or book prices. [Concurrent Scientific Computing, Van de Velde E., Springer-Verlag New York, Inc., New York, NY, 1994.]

He explained that a royalty of 10% increases the price by more than 10%. To be mathematical about it: With a royalty rate r, a target revenue per book C, and a retail price P, we have that C = P-rP (retail price minus royalties). Therefore, P = C/(1-r). With a target revenue per book of $100, royalties of 10%, 15%, and 20% lead to retail prices of $111.11, $117.65, and $125.00, respectively.

In a moment of candor, he also revealed something far more interesting: how he sets the target revenue C. Say the first printing of 5000 copies requires an up-front investment of $100,000. (All numbers are for illustrative purposes only.) This includes the cost of editing, copy-editing, formatting, cover design, printing, binding, and administrative overhead. Estimating library sales at 1000 copies, this publisher would set C at $100,000/1,000 = $100. In other words, he recovered his up-front investment from libraries. Retail sales were pure profit.
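The two steps of the publisher's pricing logic can be sketched in a few lines of code. This is a minimal illustration using the article's own numbers; the function names are mine, and all figures are, as the text says, for illustrative purposes only.

```python
def retail_price(target_revenue: float, royalty_rate: float) -> float:
    """Retail price P such that P - r*P = C, i.e. P = C / (1 - r)."""
    return target_revenue / (1 - royalty_rate)

# Step 1: set the target revenue per book C by recovering the entire
# up-front investment from predictable library sales.
up_front_investment = 100_000   # editing, printing, binding, overhead, ...
library_sales = 1_000           # reliably estimated copies sold to libraries
C = up_front_investment / library_sales   # target revenue per book: $100

# Step 2: mark the price up for royalties. Note that a 10% royalty
# raises the price by more than 10% (1/0.9 ≈ 1.111).
for r in (0.10, 0.15, 0.20):
    print(f"royalty {r:.0%}: retail price ${retail_price(C, r):.2f}")
```

Running this reproduces the prices in the text: $111.11, $117.65, and $125.00.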

The details are, no doubt, more complicated. Yet, even without relying on a recollection of an old conversation, it is safe to assume that publishers use the captive library market to reduce their business risk. In spite of increasingly recurrent crises, library budgets remain fairly predictable, both in size and in how the money is spent. Any major publisher has reliable advance estimates of library sales for any given book, particularly if published as part of a well-known series. It is just good business to exploit that predictability.

The market should be vastly different now, but textbooks have remained stuck in the paper era longer than other publications. Moreover, the first stage of the move towards digital, predictably, consists of replicating the paper world. This is what all constituents want: Librarians want to keep lending books. Researchers and students like getting free access to quality books. Textbook publishers do not want to lose the risk-reducing revenue stream from libraries. As a result, everyone implements the status quo in digital form. Publishers produce digital books and rent their collections to libraries through site licenses. Libraries intermediate electronic-lending transactions. Users get the paper experience in digital form. Universities pay for site licenses and the maintenance of the digital-lending platforms.

After the disaster of site licenses for scholarly journals, repeating the same mistake with books seems silly. Once again, take-it-or-leave-it bundles force institutions into a false choice between buying too much for everyone or nothing at all. Once again, site licenses eliminate the unlimited flexibility of digital information. Forget about putting together a personal collection tailored to your own requirements. Forget about pricing per series, per book, per chapter, unlimited in time, one-day access, one-hour access, readable on any device, or tied to a particular device. All of these options are eliminated to maintain the business models and the intermediaries of the paper era.

Just by buying/renting books as soon as they are published, libraries indirectly pay for a significant fraction of the initial investment of producing textbooks. If libraries made that initial investment explicitly and directly, they could produce those same books and set them free. Instead of renting digital books (and their multimedia successors), libraries could fund authors to write books and contract with publishers to publish those manuscripts as open-access works. Authors would be compensated. Publishers would compete for library funds as service providers. Publishers would be free to pursue the conventional pay-for-access publishing model, just not with library dollars. Prospective authors would have a choice: compete for library funding to produce an open-access work or compete for a publishing contract to produce a pay-for-access work.

The Carnegie model of libraries fused together two distinct objectives: subsidize information and disseminate information by distributing books to many different locations. In web-connected communities, spending precious resources on dissemination is a waste. Inserting libraries in digital-lending transactions only makes those transactions more inconvenient. Moreover, it requires expensive-to-develop-and-maintain technology. By reallocating these resources towards subsidizing information, libraries could set information free without spending part of their budget on reducing publishers' business risk. The fundamental budget questions that remain are: Which information should be subsidized? What is the most effective way to subsidize information?

Libraries need not suddenly stop site licensing books tomorrow. In fact, they should take a gradual approach, test the concept, make mistakes, and learn from them. A library does not become a grant sponsor and/or publisher overnight. Several models are already available: from grant competition to crowd-funded ungluing. [Unglue.it for Libraries] By phasing out site licenses, any library can create budgetary space for sponsoring open-access works.

Libraries have a digital future with almost unlimited opportunities. Yet, they will miss out if they just rebuild themselves as a digital copy of the paper era.

Wednesday, January 1, 2014

Market Capitalism and Open Access

Is it feasible to create a self-regulating market for Open Access (OA) journals where competition for money is aligned with the quest for scholarly excellence?

Many proponents of the subscription model argue that a competitive market provides the best assurance for quality. This ignores that the relationship between a strong subscription base and scholarly excellence is tenuous at best. What if we created a market that rewards journals when a university makes its most tangible commitment to scholarly excellence?

While the role of journals in actual scholarly communication has diminished, their role in academic career advancement remains as strong as ever. [Paul Krugman: The Facebooking of Economics] The scholarly-journal infrastructure streamlines the screening, comparing, and short-listing of candidates. It enables the gathering of quantitative evidence in support of the hiring decision. Without journals, the workload of search committees would skyrocket. If scholarly journals are the headhunters of the academic-job market, let us compensate them as such.

There are many ways to structure such compensation, but we only need one example to clarify the concept. Consider the following scenario:

  • The new hire submitted a bibliography of 100 papers.
  • The search committee selected 10 of those papers to argue the case in favor of the appointment. This subset consists of 6 papers in subscription journals, 3 papers in the OA journal Theoretical Approaches to Theory (TAT), and 1 paper in the OA journal Practical Applications of Practice (PAP).
  • The university's journal budget is 1% of its budget for faculty salaries. (In reality, that percentage would be much lower.)

Divide the new faculty member's share of the journal budget, 1% of his or her salary, into three portions:

  • (6/10) x 1% = 0.6% of salary to subscription journals,
  • (3/10) x 1% = 0.3% of salary to the journal TAT, and
  • (1/10) x 1% = 0.1% of salary to the journal PAP.

The first portion (0.6%) remains in the journal budget to pay for subscriptions. The second (0.3%) and third (0.1%) portion are, respectively, awarded yearly to the OA journals TAT and PAP. The university adjusts the reward formula every time a promotion committee determines a new list of best papers.
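The reward formula above is simple enough to express directly. Here is a minimal sketch, assuming a hypothetical salary of $100,000 and the 1% journal-budget rate from the scenario; the function and variable names are mine, not part of any proposed standard.

```python
from collections import Counter

def headhunting_rewards(selected_papers, salary, budget_rate=0.01):
    """Split the new hire's share of the journal budget across the
    journals that published the search committee's selected papers.
    Each journal's reward is proportional to its paper count."""
    counts = Counter(selected_papers)
    share = salary * budget_rate          # the hire's slice of the budget
    total = len(selected_papers)
    return {journal: share * n / total for journal, n in counts.items()}

# The scenario from the text: 10 selected papers.
papers = ["subscription"] * 6 + ["TAT"] * 3 + ["PAP"]
rewards = headhunting_rewards(papers, salary=100_000)
# With a $100,000 salary: subscription $600, TAT $300, PAP $100 per year.
```

The subscription portion stays in the journal budget; the TAT and PAP portions are paid out yearly as rewards, and the dictionary is recomputed whenever a promotion committee selects a new list of best papers.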

To move beyond a voluntary system, universities should give headhunting rewards only to those journals with which they have a contractual relationship. Some Gold OA journals are already pursuing institutional-membership deals that eliminate or reduce article processing charges (APCs). [BioMed Central] [PeerJ] [SpringerOpen] Such memberships are a form of discounting for quantity. Instead, we propose a pay-for-performance contract that eliminates APCs in exchange for headhunting rewards. Before signing such a contract, a university would conduct a due-diligence investigation into the journal. It would assess the publisher's reputation, the journal's editorial board, its refereeing, editing, formatting, and archiving standards, its OA licensing practices, and its level of participation in various abstracting-and-indexing and content-mining services. This step would all but eliminate predatory journals.

Every headhunting reward would enhance the prestige (and the bottom line) of a journal. A reward citing a paper would be a significant recognition of that paper. Such citations might be even more valuable than citations in other papers, thereby creating a strong incentive for institutions to participate in the headhunting system. Nonparticipating institutions would miss out on publicly recognizing the work of their faculty, and their faculty would have to pay APCs. There is no Open Access free ride.

Headhunting rewards create little to no extra work for search committees. Academic libraries are more than capable of performing due diligence, negotiating the contracts, and administering the rewards. Our scenario assumed a base percentage of 1%. The actual percentage would be negotiated between universities and publishers. With rewards proportional to salaries, there is a built-in adjustment for inflation, for financial differences between institutions and countries, and for differences in the sizes of various scholarly disciplines.

Scholars retain the right to publish in the venue of their choice. The business models of journals are used when distributing rewards, but this occurs well after the search process has concluded. The headhunting rewards gradually reduce the subscription budget in proportion to the number of papers published in OA journals by the university's faculty. A scholar who wishes to support a brand-new journal should not pay APCs, but lobby his or her university to negotiate a performance-based headhunting contract.

The essence of this proposal is the performance-based contract that exchanges APCs for headhunting rewards. All other details are up for discussion. Every university would be free to develop its own specific performance criteria and reward structures. Over time, we would probably want to converge towards a standard contract.

Headhunting contracts create a competitive market for OA journals. In this market, the distributed and collective wisdom of search/promotion committees defines scholarly excellence and provides the monetary rewards to journals. As a side benefit, this free-market system creates a professionally managed open infrastructure for the scholarly archive.

Monday, December 16, 2013

Beall's Rant

Jeffrey Beall of Beall's list of predatory scholarly publishers recently made some strident arguments against Open Access (OA) in the journal tripleC (ironically, an OA journal). Beall's comments are part of a non-refereed section dedicated to a discussion on OA.

Michael Eisen takes down Beall's opinion piece paragraph by paragraph. Stevan Harnad responds to the highlights/lowlights. Roy Tennant has a short piece on Beall in The Digital Shift.

Beall takes a distinctly political approach in his attack on OA:
“The OA movement is an anti-corporatist movement that wants to deny the freedom of the press to companies it disagrees with.”
“It is an anti-corporatist, oppressive and negative movement, [...]”
“[...] a neo-colonial attempt to cast scholarly communication policy according to the aspirations of a cliquish minority of European collectivists.”
“[...] mandates set and enforced by an onerous cadre of Soros-funded European autocrats.”
This is the rhetorical style of American extremist right-wing politics that casts every problem as a false choice between freedom and – take your pick – communism or totalitarianism or colonialism or slavery or... European collectivists like George Soros (who became a billionaire by being a free-market capitalist).

For those of us more comfortable with technocratic arguments, politics is not particularly welcome. Yet, we cannot avoid the fact that the OA movement is trying to reform a large socio-economic system. It would be naïve to think that that can be done without political ideology playing a role. But is it really too much to ask to avoid the lowest level of political debate, politics by name-calling?

The system of subscription journals has an internal free-market logic to it that no proposed or existing OA system has been able to replace. In a perfect world, the subscription system uses an economic market to assess the quality of editorial boards and the level of interest in a particular field. Economic viability acts as a referee of sorts, a market-based minimum standard. Some editorial boards deserve the axe for doing poor work. Some fields of study deserve to go out of business for lack of interest. New editorial boards and new fields of study deserve an opportunity to compete. Most of us prefer that these decisions are made by the collective and distributed wisdom of free-market mechanisms.

Unfortunately, the current scholarly-communication marketplace is far from a free market. Journals hardly compete directly with one another. Site licenses perpetuate a paper-era business model that forces universities to buy all content for 100% of the campus community, even those journals that are relevant only to a sliver of the community. Site licenses limit competition between journals, because end users never get to make the price/value trade-offs critical to a functional free market. The Big Deal exacerbates the problem. Far from providing a service, as Beall contends, the Big Deal gives big publishers a platform to launch new journals without competition. Consortial deals are not discounts; they introduce peer networks to make it more difficult to cancel existing subscriptions. [What if Libraries were the Problem?] [Libraries: Paper Tigers in a Digital World]

If Beall believes in the free market, he should support competition from new methods of dissemination, alternative assessment techniques, and new journal business models. Instead, he seems to be motivated more by a desire to hold onto his disrupted job description:
“Now the realm of scholarly communication is being removed from libraries, and a crisis has settled in. Money flows from authors to publishers rather than from libraries to publishers. We've disintermediated libraries and now find that scholarly system isn't working very well.”
In fact, it is the site-license model that reduced the academic library to the easy-to-disintermediate dead-end role of subscription manager. [Where the Puck won't Be] Most librarians are apprehensive about the changes taking place, but they also realize that they must re-interpret traditional library values in light of new technology to ensure long-term survival of their institution.

Thus far, scholarly publishing has been the only type of publishing not disrupted by the Internet. In his seminal work on disruption [The Innovator's Dilemma], Clayton Christensen characterizes the defenders of the status quo in disrupted industries. Like Beall, they are blinded by traditional quality measures, dismiss and/or denigrate innovations, and retreat into a defense of the status quo.

Students, researchers, and the general public deserve a high-quality scholarly-communication system that satisfies basic minimum technological requirements of the 21st century. [Peter Murray-Rust, Why does scholarly publishing give me so much technical grief?] In the last 20 years of the modern Internet, we have witnessed innovation after innovation. Yet, scholarly publishing is still tied to the paper-imitating PDF format and to paper-era business models.

Open Access may not be the only answer [Open Access Doubts], but it may very well be the opportunity that this crisis has to offer. [Annealing the Library] In American political terms, Green Open Access is a public option. It provides free access to author-formatted versions of papers. Thereby, it serves the general public and the scholarly poor. It also serves researchers by providing a platform for experimentation without having to go through onerous access negotiations (for text mining, for example). It also serves as an additional disruptive trigger for free-market reform of the scholarly market. Gold Open Access in all its forms (from PLOS to PeerJ) is a set of business models that deserve a chance to compete on price and quality.

The choice is not between one free-market option and a plot of European collectivists. The real choice is whether to protect a functionally inadequate system or whether to foster an environment of innovation.

Tuesday, May 21, 2013

Turow vs Everyone

According to celebrated author, lawyer, and president of the Authors Guild Scott Turow, the legal and technological erosion of copyright endangers writers. (New York Times, April 7th, 2013) His enemy list is conspiratorial in length and breadth. It includes the Supreme Court, publishers, search engines, the HathiTrust, Google, academics, libraries, and Amazon. Nevertheless, Turow makes compelling arguments that deserve scrutiny.

The Supreme Court decision on re-importation. (Kirtsaeng v. John Wiley & Sons, Inc.)
This 6-3 decision merely reaffirmed the first sale doctrine. It is highly unlikely that this will significantly affect book prices in the US. If it does, any US losses will be offset by price increases in foreign markets. More importantly, the impact will be negligible because paper books will soon be a niche market in the US.

Publishers restrict royalties on e-books.
Publishers who manage the technology shift by making minor business adjustments, such as transferring costs to authors, libraries, and consumers, underestimate the nature of current changes. Traditional publishers built their business when disseminating information was difficult. Once they built their dissemination channels, making money was relatively easy. In our current world, building dissemination channels is easy and cheap. Making money is difficult. Authors may need new partners who built their business in the current environment; there are some in his list of enemies.

Search engines make money by referring users to pirate sites.
Turow has a legitimate moral argument. However, politicizing search engines by censoring search results is as wrong as it is ineffective. Pirate sites also spread through social networks. Cutting off pirate sites from advertising networks, while effective, is difficult to achieve across international borders and requires unacceptable controls on information exchange. iTunes and its competitors have shown it is possible to compete with pirate sites by providing a convenient user interface, speed, reliability, quality, and protection against computer viruses.

The HathiTrust and Google scanned books without authorization.
HathiTrust and Google were careless. Authors and publishers were rigid. Experimentation gave way to litigation.

Some academics want to curtail copyright.
Scholarly publishers like Elsevier have profit margins that exceed 30%. Yet, Turow claims that “For many academics today, their own copyrights hold little financial value because scholarly publishing has grown so unprofitable.”

Academics' research is often funded in part by government, and it is always supported by universities. Universities have always been committed to research openness, and they use published research as means for assessment. This is why academics forego royalties when they publish research. The concept of research openness is changing, and many academics are lobbying for the idea that research should be freely available to all. The idea of Open Access was recently embraced by the White House. Open Access applies only to researchers funded by the government and/or employed by participating universities and research labs. It only covers research papers, not books. It does not apply to independent authors. Open Access does not curtail copyright.

Legal academics like Prof. Lawrence Lessig have argued for stricter limits on traditional copyright and alternative copyrights. Pressured by industry lobbyists, Congress has repeatedly increased the length of copyright. If this trend continues, recent works may never enter into the public domain. Legislation must balance authors' intellectual property rights and everyone's (including authors') freedom to produce derivative works, commentaries, parodies, etc.

Amazon patents a scheme to re-sell used e-books.
This patent is a misguided attempt to monetize the human frailty of carrying familiar concepts from old technology senselessly into the new. It is hardly the stuff that made this forward-looking company formidable.

Libraries expand paper lending into digital lending.
Turow demands more money from libraries for digital lending privileges. He is too modest; he should demand their whole budget.

When a paper-based library acquires a book, it permanently increases the value of its collection. This cumulative effect over many years created the world's great collections. When a community spends resources on a digital-lending library, it rents information from publishers and provides a fleeting service for only as long as the licenses last. When the license ends, the information disappears. There is no cumulative effect. That digital-lending library only adds overhead. It will never own or contribute new information. It is an empty shell.

Digital lending is popular with the public. It gives librarians the opportunity to transition gradually into digital space. It continues the libraries' billion-dollar money stream to publishers. Digital lending has a political constituency, but it does not stand up to rational scrutiny. Like Amazon's scheme to resell used e-books, digital-lending programs are desperate attempts to hang on to something that simulates the status quo.

Lending is the wrong paradigm for the digital age. Instead, libraries should use their budgets to accumulate quality open-access information. They should sponsor qualified authors to produce open-access works of interest to the communities they serve. This would give authors a choice. They could either produce their work commercially behind a pay wall, or they could produce library-funded open-access works.

Friday, June 29, 2012

On Becoming Unglued...

On June 20th, the e-book world changed: One innovation cut through the fog of the discussions on copyright, digital rights management (DRM), and various other real and perceived problems of digital books. It did not take a revolution, angry protests, lobbying of politicians, or changes in copyright law. All it took was a simple idea, and the talent and determination to implement it.

Gluejar is a company that pays authors for the digital rights to their books. When it acquires those rights, Gluejar produces the e-book and makes it available under a suitable open-access license. Gluejar calls this process the ungluing of the book.

Handing out money, while satisfying, is not much of a business model. So, Gluejar provides a platform for the necessary fundraising. When proposing to unglue a book, an author sets a price level for the digital rights, and the public is invited to donate as little or as much as they see fit. If the price level is met, the pledged funds are collected from the sponsors, and the book is unglued.
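The all-or-nothing mechanics described above can be sketched in a few lines (a simplification with hypothetical names; Gluejar's actual platform also handles deadlines, payment processing, and rights transfer):

```python
def campaign_outcome(target, pledges):
    """All-or-nothing ungluing campaign: pledges are collected only
    if the author's target price is met."""
    total = sum(pledges)
    if total >= target:
        # Collect the pledges; the book is unglued under an open license.
        return "unglued", total
    # Target not met: no money changes hands.
    return "not funded", 0
```

For example, `campaign_outcome(7500, [5000, 2000, 600])` meets the target and returns `("unglued", 7600)`, while the same campaign without the last pledge returns `("not funded", 0)`.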

Why would the public contribute? First and foremost, this is small-scale philanthropy: the sponsors pay an author to provide a public benefit. The ever increasing term of copyright, now 70 years beyond the death of the author, has long been a sore point for many of us. Here is a perfectly valid free-market mechanism to release important works from their copyright shackles, while still compensating authors fairly. Book readers who devote a portion of their book-buying budget to ungluing build a lasting free public electronic library that can be enjoyed by everyone.

The first ungluing campaign, “Oral Literature In Africa” by Ruth H. Finnegan (Oxford University Press, 1970), raised the requisite $7,500 by its June 20th deadline. Among the 271 donors, there were many librarians. Interestingly, two libraries contributed as institutions: the University of Alberta Library and the University of Windsor Leddy Library. The number of participating institutions is small, but any early institutional recognition is an encouraging leading indicator.

I hope these pioneers will now form a friendly network of lobbyists for the idea that all libraries contribute a portion of their book budget to ungluing books. I propose a modest target: within one year, every library should set aside 1% of its book budget for ungluing. This is large enough to create a significant (distributed) fund, yet small enough not to have a negative impact on operations, even in these tough times. Encourage your library to try it out now by contributing to any of four open campaigns. Once they see it in action and participate, they'll be hooked.

Special recognition should go to Eric Hellman, the founder of Gluejar. I have known Eric many years and worked with him when we were both on the NISO Committee that produced the OpenURL standard. Eric has always been an innovator. With Gluejar, he is changing the world... one book at a time.

Thursday, June 21, 2012

The PeerJ Disruption


The Open Access movement is not ambitious enough. That is the implicit message of the PeerJ announcement.

PeerJ distills a journal to what it really is: a social network. For a relatively small lifetime membership fee ($99 to $249 depending on the level an author chooses), authors get access to the social network, whose mission it is to disseminate and archive scholarly work. The concept is brilliant. It cuts through the clutter. Anyone who has ever published a paper understands it immediately. It makes sense.

The idea seems valid, but how can they execute it with membership fees that are so low? When I see this level of price discrepancy between a new and an old product, I recall the words of the Victorian-era critic John Ruskin:

“It is unwise to pay too much, but it’s worse to pay too little. When you pay too much, you lose a little money — that’s all. When you pay too little, you sometimes lose everything, because the thing you bought is incapable of doing the thing it was bought to do.”
“There is hardly anything in the world which someone can’t make a little worse and sell a little cheaper — and people who consider price alone are this man’s lawful prey.”

On the other hand, we have lived through fifty years of one disruptive idea after another proving John Ruskin wrong. Does the PeerJ team have a disruptive idea up their sleeve to make a quality product possible at the price level they propose?

In one announcement, the PeerJ founders state that “publication fees of zero were the thing we should ultimately aim for”. They hint at how they plan to publish the scholarly literature at virtually no cost:

“As a result, PeerJ plans to introduce additional products and services down the line, all of which will be aligned with the goals of the community that we serve. We will be introducing new and innovative B2B revenue streams as well as exploring the possibility of optional author or reader services working in conjunction with the community.”

In the age of Facebook, Flickr, Tumblr, LinkedIn, Google Plus etc., we all know there is value in the social network and in services built on top of content. The question is whether PeerJ has found the key to unlocking that value in the case of the persnickety academic social network.

For now, all we have to go on is the PeerJ team's credibility, which they have in abundance. For an introduction to the team and insight on how it might all work, read Bora Zivkovic's blog. Clearly, this team understands scholarly publishing and has successfully executed business plans. The benefit of the doubt goes to them. I can't wait to see the results.

I wish them great success.

PS: Peter Murray-Rust just posted a blog post enthusiastically supporting the PeerJ concept.

Tuesday, June 5, 2012

The Day After


On Sunday, the Open Access petition to the White House reached the critical number of 25,000 signatures: President Obama will take a stand on the issue. Yesterday was Open Access Monday, a time to celebrate an important milestone. Today is a time for libraries to reflect on their new role in a post-site-licensed world.

Imagine success beyond all expectations: The President endorses Open Access. There is bipartisan support in Congress. Open Access to government-sponsored research is enacted. The proposal seeks only Green Open Access: the deposit in an open repository of scholarly articles that are also conventionally published. With similar legislation being enacted world-wide, imagine all scholarly publishers deciding that the best way forward for them is to convert all journals to the Gold Open Access model. In this model, authors or their institutions pay publishing costs up front to publish scholarly articles under an open license.

Virtually overnight, universal Open Access is a reality.

9:00am

When converting to Gold Open Access, publishers replace site-license revenue with author-paid page charges. They use data from the old business model to estimate revenue-neutral page charges. The estimate is a bit rough, but as long as scholars keep publishing at the same rate and in the same journals as before, the initial revenue from page charges should be comparable to that from site licenses. Eventually, the market will settle around a price point influenced by the real costs of open-access publishing, by publishing behavior of scholars who must pay to get published, and by publishers deciding to get in or get out of the scholarly-information market.
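The revenue-neutral estimate is simple division. A sketch with hypothetical figures (neither number appears in the post):

```python
def revenue_neutral_page_charge(site_license_revenue, articles_per_year):
    """Rough per-article charge that keeps a publisher's revenue flat,
    assuming scholars keep publishing at the same rate as before."""
    return site_license_revenue / articles_per_year

# e.g. $12M/year in site-license revenue spread over 4,000 articles/year
charge = revenue_neutral_page_charge(12_000_000, 4_000)  # $3,000 per article
```

As the post notes, this starting point would then drift toward a market price shaped by real publishing costs and by authors who now see the charge directly.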

10:00am

Universities re-allocate the libraries' site-license budgets and create accounts to pay for author page charges. Most universities assign the management of these accounts to academic departments, which are in the best position to monitor expenses charged by faculty.

11:00am

Publishers make their library-facing sales teams redundant. They cancel vendor exhibits at library conferences. They terminate all agreements with journal aggregators and other intermediaries between libraries and publishers.

12:00pm

Libraries eliminate electronic resource management, which includes everything involved in the acquisition and maintenance of site licenses. No more tracking of site licenses. No more OpenURL servers. No more proxy servers. No more cataloging electronic journals. No more maintaining databases of journals licensed by the library.

1:00pm

For publishers, the editorial boards and the authors they attract are more important than ever. These scholars have always created the core product from which publishers derived their revenue streams. Now, these same scholars, not intermediaries like libraries and journal aggregators, are the direct source of the revenue. Publishers expand the marketing teams that target faculty and students. They also strengthen the teams that develop editorial boards.

2:00pm

Publishers' research portals like Elsevier's Scopus start incorporating full-text scholarly output from all of their competitors.

Scholarly societies provide specialized digital libraries for every niche imaginable.

Some researchers develop research tools that data mine the open scholarly literature. They create startup ventures and commercialize these tools.

Google Scholar and Microsoft Academic Search each announce comprehensive academic search engines that have indexed the full text of the available open scholarly literature.

3:00pm

While some journal aggregators go out of business, others retool and develop researcher-oriented products.

ISI's Web of Knowledge, EBSCO, OCLC, and others create research portals catering to individual researchers. Of course, these new portals incorporate full-text papers, not just abstracts or catalog records.

Overnight, full-text scholarly search turns into a competitive market. Developing viable business models proves difficult, because juggernauts Google and Microsoft are able to provide excellent search services for free. Strategic alliances are formed.

4:00pm

No longer tied to their institutions' libraries by site licenses, researchers use whichever is the best research portal for each particular purpose. Web sites of academic libraries experience a steep drop-off in usage. The number of interlibrary loan requests tumbles: only requests for nondigital archival works remain.

5:00pm

Libraries lose funding for those institutional repositories that duplicate scholarly research available through Gold Open Access. Faculty are no longer interested in contributing to these repositories, and university administrators do not want to pay for this duplication.

Moral

By just about any measure, this outcome would be far superior to the current state of scholarly publishing. Scholars, researchers, professionals in any discipline, students, businesses, and the general population would benefit from access to original scholarship unfettered by pay walls. The economic benefit of commercializing research faster would be immense. Tuition increases may not be as steep because of savings in the library budget.

If librarians fear a steadily diminishing role for academic libraries (and they should), they must make a compelling value proposition for the post-site-licensed world now. The only choice available is to be disruptive or to be disrupted. The no-disruption option is not available. Libraries can learn from Harvard Business School Professor Clayton M. Christensen, who has analyzed scores of disrupted industries. They can learn from the edX project or Udacity, major initiatives of large-scale online teaching. These projects are designed to disrupt the business model of the very institutions that incubated them. But if they succeed, they will be the disrupting force. Those on the sidelines will be the disrupted victims.

Libraries have organized or participated in Open Access discussions, meetings, negotiations, petitions, boycotts... Voluntary submission to institutional repositories has proven insufficient. Enforced open-access mandates are a significant improvement. Yet, open-access mandates are not a destination. They are, at most, a strategy for creating change. The current scholarly communication system, even if complemented with open repositories that cover 100% of the scholarly literature, is hopelessly out of step with current technology and society.

In the words of Andy Grove, former chairman and chief executive officer of Intel: “To understand a company’s strategy, look at what they actually do rather than what they say they will do.” Ultimately, only actions that involve significant budget reallocations are truly credible. As long as pay walls are the dominant item in library budgets, libraries retain the organizational structure appropriate for a site-licensed world. As long as pay-wall management dominates the libraries' day-to-day operations, libraries hire, develop, and promote talent for a site-licensed world. This is a recipe for success for only one scenario: the status quo.

Friday, April 27, 2012

Annealing the Library: Follow up


Here are responses to some of the off-line reactions to the previous blog.


-

“Annealing the Library” did not contain any statements about abandoning paper books (or journals). Each library needs to assess the value of paper for its community. This value assessment is different from one library to the next and from one collection to the next.

The main point of the post is that the end of paper acquisitions should NOT be the beginning of digital licenses. E-lending is not an adequate substitute for paper-based lending. E-lending is not a long-term investment. Libraries will not remain relevant institutions by being middlemen in digital-lending operations.

I neglected to concede the point that licensing digital content could be a temporary bandaid during the transition from paper to digital.

-

In the case of academic libraries, the bandaid of site licensing scholarly journals is long past its expiration date. It is time to phase out of the system.

If the University of California and California State University jointly announced a cancellation of all site licenses over the next three to five years, the impact would be felt immediately. The combination of the UC and Cal State systems is so big that publishers would need to take immediate and drastic actions. Some closed-access publishers would convert to open access. Others would start pricing their products appropriate for the individual-subscription market. Some publishers might not survive. Start-up companies would find a market primed to accept innovative models.

Unfortunately, most universities are too small to have this kind of immediate impact. This means that some coordinated action is necessary. This is not a boycott. There are no demands to be met. It is the creation of a new market for open-access information. It is entirely up to the publishers to decide how to respond. There is no need for negotiations. All it takes is the gradual cancellation of all site licenses at a critical mass of institutions.

-

Annealing the Library does not contradict an earlier blog post, in which I expressed three Open Access Doubts. (1) I expressed disappointment in the quality of existing Open Access repositories. The Annealing proposal pumps a lot of capital into Open Access, which should improve quality. (2) I doubted the long-term effectiveness of institutional repositories in bringing down the total cost of access to scholarly information. Over time, the Annealing proposal eliminates duplication between institutional repositories and the scholarly literature, and it invests heavily into Open Access. (3) I wondered whether open-access journals are sufficiently incentivized to maintain quality over the long term. This doubt remains. Predatory open-access journals without discernible quality standards are popping up right and left. This is an alarming trend to serious open-access innovators. We urgently need a mechanism to identify and eliminate underperforming open-access journals.

-

If libraries cut off subsidies to pay-walled information, some information will be out of reach. By phasing in the proposed changes gradually, temporary disruption of access to some resources will be minimal. After the new policies take full effect, they will create many new beneficiaries, open up many existing information resources, and create new open resources.


Tuesday, April 17, 2012

Annealing the Library


The path of least resistance and least trouble is a mental rut already made. It requires troublesome work to undertake the alternation of old beliefs.
John Dewey

What if a public library could fund a blogger of urban architecture to cover in detail all proceedings of the city planning department? What if it could fund a local historian to write an open-access history of the town? What if school libraries could fund teachers to develop open-access courseware? What if libraries could buy the digital rights of copyrighted works and set them free? What if the funds were available right now?

Unfortunately, by not making decisions, libraries everywhere merely continue to do what they have always done, but digitally. The switch from paper-based to digital lending is well under way. Most academic libraries already converted to digital lending for virtually all scholarly journals. Scores of digital-lending services are expanding digital lending to books, music, movies, and other materials. These services let libraries pretend that they are running a digital library, and they can do so without disrupting existing business processes. Publishers and content distributors keep their piece of the library pie. The libraries' customers obtain legal free access to quality content. The path of least resistance feels good and buries the cost of lost opportunity under blissful ignorance.

The value propositions of paper-based and digital lending are fundamentally different. A paper-based library builds permanent infrastructure: collections, buildings, and catalogs are assets that continue to pay dividends far into the future. In contrast, resources spent on digital lending are pure overhead. This includes staff time spent on negotiating licenses, development and maintenance of authentication systems, OpenURL, proxy, and web servers, and the software development to give a unified interface to disparate systems of content distributors. (Some expenses are hidden in higher fees for the Integrated Library System.) These expenses do not build permanent infrastructure and merely increase the cost of every transaction.

Do libraries add value to the process? If so, do libraries add value in excess of their overhead costs? In fact, library-mediated lending is more cumbersome and expensive than direct-to-consumer lending, because content distributors must incorporate library business processes in their lending systems. If the only real value of the library's meddling is to subsidize the transactions, why not give the money to users directly? These are tough questions, and they deserve answers.

Libraries cannot remain relevant institutions by being meaningless middlemen who serve no purpose. Libraries around the world are working on many exciting digital projects. These include digitization projects and the development of open archives for all kinds of content. Check out this example. Unfortunately, projects like these will be underfunded or cannot grow to scale as long as libraries remain preoccupied with digital lending.

Libraries need a different vision for their digital future, one that focuses on building digital infrastructure. We must preserve traditional library values, not traditional library institutions, processes, and services. The core of any vision must be long-term preservation of and universal open access to important information. Yet, we also recognize that some information is a commercial commodity, governed by economic markets. Libraries have never covered all information needs of everyone. Yet, independent libraries serving their respective communities and working together have established a great track record of filling global information needs. This decentralized model is worth preserving.

Some information, like most popular music and movies, is obviously commercial and should be governed by copyright, licenses, and prices established by the free market. Other information, like many government records, belongs either in the public domain or should be governed by an open license (Creative Commons, for example). Most information falls somewhere in between, with passionate advocates on both sides of the argument for every segment of the information market. Therefore, let us decentralize the issue and give every creator a real choice.

By gradually converting acquisition budgets into grant budgets, libraries could become open-access patrons. They could organize grant competitions for the production of open-access works. By sponsoring works and creators that further the goals of its community, each library contributes to a permanent open-access digital library for everyone. Publishers would have a role in the development of grant proposals that cover all stages of the production and marketing of the work. In addition to producing the open-access works, publishers could develop commercial added-value services. Finally, innovative markets like the one developed by Gluejar allow libraries (and others) to acquire the digital rights of commercial works and set them free.

The traditional commercial model will remain available, of course. Some authors may not find sponsors. Others may produce works of such potential commercial value that open access is not a realistic option. These authors are free to sell their work with any copyright restrictions deemed necessary. They are free to charge what the market will bear. However, they should not be able to double-dip. There is no need to subsidize closed-access works when open access is funded at the level proposed here. Libraries may refer customers to closed-access works, but they should not subsidize access. Over time, the cumulative effect of committing every library budget to open access would create a world-changing true public digital library.

Other writers have argued the case against library-mediated digital lending. No one is making the case in its favor. The path of least resistance does not need arguments. It just goes with the flow. Into oblivion.

Friday, March 16, 2012

Annealing Elsevier

Through a bipartisan pair of shills, Elsevier introduced a bill that would have abolished the NIH open-access mandate and prevented other government research-funding agencies from requiring open access to government-sponsored research. In this Research Works Act (RWA) episode, Elsevier showed its hand. Twice. When it pushed for this legislation, and when it withdrew.

Elsevier was one of the first major publishers to support green open access. By pushing RWA, Elsevier confirmed the suspicion that this support is, at most, a short-term tactic to appease the scholarly community. Its real strategy is now in plain sight. RWA was not introduced on a whim. Elsevier cultivated at least two members of the House of Representatives and their staff. Just to get the bill out of committee, it would have needed several more. No one involved could possibly have thought RWA would sneak through without anyone noticing. Yet, after an outcry from the scholarly community, Elsevier dropped the legislation just as suddenly as it had introduced it. If Elsevier executives had a strategy, it is in tatters.

Elsevier’s RWA move and its subsequent retrenchment have more than a whiff of desperation. I forgive your snickering at this suggestion. After all, by its own accounting, Elsevier’s adjusted operating margin for 2010 was 35.7% and has been growing monotonically at least since 2006. These are not trend lines of a desperate company. (Create your own Elsevier reports here. Thanks to Nalini Joshi, @monsoon0, for tweeting the link and the graph!)

Paradoxically, its past success is a problem going forward. Elsevier’s stock-market shares are priced to reflect the company’s consistently high profitability. If that profitability were to deteriorate, even by a fraction, share prices would tumble. To prevent that, Elsevier must raise revenue from a client base of universities that face at least several more years of extremely challenging budgets. For universities, the combination of price increases and budget cuts puts options on the table once thought unthinkable. Consider, for example, the University of California and the California State University systems. These systems have already cut to the bone, and they may face even more dire cuts, unless voters approve a package of tax increases. Because of their size, these two university systems by themselves have a measurable impact on Elsevier’s bottom line. The same story repeats across the country and around the world.

Clearly, RWA was intended to make cancelling site licenses a less viable option for universities, now and in the future. It is an unfortunate fact that, when asked to deposit their publications in institutional repositories, most scholars ignore their own institutions. They cannot ignore their funding agencies. Over time, funder-mandated repositories will become a fairly comprehensive compilation of the scholarly record. They may also erode the prestige factor of journals. After all, what is more prestigious? That two anonymous referees and an editor approved the paper or that the NIH funded it to the tune of a few million dollars? Advanced web-usage statistics of the open-access literature may further erode the value of impact factor and other conventional measures. Recently, I expressed some doubts that the open access movement could contribute to reining in journal prices. I may rethink some of this doubt, particularly with respect to funder-mandated open access.

Elsevier’s quick withdrawal from RWA is quite remarkable. Tim Gowers was uniquely effective, and deserves a lot of credit. When planning for RWA, Elsevier must have anticipated significant push back from the scholarly community. It has experience with boycotts and protests, as it has survived several. Clearly, the size and vehemence of the reaction was way beyond Elsevier's expectations. One can only speculate how many of its editors were willing to walk away over this issue.

Long ago, publishers figured out how to avoid becoming a low-profit commodity-service business: they put themselves at the hub of a system that establishes a scholarly pecking order. As beneficiaries of this system, current academic leaders and the tenured professoriate assign great value to it. Given the option, they would want everything the same, except cheaper, more open, without restrictive copyrights, and available for data mining. Of course, it is absurd to think that one could completely overhaul scholarly publishing by tweaking the system around the edges and without disrupting scholars themselves. Scholarly publishers survived the web revolution without disruption, because scholars did not want to be disrupted. That has changed.

Because of ongoing budget crises, desperate universities are cutting programs previously considered untouchable. To the dismay of scholars everywhere, radical options are on the table as a matter of routine. Yet, in this environment, publishers like Elsevier are chasing revenue increases. Desperation and anger are creating a unique moment. In Simulated Annealing terms (see a previous blog post): there is a lot of heat in the system, enabling big moves in search of a new global minimum.

Disruption: If not now, when?


Wednesday, February 22, 2012

Annealing the Information Market




When analyzing complex systems, applied mathematicians often turn to Monte Carlo simulations. The concept is straightforward. Change the state of the system by making a random move. If the new state is an improvement, keep it and make the next random move in a direction suggested by extrapolation. Otherwise, discard it and make a random move in a different direction. Repeat until the quantity of interest is optimized.
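The loop described above can be sketched in a few lines. This is a toy one-dimensional version, not code from any real library; the function name, step size, and iteration count are all illustrative choices:

```python
import random

def monte_carlo_minimize(f, x0, step=0.5, iters=5000, seed=0):
    """Minimize f by repeated random moves, keeping only improvements."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    x, fx = x0, f(x0)
    direction = 1.0  # last direction that worked; bias new moves toward it
    for _ in range(iters):
        candidate = x + direction * abs(rng.gauss(0.0, step))
        fc = f(candidate)
        if fc < fx:
            # Improvement: keep the new state and keep moving the same way.
            x, fx = candidate, fc
        else:
            # No improvement: discard the move and try a different direction.
            direction = -direction
    return x, fx

# A smooth bowl with its minimum at x = 3: the simple loop finds it easily.
x_min, f_min = monte_carlo_minimize(lambda x: (x - 3.0) ** 2, x0=-10.0)
print(x_min, f_min)
```

With a single smooth valley, moves biased toward the last successful direction are all the strategy one needs; the next section shows why this breaks down when there are many valleys.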

A commodity market is a real-life concurrent Monte Carlo system. Market participants make sequences of moves. Each new move is random, though it incorporates experience gained from previous moves. The resulting system is a remarkably effective mechanism to produce commodities at the lowest possible cost while adjusting to changing market conditions. Adam Smith called it the invisible hand of the free market.

In severely disrupted markets, the invisible hand may take an unacceptably long time, because Monte Carlo systems may remain stuck in local minima. We may understand this point by visualizing a mountain range with many peaks and valleys. An observer inside one particular valley thinks the lowest point is somewhere on that valley’s floor. He is unaware of other valleys at lower altitudes. To see these, he must climb to the rim of the valley, far away from the observed local minimum. This takes a very long time with small random steps that are biased in favor of going towards the observed local minimum.

For this reason, Monte Carlo simulations use strategies that incorporate large random moves. One such strategy, Simulated Annealing, is inspired by a metallurgical technique that improves the crystallographic structure of metals. During the annealing process, the metal is heated and cooled in a controlled fashion. The heat provides energy to change large-scale crystal structures in the metal. As the metal cools, restructuring occurs only at gradually smaller scales. In Simulated Annealing, the simulation is run “hot” when large random moves are used to optimize the system at coarse granularity. When sufficiently near a global minimum, the system is “cooled”, and smaller moves are used for precision at fine granularity. Note that, from a Monte Carlo perspective, large moves are just as random as small moves. Each individual move may succeed or fail. What matters is the strategy that guides the sequence of moves.
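A minimal Simulated Annealing sketch makes the heating-and-cooling schedule concrete. This is an illustration under assumed parameters (starting temperature, cooling rate, test function), not a production implementation; the “mountain range” here is a rippled bowl whose global minimum sits at x = 0, surrounded by many local valleys that would trap the small-step search above:

```python
import math
import random

def simulated_annealing(f, x0, step=1.0, t_start=10.0, t_end=1e-3,
                        cooling=0.95, iters_per_temp=100):
    """Minimize f with a simple simulated-annealing loop.

    While the temperature t is high, moves are large and uphill moves are
    often accepted, letting the search jump between valleys. As t cools,
    moves shrink and the search settles into the best valley found.
    """
    rng = random.Random(42)  # fixed seed for reproducibility
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    t = t_start
    while t > t_end:
        for _ in range(iters_per_temp):
            # Move size scales with temperature: hot = large, cold = small.
            candidate = x + rng.gauss(0.0, step * t)
            fc = f(candidate)
            delta = fc - fx
            # Always accept improvements; accept uphill moves with
            # probability exp(-delta / t) (the Metropolis criterion).
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                x, fx = candidate, fc
                if fx < best_fx:
                    best_x, best_fx = x, fx
        t *= cooling  # geometric cooling schedule
    return best_x, best_fx

# A "mountain range": many local minima, global minimum at x = 0.
def rugged(x):
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

best_x, best_fx = simulated_annealing(rugged, x0=7.5)
print(best_x, best_fx)
```

Each individual move is still random and may fail; what the temperature schedule controls is the strategy across the whole sequence, exactly as the paragraph above describes.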

When major market disruptions occur, resistance to change breaks down and large moves become possible. (The market runs “hot” in the Simulated Annealing sense.) Sometimes, government leaders or tycoons of industry initiate large moves, because they believe, rightly or wrongly, that they can take the market to a new global minimum. Politicians enact new laws, or they orchestrate bailouts. Tycoons make large bets that are risky by conventional measures. Sometimes, unforeseen circumstances force markets into making large moves.

The music industry experienced such an event in late 1999, when Napster, the illegal music-sharing site, suddenly became popular. Eventually, this disruption enabled then-revolutionary business models like iTunes, which could compete with illegal downloading. This stopped the hemorrhaging, though not without leaving a disastrous trail. Traditional music retailers, distributors, and other middlemen were forced out. Revenue streams never recovered. With the Stop Online Piracy Act (SOPA), the music industry, joined by the entertainment industry, was trying to undo some of the damage. If enacted, it would have caused significant collateral damage, but it would have done nothing to reduce piracy. This is covered widely in the blogosphere. For example, consider blog posts by Eric Hellman [1] [2] and David Post [3].

While SOPA is dead, other attempts at antipiracy legislation are in the works. Some may even be enacted. In the end, however, heavy-handed legislation will fail. The evolution towards ubiquitous information availability (pirated or not) is irreversible. Even the cruelest of dictators cannot contain the flow of information. Why would anyone think democracies could? Eventually, laws follow society’s major trends. They always do.

When Napster became popular, the music industry was unable to fight back, because its existing distribution channels had become technologically obsolete. Napster was the large random move that made visible a new valley at lower altitude. Without Napster, some other event, circumstance, or product would eventually have come along, caused havoc, and been blamed. Antipiracy legislation might have delayed the music industry’s problems in 1999, but it will not solve the entertainment industry’s problems in 2012.

In the new market, piracy may no longer be the problem it once was. Consumers are willing to pay for convenience, quality of service, and security (absence of malware). Piracy may still depress revenues, but there are at least three other reasons for declining revenues. (1) Revenues no longer support many middlemen, and this is reflected in lower music prices through free-market competition. (2) Some consumers are interested in discovering new artists themselves, not in listening to artists discovered on their behalf by record labels. (3) The recession has reduced discretionary income.

It is difficult to assess the relative importance of disintermediation, behavior change, recession, and piracy. But the effect of piracy on legal downloads is probably much smaller than commonly thought. This may be good news for the music industry. After many large and disruptive moves, the music market may be near a new global minimum. Here, it can rebuild and find new profit-making ventures with the kind of conventional “small” moves suited to a normal, non-disrupted market.

Other information markets are not that lucky.