Tuesday, June 27, 2017
Forward to the Past
What will academic libraries look like in 2050?
In the early days of the web, librarians had to fight back against the notion that libraries would soon be obsolete. They had solid arguments. Information literacy would become more important. Archiving and managing information would become more difficult. In fact, academic libraries saw an opportunity to increase their role on campus. This opportunity did not materialize. Libraries remain stuck in a horseless-carriage era. They added an IT department. They made digital copies of existing paper services. They continued their existing business relationships with publishers and various intermediaries. They ignored the lessons of the web-connected knowledge economy, where thriving organizations create virtuous cycles of abundance by solving hard problems: better solutions attract more users, more revenue, more content, and more expertise, which in turn produce better solutions.
Academic libraries seem incapable of escaping commodity-service purgatory, even when tackling their most ambitious projects. They are eager to manage data archives, but the paper-archive model produces an undifferentiated commodity preservation service. A more appropriate model would be the US National Virtual Astronomical Observatory, where preservation is a happy side effect of extracting maximum research out of existing data. Data archives should be centers of excellence. They focus on a specific field. They are operated by researchers who keep abreast of the latest developments, who adapt data sets to evolving best practices, who make data sets interoperable, who search for inconsistencies between different studies, who detect, flag, and correct errors, and who develop increasingly sophisticated services.
No university can take a center-of-excellence approach to data archiving for every field in which it is active. No archive serving just one university can grow to a sufficiently large scale for excellence. Each field has different needs. How many centers does the field need? How should centers divide the work? What are their long-term missions? Who should manage them? Where are the sustainable sources for funding? Libraries cannot answer these questions. Only researchers have the required expertise and the appropriate academic, professional, and governmental organizations for the decision-making process.
Looking back over the past twenty years, all development of digital library services has been limited by the institutional nature of academic libraries, which receive limited funding to provide limited information and limited services to a limited community. As a consequence, every major component of the digital library is flawed, and none has the foundation to rise to excellence.
General-purpose institutional repositories did not live up to their promise. [Let IR RIP] The center-of-excellence approach of disciplinary repositories, like ArXiv or PubMed, performed better in spite of less stable funding. Geographical distance between repository managers and scholars did not matter. Disciplinary proximity did.
Once upon a time, the catalog was the search engine. Today, it tells whether a printed item is checked out and where it is shelved. It is useless for digital information. It is often not even the best option for finding information about print material. The catalog, bloated into an integrated library system, wastes resources that should be redirected towards innovation.
Libraries provide access to their site licenses through journal databases, OpenURL servers, and proxy servers. They pay for this expensive system so publishers can perpetuate a business model that eliminates competition, is rife with conflict of interest, and can impose almost unlimited price increases. Scholars should be able to subscribe to personal libraries as they do for their infotainment. [Hitler, Mother Teresa, and Coke] [Where the Puck won't be] [Annealing the Library] [What if Libraries were the Problem?]
In the paper era, the interlibrary-loan department was the gateway to the world's information. Today, it is mostly a buying agent for costly pay-per-view access to papers not covered by site licenses. Personal libraries would eliminate these requests. Digitization and open access can eliminate requests for out-of-copyright material.
Why is there no scholarly app store, where students and faculty can build their own libraries? By replacing site licenses with app-store subsidies, universities would create a competitive marketplace for subscription journals, open-access journals, experimental publishing platforms, and other scholarly services. A library making an institutional decision must be responsible and safe. One scholar deciding where to publish a paper, whether to cancel a journal, or which citation database to use can take a risk with minimal consequence. This new dynamic would kickstart innovation. [Creative Destruction by Social Network]
Libraries seem safe from disruption for now. There are no senior academics sufficiently masochistic to advocate this kind of change. There are none who are powerful enough to implement it. However, libraries that have become middlemen for outsourced mediocre information services are losing advocates within the upper echelons of academic administrations every day. The costs of site licenses, author page charges, and obsolete services are effectively cutting the innovation budget. Unable to attract or retain innovators, stagnating libraries will just muddle through while digital services bleed out. When some services fall apart, others become collateral damage. The print collection will shrink until it is a paper archive of rare and special items locked in a vault.
Postscript: I intended to write about transforming libraries into centers of excellence. This fell apart in the writing. I hesitated. I rewrote. I reconsidered. I started over again.
If I am right, libraries are on the wrong track, and there is no better track. Libraries cannot possibly remain relevant by replicating the same digital services on every campus. There is a legitimate need for advanced information services supported by centers of excellence. However, it is easier to build new centers from scratch than to transform libraries tied up in institutional straitjackets.
Perhaps, paper-era managers moved too slowly and missed the opportunity that seemed so obvious twenty years ago. Perhaps, that opportunity was just a mirage. Whatever the reason, rank-and-file library staff will be the unwitting victims.
Perhaps, I am wrong. Perhaps, academic libraries will carve out a meaningful digital future. If they do, it will be by taking big risks. The conventional options have been exhausted.
Sunday, July 24, 2016
Let IR RIP
The Institutional Repository (IR) is obsolete. Its flawed foundation cannot be repaired. The IR must be phased out and replaced with viable alternatives.
Lack of enthusiasm. The number of IRs has grown because of a few motivated faculty and administrators. After twenty years of promoting IRs, there is no grassroots support. Scholars submit papers to an IR because they have to, not because they want to. Too few IR users become recruiters. There is no network effect.
Local management. At most institutions, the IR is created to support an Open Access (OA) mandate. As part of the necessary approval and consensus-building processes, various administrative and faculty committees impose local rules and exemptions. After launch, the IR is managed by an academic library accountable only to current faculty. Local concerns dominate those of the worldwide community of potential users.
Poor usability. Access, copy, reuse, and data-mining rights are overly restrictive or left unstated. Content consists of a mishmash of formats. The resulting federation of IRs is useless for serious research. Even the most basic queries cannot be implemented reliably. National repositories (like PubMed) and disciplinary repositories (like ArXiv) eliminate local idiosyncrasies and are far more useful. IRs were supposed to duplicate their success, while spreading the financial burden and immunizing the system against adverse political decisions. The sacrifice in usability is too high a price to pay.
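To make the usability point concrete: the plumbing for federated queries exists (OAI-PMH, the harvesting protocol behind repository federation), but the metadata it returns does not support reliable queries. Below is a minimal harvesting sketch, assuming Python 3 with the third-party requests package; the arXiv base URL is the commonly documented one, and any repository endpoint could be substituted. The protocol is the easy part; the inconsistent rights statements and formats it returns are what make serious cross-repository research impractical.

```python
# Minimal OAI-PMH harvest sketch (assumption: Python 3 with 'requests' installed).
import requests
import xml.etree.ElementTree as ET

# Assumed endpoint: arXiv's publicly documented OAI-PMH base URL.
BASE_URL = "http://export.arxiv.org/oai2"
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def list_records(base_url, metadata_prefix="oai_dc"):
    """Yield (title, rights) pairs from one page of a ListRecords response."""
    response = requests.get(
        base_url,
        params={"verb": "ListRecords", "metadataPrefix": metadata_prefix},
        timeout=30,
    )
    response.raise_for_status()
    root = ET.fromstring(response.content)
    for record in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
        title = record.find(".//dc:title", NS)
        rights = record.find(".//dc:rights", NS)
        # Rights fields are frequently missing or free text, which is exactly
        # why queries across a federation of repositories are unreliable.
        yield (
            title.text if title is not None else "(no title)",
            rights.text if rights is not None else "(rights unstated)",
        )

if __name__ == "__main__":
    for title, rights in list_records(BASE_URL):
        print(title, "|", rights)
```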
Low use. Digital information improves with use. Unused, it remains stuck in obsolete formats. After extended non-use, recovering information requires a digital version of archaeology. Every user of a digital archive participates in its crowd-sourced quality control. Every access is an opportunity to discover, report, and repair problems. To succeed at its archival mission, a digital archive must be an essential research tool that all scholars need every day.
High cost. Once upon a time, the IR was a cheap experiment. Today's professionally managed IR costs far too much for its limited functionality.
Fragmented control. Over the course of their careers, most scholars are affiliated with several institutions. It is unreasonable to distribute a scholar's work according to where it was produced. At best, it is inconvenient to maintain multiple accounts. At worst, it creates long-term chaos to comply with the different and conflicting policies of institutions with which one is no longer affiliated. In a cloud-computing world, scholars should manage their own personal repositories, and archives should manage the repositories of scholars who are no longer willing or able to do so.
Social interaction. Research is a social endeavor. [Creating Knowledge] Let us be inspired by the titans of the network effect: Facebook, Twitter, Instagram, Snapchat, etc. Encourage scholars to build their personal repository in a social-network context. Disciplinary repositories like ArXiv and SSRN can expand their social-network services. Social networks like Academia.edu, Mendeley, Zotero, and Figshare have the capability to implement and/or expand IR-like services.
Distorted market. Academic libraries are unlikely to spend money on services that compete with IRs. Ventures that bypass libraries must offer their services for free. In desperation, some have pursued (and dropped) controversial alternative methods of monetizing their services. [Scholars Criticize Academia.edu Proposal to Charge Authors for Recommendations]
Many academics are suspicious of any commercial interests in scholarly communication. Blaming publishers for the scholarly-journal crisis, they conveniently forget their own contribution to the dysfunction. Willing academics, with enthusiastic help from publishers, launch ever more journals. [Hitler, Mother Teresa, and Coke] They also pressure libraries to site license "their" journals, giving publishers a strong negotiating position. Without library-paid site licenses, academics would have flocked to alternative publishing models, and publishers would have embraced alternative subscription plans like an iTunes for scholarly papers. [Where the Puck won't be] [What if Libraries were the Problem?] Universities and/or governments must change how they fund scholarly communication to eliminate the marketplace distortions that preserve the status quo, protect publishers, and stifle innovation. In a truly open market of individual subscriptions, start-up ventures would thrive.
I believed in IRs. I advocated for IRs.
After participating in the First Meeting of the Open Archives Initiative (1999, Santa Fe, New Mexico), I started a project that would evolve into Caltech CODA. [The Birth of the Open Access Movement] We encouraged, then required, electronic theses. We captured preprints and historical documents. [E-Journals: Do-It-Yourself Publishing]
I was convinced IRs would disrupt scholarly communication. I was wrong. All High Energy Physics (HEP) papers are available in ArXiv. Being a disciplinary repository, ArXiv functions like an idealized version of a federation of IRs. It changed scholarly communication for the better by speeding up dissemination and improving social interaction, but it did not disrupt. On the contrary, HEP scholars organized what amounted to an authoritarian takeover of the HEP scholarly-journal marketplace. While ensuring open access to all HEP research, this takeover also cemented the status quo for the foreseeable future. [A Physics Experiment]
The IR is not equivalent to Green Open Access. The IR is only one possible implementation of Green OA. With the IR at a dead end, Green OA must pivot towards alternatives that have viable paths forward: personal repositories, disciplinary repositories, social networks, and innovative combinations of all three.
*Edited 7/26/2016 to correct formatting errors.
Tuesday, January 20, 2015
Creating Knowledge
Every scholar is part wizard, part muggle.
As wizards, scholars are lone geniuses in search of original insight. They question everything. They ignore conventional wisdom and tradition. They experiment.
As muggles, scholars are subject to the normal rules of power and influence. They are limited by common sense and group think. They are ambitious. They promote and market their ideas. They have the perfect elevator pitch ready for every potential funder of research. They connect their research to hot fields. They climb the social ladder in professional societies. As muggles, they know that the lone voice is probably wrong.
The sad fate of the wizards is that their discoveries, no matter how significant, are not knowledge until accepted by the muggles.
Einstein stood on the shoulders of giants: he needed all of the science that preceded him. First, he needed it to develop special relativity theory. Then, he needed it as a starting point from which to lead the physics community on an intellectual journey. Without that base of prior shared knowledge, the community would not have followed.
As a social construct, knowledge moves at a speed limited by the wisdom of the crowd. The real process by which scholarly research moves from the world of the wizard into the world of muggles is murky, complicated, long-winded, and ambiguous. Despising these properties, muggles created a clear and straightforward substitute: the peer-review process.
When only a small number of distinguished scholarly bodies published journals, publishing signaled that the research was widely accepted as valid and important. Today, thousands of scholarly groups and commercial entities publish as many as 28,000 scholarly journals, and publishing no longer functions as a serious proxy for wide acceptance.
Most journals are created when some researchers believe established journals ignore or do not sufficiently support a new field of inquiry. New journals give new fields the time and space to grow and to prove themselves. They also reduce the size of the referee pool. They avoid generalists critical of the new field. Gradually, peer review becomes a process in which likeminded colleagues distribute stamps of approval to each other.
Publishers thrive by amplifying scholarly fractures and by creating scholarly islands. As discussed in previous blog posts, normal free-market principles do not apply to the scholarly-journal market. [What if Libraries were the Problem?] Without an effective method to kill off journals, their number and size keep increasing. Unfortunately, the damage to universities and to scholarship far exceeds the cost of journals.
Niche fields use their success in the scholarly-communication market to acquire departmental status, making the scholarly fracture permanent. The economic crisis may have stopped or reversed the trend of ever more specialized, smaller, university departments, but the increased cost structure inherited from the boom years lingers. Creating a new department should be an exceptional event. Universities went overboard, influenced and pressured by commercial interests.
As a quality-control system, the scholarly-communication system should be conservative and skeptical. As a communication system, it should give exposure to new ideas and give them a chance to develop. By simultaneously pursuing two contradictory goals, scholarly journals have become ineffective at both. They are too specialized to be credible validators. They are too slow and bureaucratic for growing new ideas.
Journals survive because universities use them for assessment. Not surprisingly, scholarly papers solidly reside in muggle world. Too many papers are written by Very Serious Intellectuals (VSIs) for VSIs. Too many papers are written in self-aggrandizing pompous prose, loaded with countless footnotes. Too many papers are written to flatter VSIs with too many irrelevant references. Too many papers are written to puff up a tidbit of incremental information. Too many papers are written. Too few papers detail negative results or offer serious critique, because that only makes enemies.
When given the opportunity, scholarly authors produce awe-inspiring presentations. The edutainment universe of TED Talks may not be an appropriate forum for the daily grunt work of the scholar, but is it really too much to ask that the scholarly-communication system let the wizardry shine through?
Universities claim to be society's engines of innovation. They have preached the virtues of creative destruction brought on by technological innovation. Yet, the wizards of the ivory tower resist minor change as much as the muggles of the world.
Open Access is catalyzing reform on the business side of the scholarly-communication system. Will Open Access be enough to push universities into experimentation on the scholarly side?
That is an Open question.
Monday, June 30, 2014
Disruption Disrupted?
The professor who books his flights online, reserves lodging with Airbnb, and arranges airport transportation with Uber understands the disruption of the travel industry. He actively supports that disruption every time he attends a conference. When MOOCs threaten his job, when The Economist covers reinventing the university and titles it "Creative Destruction," that same professor may have second thoughts. With or without disruption, academia surely is in a period of immense change. There is the pressure to reduce costs and tuition, the looming growth of MOOCs, the turmoil in scholarly communication (subscription prices, open access, peer review, alternative metrics), the increased competition for funding, etc.
The term disruptive innovation was coined and popularized by Harvard Business School Professor Clayton Christensen, author of The Innovator's Dilemma. [The Innovator's Dilemma, Clayton Christensen, Harvard Business Review Press, 1997] Christensen created a compelling framework for understanding the process of innovation and disruption. Along the way, he earned many accolades in academia and business. In recent years, a cooling of the academic admiration became increasingly noticeable. A snide remark here. A dismissive tweet there. Then, The New Yorker launched a major attack on the theory of disruption. [The Disruption Machine, Jill Lepore, The New Yorker, June 23rd, 2014] In this article, Harvard historian Jill Lepore questions Christensen's research by attacking the underlying facts. Were Christensen's disruptive startups really startups? Did the established companies really lose the war or just one battle? At the very least, Lepore is implying that Christensen misled his readers.
As of this writing, Christensen has only responded in a brief interview. [Clayton Christensen Responds to New Yorker Takedown of 'Disruptive Innovation', Bloomberg Businessweek, June 20th, 2014] It is clear he is preparing a detailed written response.
Lepore's critique appears at the moment when disruption may be at academia's door, seventeen years after The Innovator's Dilemma was published, with much of the underlying research almost twenty years old. Perhaps, the article is merely a symptom of academics growing nervous. Yet, it would be wrong to dismiss Lepore's (or anyone else's) criticism based on any perceived motivation. Facts can be and should be examined.
In 1997, I was a technology manager tasked with dragging a paper-based library into the digital era. When reading (and re-reading) the book, I did not question the facts. When Christensen stated that upstart X disrupted established company Y, I accepted it. I assume most readers did. The book was based on years of research, all published in some of the most prestigious peer-reviewed journals. It is reasonable to assume that the underlying facts were scrutinized by several independent experts. Truth be told, I did not care much that his claims were backed by years of research. Christensen gave power to the simple idea that sticking with established technology can carry an enormous opportunity cost.
Established technology has had years, perhaps decades, to mitigate its weaknesses. It has a constituency of users, service providers, sales channels, and providers of derivative services. This constituency is a force that defends the status quo in order to maintain established levels of quality, profit margins, and jobs. The innovators do not compete on a level playing field. Their product may improve upon the old in one or two aspects, but it has not yet had the opportunity to mitigate its weaknesses. When faced with such innovations, all organizations tend to stick with what they know for as long as possible.
Christensen showed the destructive power of this mindset. While waiting until the new is good enough or better, organizations lose control of the transition process. While pleasing their current customers, they lose future customers. By not being ahead of the curve, by ignoring innovation, by not restructuring their organizations ahead of time, leaders may put their organizations at risk. Christensen told compelling disruption stories in many different industries. This allowed readers to observe their own industry with greater detachment. It gave readers the confidence to push for early adoption of inevitable innovation.
I am not about to take sides in the Lepore-Christensen debate. Neither needs my help. As an observer interested in scholarly communication, I cannot help noting that Lepore, a distinguished scholar, launched her critique from a distinctly non-scholarly channel. The New Yorker may cater to the upper crust of intellectuals (and wannabes), but it remains a magazine with journalistic editorial-review processes, quite distinct from scholarly peer-review processes.
Remarkably, the same happened only a few weeks ago, when the Financial Times attempted to take down Piketty's book. [Capital in the Twenty-First Century, Thomas Piketty, Belknap Press, 2014] [Piketty findings undercut by errors, Chris Giles, Financial Times, May 23rd, 2014] Piketty had a distinct advantage over Christensen. The Financial Times critique appeared a few weeks after his book came out. Moreover, he had made all of his data public, including all technical adjustments required to make data from different sources compatible. As a result, Piketty was able to respond quickly, and the controversy quickly dissipated. Christensen has the unenviable task of defending twenty-year-old research. For his sake, I hope he was better at archiving data than I was in the 1990s.
What does it say about the status of scholarly journals when scholars use magazines to launch scholarly critiques? Was Lepore's article not sufficiently substantive for a peer-reviewed journal? Are scholarly journals incapable or unwilling to handle academic controversy involving one of the field's eminent leaders? Is the mainstream press just better at it? Would a business journal even allow a historian to critique business research in its pages? If this is the case, is peer review less about maintaining standards and more about protecting an academic tribe? Is the mainstream press just a vehicle for some scholars to bypass peer review and academic standards? What would it say about peer review if Lepore's arguments should prevail?
This detached observer pours a drink and enjoys the show.
PS (7/15/2014): Reposted with permission at The Impact Blog of The London School of Economics and Political Science.
Monday, March 31, 2014
Creative Problems
The open-access requirement for Electronic Theses and Dissertations (ETDs) should be a no-brainer. At virtually every university in the world, there is a centuries-old public component to the doctoral-degree requirement. With digital technology, that public component is implemented more efficiently and effectively. Yet, a small number of faculty fight the idea of Open Access for ETDs. The latest salvo came from Jennifer Sinor, an associate professor of English at Utah State University.
[One Size Doesn't Fit All, Jennifer Sinor, The Chronicle of Higher Education, March 24, 2014]
According to Sinor, Creative Writing departments are different and should be exempted from open-access requirements. She illustrates her objection to Open Access ETDs with an example of a student who submitted a novel as his masters thesis. He was shocked when he found out his work was for sale online by a third party. Furthermore, according to Sinor, the mere existence of the open-access thesis makes it impossible for that student to pursue a conventional publishing deal.
Sinor offers a solution to these problems, which she calls a middle path: Theses should continue to be printed, stored in libraries, accessible through interlibrary loan, and never digitized without the author's approval. Does anyone really think it is a common-sense middle path of moderation and reasonableness to pretend that the digital revolution never happened?
Our response could be brief. We could just observe that it does not matter whether or not Sinor's Luddite approach is tenable, and it does not matter whether or not her arguments hold water. Society will not stop changing because a small group of people pretend reality does not apply to them. Reality will, eventually, take over. Nevertheless, let us examine her arguments.
Multiyear embargoes are a routine part of Open Access policies for ETDs. I do not know of a single exception. After a web search that took less than a minute, I found the ETD policy of Sinor's own institution. The second and third sentences of USU's ETD policy read as follows [ETD Forms and Policy, DigitalCommons@usu.edu]:
“However, USU recognizes that in some rare situations, release of a dissertation/thesis may need to be delayed. For these situations, USU provides the option of embargoing (i.e. delaying release) of a dissertation or thesis for five years after graduation, with an option to extend indefinitely.”
How much clearer can this policy be?
The student in question expressly allowed third parties to sell his work by leaving a checkbox unchecked in a web form. Sinor excuses the student for his naïveté. However, anyone who hopes to make a living from creative writing in a web-connected world should have advanced knowledge of the business of selling one's works, of copyright law, and of publishing agreements. Does Sinor imply that a masters-level student in her department never had any exposure to these issues? If so, that is an inexcusable oversight in the department's curriculum.
This leads us to Sinor's final argument: that conventional publishers will not consider works that are also available as Open Access ETDs. This has been thoroughly studied and debunked. See:
"Do Open Access Electronic Theses and Dissertations Diminish Publishing Opportunities in the Social Sciences and Humanities?" Marisa L. Ramirez, Joan T. Dalton, Gail McMillan, Max Read, and Nan Seamans. College & Research Libraries, July 2013, 74:368-380.
This should put to rest the most pressing issues. Yet, for those who cannot shake the feeling that Open Access robs students of an opportunity to monetize their work, there is another way out of the quandary. It is within the power of any Creative Writing department to solve the issue once and for all.
All university departments have two distinct missions: to teach a craft and to advance scholarship in their discipline. As a rule of thumb, the teaching of craft dominates up to the masters-degree level. The advancement of scholarship, which goes beyond accepted craft and into the new and experimental, takes over at the doctoral level.
When submitting a novel (or a play, a script, or a collection of poetry) as a thesis, the student exhibits his or her mastery of craft. This is appropriate for a masters thesis. However, when Creative Writing departments accept novels as doctoral theses, they put craft ahead of scholarship. It is difficult to see how any novel by itself advances the scholarship of Creative Writing.
The writer of an experimental masterpiece should have some original insights into his or her craft. Isn't it the role of universities to reward those insights? Wouldn't it make sense to award the PhD, not based on a writing sample, but based on a companion work that advances the scholarship of Creative Writing? Such a thesis would fit naturally within the open-access ecosystem of other scholarly disciplines without compromising the work itself in any way.
This is analogous to any number of scientific disciplines, where students develop equipment or software or a new chemical compound. The thesis is a description of the work and the ideas behind it. After a reasonable embargo to allow for patent applications, any such thesis may be made Open Access without compromising the commercial value of the work at the heart of the research.
A policy that is successful for most may fail for some. Some disciplines may be so fundamentally different that they need special processes. Yet, Open Access is merely the logical extension of long-held traditional academic values. If this small step presents such a big problem for one department and not for others, it may be time to re-examine existing practices at that department. Perhaps, the Open Access challenge is an opportunity to change for the better.
Monday, March 17, 2014
Textbook Economics
The impact of royalties on a book's price, and its sales, is greater than you think. Lower royalties often end up better for the author. That was the publisher's pitch when I asked him about the details of the proposed publishing contract. Then, he explained how he prices textbooks.
It was the early 1990s. I had been teaching a course on Concurrent Scientific Computing, a hot topic then, and several publishers had approached me about writing a textbook. This was an opportunity to structure a pile of course notes. Eventually, I would sign on with a different publisher, a choice that had nothing to do with royalties or book prices. [Concurrent Scientific Computing, Van de Velde E., Springer-Verlag New York, Inc., New York, NY, 1994.]
He explained that a royalty of 10% increases the price by more than 10%. To be mathematical about it: With a royalty rate r, a target revenue per book C, and a retail price P, we have that C = P-rP (retail price minus royalties). Therefore, P = C/(1-r). With a target revenue per book of $100, royalties of 10%, 15%, and 20% lead to retail prices of $111.11, $117.65, and $125.00, respectively.
In a moment of candor, he also revealed something far more interesting: how he sets the target revenue C. Say the first printing of 5000 copies requires an up-front investment of $100,000. (All numbers are for illustrative purposes only.) This includes the cost of editing, copy-editing, formatting, cover design, printing, binding, and administrative overhead. Estimating library sales at 1000 copies, this publisher would set C at $100,000/1,000 = $100. In other words, he recovered his up-front investment from libraries. Retail sales were pure profit.
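For the record, the arithmetic checks out. Here is a minimal sketch in Python, using the same purely illustrative numbers as above:

```python
def retail_price(target_revenue, royalty_rate):
    """Retail price P such that the publisher keeps C = P - r*P per copy."""
    return target_revenue / (1.0 - royalty_rate)

# Publisher's target revenue per copy, recovered entirely from projected
# library sales: $100,000 up-front investment / 1,000 library copies.
up_front_cost = 100_000
library_copies = 1_000
C = up_front_cost / library_copies  # $100 per copy

for r in (0.10, 0.15, 0.20):
    print(f"royalty {r:.0%}: retail price ${retail_price(C, r):,.2f}")
# royalty 10%: retail price $111.11
# royalty 15%: retail price $117.65
# royalty 20%: retail price $125.00
```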
The details are, no doubt, more complicated. Yet, even without relying on a recollection of an old conversation, it is safe to assume that publishers use the captive library market to reduce their business risk. In spite of increasingly recurrent crises, library budgets remain fairly predictable, both in size and in how the money is spent. Any major publisher has reliable advance estimates of library sales for any given book, particularly if published as part of a well-known series. It is just good business to exploit that predictability.
The market should be vastly different now, but textbooks have remained stuck in the paper era longer than other publications. Moreover, the first stage of the move towards digital, predictably, consists of replicating the paper world. This is what all constituents want: Librarians want to keep lending books. Researchers and students like getting free access to quality books. Textbook publishers do not want to lose the risk-reducing revenue stream from libraries. As a result, everyone implements the status quo in digital form. Publishers produce digital books and rent their collections to libraries through site licenses. Libraries intermediate electronic-lending transactions. Users get the paper experience in digital form. Universities pay for site licenses and the maintenance of the digital-lending platforms.
After the disaster of site licenses for scholarly journals, repeating the same mistake with books seems silly. Once again, take-it-or-leave-it bundles force institutions into a false choice between buying too much for everyone or nothing at all. Once again, site licenses eliminate the unlimited flexibility of digital information. Forget about putting together a personal collection tailored to your own requirements. Forget about pricing per series, per book, per chapter, unlimited in time, one-day access, one-hour access, readable on any device, or tied to a particular device. All of these options are eliminated to maintain the business models and the intermediaries of the paper era.
Just by buying/renting books as soon as they are published, libraries indirectly pay for a significant fraction of the initial investment of producing textbooks. If libraries made that initial investment explicitly and directly, they could produce those same books and set them free. Instead of renting digital books (and their multimedia successors), libraries could fund authors to write books and contract with publishers to publish those manuscripts as open-access works. Authors would be compensated. Publishers would compete for library funds as service providers. Publishers would be free to pursue the conventional pay-for-access publishing model, just not with library dollars. Prospective authors would have a choice: compete for library funding to produce an open-access work or compete for a publishing contract to produce a pay-for-access work.
The Carnegie model of libraries fused together two distinct objectives: subsidize information and disseminate information by distributing books to many different locations. In web-connected communities, spending precious resources on dissemination is a waste. Inserting libraries in digital-lending transactions only makes those transactions more inconvenient. Moreover, it requires expensive-to-develop-and-maintain technology. By reallocating these resources towards subsidizing information, libraries could set information free without spending part of their budget on reducing publishers' business risk. The fundamental budget questions that remain are: Which information should be subsidized? What is the most effective way to subsidize information?
Libraries need not suddenly stop site licensing books tomorrow. In fact, they should take a gradual approach, test the concept, make mistakes, and learn from them. A library does not become a grant sponsor and/or publisher overnight. Several models are already available: from grant competition to crowd-funded ungluing. [Unglue.it for Libraries] By phasing out site licenses, any library can create budgetary space for sponsoring open-access works.
Libraries have a digital future with almost unlimited opportunities. Yet, they will miss out if they just rebuild themselves as a digital copy of the paper era.
Monday, January 20, 2014
A Cloud over the Internet
Cloud computing could not have existed without the Internet, but it may make Internet history by making the Internet history.
Organizations are rushing to move their data centers to the cloud. Individuals have been using cloud-based services, like social networks, cloud gaming, Google Apps, Netflix, and Aereo. Recently, Amazon introduced WorkSpaces, a comprehensive personal cloud-computing service. The immediate benefits and opportunities that fuel the growth of the cloud are well known. The long-term consequences of cloud computing are less obvious, but a little extrapolation may help us make some educated guesses.
Personal cloud computing takes us back to the days of remote logins with dumb terminals and modems. Like the one-time office computer, the cloud computer does almost all of the work. Like the dumb terminal, a not-so-dumb access device (anything from the latest wearable gadget to a desktop) handles input/output. Input evolved beyond keystrokes and now also includes touch-screen gestures, voice, image, and video. Output evolved from green-on-black characters to multimedia.
When accessing a web page with content from several contributors (advertisers, for example), the page load time depends on several factors: the performance of computers that contribute web-page components, the speed of the Internet connections that transmit these components, and the performance of the computer that assembles and formats the web page for display. By connecting to the Internet through a cloud computer, we bypass the performance limitations of our access device. All bandwidth-hungry communication occurs in the cloud on ultra-fast networks, and almost all computation occurs on a high-performance cloud computer. The access device and its Internet connection just need to be fast enough to process the information streams into and out of the cloud. Beyond that, the performance of the access device hardly matters.
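To make this concrete, here is a minimal latency sketch comparing device-side page assembly with cloud-side assembly. All of the timings and the 10x data-center speedup are assumptions chosen purely for illustration.

```python
# Toy model of page load time: device-side assembly vs. cloud-side assembly.
# Every number below is an illustrative assumption, not a measurement.

component_fetch_ms = [120, 250, 400, 90, 300]  # components from advertisers, CDNs, origin servers
device_render_ms = 200                          # rendering on a modest access device
cloud_render_ms = 20                            # rendering on a high-performance cloud computer
stream_to_device_ms = 40                        # shipping the finished page down to the device

# Device-side: fetch components in parallel over the last mile, then render locally.
device_side = max(component_fetch_ms) + device_render_ms

# Cloud-side: fetches run on data-center networks (assumed ~10x faster here),
# rendering happens in the cloud, and one stream goes to the access device.
cloud_side = max(t / 10 for t in component_fetch_ms) + cloud_render_ms + stream_to_device_ms

print(f"device-side assembly: ~{device_side} ms")    # ~600 ms with these assumptions
print(f"cloud-side assembly:  ~{cloud_side:.0f} ms")  # ~100 ms with these assumptions
```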
Because of economies of scale, the cloud-enabled net is likely to be a highly centralized system dominated by a small number of extremely large providers of computing and networking. This extreme concentration of infrastructure stands in stark contrast to the original Internet concept, which was designed as a redundant, scalable, and distributed system without a central authority or a single point of failure.
When a cloud provider fails, it disrupts its own customers, and the disruption immediately propagates to the customers' clients. Every large provider is, therefore, a systemic vulnerability with the potential of taking down a large fraction of the world's networked services. Of course, cloud providers are building infrastructure of extremely high reliability with redundant facilities spread around the globe to protect against regional disasters. Unfortunately, facilities of the same provider all have identical vulnerabilities, as they use identical technology and share identical management practices. This is a setup for black-swan events, low-probability large-scale catastrophes.
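A rough way to see why identical technology and identical management practices matter is to compare independent facility failures with a single common-mode flaw. The probabilities below are invented solely to illustrate the shape of the argument.

```python
# Toy comparison: independent facility failures vs. a shared (common-mode) vulnerability.
# All probabilities are illustrative assumptions.

p_facility = 1e-4      # assumed yearly chance that one facility fails on its own
p_common_mode = 1e-6   # assumed yearly chance that a flaw shared by all facilities is triggered
n_facilities = 5

p_all_down_independent = p_facility ** n_facilities  # all facilities fail independently at once
p_all_down_common = p_common_mode                    # one shared defect takes them all down together

print(f"all {n_facilities} facilities down (independent failures): {p_all_down_independent:.0e}")
print(f"all {n_facilities} facilities down (shared vulnerability):  {p_all_down_common:.0e}")
# With these numbers, the shared flaw dominates by many orders of magnitude:
# geographic redundancy protects against regional disasters, not identical defects.
```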
The Internet is overseen and maintained by a complex international set of authorities. [Wikipedia: Internet Governance] That oversight loses much of its influence when most communication occurs within the cloud. Cloud providers will be tempted to deploy more efficient custom communication technology within their own facilities. After all, standard Internet protocols were designed for heterogeneous networks. Much of that design is not necessary on a network where one entity manages all computing and all communication. Similarly, any two providers may negotiate proprietary communication channels between their facilities. Step by step, the original Internet will be relegated to the edges of the cloud, where access devices connect with cloud computers.
Net neutrality is already on life support. When cloud providers compete on price and performance, they are likely to segment the market. Premium cloud providers are likely to attract high-end services and their customers, relegating the rest to second-tier low-cost providers. Beyond net neutrality, there may be a host of other legal implications when communication moves from public channels to private networks.
When traffic moves to the cloud, telecommunication companies will gradually lose the high-margin retail market of providing organizations and individuals with high-bandwidth point-to-point communication. They will not derive any revenue from traffic between computers within the same cloud facility. The revenue from traffic between cloud facilities will be determined by a wholesale market with customers that have the resources to build and/or acquire their own communication capacity.
The existing telecommunication infrastructure will mostly serve to connect access devices to the cloud over relatively low-bandwidth channels. When TV channels are delivered to the cloud (regardless of technology), users select their channel on the cloud computer. They do not need all channels delivered to the home at all times; one TV channel at a time per device will do. When phones are cloud-enabled, a cloud computer intermediates all communication and provides the functional core of the phone.
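The bandwidth arithmetic behind this claim is simple: delivering the whole lineup scales with the number of channels, while cloud-side channel selection scales with the number of screens actually watching. The channel count and bitrate below are illustrative assumptions, not real provider figures.

```python
# Toy comparison of last-mile bandwidth: full channel lineup vs. one stream per active screen.
# Channel count and per-stream bitrate are illustrative assumptions.

channels_in_lineup = 200
active_screens_per_home = 3
mbps_per_stream = 5  # assumed bitrate of a single video stream

full_lineup_mbps = channels_in_lineup * mbps_per_stream      # every channel delivered at all times
per_screen_mbps = active_screens_per_home * mbps_per_stream  # cloud selects the channel, one stream per screen

print(f"every channel to the home: {full_lineup_mbps} Mbps")    # 1000 Mbps with these assumptions
print(f"one stream per active screen: {per_screen_mbps} Mbps")  # 15 Mbps with these assumptions
```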
Telecommunication companies may still come out ahead as long as the number of access devices keeps growing. Yet, they should at least question whether it would be more profitable to invest in cloud computing instead of ever higher bandwidth to the consumer.
The cloud will continue to grow as long as its unlimited processing power, storage capacity, and communication bandwidth provide new opportunities at irresistible price points. If history is any guide, long-term and low-probability problems at the macro level are unlikely to limit its growth. Even if our extrapolated scenario never completely materializes, the cloud will do much more than increase efficiency and/or lower cost. It will change the fundamental character of the Internet.
Wednesday, January 1, 2014
Market Capitalism and Open Access
Is it feasible to create a self-regulating market for Open Access (OA) journals where competition for money is aligned with the quest for scholarly excellence?
Many proponents of the subscription model argue that a competitive market provides the best assurance for quality. This ignores that the relationship between a strong subscription base and scholarly excellence is tenuous at best. What if we created a market that rewards journals when a university makes its most tangible commitment to scholarly excellence?
While the role of journals in actual scholarly communication has diminished, their role in academic career advancement remains as strong as ever. [Paul Krugman: The Facebooking of Economics] The scholarly-journal infrastructure streamlines the screening, comparing, and short-listing of candidates. It enables the gathering of quantitative evidence in support of the hiring decision. Without journals, the workload of search committees would skyrocket. If scholarly journals are the headhunters of the academic-job market, let us compensate them as such.
There are many ways to structure such compensation, but we only need one example to clarify the concept. Consider the following scenario:
- The new hire submitted a bibliography of 100 papers.
- The search committee selected 10 of those papers to argue the case in favor of the appointment. This subset consists of 6 papers in subscription journals, 3 papers in the OA journal Theoretical Approaches to Theory (TAT), and 1 paper in the OA journal Practical Applications of Practice (PAP).
- The university's journal budget is 1% of its budget for faculty salaries. (In reality, that percentage would be much lower.)
Divide the new faculty member's share of the journal budget, 1% of his or her salary, into three portions:
- (6/10) x 1% = 0.6% of salary to subscription journals,
- (3/10) x 1% = 0.3% of salary to the journal TAT, and
- (1/10) x 1% = 0.1% of salary to the journal PAP.
The first portion (0.6%) remains in the journal budget to pay for subscriptions. The second (0.3%) and third (0.1%) portion are, respectively, awarded yearly to the OA journals TAT and PAP. The university adjusts the reward formula every time a promotion committee determines a new list of best papers.
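The arithmetic generalizes to any shortlist: the new hire's share of the journal budget is split across venues in proportion to the number of shortlisted papers each venue published. The sketch below simply mechanizes the scenario above; the journal names, salary figure, and 1% base rate are the hypothetical values from the example.

```python
# Minimal sketch of the headhunting-reward split described above.
# Journal names, the salary, and the 1% base rate are hypothetical scenario values.

def reward_split(shortlisted_venues, base_rate=0.01):
    """Split the new hire's share of the journal budget (base_rate of salary)
    across venues, in proportion to the number of shortlisted papers per venue."""
    total = len(shortlisted_venues)
    split = {}
    for venue in shortlisted_venues:
        split[venue] = split.get(venue, 0.0) + base_rate / total
    return split

# The committee's 10 best papers: 6 in subscription journals, 3 in TAT, 1 in PAP.
shortlist = ["subscription journals"] * 6 + ["TAT"] * 3 + ["PAP"] * 1
salary = 100_000  # hypothetical annual salary

for venue, share in reward_split(shortlist).items():
    print(f"{venue}: {share:.1%} of salary = ${share * salary:,.0f} per year")
# subscription journals: 0.6% of salary = $600 (stays in the subscription budget)
# TAT: 0.3% of salary = $300 (awarded yearly to the OA journal TAT)
# PAP: 0.1% of salary = $100 (awarded yearly to the OA journal PAP)
```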
To move beyond a voluntary system, universities should give headhunting rewards only to those journals with which they have a contractual relationship. Some Gold OA journals are already pursuing institutional-membership deals that eliminate or reduce author page charges (APCs). [BioMed Central] [PeerJ] [SpringerOpen] Such memberships are a form of discounting for quantity. Instead, we propose a pay-for-performance contract that eliminates APCs in exchange for headhunting rewards. Before signing such a contract, a university would conduct a due-diligence investigation into the journal. It would assess the publisher's reputation, the journal's editorial board, its refereeing, editing, formatting, and archiving standards, its OA licensing practices, and its level of participation in various abstracting-and-indexing and content-mining services. This step would all but eliminate predatory journals.
Every headhunting reward would enhance the prestige (and the bottom line) of a journal. A reward citing a paper would be a significant recognition of that paper. Such citations might be even more valuable than citations in other papers, thereby creating a strong incentive for institutions to participate in the headhunting system. Nonparticipating institutions would miss out on publicly recognizing the work of their faculty, and their faculty would have to pay APCs. There is no Open Access free ride.
Headhunting rewards create little to no extra work for search committees. Academic libraries are more than capable of performing due diligence, negotiating the contracts, and administering the rewards. Our scenario assumed a base percentage of 1%. The actual percentage would be negotiated between universities and publishers. With rewards proportional to salaries, there is a built-in adjustment for inflation, for financial differences between institutions and countries, and for differences in the sizes of various scholarly disciplines.
Scholars retain the right to publish in the venue of their choice. The business models of journals are used when distributing rewards, but this occurs well after the search process has concluded. The headhunting rewards gradually reduce the subscription budget in proportion to the number of papers published in OA journals by the university's faculty. A scholar who wishes to support a brand-new journal should not pay APCs, but lobby his or her university to negotiate a performance-based headhunting contract.
The essence of this proposal is the performance-based contract that exchanges APCs for headhunting rewards. All other details are up for discussion. Every university would be free to develop its own specific performance criteria and reward structures. Over time, we would probably want to converge towards a standard contract.
Headhunting contracts create a competitive market for OA journals. In this market, the distributed and collective wisdom of search/promotion committees defines scholarly excellence and provides the monetary rewards to journals. As a side benefit, this free-market system creates a professionally managed open infrastructure for the scholarly archive.
Monday, December 16, 2013
Beall's Rant
Jeffrey Beall of Beall's list of predatory scholarly publishers recently made some strident arguments against Open Access (OA) in the journal tripleC (ironically, an OA journal). Beall's comments are part of a non-refereed section dedicated to a discussion on OA.
Michael Eisen takes down Beall's opinion piece paragraph by paragraph. Stevan Harnad responds to the highlights/lowlights. Roy Tennant has a short piece on Beall in The Digital Shift.
Beall takes a distinctly political approach in his attack on OA:
“The OA movement is an anti-corporatist movement that wants to deny the freedom of the press to companies it disagrees with.”
“It is an anti-corporatist, oppressive and negative movement, [...]”
“[...] a neo-colonial attempt to cast scholarly communication policy according to the aspirations of a cliquish minority of European collectivists.”
“[...] mandates set and enforced by an onerous cadre of Soros-funded European autocrats.”
This is the rhetorical style of American extremist right-wing politics that casts every problem as a false choice between freedom and – take your pick – communism or totalitarianism or colonialism or slavery or... European collectivists like George Soros (who became a billionaire by being a free-market capitalist).
For those of us more comfortable with technocratic arguments, politics is not particularly welcome. Yet, we cannot avoid the fact that the OA movement is trying to reform a large socio-economic system. It would be naïve to think that that can be done without political ideology playing a role. But is it really too much to ask to avoid the lowest level of political debate, politics by name-calling?
The system of subscription journals has an internal free-market logic to it that no proposed or existing OA system has been able to replace. In a perfect world, the subscription system uses an economic market to assess the quality of editorial boards and the level of interest in a particular field. Economic viability acts as a referee of sorts, a market-based minimum standard. Some editorial boards deserve the axe for doing poor work. Some fields of study deserve to go out of business for lack of interest. New editorial boards and new fields of study deserve an opportunity to compete. Most of us prefer that these decisions are made by the collective and distributed wisdom of free-market mechanisms.
Unfortunately, the current scholarly-communication marketplace is far from a free market. Journals hardly compete directly with one another. Site licenses perpetuate a paper-era business model that forces universities to buy all content for 100% of the campus community, even those journals that are relevant only to a sliver of the community. Site licenses limit competition between journals, because end users never get to make the price/value trade-offs critical to a functional free market. The Big Deal exacerbates the problem. Far from providing a service, as Beall contends, the Big Deal gives big publishers a platform to launch new journals without competition. Consortial deals are not discounts; they introduce peer networks to make it more difficult to cancel existing subscriptions. [What if Libraries were the Problem?] [Libraries: Paper Tigers in a Digital World]
If Beall believes in the free market, he should support competition from new methods of dissemination, alternative assessment techniques, and new journal business models. Instead, he seems to be motivated more by a desire to hold onto his disrupted job description:
“Now the realm of scholarly communication is being removed from libraries, and a crisis has settled in. Money flows from authors to publishers rather than from libraries to publishers. We've disintermediated libraries and now find that scholarly system isn't working very well.”
In fact, it is the site-license model that reduced the academic library to the easy-to-disintermediate dead-end role of subscription manager. [Where the Puck won't Be] Most librarians are apprehensive about the changes taking place, but they also realize that they must re-interpret traditional library values in light of new technology to ensure the long-term survival of their institution.
Thus far, scholarly publishing has been the only type of publishing not disrupted by the Internet. In his seminal work on disruption [The Innovator's Dilemma], Clayton Christensen characterizes the defenders of the status quo in disrupted industries. Like Beall, they are blinded by traditional quality measures, dismiss and/or denigrate innovations, and retreat into a defense of the status quo.
Students, researchers, and the general public deserve a high-quality scholarly-communication system that satisfies basic minimum technological requirements of the 21st century. [Peter Murray-Rust, Why does scholarly publishing give me so much technical grief?] In the last 20 years of the modern Internet, we have witnessed innovation after innovation. Yet, scholarly publishing is still tied to the paper-imitating PDF format and to paper-era business models.
Open Access may not be the only answer [Open Access Doubts], but it may very well be the opportunity that this crisis has to offer. [Annealing the Library] In American political terms, Green Open Access is a public option. It provides free access to author-formatted versions of papers. Thereby, it serves the general public and the scholarly poor. It also serves researchers by providing a platform for experimentation without having to go through onerous access negotiations (for text mining, for example). It also serves as an additional disruptive trigger for free-market reform of the scholarly market. Gold Open Access in all its forms (from PLOS to PeerJ) is a set of business models that deserve a chance to compete on price and quality.
The choice is not between one free-market option and a plot of European collectivists. The real choice is whether to protect a functionally inadequate system or whether to foster an environment of innovation.
Tuesday, November 5, 2013
Cartoon Physics
When Wile E. Coyote runs off a cliff, he starts falling only after he realizes the precariousness of his situation.
In real life, cartoon physics is decidedly less funny. Market bubbles arise when a trend continues far past the point where the fundamentals make sense. The bubble bursts when the collective wisdom of the market acts on a reality that should have been obvious much earlier. Because of this unnecessary delay, bubbles inflict much unnecessary damage. We saw it recently with the Internet and mortgage bubbles, but the phenomenon is as old as the tulip bubble of 1637.
We also see cartoon physics in action at less epic scales. Cartoon physics applies to almost any disruptive technology. The established players almost never adapt to the new reality when fundamentals require it or when it is logical to do so. Instead of preparing for a viable future, they fight a losing battle hanging onto the past. Most recently, BlackBerry ignored the iPhone, thinking its serious corporate clients would not be lured by the iPhone's gadgetry. There is a long line of disrupted industries whose leadership ignored upstart competitors and new realities. This has been the topic of acclaimed academic studies and has been popularized in every possible venue.
The blame game is a significant part of the process. The recording industry blamed pirates for destroying the music business. In fact, its own failure to adapt to a digital age contributed at least as much to the disruption.
The scenario is well known, by now too cliché to be a good movie. Leaders of industries in upheaval should know the playbook. Yet, they keep repeating the mistakes of their disrupted predecessors.
Wile E. Coyote finally learned his lesson and decided to stop looking down.
PS: Cartoon physics does not apply to academic institutions, which are protected by their importance and seriousness.
Tuesday, May 21, 2013
Turow vs Everyone
According to Scott Turow, celebrated author, lawyer, and president of the Authors Guild, the legal and technological erosion of copyright endangers writers. (New York Times, April 7th, 2013) His enemy list is conspiratorial in length and breadth. It includes the Supreme Court, publishers, search engines, HathiTrust, Google, academics, libraries, and Amazon. Nevertheless, Turow makes compelling arguments that deserve scrutiny.
The Supreme Court decision on re-importation. (Kirtsaeng v. John Wiley & Sons, Inc.)
This 6-3 decision merely reaffirmed the first sale doctrine. It is highly unlikely that this will significantly affect book prices in the US. If it does, any US losses will be offset by price increases in foreign markets. More importantly, the impact will be negligible because paper books will soon be a niche market in the US.
Publishers restrict royalties on e-books.
Publishers who manage the technology shift by making minor business adjustments, such as transferring costs to authors, libraries, and consumers, underestimate the nature of current changes. Traditional publishers built their business when disseminating information was difficult. Once they built their dissemination channels, making money was relatively easy. In our current world, building dissemination channels is easy and cheap. Making money is difficult. Authors may need new partners who built their business in the current environment; there are some in his list of enemies.
Search engines make money by referring users to pirate sites.
Turow has a legitimate moral argument. However, politicizing search engines by censoring search results is as wrong as it is ineffective. Pirate sites also spread through social networks. Cutting off pirate sites from advertising networks, while effective, is difficult to achieve across international borders and requires unacceptable controls on information exchange. iTunes and its competitors have shown it is possible to compete with pirate sites by providing a convenient user interface, speed, reliability, quality, and protection against computer viruses.
HathiTrust and Google scanned books without authorization.
Hathi and Google were careless. Authors and publishers were rigid. Experimentation gave way to litigation.
Some academics want to curtail copyright.
Scholarly publishers like Elsevier have profit margins that exceed 30%. Yet, Turow claims that “For many academics today, their own copyrights hold little financial value because scholarly publishing has grown so unprofitable.”
Academics' research is often funded in part by government, and it is always supported by universities. Universities have always been committed to research openness, and they use published research as a means of assessment. This is why academics forego royalties when they publish research. The concept of research openness is changing, and many academics are lobbying for the idea that research should be freely available to all. The idea of Open Access was recently embraced by the White House. Open Access applies only to researchers funded by the government and/or employed by participating universities and research labs. It only covers research papers, not books. It does not apply to independent authors. Open Access does not curtail copyright.
Legal academics like Prof. Lawrence Lessig have argued for stricter limits on traditional copyright and alternative copyrights. Pressured by industry lobbyists, Congress has repeatedly increased the length of copyright. If this trend continues, recent works may never enter into the public domain. Legislation must balance authors' intellectual property rights and everyone's (including authors') freedom to produce derivative works, commentaries, parodies, etc.
Amazon patents a scheme to re-sell used e-books.
This patent is a misguided attempt to monetize the human frailty of carrying familiar concepts from old technology senselessly into the new. It is hardly the stuff that made this forward-looking company formidable.
Libraries expand paper lending into digital lending.
Turow demands more money from libraries for digital lending privileges. He is too modest; he should demand their whole budget.
When a paper-based library acquires a book, it permanently increases the value of its collection. This cumulative effect over many years created the world's great collections. When a community spends resources on a digital-lending library, it rents information from publishers and provides a fleeting service for only as long as the licenses last. When the license ends, the information disappears. There is no cumulative effect. That digital-lending library only adds overhead. It will never own or contribute new information. It is an empty shell.
Digital lending is popular with the public. It gives librarians the opportunity to transition gradually into digital space. It continues the libraries' billion-dollar money stream to publishers. Digital lending has a political constituency, but it does not stand up to rational scrutiny. Like Amazon's scheme to resell used e-books, digital-lending programs are desperate attempts to hang on to something that simulates the status quo.
Lending is the wrong paradigm for the digital age. Instead, libraries should use their budgets to accumulate quality open-access information. They should sponsor qualified authors to produce open-access works of interest to the communities they serve. This would give authors a choice. They could either produce their work commercially behind a pay wall, or they could produce library-funded open-access works.
Monday, April 22, 2013
The Sibyl of Cumae
“The seventh was of Cumae, by name Amalthaea, who is termed by some Herophile, or Demophile and they say that she brought nine books to the king Tarquinius Priscus, and asked for them three hundred philippics, and that the king refused so great a price, and derided the madness of the woman; that she, in the sight of the king, burnt three of the books, and demanded the same price for those which were left; that Tarquinius much more considered the woman to be mad; and that when she again, having burnt three other books, persisted in asking the same price, the king was moved, and bought the remaining books for the three hundred pieces of gold: and the number of these books was afterwards increased, after the rebuilding of the Capitol; because they were collected from all cities of Italy and Greece, and especially from those of Erythraea, and were brought to Rome, under the name of whatever Sibyl they were.”
The myth of the Sibyl of Cumae from: The Divine Institutes, by Lactantius (b. ca. A.D. 250), Book I, Chapter VI.
Publishers select, prepare, market, and disseminate information. They developed their selection processes at a time when it was expensive to prepare and disseminate information. As these costs decreased, they could publish more and be less selective. However, the selection process endows information with gravitas, a valuable commodity for marketing. Today's publishers must balance two conflicting interests: increase revenue by publishing as much as possible vs. increase profit margins by selectively publishing high-value information. Scholarly publishers found a way to do both.
Where the Sibyl of Cumae burned some books to increase the value of the remaining books, a scholarly journal rejects a certain number of papers for each paper it publishes. Many of the rejected papers may be interesting, but they do not fit the journal's mission. For the publisher, this is an opportunity to spawn new journals in the wake of its successful journals. Such portfolios of journals are less selective than their individual journals. Of course, if one considers the scholarly publishing industry at the macro level, the notion of selectivity virtually vanishes. Papers are submitted and re-submitted until an outlet is found.
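A simple resubmission model illustrates how selectivity evaporates at the macro level: if each journal accepts a fixed fraction of submissions and every rejected paper is passed along to the next journal, the chance of eventual publication climbs toward one. The acceptance rate and chain lengths below are assumptions chosen only to show the trend.

```python
# Toy resubmission model: a paper rejected by one journal is submitted to the next one.
# The per-journal acceptance rate and chain lengths are illustrative assumptions.

acceptance_rate = 0.25  # assumed acceptance rate of each individual journal

for journals_tried in (1, 2, 4, 8):
    p_published = 1 - (1 - acceptance_rate) ** journals_tried
    print(f"after {journals_tried} journal(s): {p_published:.0%} of papers eventually published")
# after 1 journal(s): 25% of papers eventually published
# after 2 journal(s): 44% of papers eventually published
# after 4 journal(s): 68% of papers eventually published
# after 8 journal(s): 90% of papers eventually published
```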
The Sibyls of Scholarly Publishing perform an elaborate dance with pyrotechnic effects that give the illusion they burn papers. In fact, each Sibyl takes in new and rejected papers, packages some of them in a journal, and pretends to burn the rest before handing them off to her sisters. Each Sibyl maximizes the price in her respective corner of the universe. Academia repeatedly acts like King Tarquinius, who thinks the woman mad and pays the price she demands.
It may take years and several turnovers of the editorial board before an established journal that covers a large domain accepts papers in an emerging field. This has created a seemingly insatiable demand for new highly specialized journals. Each successful journal serves its publisher by raising revenue, its editorial-board members by raising their research prestige, and its authors by providing an avenue for dissemination of material without a natural home in existing journals. Many of these journals cater to such a small cadre of specialists that they subvert the single largest scholarly benefit of the refereeing process: a critical reading by someone with a different point of view and background. Even when run with the best of intentions, these narrow journals are echo chambers for group think. Emerging fields need some breathing room, particularly in the early developmental stages, but they should not be immune from outside criticism. Do these journals really serve the cause of good scholarship? Are they worth the super-inflationary cost increases, which they help create?
Open Access may not reduce the cost of scholarly communication as originally hoped. A large-scale conversion to Gold Open Access would shift the costs from universities to governments. Once university administrations no longer feel the budgetary pain and the costs are baked into government budgets, publishers would be free to continue the super-inflationary trajectory. There would not be any market forces that limit the introduction of new journals, the growth of existing journals, or the price charged per paper published. The access problem would be resolved by hiding, compounding, and postponing the cost problem. In the end, the scholarly-communication market would remain as dysfunctional as ever.
Technology has eroded the foundation of the current scholarly-communication system. It assumes that there is a scarcity of dissemination, and it uses that scarcity for the purpose of gatekeeping. In fact, dissemination is abundant and nearly free. The scarcity and associated gatekeeping are marketing illusions.
The reluctance to change is understandable. A scholarly-communication system is a delicate balancing act. It must be fair, but critical. It must discourage poor research, yet be supportive of new ideas, including ideas that challenge established views. Because scholarly communication is tied to research assessment, any changes to the system must gain wide institutional acceptance.
Ultimately, we have little choice but to accept today's reality. Anyone has the power to disseminate any information, regardless of quality. No one has the power to be a gatekeeper. At most, editorial boards have the power of influence in their respective communities; they can highlight important achievements and developments. But even this power to influence may soon be challenged by crowd-sourced quality labels or alternative metrics. (Perhaps not.)
We should be elated about the recent successes of the Open Access movement. We should also recognize that Open Access is not an end point. It is only the first step in the reinvention of scholarly communication.
Tuesday, March 26, 2013
Open Access Politics
The Open Access (OA) movement is gaining some high-level political traction.
The White House Open Access memorandum enacts a national Green OA mandate: Most US funding agencies are directed to set up OA repositories for the research they fund. This Green OA strategy contrasts with the Gold OA strategy proposed by the Finch report in the UK. The latter all but guarantees that established publishers will retain their revenue stream if they switch their business model from site licenses to Author Page Charges (APCs).
The White House memorandum is likely to have the greatest impact. As its consequences ripple through the system, the number and size of Green OA repositories is likely to grow substantially over the next few years. Large-scale validation of altmetrics and the development of new business models may lead to the emergence of new forms of scholarly communication. Green OA archivangelist Stevan Harnad hypothesizes a ten-step scenario of changes.
There are also reasons for concern. As this new phase of the OA movement unfolds on the national political stage, all sides will use their influence and try to re-shape the initial policies to further their respective agendas. The outcome of this political game is far from certain. Worse, the outcome may not be settled for years, as these kinds of policies are easily reversed without significant voter backlash.
At its core, OA is about an industry changing because of (not-so-)new technology and its accompanying shift in attitudes and values. In such cases, we expect established players to resist innovation by (ab)using politics and litigation. The entertainment industry lobbied and litigated against VCRs, DVRs, every Internet service ever launched, and now even antennas. In the dysfunctional scholarly-communication market, on the other hand, it is the innovators who resort to politics.
To understand why, suppose university libraries were funded by user-paid memberships and/or service fees. In this scenario, libraries and publishers encountered the same paper-to-digital transition costs. When library prices skyrocketed, students and faculty created underground exchanges of scholarly information. They cancelled their library memberships and/or stopped using their services. The publishers' revenue streams collapsed. Only the most successful journals survived, and even they suffered. Publishing a paper became increasingly difficult because of a lack of journals. This created an opening for experiments in scholarly publishing. This bottom-up free-market transition would have been chaotic, painful, and forgotten by now.
We do not need to convert our libraries and research institutions into free-market enterprises. We do not need to abandon the fundamental principles on which these institutions are built. On the contrary, we must return to those principles and apply them in a new technological reality. Rebuilding the foundations of institutions is hard under the best of circumstances. When users are shielded from the external incentives/hardships of the free market, disruption is nearly impossible, and continuity remains an option far beyond reason.
Green OA is an indirect approach to achieve fundamental change. It asks scholars to accept a little inconvenience for the sake of the larger principle. It asks them to deposit their papers into OA repositories and provide free access to publicly-funded research. It is hoped that this will gradually change the journal ecosystem and build pressure to innovate. It took dedicated developers, activists, advocates, and academic leaders over twenty years to promote this modest goal and create a movement that, finally, seems to have achieved critical mass. A growing number of universities have enacted OA mandates. These pioneers led the way, but only a government mandate can achieve the scale required to change the market. Enter politics.
Scholars, the creators and consumers of this market, should be able to dictate their terms. Yet, they are beholden to the establishment journals (and their publishers), which are the fountain of academic prestige. The SCOAP³ initiative for High Energy Physics journals shows how scholars are willing to go to unprecedented lengths to protect their journals.
Market-dominating scholarly publishers are paralyzed. They cannot abandon their only source of significant revenue (site licenses) on a hunch that another business model may work out better in the long term. In the meantime, they promote an impossible-to-defend hybrid Gold OA scheme, and they miss an opportunity to create value from author/reader networks (an opportunity recognized by upstart innovators). This business paralysis translates into a lobbying effort to protect the status quo for as long as feasible.
Academic libraries, which enthusiastically supported and developed Green OA, now enter this political arena in a weak position. The White House memorandum all but ignores them. Before complacency sets in, there is precious little time to argue a compelling case for independent institutional or individual repositories preserved in a long-term archive. After all, government-run repositories may disappear at any time for a variety of reasons.
The Gold OA approach of the Finch report is conceptually simpler. Neither scholars nor publishers are inconvenienced, let alone disrupted. It underwrites the survival of favored journals as Gold OA entities. It preempts real innovation. Without a mechanism in place to limit APCs, it's good to be a scholarly publisher in the UK. For now.
Labels: #altmetrics, #disruption, #openaccess, #scoap3, economy, education, elsevier, library, open access, open archives, publishing, research, scholar, school, site license, technology
Tuesday, October 16, 2012
A Physics Experiment
Researchers in High Energy Physics (HEP) live for that moment when they can observe results, interpret data, and raise new questions. When it arrives, after a lifetime of planning, funding, and building an experiment, they set aside emotional attachment and let the data speak.
Since 1991, virtually all HEP research papers have been freely available through an online database. This repository, now known as arXiv, inspired the Green model of the Open Access movement: Scholars submit author-formatted versions of their refereed papers to open-access repositories. With this simple action, they create an open-access alternative to the formal scholarly-communication system, which mostly consists of pay-walled journals. The HEP scholarly-communication market gives us an opportunity to observe the impact of 100% Green Open Access. Following the scientists' example, let us take a moment, observe this twenty-year-long large-scale experiment, and let the data speak.
When publishers digitized scholarly journals in the 1990s, they offered site licenses as an add-on to paper-journal subscriptions. Within a few years, paper-journal subscriptions all but disappeared. At first, publishers continued the super-inflationary price trajectory of subscriptions. Then, they steepened the price curve with assorted technology fees and access charges for digitized back-files of old issues. The growing journal-pricing crisis motivated many university administrators to support the Open Access movement. While the latter is about access, not about the cost of publishing, it is impossible to separate the two issues.
In 1997, the International School for Advanced Studies (SISSA) launched the Journal of High Energy Physics (JHEP) as an open-access journal. JHEP was an initial step towards a larger goal, now referred to as Gold Open Access: replacing the current scholarly-communication system with a barrier-free system of journals without pay walls. The JHEP team implemented a highly efficient system to process submitted papers, thereby reducing the journal's operating costs to the bare minimum. The remaining expenses were covered by a handful of research organizations, which agreed to a cost-sharing formula for the benefit of their community. This institutional-funding model proved unsustainable, and JHEP converted to a site-licensed journal in 2003. This step back seems strange now, because JHEP could have copied the funding model of BioMed Central, which had launched in 2000 and funded open access by charging authors a per-article processing fee. Presumably, JHEP's leadership considered this author-pay model too experimental and too risky after their initial attempt at open access. In spite of its difficult start, JHEP was an academic success and subsequently prospered financially as a site-licensed journal produced by Springer under the auspices of SISSA.
Green Open Access delivers the immediate benefit of access. Proponents argue it will also, over time, fundamentally change the scholarly-communication market. The twenty-year HEP record lends support to the belief that Green Open Access has a moderating influence: HEP journals are priced at more reasonable levels than other disciplines. However, the HEP record thus far does not support the notion that Green Open Access creates significant change:
- Only one event occurred that could have been considered disruptive: JHEP capturing almost 20% of the HEP market as an open-access journal. Instead, this event turned into a case of reverse disruption!
- There was no change in the business model. All leading HEP publishers of 2012 still use pre-1991 business channels. They still sell to the same clients (acquisition departments of academic libraries) through the same intermediaries (journal aggregators). They sell a different product (site licenses instead of subscriptions), and the transactions differ, but the business model survives unchanged.
- No journals with significant HEP market share disappeared. Even with arXiv as an open-access alternative, canceling an established HEP journal is politically toxic at any university with a significant HEP department. This creates a scholarly-communication market that is highly resistant to change.
- Journal prices continued on a trajectory virtually unaffected by turbulent economic times.
In an attempt to re-engineer the market, influential HEP organizations launched the Sponsoring Consortium for Open Access Publishing in Particle Physics (SCOAP³). It is negotiating with publishers to convert established HEP journals to Gold Open Access. To pay for this, hundreds of research institutions worldwide must pool the funds they currently spend on HEP site licenses. Negotiated article processing charges will, in aggregate, preserve the revenue stream from academia to publishers.
If SCOAP³ proves sustainable, it will become the de facto sponsor and manager of all HEP publishing worldwide. It will create a barrier-free open-access system of refereed articles produced by professional publishers. This is an improvement over arXiv, which contains mostly author-formatted material.
Many have praised the initiative. Others have denounced it. Those who observe with scientific detachment merely note that, after twenty years of 100% Green Open Access, the HEP establishment really wants Gold Open Access.
The HEP open-access experiment continues.
Tuesday, July 17, 2012
The Isentropic Disruption
The free dissemination of research is intrinsically good. For this reason alone, we must support open-access initiatives in general and Green Open Access in particular. One open repository does not change the dysfunctional scholarly-information market, but every new repository immediately expands open access and contributes to a worldwide network that may eventually create the change we are after.
Some hope that Green Open Access together with other incremental steps will lead to a “careful, thoughtful transition of revenue from toll to open access”. Others think that eminent leaders can get together and engineer a transition to a pre-defined new state. It is understandable to favor a gradual, careful, thoughtful, and smooth transition to a well-defined new equilibrium along an expertly planned path. In thermodynamics, a process that takes a system from one equilibrium state to another through infinitesimal, reversible steps, without creating disorder (entropy), is called isentropic. (Note: Go elsewhere to learn thermodynamics.) Unfortunately, experience since the dawn of the industrial age has taught us that there is nothing isentropic about a disruption. There is no pre-defined destination. Leaders and experts usually have it wrong. The path is a random walk. The transition, if it happens, is sudden.
No matter what we do, the scholarly-information market will disrupt. The web has disrupted virtually every publisher and information intermediary. Idiosyncrasies of the scholarly-information market may have delayed the disruption of academic publishers and libraries, but the disruptive triggers are piling up. Will Green Open Access be a disruptive trigger when some critical mass is reached? Will it be a start-up venture based on a bright idea that catches on? Will it be a boycott to end all boycotts? Will it be some legislation somewhere? Will it be one or more major university systems opting out and causing an avalanche? Will it be the higher-education bubble bursting?
No matter what we do, disruption is disorderly and painful. Publishers must change their business model and transition from a high-margin to a low-margin environment. Important journals will be lost. This will disrupt some scholarly disciplines more severely than others. An open-access world without site licenses will disrupt academic libraries, whose budget is dominated by site-license acquisition and maintenance. Change of this depth and breadth is messy, disorderly, turbulent, and chaotic.
Disruption of the scholarly-information market is unavoidable. Disruption is disorderly and painful. We do not know what the end point will be. It is impossible to engineer the perfect transition. We do not have to like it, but ignoring the inevitable does not help. We have to come to terms with it, grudgingly accept it, and eventually embrace it by realizing that all of us have benefitted tremendously from technology-driven disruption in every other sector of the economy. Lack of disruption is a weakness. It is a sign that market conditions discourage experiments and innovation. We need to lower the barriers to entry for innovators and give them an opportunity to compete. Fortunately, universities have the power to do this without negotiation, litigation, or legislation.
If 10% of a university community wants one journal, 10% wants a competing journal, and 5% wants both, the library is effectively forced to buy both site licenses for 100% of the community. Site licenses reduce competition between journals and force universities to buy more than they need. The problem is exacerbated further by bundling and consortium “deals”. It is inordinately expensive in staff time to negotiate complex site-license contracts. Once acquired, disseminating the content according to contractual terms requires expensive infrastructure and ongoing maintenance. This administrative burden, pointlessly replicated at thousands of universities, adds no value. It made sense to buy long-lived paper-based information collectively. Leasing digital information for a few years at a time is sensible only inside the mental prison of the paper model.
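To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 10%/10%/5% demand split comes from the example above (read here as 10% wanting only one journal, 10% only the other, 5% both); the campus population is a made-up figure for illustration.

    # Hypothetical illustration of the site-license over-buying described above.
    community = 10_000                        # made-up campus population
    only_a, only_b, both = 0.10, 0.10, 0.05   # demand split from the example

    readers_a = (only_a + both) * community   # people who actually want journal A
    readers_b = (only_b + both) * community   # people who actually want journal B
    seats_needed = readers_a + readers_b      # 3,000 journal-seats in real demand

    # Each site license covers the entire community, wanted or not.
    seats_bought = 2 * community              # 20,000 journal-seats paid for

    print(f"Share of purchased coverage that is actually wanted: {seats_needed / seats_bought:.0%}")
    # -> 15%

Under these made-up numbers, 85% of the coverage the university pays for goes to people who never asked for either journal.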
Everyone with an iTunes library is familiar with the concept of a personal digital library. Pay-walled content should be managed by individuals who assess their own needs and make their own personal price-value assessments. After carefully weighing the options, they might still buy something just because it seems like a good idea. Eliminating the rigid acquisition policies of libraries invigorates the market, lowers the barriers to entry for innovators, incentivizes experiments, and increases price pressure on all providers. This improves the market for pay-walled content immediately, and it may help increase the demand for open access.
I would implement a transition to subsidized personal digital libraries in three steps, as sketched below. Start with a small step to introduce the university community to personal digital libraries: cancel enough site licenses to transfer 10% of the site-license budget to an individual-subscription fund. After one year, cancel half of the remaining site licenses. After two years, transfer the entire site-license budget to the individual-subscription fund. From then on, individuals are responsible for buying their own pay-walled content, subsidized by the individual-subscription fund.
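For concreteness, here is a minimal sketch of that schedule in Python. The starting budget is a made-up figure; only the 10% / half / everything sequence comes from the plan above.

    # Hypothetical three-step shift from site licenses to an individual-subscription fund.
    budget = 1_000_000.0   # made-up annual site-license budget
    fund = 0.0             # individual-subscription fund

    for year, fraction in enumerate((0.10, 0.50, 1.00)):
        shifted = fraction * budget   # cancel site licenses worth this much
        budget -= shifted
        fund += shifted
        print(f"year {year}: site licenses {budget:>9,.0f} | individual fund {fund:>9,.0f}")

    # year 0: site licenses   900,000 | individual fund   100,000
    # year 1: site licenses   450,000 | individual fund   550,000
    # year 2: site licenses         0 | individual fund 1,000,000

Whatever the exact amounts, the design choice is the same as in the plan above: the individual-subscription fund grows in stages, giving the community time to get used to managing personal digital libraries before the site-license budget disappears entirely.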
Being the middleman in digital-lending transactions is a losing proposition for libraries. It is a service that contradicts their mission. Libraries disseminate information; they do not protect it on behalf of publishers. Libraries buy information and set it free; they do not rent information and limit its availability to a chosen few. Libraries align themselves with the interests of their users, not with those of the publishers. Because of site licenses, academic libraries have lost their identity. They can regain it by focusing 100% on archiving and open access.
Librarians need to ponder the future and identity of academic libraries. For a university leadership under budgetary strain, the question is less profound and more immediate. Right now, what is the most cost-effective way to deliver pay-walled content to students and faculty?
Thursday, June 14, 2012
The End of Stuff
Ever since the industrial revolution, the world economy has grown by producing more, better, and cheaper goods and services. Because we produce more efficiently, we spend fewer resources on need-to-haves and are able to buy more nice-to-haves. The current recession, or depression, interrupted the increase in material prosperity for many, but the long-term trend of increasing efficiency continued and, perhaps, accelerated.
The major driver of efficiency in the industrial and service economy was information technology. In the last fifty years, we streamlined production, warehouses, transportation, logistics, retailing, marketing, accounting, and management. Travel agents were replaced by web sites. Telephone operators all but disappeared. Even financial management, tax preparation, and legal advice were partially automated. Lately, this efficiency evolution has shifted into hyperdrive with a new phenomenon: information technology replacing physical goods. Instead of producing goods more efficiently, we are not producing them at all, replacing them instead with lines of code.
It started with music, where bit streams replaced CDs. Photography, video, and books followed. Smartphone apps have replaced or may replace alarm clocks, watches, timers, cameras, voice recorders, road maps, agendas, planners, handheld game devices, etc. Before long, apps will replace keys to our houses and cars. They will replace ID cards, driver licenses, credit cards, and membership cards. As our smartphones replace everything in our wallet and the wallet itself, they will also replace ATMs. Tablet computers are replacing the briefcase and its contents. Soon, Google Glass may improve upon phone and tablet functionality and replace both. If not Google Glass, another product will. Desk phones and the analog phone network are on their unavoidable decline into oblivion.
The paperless office has been imminent since the seventies, always just out of reach. But technology and people's attitudes have now converged to a point where the paperless office is practical and feasible, even desirable. We may never eliminate print entirely, but the number of printers will eventually start declining. As printers go, so will copiers. Electronic receipts will, eventually, kill the small thermal printers deployed in stores and restaurants everywhere. Inexplicably, faxes still exist, but their days are numbered.
New generations of managers will be more comfortable with the distributed office and telecommuting. Video conferencing is steadily growing. Distance teaching is poised to explode with Massive Open Online Courses. All of these trends will reduce our need for transportation, particularly mass transportation used for daily commuting, and for offices and classrooms.
Self-driving cars will hit the market within a few years. Initially, self-driving will be a nice-to-have add-on option to a traditional car. The far more interesting prospect is the development of a new form of mass transit. Order a car from your smartphone, and it shows up wherever and whenever you need it. Suddenly, car sharing is easy. It may even be more convenient than a personal car: never look for (and pay for) a parking space again.
When this technology kicks in, it will reduce our need for personal cars. Imagine the multiplier effect of two- and three-car households reducing their number of cars by one: fewer car dealerships, car mechanics, gas stations, parking garages, etc. With fewer accidents, we need fewer body shops. Self-driving cars do not need traffic signs, perhaps not even traffic lights.
Brick-and-mortar stores already find it difficult to compete with online retailers. How will they fare when door-to-door mail and package delivery is fully automated without a driver? (The thought of self-driving trucks barreling down the highway scares me, but they may turn out to be the safer alternative.) With fewer stores and malls, how will the construction industry and building-maintenance services sector fare?
Cloud computing makes it easy and convenient to share computers. Xbox consoles will not be replaced by another must-have box, but by multiplayer games that run in the cloud. When companies move their enterprise systems to the cloud, they immediately reduce the number of servers through sharing. Over time, cloud computing will drastically reduce the number of company-owned desktop, notebook, and tablet computers. Instead, employees will use their personal access devices to access corporate information stored and protected in the cloud.
Perhaps a new class of physical products that will change the manufacturing equation is about to be discovered. Perhaps we will hang on to obsolete technology like faxes longer than expected. But right now, the overall trend seems inescapable: we are getting rid of a lot of products, and we are disintermediating a lot of services.
For the skeptical, it is easy to dismiss these examples as mere speculative anecdotes that will not amount to anything substantial. Yet, these new technologies are not pie-in-the-sky. They already exist now and will be operational soon. Moreover, the affected industries represent large segments of the economy and have a significant multiplier effect on the rest of the economy.
From an environmental point of view, this is all good news. Economically, we may become poorer in a material sense, yet improve our standard of living. Disruption like this always produces collateral damage. To reduce the severity of the transition problems, our best course of action may be to help others. Developing nations desperately need to grow their material wealth. They need more goods and services. Investing in these nations now and expanding their prosperity could be our best strategy to survive the transition.