
Monday, June 30, 2014

Disruption Disrupted?

The professor who books his flights online, reserves lodging with Airbnb, and arranges airport transportation with Uber understands the disruption of the travel industry. He actively supports that disruption every time he attends a conference. When MOOCs threaten his job, and The Economist covers reinventing the university under the title “Creative Destruction”, that same professor may have second thoughts. With or without disruption, academia is surely in a period of immense change: the pressure to reduce costs and tuition, the looming growth of MOOCs, the turmoil in scholarly communication (subscription prices, open access, peer review, alternative metrics), the increased competition for funding, and so on.

The term disruptive innovation was coined and popularized by Harvard Business School professor Clayton Christensen, author of The Innovator's Dilemma. [The Innovator's Dilemma, Clayton Christensen, Harvard Business Review Press, 1997] Christensen created a compelling framework for understanding the process of innovation and disruption. Along the way, he earned many accolades in academia and business. In recent years, a cooling of the academic admiration became increasingly noticeable. A snide remark here. A dismissive tweet there. Then, The New Yorker launched a major attack on the theory of disruption. [The Disruption Machine, Jill Lepore, The New Yorker, June 23rd, 2014] In this article, Harvard historian Jill Lepore questions Christensen's research by attacking the underlying facts. Were Christensen's disruptive startups really startups? Did the established companies really lose the war, or just one battle? At the very least, Lepore implies that Christensen misled his readers.

As of this writing, Christensen has only responded in a brief interview. [Clayton Christensen Responds to New Yorker Takedown of 'Disruptive Innovation', Bloomberg Businessweek, June 20th, 2014] It is clear he is preparing a detailed written response.

Lepore's critique appears at the moment when disruption may be at academia's door, seventeen years after The Innovator's Dilemma was published and with much of the underlying research almost twenty years old. Perhaps the article is merely a symptom of academics growing nervous. Yet, it would be wrong to dismiss Lepore's (or anyone else's) criticism based on any perceived motivation. Facts can be and should be examined.

In 1997, I was a technology manager tasked with dragging a paper-based library into the digital era. When reading (and re-reading) the book, I did not question the facts. When Christensen stated that upstart X disrupted established company Y, I accepted it. I assume most readers did. The book was based on years of research, all published in some of the most prestigious peer-reviewed journals. It is reasonable to assume that the underlying facts were scrutinized by several independent experts. Truth be told, I did not care much that his claims were backed by years of research. Christensen gave power to the simple idea that sticking with established technology can carry an enormous opportunity cost.

Established technology has had years, perhaps decades, to mitigate its weaknesses. It has a constituency of users, service providers, sales channels, and providers of derivative services. This constituency is a force that defends the status quo in order to maintain established levels of quality, profit margins, and jobs. The innovators do not compete on a level playing field. Their product may improve upon the old in one or two aspects, but it has not yet had the opportunity to mitigate its weaknesses. When faced with such innovations, all organizations tend to stick with what they know for as long as possible.

Christensen showed the destructive power of this mindset. While waiting until the new is good enough or better, organizations lose control of the transition process. While pleasing their current customers, they lose future customers. By not being ahead of the curve, by ignoring innovation, by not restructuring their organizations ahead of time, leaders may put their organizations at risk. Christensen told compelling disruption stories from many different industries. This allowed readers to observe their own industry with greater detachment. It gave readers the confidence to push for early adoption of inevitable innovation.

I am not about to take sides in the Lepore-Christensen debate. Neither needs my help. As an observer interested in scholarly communication, I cannot help noting that Lepore, a distinguished scholar, launched her critique from a distinctly non-scholarly channel. The New Yorker may cater to the upper crust of intellectuals (and wannabes), but it remains a magazine with journalistic editorial-review processes, quite distinct from scholarly peer-review processes.

Remarkably, the same happened only a few weeks ago, when the Financial Times attempted to take down Piketty's book. [Capital in the Twenty-First Century, Thomas Piketty, Belknap Press, 2014] [Piketty findings undercut by errors, Chris Giles, Financial Times, May 23rd, 2014] Piketty had a distinct advantage over Christensen: the Financial Times critique appeared only a few weeks after his book came out. Moreover, he had made all of his data public, including all technical adjustments required to make data from different sources compatible. As a result, Piketty was able to respond quickly, and the controversy quickly dissipated. Christensen has the unenviable task of defending twenty-year-old research. For his sake, I hope he was better at archiving data than I was in the 1990s.

What does it say about the status of scholarly journals when scholars use magazines to launch scholarly critiques? Was Lepore's article not sufficiently substantive for a peer-reviewed journal? Are scholarly journals incapable of or unwilling to handle academic controversy involving one of their eminent leaders? Is the mainstream press just better at it? Would a business journal even allow a historian to critique business research in its pages? If so, is peer review less about maintaining standards and more about protecting an academic tribe? Is the mainstream press just a vehicle for some scholars to bypass peer review and academic standards? What would it say about peer review if Lepore's arguments were to prevail?

This detached observer pours a drink and enjoys the show.


PS (7/15/2014): Reposted with permission at The Impact Blog of The London School of Economics and Political Science.

Monday, March 17, 2014

Textbook Economics

“The impact of royalties on a book's price, and on its sales, is greater than you think. Lower royalties often end up better for the author.” That was the publisher's pitch when I asked him about the details of the proposed publishing contract. Then, he explained how he prices textbooks.

It was the early 1990s. I had been teaching a course on Concurrent Scientific Computing, a hot topic then, and several publishers had approached me about writing a textbook. This was an opportunity to structure a pile of course notes. Eventually, I would sign on with a different publisher, a choice that had nothing to do with royalties or book prices. [Concurrent Scientific Computing, Van de Velde E., Springer-Verlag New York, Inc., New York, NY, 1994.]

He explained that a royalty of 10% increases the price by more than 10%. To be mathematical about it: With a royalty rate r, a target revenue per book C, and a retail price P, we have that C = P-rP (retail price minus royalties). Therefore, P = C/(1-r). With a target revenue per book of $100, royalties of 10%, 15%, and 20% lead to retail prices of $111.11, $117.65, and $125.00, respectively.
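To make the arithmetic concrete, here is a minimal sketch in Python (the function and variable names are mine, not the publisher's):

    def retail_price(target_revenue, royalty_rate):
        """Solve C = P - r*P for the retail price P, i.e. P = C / (1 - r)."""
        return target_revenue / (1 - royalty_rate)

    for r in (0.10, 0.15, 0.20):
        print(f"royalty {r:.0%}: retail price ${retail_price(100, r):.2f}")
    # royalty 10%: retail price $111.11
    # royalty 15%: retail price $117.65
    # royalty 20%: retail price $125.00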

In a moment of candor, he also revealed something far more interesting: how he sets the target revenue C. Say the first printing of 5000 copies requires an up-front investment of $100,000. (All numbers are for illustrative purposes only.) This includes the cost of editing, copy-editing, formatting, cover design, printing, binding, and administrative overhead. Estimating library sales at 1000 copies, this publisher would set C at $100,000/1,000 = $100. In other words, he recovered his up-front investment from libraries. Retail sales were pure profit.
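Continuing the sketch, the publisher's rule of thumb for setting C, using the illustrative numbers above:

    upfront_investment = 100_000       # editing, printing, binding, overhead
    estimated_library_sales = 1_000    # the captive, predictable market
    C = upfront_investment / estimated_library_sales
    print(f"target revenue per book: ${C:.2f}")  # $100.00
    # The up-front investment is recovered from libraries alone;
    # every retail sale after that is profit.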

The details are, no doubt, more complicated. Yet, even without relying on a recollection of an old conversation, it is safe to assume that publishers use the captive library market to reduce their business risk. In spite of recurrent budget crises, library budgets remain fairly predictable, both in size and in how the money is spent. Any major publisher has reliable advance estimates of library sales for any given book, particularly if published as part of a well-known series. It is just good business to exploit that predictability.

The market should be vastly different now, but textbooks have remained stuck in the paper era longer than other publications. Moreover, the first stage of the move towards digital, predictably, consists of replicating the paper world. This is what all constituents want: Librarians want to keep lending books. Researchers and students like getting free access to quality books. Textbook publishers do not want to lose the risk-reducing revenue stream from libraries. As a result, everyone implements the status quo in digital form. Publishers produce digital books and rent their collections to libraries through site licenses. Libraries intermediate electronic-lending transactions. Users get the paper experience in digital form. Universities pay for site licenses and the maintenance of the digital-lending platforms.

After the disaster of site licenses for scholarly journals, repeating the same mistake with books seems silly. Once again, take-it-or-leave-it bundles force institutions into a false choice between buying too much for everyone or nothing at all. Once again, site licenses eliminate the unlimited flexibility of digital information. Forget about putting together a personal collection tailored to your own requirements. Forget about pricing per series, per book, per chapter, unlimited in time, one-day access, one-hour access, readable on any device, or tied to a particular device. All of these options are eliminated to maintain the business models and the intermediaries of the paper era.

Just by buying/renting books as soon as they are published, libraries indirectly pay for a significant fraction of the initial investment of producing textbooks. If libraries made that initial investment explicitly and directly, they could produce those same books and set them free. Instead of renting digital books (and their multimedia successors), libraries could fund authors to write books and contract with publishers to publish those manuscripts as open-access works. Authors would be compensated. Publishers would compete for library funds as service providers. Publishers would be free to pursue the conventional pay-for-access publishing model, just not with library dollars. Prospective authors would have a choice: compete for library funding to produce an open-access work or compete for a publishing contract to produce a pay-for-access work.

The Carnegie model of libraries fused together two distinct objectives: subsidize information and disseminate information by distributing books to many different locations. In web-connected communities, spending precious resources on dissemination is a waste. Inserting libraries in digital-lending transactions only makes those transactions more inconvenient. Moreover, it requires expensive-to-develop-and-maintain technology. By reallocating these resources towards subsidizing information, libraries could set information free without spending part of their budget on reducing publishers' business risk. The fundamental budget questions that remain are: Which information should be subsidized? What is the most effective way to subsidize information?

Libraries need not stop site licensing books tomorrow. In fact, they should take a gradual approach: test the concept, make mistakes, and learn from them. A library does not become a grant sponsor and/or publisher overnight. Several models are already available, from grant competitions to crowd-funded ungluing. [Unglue.it for Libraries] By phasing out site licenses, any library can create budgetary space for sponsoring open-access works.

Libraries have a digital future with almost unlimited opportunities. Yet, they will miss out if they just rebuild themselves as a digital copy of the paper era.

Monday, December 2, 2013

Amazon Floods the Information Commons

Amazon is bringing cloud computing to the masses. Any individual with access to a browser now has access to almost unlimited computing power and storage. This may be the moment that marks the official beginning of the end of the desktop computer, which was already on a downward slide because of the rise of notebooks, netbooks, tablets, and smartphones.

For managers of computer labs, this technology eliminates a slew of nitty-gritty management problems without good solutions. When a shared computer is idle, do you take action after 5, 10, or 15 minutes? If you wait too long, you annoy users who are waiting for their turn, and you invite unauthorized users to sneak into someone else's session. If you act too soon, you ruin the experience for the current user. Should you immediately log off an idle user, or should you lock the screen for a while before logging off? Again, you balance the interests of the current user against those of the next user. Which software do you install where? Installing all software on every computer is usually too expensive. But if each computer in the lab has its own configuration, how do you communicate those differences to the users? The ultimate challenge of the shared computer is how to let students install software that they themselves are developing while keeping the computer relatively secure, usable to others, and free from pirated software.
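For the idle-session dilemma specifically, the usual compromise is a two-stage policy: lock first, log off later. A minimal sketch in Python, with arbitrary thresholds (the post poses the trade-off without prescribing values):

    from datetime import timedelta

    LOCK_AFTER = timedelta(minutes=10)    # protect the current user's session
    LOGOFF_AFTER = timedelta(minutes=15)  # free the seat for the next user

    def idle_action(idle_time):
        if idle_time >= LOGOFF_AFTER:
            return "log off"
        if idle_time >= LOCK_AFTER:
            return "lock screen"
        return "do nothing"

    print(idle_action(timedelta(minutes=12)))  # -> lock screen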

Amazon has solved all of this and more. With cloud-based computers, there is no such thing as an idle computer, only idle screens. Shutting down a screen and turning it over to another user does not ruin a session in progress. It is more like turning over a printer. The cloud-based personal computer is configured for one user according to his or her requirements. Students and faculty can install whatever software they need, including their own research software. As to the usual suite of standard applications, cloud services like Adobe Creative Cloud, Google Apps, and Windows Azure have eliminated software installation and maintenance entirely.

The potential of cloud computing in the Information Commons is more than substituting one technology with another. Students and faculty suddenly have their own custom computing laboratory with an unlimited number of computers over which they have complete control. One can imagine projects in which cloud-based computers harvest measurements from sensors across the globe (weather-related, for example), read and analyze the news, and data mine social networks. All of this data can then be fed to high-performance servers running research software for analysis and visualization.

Currently, retail pricing for a cloud-based personal computer starts at $35 per month. This is already a very good price point, considering that it eliminates the hardware replacement cycle, software maintenance, security issues, etc. One can also add and drop computers as needed. Moreover, this is a price point established before competitors have even entered the market. 

When computing and storage become relatively inexpensive on-demand commodity services, computing labs are no longer in the business of sharing computing devices, storage, and software; they are in the business of sharing visualization devices. Currently, Information Commons provide large-screen high-resolution monitors attached to a computer. As large-scale, high-performance, big-data projects grow in popularity across many disciplines, there will be increasing demand for more advanced equipment to visualize and render the results. Today's computing labs will morph into advanced visualization labs. They will provide the capacity to use multiple large high-resolution screens. They may provide access to CAVEs (CAVE Automatic Virtual Environment) and/or additive-manufacturing equipment (which includes 3-D printing). The support requirements for such equipment are radically different from those for current computer labs. CAVEs need large rooms with no windows, multiple projectors, and a sound system. Additive manufacturing may be loud and may require specialized venting systems.

For managers of Information Commons, it is not too early to start planning for this transition. They may look forward to getting rid of the nitty-gritty unsolvable problems mentioned above, but integrating these technologies into the real estate currently used for computing labs and libraries will require all of the organizational and management skills they can muster.

Tuesday, November 5, 2013

Cartoon Physics

When Wile E. Coyote runs off a cliff, he starts falling only after he realizes the precariousness of his situation.

In real life, cartoon physics is decidedly less funny. Market bubbles arise when a trend continues far past the point where the fundamentals make sense. The bubble bursts when the collective wisdom of the market acts on a reality that should have been obvious much earlier. Because of this unnecessary delay, bubbles inflict much unnecessary damage. We saw it recently with the Internet and mortgage bubbles, but the phenomenon is as old as the tulip bubble of 1637.

We also see cartoon physics in action at less epic scales. Cartoon physics applies to almost any disruptive technology. The established players almost never adapt to the new reality when the fundamentals require it or when it is logical to do so. Instead of preparing for a viable future, they fight a losing battle to hang onto the past. Most recently, BlackBerry ignored the iPhone, thinking its serious corporate clients would not be lured by Apple's gadgetry. There is a long line of disrupted industries whose leadership ignored upstart competitors and new realities. This has been the topic of acclaimed academic studies and has been popularized in every possible venue.

The blame game is a significant part of the process. The recording industry blamed pirates for destroying the music business. In fact, its own failure to adapt to the digital age contributed at least as much to the disruption.

The scenario is well known, by now too cliché to be a good movie. Leaders of industries in upheaval should know the playbook. Yet, they keep repeating the mistakes of their disrupted predecessors.

Wile E. Coyote finally learned his lesson and decided to stop looking down.

PS: Cartoon physics does not apply to academic institutions, which are protected by their importance and seriousness.

Wednesday, June 19, 2013

Chudnov's Mission

Library mission statements are pablum intended to placate everyone and offend no one. It could be different, as I recently found out thanks to a tweet and a blog post from Lorcan Dempsey, which led me to the personal mission statement of Dan Chudnov:

“Help people build their own libraries.”

How refreshing!

Chudnov blogged this in 2006, the year in which Time Magazine's Person of the Year was “You.” YouTube had just exploded into our consciousness. Social networking was hot. This was the end of broadcasting and the beginning of narrowcasting. Time Magazine realized then that new web technologies would center on the individual and his or her personal needs and wants. The world embraced this idea.

Libraries could have aligned with this fundamental shift. But seven years later, libraries remain rooted in the concept of providing services for the average user of a particular community. Chudnov's mission is a radical departure from this model and an ambitious goal. Give to the masses what not so long ago was a rare commodity available only to the most privileged: a personal library that archives all the information one has created, has consumed, is consuming, and intends to consume.

By 2013, parts of this vision had been realized. Unfortunately, libraries were largely on the sidelines. A slew of commercial enterprises provide aspects of personal digital libraries, either free of charge or at relatively low cost. Google is organizing the world's information, but its personalized services put the individual front and center. Browsers keep track of the information we have consumed, and they let us bookmark the information we intend to consume. Netflix keeps track of our movies, the Kindle store of our books, iTunes of our music, and Gamefly of our games. We archive our writings, our observations, our pictures, and our videos in social networks, cloud-based storage, and blogs. Amazon, Facebook, Flickr, Google, Microsoft, Tumblr, Twitter, Yahoo, and many others would love to provide as many services as possible to each of us as part of their corporate strategies. The current situation is chaotic and messy. Yet, the last thing we should strive for is an orderly, easy, convenient information landscape dominated by a few commercial entities and governments. We should wish for more chaos and more providers competing with one another.

We take it increasingly for granted that we can experience our entertainment on our terms. We want to watch our movies and TV shows when the time is right for us, not when a network decides we should watch it. Unlike DVD rentals, streaming services never sell out no matter how many of our neighbors rent the same video. Yet, when it comes to academic libraries and professional information needs, researchers still accept that their individual requirements are subject to community compromises. Researchers whose information needs are much different from those of average library users are effectively relegated to second-class status.

How can a community-based library adapt? What is its role in an environment increasingly dominated by commercial enterprises? What are the specific steps it can take to help its users develop a personal library? What kind of help do its users need? Should the community library provide alternatives for commercial services? Or, should it merely supplement them? How do these new services fit with institutional traditions and commitments? Should the library help its users regain control of the information they ceded to for-profit companies in a Faustian bargain? If yes, what are the concrete steps that can accomplish this? Should the library help its users regain control of search engines dominated by commercial priorities? If yes, how?

Chudnov's mission statement leaves considerable freedom for interpretation. Like all good mission statements, it sets a direction. It provides a long-distance view. It crystallizes what is important in a time of information overload: focus on the real information needs of individuals. Libraries ignore this at their peril.

Tuesday, March 26, 2013

Open Access Politics

The Open Access (OA) movement is gaining some high-level political traction.

The White House Open Access memorandum enacts a national Green OA mandate: most US funding agencies are directed to set up OA repositories for the research they fund. This Green OA strategy contrasts with the Gold OA strategy proposed by the Finch report in the UK. The latter all but guarantees that established publishers will retain their revenue stream if they switch their business model from site licenses to article processing charges (APCs).

Of the two, the White House memorandum is likely to have the greater impact. As its consequences ripple through the system, the number and size of Green OA repositories is likely to grow substantially over the next few years. Large-scale validation of altmetrics and the development of new business models may lead to the emergence of new forms of scholarly communication. Green OA archivangelist Stevan Harnad hypothesizes a ten-step scenario of changes.

There are also reasons for concern. As this new phase of the OA movement unfolds on the national political stage, all sides will use their influence to re-shape the initial policies to further their respective agendas. The outcome of this political game is far from certain. Worse, the outcome may not be settled for years, as these kinds of policies are easily reversed without significant voter backlash.

At its core, OA is about an industry changing because of (not-so-)new technology and its accompanying shift in attitudes and values. In such cases, we expect established players to resist innovation by (ab)using politics and litigation. The entertainment industry lobbied and litigated against VCRs, DVRs, every Internet service ever launched, and now even antennas. In the dysfunctional scholarly-communication market, on the other hand, it is the innovators who resort to politics.

To understand why, suppose university libraries had been funded by user-paid memberships and/or service fees. In this scenario, libraries and publishers would have encountered the same paper-to-digital transition costs. When library prices skyrocketed, students and faculty would have created underground exchanges of scholarly information, cancelled their library memberships, and/or stopped using their services. The publishers' revenue streams would have collapsed. Only the most successful journals would have survived, and even they would have suffered. Publishing a paper would have become increasingly difficult for lack of journals, creating an opening for experiments in scholarly publishing. This bottom-up free-market transition would have been chaotic, painful, and forgotten by now.

We do not need to convert our libraries and research institutions into free-market enterprises. We do not need to abandon the fundamental principles on which these institutions are built. On the contrary, we must return to those principles and apply them in a new technological reality. Rebuilding the foundations of institutions is hard under the best of circumstances. When users are shielded from the external incentives/hardships of the free market, it is near impossible to disrupt, and continuity remains an option far beyond reason.

Green OA is an indirect approach to achieve fundamental change. It asks scholars to accept a little inconvenience for the sake of the larger principle. It asks them to deposit their papers into OA repositories and provide free access to publicly-funded research. It is hoped that this will gradually change the journal ecosystem and build pressure to innovate. It took dedicated developers, activists, advocates, and academic leaders over twenty years to promote this modest goal and create a movement that, finally, seems to have achieved critical mass. A growing number of universities have enacted OA mandates. These pioneers led the way, but only a government mandate can achieve the scale required to change the market. Enter politics.

Scholars, the creators and consumers in this market, should be able to dictate their terms. Yet, they are beholden to the establishment journals (and their publishers), which are the fountain of academic prestige. The SCOAP³ initiative for High Energy Physics journals shows that scholars are willing to go to unprecedented lengths to protect their journals.

Market-dominating scholarly publishers are paralyzed. They cannot abandon their only source of significant revenue (site licenses) on a hunch that another business model may work out better in the long term. In the meantime, they promote an impossible-to-defend hybrid Gold OA scheme, and they miss an opportunity to create value from author/reader networks (an opportunity recognized by upstart innovators). This business paralysis translates into a lobbying effort to protect the status quo for as long as feasible.

Academic libraries, which enthusiastically supported and developed Green OA, now enter this political arena in a weak position. The White House memorandum all but ignores them. Before complacency sets in, there is precious little time to argue a compelling case for independent institutional or individual repositories preserved in a long-term archive. After all, government-run repositories may disappear at any time for a variety of reasons.

The Gold OA approach of the Finch report is conceptually simpler. Neither scholars nor publishers are inconvenienced, let alone disrupted. It underwrites the survival of favored journals as Gold OA entities. It preempts real innovation. Without a mechanism in place to limit APCs, it's good to be a scholarly publisher in the UK. For now.

Tuesday, October 16, 2012

A Physics Experiment


Researchers in High Energy Physics (HEP) live for that moment when they can observe results, interpret data, and raise new questions. When it arrives, after a lifetime of planning, funding, and building an experiment, they set aside emotional attachment and let the data speak.

Since 1991, virtually all HEP research papers have been freely available through an online database. This repository, now known as arXiv, inspired the Green model of the Open Access movement: Scholars submit author-formatted versions of their refereed papers to open-access repositories. With this simple action, they create an open-access alternative to the formal scholarly-communication system, which mostly consists of pay-walled journals. The HEP scholarly-communication market gives us an opportunity to observe the impact of 100% Green Open Access. Following the scientists' example, let us take a moment, observe this twenty-year-long large-scale experiment, and let the data speak.

When publishers digitized scholarly journals in the 1990s, they offered site licenses as an add-on to paper-journal subscriptions. Within a few years, paper-journal subscriptions all but disappeared. At first, publishers continued the super-inflationary price trajectory of subscriptions. Then, they steepened the price curve with assorted technology fees and access charges for digitized back files of old issues. The growing journal-pricing crisis motivated many university administrators to support the Open Access movement. While the latter is about access, not about the cost of publishing, it is impossible to separate the two issues.

In 1997, the International School for Advanced Studies (SISSA) launched the Journal of High Energy Physics (JHEP) as an open-access journal. JHEP was an initial step towards a larger goal, now referred to as Gold Open Access: replacing the current scholarly-communication system with a barrier-free system of journals without pay walls. The JHEP team implemented a highly efficient system for processing submitted papers, thereby reducing the journal's operating costs to the bare minimum. The remaining expenses were covered by a handful of research organizations, which agreed to a cost-sharing formula for the benefit of their community. This institutional-funding model proved unsustainable, and JHEP converted to a site-licensed journal in 2003. This step back seems strange now, because JHEP could have copied the funding model of BioMed Central, which had launched in 2000 and funded open access by charging authors a per-article processing fee. Presumably, JHEP's leadership considered this author-pays model too experimental and too risky after their initial attempt at open access. In spite of its difficult start, JHEP was an academic success, and it subsequently prospered financially as a site-licensed journal produced by Springer under the auspices of SISSA.

Green Open Access delivers the immediate benefit of access. Proponents argue that it will also, over time, fundamentally change the scholarly-communication market. The twenty-year HEP record lends support to the belief that Green Open Access has a moderating influence: HEP journals are priced at more reasonable levels than journals in other disciplines. However, the HEP record thus far does not support the notion that Green Open Access creates significant change:
  • Only one event occurred that could have been considered disruptive: JHEP capturing almost 20% of the HEP market as an open-access journal. Yet, even this event turned into a case of reverse disruption!
  • There was no change in the business model. All leading HEP publishers of 2012 still use pre-1991 business channels. They still sell to the same clients (acquisition departments of academic libraries) through the same intermediaries (journal aggregators). They sell a different product (site licenses instead of subscriptions), and the transactions differ, but the business model survives unchanged.
  • No journals with significant HEP market share disappeared. Even with arXiv as an open-access alternative, canceling an established HEP journal is politically toxic at any university with a significant HEP department. This creates a scholarly-communication market that is highly resistant to change.
  • Journal prices continued on a trajectory virtually unaffected by turbulent economic times.
Yet, most participants and observers are convinced that the current market is not sustainable. They are aware of the disruptive triggers that are piling up. Scholarly publishers witnessed, at close proximity, the near-collapse of the non-scholarly publishing industry. All of these fears remain theoretical. Many disruptions could have happened. Some almost happened. Some should have happened. None did.

In an attempt to re-engineer the market, influential HEP organizations launched the Sponsoring Consortium for Open Access Publishing in Particle Physics (SCOAP³). It is negotiating with publishers the conversion of established HEP journals to Gold Open Access. To pay for this, hundreds of research institutions worldwide must pool the funds they currently spend on HEP site licenses. Negotiated article processing charges will, in aggregate, preserve the revenue stream from academia to publishers.

If SCOAP³ proves sustainable, it will become the de-facto sponsor and manager of all HEP publishing world-wide. It will create a barrier-free open-access system of refereed articles produced by professional publishers. This is an improvement over arXiv, which contains mostly author-formatted material.

Many have praised the initiative. Others have denounced it. Those who observe with scientific detachment merely note that, after twenty years of 100% Green Open Access, the HEP establishment really wants Gold Open Access.

The HEP open-access experiment continues.

Tuesday, September 4, 2012

Queer Education

When a son in his pre-teens acts effeminate, likes to wear dresses, or thinks of himself as a girl, most parents force the child to conform to society's preconceived norms. (There is more tolerance for girls acting boyish.) A New York Times Magazine article profiles some parents who question this orthodoxy. These parents give their children the freedom to be who they are. They take on the hard, at times socially awkward, task of protecting their children as much as possible against the social consequences of non-conforming. They postpone the big questions, “Is he gay?” or “Is he a transsexual?”, until the questions evolve into “Am I gay?” or “Am I a transsexual?”, or until they evolve into nothing.

Serious scholars will debate this topic, at length, in learned journals and at scholarly conferences. The debate will spill over into the popular press and online forums. After all is said and written, this will be the outcome: these brave parents are developing the model for how all parents and all teachers should educate all children.

The primary purpose of our current educational model is to serve society, not to serve the individual child. Listen to politicians when they talk about education. It is about creating a competitive labor force. It is about economic growth. These goals appeal to parents, who want their children to do well, be able to provide for themselves and their future families, and have a successful and satisfying career.

By putting society's goals front and center, parents, teachers, and government officials think of children as empty vessels, to be filled with the knowledge and skills of previous generations that society deems important. At every step, educators evaluate how well students have absorbed the information. They award certificates, diplomas, degrees, and other distinctions that serve as entry tickets to the labor force. These are worthwhile goals, and the classical educational model has dramatically improved our standard of living.

Yet, can't we give children a break? Stop the rush. Give them time and opportunities to explore who they are and what they like to do. Expose them to as many different experiences as possible. Use grades and other assessment techniques not to rank children, but to observe their individual strengths, weaknesses, and interests. Teachers should help parents observe their children as they are, not as they wish them to be. After all, few parents are able to be objective about their children. Even without intending to, they invest their own dreams and ambitions in their children, often squashing the child's own dreams and aspirations.

Let children tell us who they are, what they like, and what they are good at. They will tell us in their play and in their creative endeavors. Postpone the question “What would I like my child to be?” until it evolves into “What would I like to be?”

A child-centered approach to education does not fit the model of a teacher in front of a class of twenty or more students. The “sage on the stage” model completely ignores whether a particular child is ready for and/or interested in a particular subject at a particular time. It is moderately efficient to fill twenty vessels with the same information, and it is extremely effective at turning education into a chore that kills the creativity and natural curiosity of children.

Cultivating this creativity and curiosity should be the primary purpose of education from kindergarten through high school. Give children opportunities to work on a range of projects of their choice. Introduce increasingly challenging projects, and let them discover what particular knowledge or skills they need. Let them learn new knowledge and new skills when they need them, when they are most interested. In this model, the teacher observes, guides, and points children to helpful resources. The teacher becomes a “guide on the side”. (See Clayton Christensen's book, “Disrupting Class”.)

To make this concept work, we must build a comprehensive library of online courses. Advanced educational software will take on the role of “filling the vessels”. As guides on the side, the teacher's role is to make sure a child takes a particular course at the right time: when the child is primed by curiosity and by the innate drive to finish an interesting project. As educational software evolves and improves, it will adapt to each individual child's learning style.

Adaptive, on-demand, just-in-time education will become an enduring facet of the information- and technology-based economy, and not just for children. Our fast-changing society requires a culture of life-long learning. Such a culture is built by adults eager to continue learning, no matter at which stage they are in life. Everyone will need access to this kind of educational infrastructure.

To prepare our children for their future, let us start listening to them now.

Tuesday, July 17, 2012

The Isentropic Disruption


The free dissemination of research is intrinsically good. For this reason alone, we must support open-access initiatives in general and Green Open Access in particular. One open repository does not change the dysfunctional scholarly-information market, but every new repository immediately expands open access and contributes to a worldwide network that may eventually create the change we are after.

Some hope that Green Open Access together with other incremental steps will lead to a “careful, thoughtful transition of revenue from toll to open access”. Others think that eminent leaders can get together and engineer a transition to a pre-defined new state. It is understandable to favor a gradual, careful, thoughtful, and smooth transition to a well-defined new equilibrium along an expertly planned path. In thermodynamics, a process that takes a system from one equilibrium state to another through infinitesimal steps, maintaining equilibrium and constant entropy throughout, is called isentropic. (Note: Go elsewhere to learn thermodynamics.) Unfortunately, experience since the dawn of the industrial age has taught us that there is nothing isentropic about a disruption. There is no pre-defined destination. Leaders and experts usually have it wrong. The path is a random walk. The transition, if it happens, is sudden.

No matter what we do, the scholarly-information market will disrupt. The web has disrupted virtually every publisher and information intermediator. Idiosyncrasies of the scholarly-information market may have delayed the disruption of academic publishers and libraries, but the disruptive triggers are piling up. Will Green Open Access be a disruptive trigger when some critical mass is reached? Will it be a start-up venture based on a bright idea that catches on? Will it be a boycott to end all boycotts? Will it be some legislation somewhere? Will it be one or more major university systems opting out and causing an avalanche? Will it be the higher-education bubble bursting?

No matter what we do, disruption is disorderly and painful. Publishers must change their business model and transition from a high-margin to a low-margin environment. Important journals will be lost. This will disrupt some scholarly disciplines more severely than others. An open-access world without site licenses will disrupt academic libraries, whose budget is dominated by site-license acquisition and maintenance. Change of this depth and breadth is messy, disorderly, turbulent, and chaotic.

Disruption of the scholarly-information market is unavoidable. Disruption is disorderly and painful. We do not know what the end point will be. It is impossible to engineer the perfect transition. We do not have to like it, but ignoring the inevitable does not help. We have to come to terms with it, grudgingly accept it, and eventually embrace it by realizing that all of us have benefitted tremendously from technology-driven disruption in every other sector of the economy. Lack of disruption is a weakness. It is a sign that market conditions discourage experiments and innovation. We need to lower the barriers of entry for innovators and give them an opportunity to compete. Fortunately, universities have the power to do this without negotiation, litigation, or legislation.

If 10% of a university community wants one journal, 10% wants a competing journal, and 5% wants both, the library is effectively forced to buy both site licenses for 100% of the community. Site licenses reduce competition between journals and force universities to buy more than they need. The problem is exacerbated further by bundling and consortium “deals”. Negotiating complex site-license contracts is inordinately expensive in staff time. Once content is acquired, disseminating it according to contractual terms requires expensive infrastructure and ongoing maintenance. This administrative burden, pointlessly replicated at thousands of universities, adds no value. It made sense to buy long-lived paper-based information collectively. Leasing digital information for a few years at a time is sensible only inside the mental prison of the paper model.
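A back-of-the-envelope sketch of that arithmetic in Python (all numbers illustrative only):

    community = 10_000              # hypothetical university community
    want_a = int(0.10 * community)  # want journal A (the 5% who want both are counted in each group)
    want_b = int(0.10 * community)  # want journal B

    # Two site licenses cover every community member for both journals,
    # whether they want them or not.
    seats_paid = 2 * community
    seats_wanted = want_a + want_b  # an overlapping reader counts once per journal

    print(f"seats paid for: {seats_paid:,}")            # 20,000
    print(f"seats actually wanted: {seats_wanted:,}")   # 2,000
    print(f"utilization: {seats_wanted / seats_paid:.0%}")  # 10%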

Everyone with an iTunes library is familiar with the concept of a personal digital library. Pay-walled content should be managed by individuals who assess their own needs and make their own personal price-value assessments. After carefully weighing the options, they might still buy something just because it seems like a good idea. Eliminating the rigid acquisition policies of libraries invigorates the market, lowers the barriers of entry to innovators, incentivizes experiments, and increases price pressure on all providers. This improves the market for pay-walled content immediately, and it may help increase the demand for open access.

I would implement a transition to subsidized personal digital libraries in three steps. Start with a small step that introduces the university community to personal digital libraries: cancel enough site licenses to transfer 10% of the site-license budget to an individual-subscription fund. After one year, cancel half of the remaining site licenses. After two years, transfer the entire site-license budget to the individual-subscription fund. From then on, individuals are responsible for buying their own pay-walled content, subsidized by the individual-subscription fund.
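In code, that phase-out schedule looks like this (a minimal sketch; the budget figure is hypothetical):

    budget = 1_000_000  # hypothetical annual site-license budget
    site, fund = budget, 0.0

    def move(amount, site, fund):
        """Shift money from site licenses to the individual-subscription fund."""
        return site - amount, fund + amount

    site, fund = move(0.10 * budget, site, fund)  # step 1: transfer 10%
    print(f"year 1: site licenses ${site:,.0f}, fund ${fund:,.0f}")
    site, fund = move(site / 2, site, fund)       # step 2: cancel half of the rest
    print(f"year 2: site licenses ${site:,.0f}, fund ${fund:,.0f}")
    site, fund = move(site, site, fund)           # step 3: transfer everything left
    print(f"year 3: site licenses ${site:,.0f}, fund ${fund:,.0f}")
    # year 1: $900,000 / $100,000; year 2: $450,000 / $550,000; year 3: $0 / $1,000,000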

Being the middleman in digital-lending transactions is a losing proposition for libraries. It is a service that contradicts their mission. Libraries disseminate information; they do not protect it on behalf of publishers. Libraries buy information and set it free; they do not rent information and limit its availability to a chosen few. Libraries align themselves with the interests of their users, not with those of the publishers. Because of site licenses, academic libraries have lost their identity. They can regain it by focusing 100% on archiving and open access.

Librarians need to ponder the future and identity of academic libraries. For a university leadership under budgetary strain, the question is less profound and more immediate. Right now, what is the most cost-effective way to deliver pay-walled content to students and faculty?

Friday, June 29, 2012

On Becoming Unglued...

On June 20th, the e-book world changed: One innovation cut through the fog of the discussions on copyright, digital rights management (DRM), and various other real and perceived problems of digital books. It did not take a revolution, angry protests, lobbying of politicians, or changes in copyright law. All it took was a simple idea, and the talent and determination to implement it.

Gluejar is a company that pays authors for the digital rights to their books. When it acquires those rights, Gluejar produces the e-book and makes it available under a suitable open-access license. Gluejar calls this process the ungluing of the book.

Handing out money, while satisfying, is not much of a business model. So, Gluejar provides a platform for the necessary fundraising. When proposing to unglue a book, an author sets a price level for the digital rights, and the public is invited to pledge as little or as much as they see fit. If the price level is met, the pledged funds are collected from the sponsors, and the book is unglued.
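A minimal sketch of this all-or-nothing, threshold-pledge mechanism in Python (class and method names are mine, not Gluejar's):

    class UngluingCampaign:
        """All-or-nothing fundraising: collect only if the threshold is met."""

        def __init__(self, title, price_level):
            self.title = title
            self.price_level = price_level  # asking price for the digital rights
            self.pledges = []

        def pledge(self, amount):
            self.pledges.append(amount)

        def settle(self):
            """At the deadline: collect pledges and unglue, or collect nothing."""
            if sum(self.pledges) >= self.price_level:
                return "unglued"    # funds collected, e-book released open access
            return "not funded"     # pledges are never charged

    campaign = UngluingCampaign("Oral Literature in Africa", 7_500)
    campaign.pledge(50)
    campaign.pledge(7_450)
    print(campaign.settle())  # -> unglued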

Why would the public contribute? First and foremost, this is small-scale philanthropy: the sponsors pay an author to provide a public benefit. The ever-increasing term of copyright, now 70 years beyond the death of the author, has long been a sore point for many of us. Here is a perfectly valid free-market mechanism to release important works from their copyright shackles while still compensating authors fairly. Readers who devote a portion of their book-buying budget to ungluing build a lasting, free, public electronic library that can be enjoyed by everyone.

The first ungluing campaign, “Oral Literature in Africa” by Ruth H. Finnegan (Oxford University Press, 1970), raised the requisite $7,500 by its June 20th deadline. Among the 271 donors were many librarians. Interestingly, two libraries contributed as institutions: the University of Alberta Library and the University of Windsor Leddy Library. The number of participating institutions is small, but any early institutional recognition is an encouraging leading indicator.

I hope these pioneers will now form a friendly network of lobbyists for the idea that all libraries contribute a portion of their book budget to ungluing books. I propose a modest target: within one year, every library should set aside 1% of its book budget for ungluing. This is large enough to create a significant (distributed) fund, yet small enough not to have a negative impact on operations, even in these tough times. Encourage your library to try it out now by contributing to any of four open campaigns. Once they see it in action and participate, they'll be hooked.

Special recognition should go to Eric Hellman, the founder of Gluejar. I have known Eric for many years and worked with him when we were both on the NISO committee that produced the OpenURL standard. Eric has always been an innovator. With Gluejar, he is changing the world... one book at a time.

Friday, April 27, 2012

Annealing the Library: Follow up


Here are responses to some of the off-line reactions to the previous blog post.


-

“Annealing the Library” did not contain any statements about abandoning paper books (or journals). Each library needs to assess the value of paper for its community. This value assessment is different from one library to the next and from one collection to the next.

The main point of the post is that the end of paper acquisitions should NOT be the beginning of digital licenses. E-lending is not an adequate substitute for paper-based lending. E-lending is not a long-term investment. Libraries will not remain relevant institutions by being middlemen in digital-lending operations.

I neglected to concede the point that licensing digital content could be a temporary band-aid during the transition from paper to digital.

-

In the case of academic libraries, the band-aid of site licensing scholarly journals is long past its expiration date. It is time to phase out of the system.

If the University of California and California State University jointly announced a cancellation of all site licenses over the next three to five years, the impact would be felt immediately. The combination of the UC and Cal State systems is so big that publishers would need to take immediate and drastic actions. Some closed-access publishers would convert to open access. Others would start pricing their products appropriate for the individual-subscription market. Some publishers might not survive. Start-up companies would find a market primed to accept innovative models.

Unfortunately, most universities are too small to have this kind of immediate impact. This means that some coordinated action is necessary. This is not a boycott. There are no demands to be met. It is the creation of a new market for open-access information. It is entirely up to the publishers to decide how to respond. There is no need for negotiations. All it takes is the gradual cancellation of all site licenses at a critical mass of institutions.

-

Annealing the Library does not contradict an earlier blog post, in which I expressed three Open Access Doubts. (1) I expressed disappointment in the quality of existing Open Access repositories. The Annealing proposal pumps a lot of capital into Open Access, which should improve quality. (2) I doubted the long-term effectiveness of institutional repositories in bringing down the total cost of access to scholarly information. Over time, the Annealing proposal eliminates duplication between institutional repositories and the scholarly literature, and it invests heavily into Open Access. (3) I wondered whether open-access journals are sufficiently incentivized to maintain quality over the long term. This doubt remains. Predatory open-access journals without discernible quality standards are popping up right and left. This is an alarming trend to serious open-access innovators. We urgently need a mechanism to identify and eliminate underperforming open-access journals.

-

If libraries cut off subsidies to pay-walled information, some information will be out of reach. By phasing in the proposed changes gradually, temporary disruption of access to some resources will be minimal. After the new policies take full effect, they will create many new beneficiaries, open up many existing information resources, and create new open resources.


Tuesday, April 17, 2012

Annealing the Library


The path of least resistance and least trouble is a mental rut already made. It requires troublesome work to undertake the alteration of old beliefs.
John Dewey

What if a public library could fund a blogger of urban architecture to cover in detail all proceedings of the city planning department? What if it could fund a local historian to write an open-access history of the town? What if school libraries could fund teachers to develop open-access courseware? What if libraries could buy the digital rights of copyrighted works and set them free? What if the funds were available right now?

Unfortunately, by not making decisions, libraries everywhere merely continue to do what they have always done, but digitally. The switch from paper-based to digital lending is well under way. Most academic libraries have already converted to digital lending for virtually all scholarly journals. Scores of digital-lending services are expanding digital lending to books, music, movies, and other materials. These services let libraries pretend that they are running a digital library, and they can do so without disrupting existing business processes. Publishers and content distributors keep their piece of the library pie. The libraries' customers obtain legal, free access to quality content. The path of least resistance feels good and buries the cost of lost opportunity under blissful ignorance.

The value propositions of paper-based and digital lending are fundamentally different. A paper-based library builds permanent infrastructure: collections, buildings, and catalogs are assets that continue to pay dividends far into the future. In contrast, resources spent on digital lending are pure overhead. This includes staff time spent negotiating licenses; the development and maintenance of authentication systems, OpenURL resolvers, proxy servers, and web servers; and the software development needed to give a unified interface to the disparate systems of content distributors. (Some expenses are hidden in higher fees for the Integrated Library System.) These expenses do not build permanent infrastructure; they merely increase the cost of every transaction.

Do libraries add value to the process? If so, do they add value in excess of their overhead costs? In fact, library-mediated lending is more cumbersome and expensive than direct-to-consumer lending, because content distributors must incorporate library business processes into their lending systems. If the only real value of the library's meddling is to subsidize the transactions, why not give the money to users directly? These are tough questions that deserve answers.

Libraries cannot remain relevant institutions by being middlemen who serve no purpose. Libraries around the world are working on many exciting digital projects, including digitization projects and the development of open archives for all kinds of content. Check out this example. Unfortunately, projects like these will remain underfunded or unable to grow to scale as long as libraries remain preoccupied with digital lending.

Libraries need a different vision for their digital future, one that focuses on building digital infrastructure. We must preserve traditional library values, not traditional library institutions, processes, and services. The core of any vision must be long-term preservation of and universal open access to important information. Yet, we also recognize that some information is a commercial commodity, governed by economic markets. Libraries have never covered all information needs of everyone. Yet, independent libraries serving their respective communities and working together have established a great track record of filling global information needs. This decentralized model is worth preserving.

Some information, like most popular music and movies, is obviously commercial and should be governed by copyright, licenses, and prices established by the free market. Other information, like many government records, belongs either in the public domain or should be governed by an open license (Creative Commons, for example). Most information falls somewhere in between, with passionate advocates on both sides of the argument for every segment of the information market. Therefore, let us decentralize the issue and give every creator a real choice.

By gradually converting acquisition budgets into grant budgets, libraries could become open-access patrons. They could organize grant competitions for the production of open-access works. By sponsoring works and creators that further the goals of its community, each library contributes to a permanent open-access digital library for everyone. Publishers would have a role in the development of grant proposals that cover all stages of the production and marketing of the work. In addition to producing the open-access works, publishers could develop commercial added-value services. Finally, innovative markets like the one developed by Gluejar allow libraries (and others) to acquire the digital rights of commercial works and set them free.

The traditional commercial model will remain available, of course. Some authors may not find sponsors. Others may produce works of such potential commercial value that open access is not a realistic option. These authors are free to sell their work with any copyright restrictions deemed necessary. They are free to charge what the market will bear. However, they should not be able to double-dip. There is no need to subsidize closed-access works when open access is funded at the level proposed here. Libraries may refer customers to closed-access works, but they should not subsidize access. Over time, the cumulative effect of committing every library budget to open access would create a world-changing true public digital library.

Other writers have argued the case against library-mediated digital lending. No one is making the case in its favor. The path of least resistance does not need arguments. It just goes with the flow. Into oblivion.

Friday, September 23, 2011

Information Literacy, Libraries, and Schools

On September 14th, Los Angeles Times columnist Steve Lopez covered the closure and near-closure of libraries in elementary, middle, and high schools. In the best of times, school libraries play second fiddle to issues like improving the student-teacher ratio. In crisis times like today, these libraries do not stand a chance. A week later, he covered the parents' reaction.

The parents’ efforts to rescue these libraries are laudable, but lack vision and ambition. They are merely trying to retain a terrible status quo. A room of books is not the kind of library where primary literacy skills are learned. The school superintendent, John Deasy, has it basically right: primary literacy skills are learned in the classroom. Critical reading, identifying high-quality information, web-research techniques, and specific sources for particular subject matters are skills that can be learned only if they are incorporated in every class, every day.

At every level in our society, the response to this terrible economic crisis has been one of incremental retrenchment instead of visionary reinvention. The phrase “don’t let a crisis go to waste” may have a bad image, but it applies in this case. California is the birthplace of information technology, and its schools and their infrastructure should reflect this.

Around the same time as the first column, rumors started circulating that Amazon was planning an electronic library available by monthly subscription. This is a technology and a business model that could provide every student with a custom digital library. It might even save money by eliminating the management and warehousing of print books (including textbooks).

School districts should put out requests for proposals to supply every student with an e-book reader, tablet, or notebook computer that has access to a digital library of books and other resources. Big-name enterprises, such as Amazon, Apple, Barnes & Noble, and Google, would be eager to capture this young demographic. Some philanthropic organizations might be willing to pitch in by buying the rights to some books and putting them in the public domain. A slice of public library funds should be allocated to this digital library.

Traditional school libraries are inadequate. It is time to shelve twentieth-century infrastructure and fund the tools students need in the twenty-first century.