Showing posts with label cloud computing. Show all posts

Monday, March 13, 2017

Creative Destruction by Social Network

Academia.edu bills itself as a platform for scholars to share their research. As a start-up, it still provides mostly free services to attract more users. Last year, it tried to make some money by selling recommendations to scholarly papers, but the backlash from academics was swift and harsh. That plan was shelved immediately. [Scholars Criticize Academia.edu Proposal to Charge Authors for Recommendations]

All scholarly publishers sell recommendations, albeit artfully packaged in prestige and respectability. Academia.edu's direct approach seemed downright vulgar. If they plan a radically innovative replacement for journals, they will need a subtler approach. At least, they chose the perfect target for an attempt at creative destruction: Scholarly communication is the only type of publishing not disrupted by the web, it has sky-high profit margins, it is inefficient, and it is dominated by relatively few well-connected insiders.

If properly designed (and that is a big if), a scholarly network could reduce the cost of all aspects of scholarly communication, even without radical innovation. It could improve the delivery of services to scholars. It could increase (open) access to research. And it could do all of this while scholars retain control over their own output for as long as feasible and/or appropriate. A scholarly network could also increase the operational efficiency of participating universities, research labs, and funding agencies.

All components of such a system already exist in some form:

Personal archive. Academics are already giving away ownership of their published works to publishers. They should not repeat this historic mistake by giving social networks control over their unpublished writings, data, and scholarly correspondence. They should only participate in social networks that make it easy to pack up and leave. Switching or leaving networks should be as simple as downloading an automatically created personal archive of everything the user shared on the network. Upon death or incapacity, the personal archive and perhaps the account itself should transfer to an archival institution designated by the user.

Marketplace for research tools. Every discipline has its own best practices. Every research group has its preferred tools and information resources. All scholars have their idiosyncrasies. To accomplish this level of customization, a universal platform needs an app store, where scholars could obtain apps that provide reference libraries, digital lab notebooks, data analysis and management, data visualization, collaborative content creation, communication, etc.

Marketplace for professional services. Sometimes, others can do the work better, faster, and/or cheaper. Tasks that come to mind are reference services, editorial and publishing services, graphics, video production, prototyping, etc.

Marketplace for institutional services. All organizations manage some business processes that need to be streamlined. They can do this faster and cheaper by sharing their solutions. For example, universities might be interested in buying and/or exchanging applications that track PhD theses as they move through the approval process, that automatically deposit faculty works into their institutional repositories, that manage faculty-research review processes, that assist the preparation of grant applications, and that manage the oversight of awarded research grants. Funding agencies might be interested in services to accept and manage grant applications, to manage peer review, and to track post-award research progress.

Certificates. When a journal accepts a paper, it produces an unalterable version of record. This serves as an implied certificate from the publisher. When a university awards a degree, it certifies that the student has attended the university and has completed all degree requirements. Incidentally, it also certifies the faculty status of exam-committee members. Replacing implicit with explicit certificates would enable new services, such as CVs in which every paper, every academic position, and every degree is certified by the appropriate authority.
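The idea of explicit certificates can be sketched in a few lines. This is a hypothetical illustration, not a description of any existing service: the issuer here signs a claim with an HMAC for simplicity, whereas a real system would use public-key signatures so that anyone could verify a certificate without access to the issuer's secret.

```python
import hashlib
import hmac
import json

# Assumed secret held by the issuing authority (e.g., a university).
ISSUER_KEY = b"university-secret-key"

def issue_certificate(claim: dict) -> dict:
    """Attach an issuer signature to a claim (e.g., a degree award)."""
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_certificate(cert: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    payload = json.dumps(cert["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

cert = issue_certificate({"degree": "PhD", "holder": "A. Scholar", "year": 2016})
assert verify_certificate(cert)

# Tampering with any part of the claim invalidates the certificate.
cert["claim"]["degree"] = "MSc"
assert not verify_certificate(cert)
```

A CV assembled from such certificates could be verified claim by claim, with each paper, position, and degree vouched for by the appropriate authority.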

A scholarly network like this is a specialized business-application exchange, a concept pioneered by the AppExchange of Salesforce.com. Every day, thousands of organizations replace internal business processes with more efficient applications. Over time, this creates a gradual cumulative effect: Business units shrink to their essential core. They disappear or merge with other units. Corporate structures change. Whether or not we are prepared for the consequences of these profound changes, these technology-enabled efficiencies advance unrelentingly across all industries.

These trends will, eventually, affect everyone. While its journals touted the benefits of creative destruction, the scholarly-communication system successfully protected itself. Like PDF, the current system is a digital replication of the paper system. It ignores the flexibility of digital information, while it preserves the paper-era business processes and revenue streams of publishers, middlemen, and libraries.

Most scholars manage several personal digital libraries for their infotainment. Yet, they are restricted by the usage terms of institutional site licenses for their professional information resources. [Where the Puck won't be] When they share papers with colleagues and students, they put themselves at legal risk. Scholarly networks will not solve every problem. They will have unintended consequences. But, like various open-access projects, they are another opportunity for scholars to reclaim the initiative.

Recently, ResearchGate obtained serious start-up funding. [ResearchGate raises $52.6M for its social research network for scientists] I hope more competitors will follow. Organizations and projects like ArXiv, Figshare, Mendeley, Web of Knowledge, and Zotero have the technical expertise, user communities, and platforms on which to build. There are thousands of organizations that can contribute to marketplaces for research tools, professional services, and institutional services. There are millions of scholars eager for change.

Build it, and they will come... Or they will just use Sci-Hub anyway.

Sunday, July 24, 2016

Let IR RIP

The Institutional Repository (IR) is obsolete. Its flawed foundation cannot be repaired. The IR must be phased out and replaced with viable alternatives.

Lack of enthusiasm. The number of IRs has grown because of a few motivated faculty and administrators. After twenty years of promoting IRs, there is no grassroots support. Scholars submit papers to an IR because they have to, not because they want to. Too few IR users become recruiters. There is no network effect.

Local management. At most institutions, the IR is created to support an Open Access (OA) mandate. As part of the necessary approval and consensus-building processes, various administrative and faculty committees impose local rules and exemptions. After launch, the IR is managed by an academic library accountable only to current faculty. Local concerns dominate those of the worldwide community of potential users.

Poor usability. Access, copy, reuse, and data-mining rights are overly restrictive or left unstated. Content consists of a mishmash of formats. The resulting federation of IRs is useless for serious research. Even the most basic queries cannot be implemented reliably. National IRs (like PubMed) and disciplinary repositories (like ArXiv) eliminate local idiosyncrasies and are far more useful. IRs were supposed to duplicate their success, while spreading the financial burden and immunizing the system against adverse political decisions. The sacrifice in usability is too high a price to pay.

Low use. Digital information improves with use. Unused, it remains stuck in obsolete formats. After extended non-use, recovering information requires a digital version of archaeology. Every user of a digital archive participates in its crowd-sourced quality control. Every access is an opportunity to discover, report, and repair problems. To succeed at its archival mission, a digital archive must be an essential research tool that all scholars need every day.

High cost. Once upon a time, the IR was a cheap experiment. Today's professionally managed IR costs far too much for its limited functionality.

Fragmented control. Over the course of their careers, most scholars are affiliated with several institutions. It is unreasonable to distribute a scholar's work according to where it was produced. At best, it is inconvenient to maintain multiple accounts. At worst, it creates long-term chaos to comply with different and conflicting policies of institutions with which one is no longer affiliated. In a cloud-computing world, scholars should manage their own personal repositories, and archives should manage the repositories of scholars no longer willing or able.

Social interaction. Research is a social endeavor. [Creating Knowledge] Let us be inspired by the titans of the network effect: Facebook, Twitter, Instagram, Snapchat, etc. Encourage scholars to build their personal repository in a social-network context. Disciplinary repositories like ArXiv and SSRN can expand their social-network services. Social networks like Academia.edu, Mendeley, Zotero, and Figshare have the capability to implement and/or expand IR-like services.

Distorted market. Academic libraries are unlikely to spend money on services that compete with IRs. Ventures that bypass libraries must offer their services for free. In desperation, some have pursued (and dropped) controversial alternative methods of monetizing their services. [Scholars Criticize Academia.edu Proposal to Charge Authors for Recommendations]

Many academics are suspicious of any commercial interests in scholarly communication. Blaming publishers for the scholarly-journal crisis, they conveniently forget their own contribution to the dysfunction. Willing academics, with enthusiastic help from publishers, launch ever more journals. [Hitler, Mother Teresa, and Coke] They also pressure libraries to site license "their" journals, giving publishers a strong negotiation position. Without library-paid site licenses, academics would have flocked to alternative publishing models, and publishers would have embraced alternative subscription plans like an iTunes for scholarly papers. [Where the Puck won't be] [What if Libraries were the Problem?] Universities and/or governments must change how they fund scholarly communication to eliminate the marketplace distortions that preserve the status quo, protect publishers, and stifle innovation. In a truly open market of individual subscriptions, start-up ventures would thrive.

I believed in IRs. I advocated for IRs. After participating in the First Meeting of the Open Archives Initiative (1999, Santa Fe, New Mexico), I started a project that would evolve into Caltech CODA. [The Birth of the Open Access Movement] We encouraged, then required, electronic theses. We captured preprints and historical documents. [E-Journals: Do-It-Yourself Publishing]

I was convinced IRs would disrupt scholarly communication. I was wrong. All High Energy Physics (HEP) papers are available in ArXiv. Being a disciplinary repository, ArXiv functions like an idealized version of a federation of IRs. It changed scholarly communication for the better by speeding up dissemination and improving social interaction, but it did not disrupt. On the contrary, HEP scholars organized what amounted to an authoritarian take-over of the HEP scholarly-journal marketplace. While ensuring open access of all HEP research, this take-over also cemented the status quo for the foreseeable future. [A Physics Experiment]

The IR is not equivalent to Green Open Access. The IR is only one possible implementation of Green OA. With the IR at a dead end, Green OA must pivot towards alternatives that have viable paths forward: personal repositories, disciplinary repositories, social networks, and innovative combinations of all three.

*Edited 7/26/2016 to correct formatting errors.

Wednesday, May 21, 2014

Sustainable Long-Term Digital Archives

How do we build long-term digital archives that are economically sustainable and technologically scalable? We could start by building five essential components: selection, submission, preservation, retrieval, and decoding.

Selection may be the least amenable to automation and the least scalable, because the decision whether or not to archive something is a tentative judgment call. Yet, it is a judgment driven by economic factors. When archiving is expensive, content must be carefully vetted. When archiving is cheap, the time and effort spent on selection may cost more than archiving rejected content. The falling price of digital storage creates an expectation of cheap archives, but storage is just one component of preservation, which itself is only one component of archiving. To increase the scalability of selection, we must drive down the cost of all other archive services.
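The economic argument above can be made concrete with a back-of-the-envelope calculation. All numbers here are hypothetical, chosen only to illustrate how cheap archiving can make careful selection uneconomical:

```python
# Hypothetical per-item costs for two archiving policies.
selection_cost_per_item = 5.00  # staff time to vet one submission (assumed)
archive_cost_per_item = 0.50    # storage and services per archived item (assumed)
rejection_rate = 0.40           # fraction of items a reviewer would reject (assumed)

items = 1000

# Policy 1: vet everything, archive only what passes review.
vet_everything = items * selection_cost_per_item + \
    items * (1 - rejection_rate) * archive_cost_per_item

# Policy 2: archive everything unvetted.
archive_everything = items * archive_cost_per_item

print(vet_everything, archive_everything)  # prints: 5300.0 500.0
```

Under these assumed numbers, vetting costs ten times more than simply archiving everything, rejected content included. Only when the other archive services are expensive does careful selection pay for itself.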

Digital preservation is the best understood service. Archive content must be transferred periodically from old to new storage media. It must be mirrored at other locations around the world to safeguard against natural and man-made disasters. Any data center performs processes like these every day.
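One such routine data-center process is fixity checking: a checksum recorded at ingest lets the archive confirm, after every media migration or mirror sync, that a copy is bit-for-bit identical to the original. A minimal sketch:

```python
import hashlib

def fixity(data: bytes) -> str:
    """Checksum recorded at ingest and re-verified after every copy."""
    return hashlib.sha256(data).hexdigest()

# At ingest: record the checksum alongside the bitstream.
original = b"archived bitstream"
recorded = fixity(original)

# After a media migration or mirror sync: verify the copy is identical.
migrated = b"archived bitstream"
assert fixity(migrated) == recorded

# A single changed byte is detected before it can propagate to mirrors.
corrupted = b"archived bitstrean"
assert fixity(corrupted) != recorded
```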

The submission service enters bitstreams into the archive and enables future retrieval of identical copies. The decoding service extracts information from retrieved bitstreams, which may have been produced by lost or forgotten software.

We could try to eliminate the decoding service by regularly re-encoding bitstreams for current technology. While convenient for users, this approach has a weakness. If a refresh cycle should introduce an error, subsequent cycles may propagate and amplify the error, making recovery difficult. Fortunately, it is now feasible to preserve old technology using virtualization, which lets us emulate almost any system on almost any hardware. Anyone worried about the long term should consider the Chrome emulator of the Amiga 500 (1987) or the Android emulator of the HP 45 calculator (1973). The hobbyists who developed these emulators are forerunners of a potential new profession. A comprehensive archive of virtual old systems is an essential enabling technology for all other digital archives.

The submission and retrieval services are interdependent. To enable retrieval, the submission service analyzes bitstreams and builds an index for the archive. When bitstreams contain descriptive metadata constructed specifically for this purpose, the process of submission is straightforward. However, archives must be able to accept any bitstream, regardless of the presence of such metadata. For bitstreams that contain a substantial amount of text, full-text indexing is appropriate. Current technology still struggles with non-text bitstreams, like images, graphics, video, or pure data.
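The core of the submission-retrieval interplay for textual bitstreams is an inverted index. The sketch below is illustrative only (a production archive would use an engine such as Lucene); the function and variable names are invented for this example:

```python
import re
from collections import defaultdict

# Inverted index: token -> set of archive identifiers containing it.
index = defaultdict(set)

def submit(identifier: str, bitstream: bytes) -> None:
    """Ingest a bitstream and add its tokens to the inverted index."""
    text = bitstream.decode("utf-8", errors="ignore")
    for token in re.findall(r"[a-z0-9]+", text.lower()):
        index[token].add(identifier)

def retrieve(query: str) -> set:
    """Return identifiers whose bitstreams contain every query token."""
    tokens = re.findall(r"[a-z0-9]+", query.lower())
    results = [index.get(t, set()) for t in tokens]
    return set.intersection(*results) if results else set()

submit("doc-1", b"Digital preservation of scholarly output")
submit("doc-2", b"Cloud computing and digital archives")
assert retrieve("digital") == {"doc-1", "doc-2"}
assert retrieve("cloud archives") == {"doc-2"}
```

For non-text bitstreams, no comparably simple tokenization exists, which is exactly why images, graphics, video, and pure data remain hard to index.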

To simplify and automate the submission service, we need the participation of software developers. Most bitstreams are produced by mass-market software such as word processors, database or spreadsheet software, video editors, or image processors. Even data produced by esoteric experiments are eventually processed by applications that still serve hundreds of thousands of specialists. Within one discipline, the number of applications rarely exceeds a few hundred. To appeal to this relatively small number of developers, who are primarily interested in solving their customers' problems, we need a better argument than “making archiving easy.”

Too few application developers are aware of their potential role in research data management. Consider, for example, an application that converts data into graphs. Although most of the graphs are discarded after a quick glance, each is one small step in a research project. With little effort, that graphing software could provide transparent support for research data management. It could reformat raw input data into a re-usable and archivable format. It could give all files it produces unique identifiers and time stamps. It could store these files in a personal repository. It could log activity in a digital lab notebook. When a file is deleted, the personal repository could generate an audit trail that conforms to discipline-specific customs. When research is published, researchers could move packages of published and supporting material from personal to institutional repositories and/or to long-term archives.
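The kind of transparent support imagined above can be sketched as a small wrapper that any application could call when it writes a file. Every name here is hypothetical; the point is only how little code it takes to attach an identifier, a timestamp, and a lab-notebook entry to each produced file:

```python
import datetime
import json
import uuid
from pathlib import Path

# Assumed layout: a personal repository directory with an append-only
# notebook log. Both names are invented for this sketch.
REPOSITORY = Path("personal-repository")
NOTEBOOK = REPOSITORY / "notebook.jsonl"

def save_with_provenance(content: bytes, description: str) -> str:
    """Store a produced file under a unique id and log the activity."""
    REPOSITORY.mkdir(exist_ok=True)
    file_id = str(uuid.uuid4())
    (REPOSITORY / file_id).write_bytes(content)
    entry = {
        "id": file_id,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "description": description,
    }
    with NOTEBOOK.open("a") as notebook:
        notebook.write(json.dumps(entry) + "\n")
    return file_id

file_id = save_with_provenance(b"x,y\n1,2\n", "raw data behind a discarded graph")
assert (REPOSITORY / file_id).read_bytes() == b"x,y\n1,2\n"
```

A graphing application that routed its output through such a call would build the audit trail as a side effect of ordinary work, with no extra effort from the researcher.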

Ad-hoc data management harms the longer-term interests of individual researchers and the scholarly community. Intermediate results may be discarded before it is realized they were, after all, important. The scholarly record may not contain sufficient data for reproducibility. Research-misconduct investigations may be more complicated and less reliable.

For archivists, the paper era is far from over. During the long transition, archivists may prepare for the digital future in incremental steps. Provide personal repositories. Work with a few application developers to extend key applications to support data management. After proof of concept, gradually add more applications.

Digital archives will succeed only if they are scalable and sustainable. To accomplish this, digital archivists must simplify and automate their services by getting involved well before information is produced. Within each discipline, archives must work with researchers, application providers, scholarly societies, universities, and funding agencies to develop appropriate policies for data management and the technology infrastructure to support those policies.

Monday, January 20, 2014

A Cloud over the Internet

Cloud computing could not have existed without the Internet, but it may make Internet history by making the Internet history.

Organizations are rushing to move their data centers to the cloud. Individuals have been using cloud-based services, like social networks, cloud gaming, Google Apps, Netflix, and Aereo. Recently, Amazon introduced WorkSpaces, a comprehensive personal cloud-computing service. The immediate benefits and opportunities that fuel the growth of the cloud are well known. The long-term consequences of cloud computing are less obvious, but a little extrapolation may help us make some educated guesses.

Personal cloud computing takes us back to the days of remote logins with dumb terminals and modems. Like the one-time office computer, the cloud computer does almost all of the work. Like the dumb terminal, a not-so-dumb access device (anything from the latest wearable gadget to a desktop) handles input/output. Input evolved beyond keystrokes and now also includes touch-screen gestures, voice, image, and video. Output evolved from green-on-black characters to multimedia.

When accessing a web page with content from several contributors (advertisers, for example), the page load time depends on several factors: the performance of computers that contribute web-page components, the speed of the Internet connections that transmit these components, and the performance of the computer that assembles and formats the web page for display. By connecting to the Internet through a cloud computer, we bypass the performance limitations of our access device. All bandwidth-hungry communication occurs in the cloud on ultra-fast networks, and almost all computation occurs on a high-performance cloud computer. The access device and its Internet connection just need to be fast enough to process the information streams into and out of the cloud. Beyond that, the performance of the access device hardly matters.

Because of economies of scale, the cloud-enabled net is likely to be a highly centralized system dominated by a small number of extremely large providers of computing and networking. This extreme concentration of infrastructure stands in stark contrast to the original Internet concept, which was designed as a redundant, scalable, and distributed system without a central authority or a single point of failure.

When a cloud provider fails, it disrupts its own customers, and the disruption immediately propagates to the customers' clients. Every large provider is, therefore, a systemic vulnerability with the potential of taking down a large fraction of the world's networked services. Of course, cloud providers are building infrastructure of extremely high reliability with redundant facilities spread around the globe to protect against regional disasters. Unfortunately, facilities of the same provider all have identical vulnerabilities, as they use identical technology and share identical management practices. This is a setup for black-swan events, low-probability large-scale catastrophes.

The Internet is overseen and maintained by a complex international set of authorities. [Wikipedia: Internet Governance] That oversight loses much of its influence when most communication occurs within the cloud. Cloud providers will be tempted to deploy more efficient custom communication technology within their own facilities. After all, standard Internet protocols were designed for heterogeneous networks. Much of that design is not necessary on a network where one entity manages all computing and all communication. Similarly, any two providers may negotiate proprietary communication channels between their facilities. Step by step, the original Internet will be relegated to the edges of the cloud, where access devices connect with cloud computers.

Net neutrality is already on life support. When cloud providers compete on price and performance, they are likely to segment the market. Premium cloud providers are likely to attract high-end services and their customers, relegating the rest to second-tier low-cost providers. Beyond net neutrality, there may be a host of other legal implications when communication moves from public channels to private networks.

When traffic moves to the cloud, telecommunication companies will gradually lose the high-margin retail market of providing organizations and individuals with high-bandwidth point-to-point communication. They will not derive any revenue from traffic between computers within the same cloud facility. The revenue from traffic between cloud facilities will be determined by a wholesale market with customers that have the resources to build and/or acquire their own communication capacity.

The existing telecommunication infrastructure will mostly serve to connect access devices to the cloud over relatively low-bandwidth channels. When TV channels are delivered to the cloud (regardless of technology), users select their channel on the cloud computer. They do not need all channels delivered to the home at all times; one TV channel at a time per device will do. When phones are cloud-enabled, a cloud computer intermediates all communication and provides the functional core of the phone.

Telecommunication companies may still come out ahead as long as the number of access devices keeps growing. Yet, they should at least question whether it would be more profitable to invest in cloud computing instead of ever higher bandwidth to the consumer.

The cloud will continue to grow as long as its unlimited processing power, storage capacity, and communication bandwidth provide new opportunities at irresistible price points. If history is any guide, long-term and low-probability problems at the macro level are unlikely to limit its growth. Even if our extrapolated scenario never completely materializes, the cloud will do much more than increase efficiency and/or lower cost. It will change the fundamental character of the Internet.