Copyright (c) 2012 IOS Press
Abstract: In this position paper, I will consider some of the trends that I see in place today in the realm of library technology as indicators of what we might expect in the coming decades. It is naturally impossible to make any kind of reliable prediction as far forward as 2050. Many disruptions are likely to occur to shape the future quite differently. The exercise of pressing current trends past the horizon of predictable trajectories can be helpful in positing a range of possibilities for which the new generation of information professionals needs to be prepared. Given the trends underway and their possible outcomes, the perspectives, knowledge, and skills that information professionals will need to be successful in their careers may differ significantly from what applies today.
I'm fortunate to have been able to build my career as a technology professional in the library community, riding successive waves of major changes in computing. My earliest experiences with mainframe-based systems were followed by involvement with microcomputers, and then by client/server systems able to put powerful desktop computers to work in conjunction with servers available at very moderate cost. In recent years, cloud computing has taken a strong hold, and I'm tracking the beginning of a new phase of Web-based library platforms designed to be deployed through software-as-a-service. I've been especially fortunate to have had generous opportunities to share my expertise and perspectives with the broader profession through professional and academic publications, conference presentations, and through the Web and social media.
I entered the profession through a non-traditional route, working my way up the technology track in an academic library without formal education through a graduate program in library and information science. An aptitude for technology, the ability to learn through practical experience, and attentiveness to the broader realm of libraries and information technology helped fill that gap. My graduate degrees in the liberal arts broadened my perspective, improved my communication skills, and honed my analytical abilities. Over the years, I've had the chance to work with hardware from the mainframe to the desktop and network gear of all sorts, and to figure out how computing works up the stack through the firmware, operating systems, and applications, with quite a bit of software development experience along the way. All this was done in a library setting, where no matter the task at hand, my main perspective was oriented toward how it would help support the specific needs of the organization. My consulting and speaking activities have expanded the scope of my experience, and I have had the good fortune to work with libraries of almost all sizes and types in many regions around the globe.
I can attribute the opportunities open to me, at least in part, to the traditional nature of the professional educational programs of the time. The disconnect in the 1980s between the curriculum of the LIS programs and the technology used in the trenches in libraries meant that there was not an abundance of systems librarians with up-to-date technology skills emerging from these graduate programs, opening a gap that allowed self-taught professionals such as myself, or librarians from other specialties—especially technical services—to become technology leaders. Today, as all areas of librarianship are deeply intertwined with technology, it's essential that these programs instill in new information professionals the ability to master the technologies of today and, more importantly, to adapt to those that will cycle in and out through the decades of their careers.
I'm hopeful that much of the unevenness of access to technology that we see today will moderate over time. While those with more financial resources will always likely have earlier and more plentiful access to new technologies, there will come a time when some level of communications and computational devices will be affordable and available to the vast majority of the population in all regions of the globe. How long will it be until smart phones and their corresponding data plans will be just as widely deployed as land line telephones or cell phones? It may be overly optimistic to imagine that in the distant future a world with a rapidly expanding population will be able to provide sustenance and shelter for all, much less basic technology, but there is hope that many of the digital divides in place today will steadily erode over the coming decades. A well-connected global society will amplify opportunities for educational institutions to have global reach beyond the traditional residential and distance programs.
Dominant high-tech companies will provide an important context for future information professionals. Today technology companies such as Google, Apple, and Facebook stand as the established corporate giants. I anticipate that in the coming decades at least some of the favored tech companies known today will have fallen from grace and that new ones will have ascended, better able to exploit the cycles in society and technology that came after the phases of Web search, social networking, and consumerized technology that fueled the success of today's giants. I expect the future to be far more deeply connected even than what we experience today. Connectivity and mobile devices may ultimately be the free loss leaders that bring mass society into this new medium for commerce. Will the time come when social networks prosper not just on their entertainment value, but also become part of the fabric that supports official civil discourse and daily commerce?
If society is saved from a digital dark age, where some catastrophic event turns back—or even turns off for a while—progress made in technology, we can expect electronic information to pervade almost all aspects of human life. Will commercial interests find ways to restrict and monetize information, or will current ideals of open access prevail? How the freedom or restriction of information plays out in our future society will have broad implications for educational institutions. Whether the tide turns toward open access to information or toward the reinforcement of proprietary commercialized models for access to scholarly research will make an enormous difference in the role of libraries of that era. I'm again optimistic that the current momentum of open access publishing will continue to build, allowing libraries to focus more on building services based on widely available scholarly content rather than exhausting their resources on procuring it.
If trends stay on course, cloud computing will be fully realized, allowing libraries and their parent institutions to reallocate resources sunk into managing commodity infrastructure and to focus more on value-added activities higher on the technology stack, closer to their constituents: students, faculty, and staff. Reliance on IT infrastructure outsourced to third parties will come at a cost, hopefully less overall than current expenditures, but will allow organizations to focus more on their areas of core expertise and strategic interest. We can anticipate that many of the problems that currently impede adoption of cloud computing will be solved in the coming years, including those involving privacy regulations such as HIPAA and FERPA, as well as general concerns to rigorously protect the privacy of library patrons.
Today, cloud computing in libraries has been more talk than substance. Much of what is touted today under the banner of cloud computing can really be considered legacy applications that live in the vendor's data center instead of the customer's. This arrangement offers marginal improvement in efficiency, but does not radically transform the way that the technology supports the organization. True multi-tenant software as a service is needed to realize the potential of cloud computing. Not only does this truer model enable a more scalable and efficient use of computing resources, but it opens up opportunities for vast communities to share information assets in ways that radically diminish the burden of information management. Especially in the library domain, the current models of automation bake in massive redundancy of efforts. In order for libraries to serve their own communities well, it's important for us to massively aggregate the efforts we apply to describing and managing collections and to the fullest extent possible to consolidate the collections themselves. Shared content resources will form increasingly strategic components of future library automation technologies.
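The contrast between a hosted legacy application and true multi-tenant software as a service can be sketched in a few lines of code. The sketch below is my own illustration, not any vendor's design: every tenant shares a single bibliographic store, so a title described once by one library is available to all, while holdings remain scoped to each tenant.

```python
# Hypothetical sketch of a multi-tenant data model: one shared
# bibliographic store, per-tenant holdings. All names are invented.

class SharedBibStore:
    """A single store of bibliographic records shared by every tenant."""
    def __init__(self):
        self.records = {}          # record_id -> description

    def describe_once(self, record_id, description):
        # The first library to describe a title does the work for all.
        self.records.setdefault(record_id, description)

class Tenant:
    """One library's view: shared descriptions, private holdings."""
    def __init__(self, name, shared):
        self.name = name
        self.shared = shared
        self.holdings = {}         # record_id -> copies held locally

    def add_holding(self, record_id, copies, description=None):
        if description is not None:
            self.shared.describe_once(record_id, description)
        self.holdings[record_id] = self.holdings.get(record_id, 0) + copies

    def view(self, record_id):
        # Shared metadata plus tenant-scoped holdings.
        return {"description": self.shared.records.get(record_id),
                "copies": self.holdings.get(record_id, 0)}

shared = SharedBibStore()
a = Tenant("Library A", shared)
b = Tenant("Library B", shared)
a.add_holding("ocm123", 2, description="Moby Dick / Herman Melville")
b.add_holding("ocm123", 1)   # reuses A's description; no re-cataloging
print(b.view("ocm123"))
```

In a single-tenant hosted arrangement, each library would maintain its own copy of the description; eliminating exactly that redundancy of effort is the point of the multi-tenant model.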
Future information professionals will need to understand computing in an entirely virtualized form. Those of us who have been around for a while think of software running on servers that we can see and touch. We've built systems from scratch, or at least from basic components, cobbling together motherboards, memory SIMMs, drives, and video and network cards, installing the operating system, and finally loading software that does something useful. Now it's much faster, cheaper, and easier to sign on to Amazon Web Services, spin up a machine instance, and have significant computing power at your disposal in just a few minutes. Even as the core infrastructure of the library shifts toward large-scale business systems, there will continue to be a need for smaller hand-crafted applications that address specific niches or features the larger systems leave unaddressed. Skills in working with virtual servers, designing systems and interfaces, and creating niche applications will complement the ability to manage large-scale industrialized business systems.
It's hard to imagine that higher education will hold onto its current shape in future decades. Universities currently face economic pressures that will surely result in major structural change. The cost of tuition and other factors will challenge the viability of the traditional residential undergraduate program. If the current form of universities is to survive mostly intact, they will do so by finding new efficiencies, with technology and information management playing a key role.
Events taking place in recent months and years stand to reshape the future of education, research, and scholarly publishing. Many institutions, such as Harvard University and MIT, have taken bold steps to open their intellectual outputs to the widest audiences, with great potential to break the stranglehold that commercial publishers have exerted. Can we imagine that the default level of access to research and teaching will be something more like the Creative Commons CC0 public domain dedication? Will the products of teaching also see similar distribution? Will other institutions follow the model of the MIT OpenCourseWare initiative? To what extent will primary research data be exposed through open licenses?
Consistent with its policies toward open access to its productive outputs, the Harvard University Library released 12 million MARC records representing the collections of its 73 libraries under CC0. This release of a substantial body of metadata had an immediate effect, with its incorporation into EBSCO Discovery Service, the Community Catalog of Ex Libris Alma, and LibraryThing within days. The library community has had a pent-up demand for sources of large bodies of bibliographic data in the public domain. The definitive body of bibliographic metadata, OCLC's WorldCat, has been subject to that organization's Rights and Responsibilities statements, which have had the effect of mostly walling those records off within WorldCat itself and within the systems of its members. OCLC's position regarding the sharing and reuse of WorldCat has been steadily evolving toward more openness. But with standing precedents such as Harvard's public release of their records, many of which are downstream from WorldCat, I see the floodgates now open, so that bibliographic data will be readily available to any project and future business models will depend on services performed based on such metadata rather than on the existence of the metadata itself. I see access to bibliographic metadata as a small fraction of the broader information landscape, but it does serve as an indicator of the transformations that can be accomplished once bodies of content break free from barriers of constraint.
Information professionals of the future will hopefully be advocates who effect changes toward more open access to information. Toward this end, they will necessarily be savvy in the legal, organizational, and commercial angles that comprise the information landscape. It will not be enough, for example, to simply hold philosophical views that information should be open; they must also participate in the business ecologies that depend on information, designing solutions that maximize the openness of content while still enabling business models that sustain the organizations on which we depend for important services. The ability to construct sustainable processes that support the information environment will be paramount, even in the scenarios where open access prevails over proprietary restrictions.
Today libraries are at the beginning of a ten-year cycle that will see the maturation and deployment of a new generation of library services platforms. These products, such as Ex Libris Alma, Serials Solutions Intota, OCLC's WorldShare Platform, Innovative Interfaces Sierra, and the community source Kuali OLE project, are well positioned to replace the current crop of legacy systems that automate most of the libraries serving higher education. These new library services platforms, each in its own way, help libraries break away from the print-dominated models of management, discovery, and access to library collections toward a more unified approach that recognizes the dominant role of electronic resources, digital collections, and diminishing reliance on print materials. While just coming on the scene, they enter many years late given how long ago library collections reached critical tipping points in the transition from print to electronic. Given progress to date and the typical rate of change in libraries, it seems that these new systems will see a surge of acceptance in the next two or three years, followed by a more gradual phase of transition for the following four to five years.
These new systems bring capabilities important to academic and research libraries, including the ability to leverage cloud computing infrastructure, to take advantage of highly shared metadata and content stores among very broad communities of educational institutions, to manage complex collections of print, digital, and electronic materials, to deal with both owned and licensed content, to better integrate with the enterprise infrastructure of the broader institutions through Web services and APIs, and to present modern user interfaces.
The advance from the traditional integrated library systems in widespread use today to the new generation of library services platforms marks a much-needed step in helping academic libraries meet their immediate needs. Yet it also represents only a small step toward supporting the role of libraries in a more distant time frame. These platforms consolidate the fractured nature of how libraries manage and provide access to their collections, but still in a library-centric way that must eventually evolve into an approach focused on the broader information management needs of the institution.
I expect to see increased consolidation of the business and information management infrastructure of educational institutions. Large organizations require enterprise resource planning (ERP) systems that deliver comprehensive management of their resources in a way that supports strategic and operational decisions. Examples of these ERP systems include those provided by PeopleSoft or SAP and community source projects such as the Kuali Financial System. The automation infrastructure for a library can be considered, at least to an extent, as an ERP system that drives the operation of the library. One of the huge challenges today involves improving the interoperability between the business and accounting components of the library management system with that of the institutional ERP.
Current integrated library systems generally deal with institutional ERP systems through inefficient batch processes. The new generation of systems aims for more dynamic interoperability through APIs, which should result in significant improvements in efficiency. Even this approach fails to unify the library with the strategic infrastructure of its parent institutions. I would anticipate a further level of consolidation, where the library operates more as a node of the institutional ERP. The Kuali OLE project follows this approach, using the Kuali Financial System as its foundation for acquisitions and other business functions. Similar opportunities for deeper integration or synergies exist between the institutions' learning management systems and the library's services platforms. The delivery of library content to students and instructors through learning management systems such as Blackboard or Sakai will rise to strategic importance and demand more efficient interoperability than currently exists, even in the next generation of products. I would anticipate additional examples of alignment or consolidation between information technology infrastructure for the institution and that for the library. Information technology infrastructure deployed specifically for libraries may eventually be displaced by institutional infrastructure that subsumes information assets and business processes formerly considered as within a separate library domain.
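As a hypothetical illustration of API-based interoperability replacing batch exchange, consider mapping a library acquisitions invoice into a voucher payload that could be sent to an institutional ERP's web service in real time rather than as an overnight file transfer. The field names and schema below are invented for the sketch; real platforms such as Alma, Kuali OLE, PeopleSoft, or SAP each define their own.

```python
# Sketch of API-style interoperability between a library acquisitions
# module and an institutional ERP. All field names are hypothetical.
import json

def invoice_to_erp_voucher(invoice):
    """Map a library invoice record to an ERP voucher payload that
    could be POSTed to the ERP's web service as the invoice is
    approved, instead of waiting for a nightly batch run."""
    return {
        "voucher": {
            "vendor_id": invoice["vendor"],
            "amount": round(sum(line["price"] * line["quantity"]
                                for line in invoice["lines"]), 2),
            "currency": invoice.get("currency", "USD"),
            "account": invoice["fund_code"],  # library fund -> GL account
            "reference": invoice["invoice_number"],
        }
    }

invoice = {
    "vendor": "EBSCO-001",
    "invoice_number": "INV-2012-0042",
    "fund_code": "LIB-SERIALS",
    "lines": [{"price": 250.00, "quantity": 2},
              {"price": 99.95, "quantity": 1}],
}
payload = json.dumps(invoice_to_erp_voucher(invoice))
print(payload)
```

The efficiency gain is less in the mapping itself than in the timing: the ERP reflects the library's encumbrances and expenditures as they happen, rather than a day later.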
In the same way that libraries today have quite a difficult time operating a dozen different applications to manage their infrastructure, in the next round the consolidation will take place at the institutional level where information technology systems deployed for specific operational units will be subsumed into a cohesive and comprehensive institutional environment. This may not be achieved through the creation of monolithic systems of epic scope, but through domain-specific components that use service-oriented architecture to achieve strategic interoperability.
Part of the efficient information technology management that will help ensure the survival of higher educational institutions will involve the library moving away from its current technical isolationist stance toward one fully integrated into the enterprise. The idea of the library managing one set of resources apart from other information assets of the institution will diminish in favor of a more comprehensive and unified information architecture. Today we think of only certain areas of information as within the library's domain, including its established collection and the metadata that describes it, digital collections it may have created, and the body of content it licenses on behalf of the institution.
If this trajectory of library information systems evolving toward a more enterprise flavor takes place, it will have other interesting implications. Going forward, the pragmatic differences between open source and proprietary software will evaporate. As systems become more complex and deployed through multi-tenant software as a service, the concept of having local programmers work with the internal workings of these systems to make modifications will become increasingly less applicable. Rather, these systems will be created out of low-level services and will expose these services through APIs, which will become the basis of local customizations, extensibility, and interoperability. The role of library technical personnel will rise up the stack away from working with low-level hardware, operating systems, or applications software development toward adding value to these core business systems through creation of higher-level, customer-facing services. It is not within the library's core expertise to manage servers, data centers, enforce network security, or engineer business software. Rather, the role of the library technologist of the future may focus more on designing or developing value-added services based on existing platforms.
This future of deep integration or consolidation between the information systems of libraries and those of their parent institutions will impose the need for future information professionals to have broader views of information architecture. Today information professionals in libraries work to consolidate the silos of content. In the next round of activity, the challenge will be to ensure that the library itself does not operate as a silo within the institution.
I can recall the distribution of information from the days when we received the Wilson databases on magnetic tape to load into our mainframe-based NOTIS system using the MDAS (Multiple Database Access System), and the CD-ROM discs that we mounted on multi-drive network towers and jukeboxes that came before the current era of electronic resources available through the Web. From my earliest days in library computing, I've worked with the problem of bringing many different types of library content together in a consolidated interface. That work continues today through my involvement with index-based discovery systems, including studies of the products and technologies and through the Open Discovery Initiative. One of the key challenges of the future will involve closing the gaps in the library resources not covered in these discovery environments, expanding the depth of indexing from mostly metadata to mostly full text, and improving the state of the art of relevancy and other search and retrieval technologies used in these products to more effectively deliver access to library-managed materials.
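To make the relevancy problem concrete, the sketch below implements the classic TF-IDF weighting that underlies index-based retrieval: terms that are rare across the index count for more, which is one reason that expanding indexing from brief metadata to full text changes ranking behavior. This is a minimal illustration of the general technique, not how any particular discovery product scores results.

```python
# Minimal TF-IDF relevancy ranking over a toy index. Real discovery
# services add many more signals (field weighting, currency, usage).
import math
from collections import Counter

documents = {
    "rec1": "cloud computing for library services platforms",
    "rec2": "print collections and library management",
    "rec3": "discovery services index full text from library collections",
}

def tf_idf_scores(query, docs):
    n = len(docs)
    tokenized = {doc_id: text.split() for doc_id, text in docs.items()}
    # Document frequency: how many records contain each term.
    df = Counter()
    for terms in tokenized.values():
        df.update(set(terms))
    scores = {}
    for doc_id, terms in tokenized.items():
        tf = Counter(terms)
        score = 0.0
        for term in query.split():
            if term in tf:
                # Term frequency weighted by rarity across the index (IDF).
                score += (tf[term] / len(terms)) * math.log(n / df[term])
        scores[doc_id] = score
    return sorted(scores, key=scores.get, reverse=True)

print(tf_idf_scores("discovery services", documents))
# → ['rec3', 'rec1', 'rec2']
```

Note how "library", which appears in every record, contributes nothing (log of 1 is 0), while the rarer "discovery" dominates the ranking; with full-text indexing the document-frequency statistics shift, and so do the rankings.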
While library discovery services will inevitably become comprehensive relative to the body of content of interest to academic institutions, it's less clear that they will ever become the central tool that students or faculty rely upon for their academic research. Will they become powerful enough, and be positioned in ways, that they become the starting point for research? That isn't the case today. Or will the power of these tools be realized as their capabilities are embedded in other tools closer to the daily lives of the students, faculty, and staff? Information professionals of the future will face the challenge of finding ever new ways to bring traditional library services into the appropriate information infrastructures and architectures.
Cutting across all of the threads that I've explored, the qualities most needed by future information professionals include adaptability and continual curiosity and exploration. Little of the specific information learned at the beginning of a career will apply toward its end. Any given technology has a short shelf life, and the expiration dates grow shorter with every turn of the technology cycle. It's the general aptitudes, attitudes, and learning techniques that seem more likely to persist over the long haul. Sometimes changes in technologies or organizational context require evolutionary adaptation; at other times they require a major overhaul of the career path. Any program that aims to train information professionals must not only equip them with the specific knowledge and practical skills relevant in the short term but also instill the insight to anticipate the implications of the technology cycles and societal changes that will transpire across multiple decades.
Type of Material: Article
Published in: Information Services and Use
Issue: June 5, 2012
Conference: Information Professionals 2050

This paper was presented at Information Professionals 2050 at the University of North Carolina at Chapel Hill on June 5, 2012. Also published in Information Services & Use, DOI 10.3233/ISU-2012-0665. This work is licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 Unported license: you are free to share this work (copy, distribute, and transmit) under the conditions of attribution, noncommercial use, and no derivative works. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/. This article is published online with Open Access and distributed under the terms of the Creative Commons Attribution Non-Commercial License.