Library Technology Guides
Perspective and commentary by Marshall Breeding
One of the key issues in the current library automation environment involves a keen interest in providing new interfaces for library patrons that function better than the ones delivered with the ILS. A number of commercial and open source products have been developed in this new genre of library software.
An enormous challenge in this environment of front-end interfaces separate from back-end ILS products involves finding ways to transfer and synchronize data between systems and to deliver services from the ILS through a different interface.
Last week, I had the privilege of attending a workshop convened to discuss the work of a task force charged by the Digital Library Federation with addressing the problems that have arisen in current efforts to implement new discovery layer interfaces in front of legacy ILS products. The DLF ILS Discovery Interface Task Force, as summarized by member Emily Lynema:
…was charged with analyzing issues involved in integrating ILSs and discovery systems, and creating a technical proposal for accomplishing such integration.
One of the main objectives of the workshop involved getting feedback from the library community and from organizations involved in creating front-end systems and ILS products.
The workshop was convened by Peter Brantley, executive director of the Digital Library Federation, and led by John Mark Ockerbloom, chair of the DLF ILS-Discovery Interface committee.
The ILS-DL committee described a number of functions that focus on specific areas of interoperability needed to connect new discovery layer products with the ILS. For each of these functions, the committee documents provide a summary of the function, the parameters needed to drive the function, the data or service element returned, as well as some suggested bindings that might be employed to instantiate the function.
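To make the shape of such a specification concrete, here is a minimal sketch of how one of these function descriptions might be modeled in code. The `InteropFunction` structure and the availability-lookup example are hypothetical illustrations of the pattern (summary, parameters, returned element, suggested bindings, priority), not the committee's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class InteropFunction:
    """One interoperability function in the style the committee describes:
    a summary, the parameters that drive it, the data or service element
    returned, and candidate protocol bindings."""
    name: str
    summary: str
    parameters: list
    returns: str
    bindings: list = field(default_factory=list)
    core: bool = True  # essential core function vs. follow-up enhancement

# Hypothetical example modeled on a real-time availability lookup
get_availability = InteropFunction(
    name="GetAvailability",
    summary="Report real-time availability of items attached to a record",
    parameters=["record identifier", "identifier type"],
    returns="availability status for each holding",
    bindings=["REST/XML", "OAI-PMH extension"],
)
```

The `core` flag mirrors the committee's prioritization of essential functions versus those deferred to a more robust follow-up implementation.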
The committee also prioritized the functions and components within them, designating some as core functions essential to the process while recognizing others as helpful for a more robust implementation that might follow once the core functions have been developed.
Following the preliminary presentations about the work of the committee and its intentions, the workshop evolved into discussions regarding the specific functions described in the committee recommendations.
ILS vendors represented included Talin Bingham (SirsiDynix), Betsy Graham (Innovative Interfaces), Uri Livnat and Kathryn Harnish (Ex Libris), Stephen DiStasio and Jack Kirstein (Serials Solutions), John Law (ProQuest), Candy Zemon (Polaris Library Systems), Galen Charlton (LibLime), Andrew Pace and Mat Goldner (OCLC), and Dave Errington (Talis). Those representing discovery products included Jack Kirstein (Serials Solutions/ProQuest), David Lindahl (University of Rochester River Campus Libraries, eXtensible Catalog project), Villanova University (VuFind), and Steve Taub (BiblioCommons). Committee members attending the workshop included John Mark Ockerbloom (University of Pennsylvania), Dianne McCutcheon (National Library of Medicine), Terry Reese (Oregon State University), Terry Ryan (University of California, Los Angeles), Patricia Martin (California Digital Library), ? Wolven (Columbia University), David Bucknum (Library of Congress), David Kennedy (University of Maryland), and Dale Flecker (Harvard University).
I attended the workshop representing my role as a writer and researcher on the library automation industry and as an advocate for a more open approach in the development of library technology products.
Throughout the discussions of the functions proposed by the committee, there were many points of agreement and disagreement.
There was no disagreement regarding the need to efficiently extract data out of the ILS for the benefit of new discovery layer products. All agreed that the Open Archives Initiative Protocol for Metadata Harvesting stands as a natural approach for achieving this function. As one of the main outcomes of the workshop, all the ILS vendors voiced support for adding functionality to their systems that would enable a library to harvest its own data from its ILS using OAI-PMH.
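As a rough illustration of what such harvesting involves, the sketch below builds an OAI-PMH `ListRecords` request and parses the identifiers and resumption token out of a response. The endpoint URL and the inline sample response are invented for demonstration; a real ILS would expose its own base URL and return full MARCXML records.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def list_records_url(base_url, metadata_prefix="marcxml", token=None):
    """Build an OAI-PMH ListRecords request URL; a resumptionToken
    continues a partial harvest across multiple requests."""
    params = {"verb": "ListRecords"}
    if token:
        params["resumptionToken"] = token
    else:
        params["metadataPrefix"] = metadata_prefix
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint for illustration only
url = list_records_url("https://ils.example.edu/oai")

# A trimmed-down sample response, stood in for a live harvest
sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><header>
      <identifier>oai:ils.example.edu:b1000001</identifier>
    </header></record>
    <resumptionToken>page2</resumptionToken>
  </ListRecords>
</OAI-PMH>"""

root = ET.fromstring(sample)
ids = [h.text for h in root.iter(OAI_NS + "identifier")]
token = root.find(".//" + OAI_NS + "resumptionToken").text
```

A harvester would loop, re-requesting with each returned `resumptionToken` until the server omits it, which is what lets a library pull its entire catalog incrementally.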
Another proposal that emerged in the course of the workshop involved the use of OpenURL as a mechanism for linking back into the ILS for specific requests for data or services. While there was no general agreement from the developers present to adopt this approach, it does seem to be a promising approach that can be explored for feasibility.
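A sketch of what such a link-back might look like, using the standard OpenURL (ANSI/NISO Z39.88-2004) key/encoded-value form: the resolver base URL, the local identifier scheme, and the service names here are all hypothetical, since no concrete binding was agreed at the workshop.

```python
from urllib.parse import urlencode

def ils_openurl(resolver_base, record_id, service):
    """Build an OpenURL (KEV form, Z39.88-2004) that a discovery layer
    might use to link back into the ILS for a record-level service."""
    kev = {
        "url_ver": "Z39.88-2004",
        "rft_id": "info:localid/" + record_id,  # hypothetical id scheme
        "svc_id": service,                      # e.g. a request or recall service
    }
    return resolver_base + "?" + urlencode(kev)

# Hypothetical resolver and record identifier for illustration
link = ils_openurl("https://ils.example.edu/openurl", "b1000001", "request")
```

The appeal of this approach is that OpenURL resolvers are already widely deployed in libraries, so the same linking machinery could, in principle, route requests for ILS data and services.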
Some of the questions that emerged dealt with the scope of the API: was it meant for all types of libraries, or does it target the needs of DLF member libraries, mostly large academic and research libraries? Functionality was described, for example, relating to course reserves, which would not be relevant to public libraries.
I see the work of this committee as a small step toward the development of a much broader API that encompasses all aspects of library automation. Libraries have an interest in much more open systems. In the short term, many needs present themselves related to the implementation of this new genre of discovery layer interfaces, but this is just one example of the broader interoperability needs. We also need standard APIs for connecting electronic resource management systems with ILS acquisitions modules, for interactions between the ILS and external authentication services, and between the ILS and institutional ERP systems, courseware, and e-learning environments.
We can no longer think of the ILS as a closed box with proprietary internal functionality. We need application programming interfaces that unlock the data and services bound within our library automation systems.
Many ILS products already offer proprietary APIs. My place of work, Vanderbilt University, for example, couldn't get by without the API that SirsiDynix makes available for Unicorn.
While it's great that some systems offer their own APIs, I see a tremendous need to move toward a more open API implemented in a standard way across all of the products. It's time to move to a much more open approach where each aspect of functionality within the ILS is exposed through a standard, well-documented interface.
The implementation of open APIs will address many of the concerns that libraries express regarding the closed-source commercial products. If libraries have full access to their data and services in open and flexible programming interfaces, they may have less need for access to the source code of the software.
The commercial, traditionally licensed, closed source systems must now compete against open source systems. Implementing an ILS-wide set of APIs offers great benefits to libraries. Libraries today demand full access to all aspects of data housed within their automation systems. An open API returns to the library access to its data, and to the key services surrounding that data, that were previously held captive within the black box of the ILS. I'm looking forward to an ongoing set of conversations in the library community that works toward building out a discovery interface API as well as a more comprehensive set of APIs that span other aspects of library automation environments.
Marshall Breeding, Mar 14, 2008 14:51:00
The National Library of Australia has been doing a great deal of work toward the design and development of a library automation infrastructure following the service oriented architecture. These efforts show great promise in the creation of a number of key library automation components available as open source software.
In recent days, the NLA has launched a wiki called Library Labs with lots of information about their work. I'm especially interested in the IT Architecture Project Report that they have written, which gives some background on their strategy and technical approach. The key concepts discussed in the document include the Service Oriented Architecture, a single business approach, and open source solutions. The document leads with this introductory paragraph:
The aim of this report is to define the IT architecture that will be needed to support the management, discovery and delivery of the National Library of Australia's collections over the next three years. The current architecture has enabled the Library to develop a significant digital library capability over the last decade. Now the burden of maintaining and supporting existing systems and services is increasingly hindering us from bringing new services online, improving the user experience, exploring new ideas or responding to technological change. In the meantime, enormous changes are occurring in the broader environment.
The initiatives described in the NLA Library Labs are quite consistent with my view of what should be different in the next generation of library automation relative to the legacy systems in place today. SOA stands as the current state of the art for business automation software, and the library world seems a bit behind in adopting systems that follow this approach. I especially like the concept of looking at the library as a single business around which we need to design and create a new automation environment. This approach gives us a chance to break away from the legacy models of library automation and build SOA-based applications designed around a set of workflows better suited for libraries today and moving forward into the future.
Marshall Breeding, Mar 21, 2008 09:10:30