by Marshall Breeding. January 9, 2008
The year 2007 saw considerable upheaval in the library automation industry. To get some sense of the aftermath of the recent rounds of mergers, acquisitions, and product consolidations, and to gauge interest in open source automation systems, I created and executed a survey that aims to measure the prevailing perceptions in libraries.
A total of 1,779 individuals from 47 different countries responded to the survey. The vast majority were from the United States (1,484), Canada (125), and the United Kingdom (51). Many library types were represented, including 1,012 public, 512 academic, and 53 consortia. Survey results were gathered between August 8, 2007 and January 5, 2008.
The survey results indicate major differences in satisfaction with the products and companies from which libraries acquire their automation systems. Dissatisfaction and concern prevail, yet some companies maintain exceptional levels of satisfaction among the libraries that use their products.
This article is an original publication of Library Technology Guides. It will not appear in any print publication. Please direct any comments to the author.
This survey and its analysis reflect my ongoing interest in following trends in the library automation industry. It is designed to complement, and not replace, the annual Automated Systems Marketplace feature that I have written the last six years for Library Journal. The survey underlying the Library Journal article relies on information provided by the companies that offer library automation products and services. The survey that serves as the basis for this article collects data from the libraries themselves.
The survey instrument included five numerical ratings, two yes/no responses, two short response fields, and a text field for general comments. The numeric rating fields allow responses from 0 through 9. Each scale was labeled to indicate the meaning of the numeric selection.
Four of the numeric questions probe the level of satisfaction with, and loyalty to, the company or organization that provides the library's current automation system:
A yes/no question asks whether the library is considering migrating to a new ILS, and a fill-in text field offers the opportunity to list specific systems under consideration.
Given the recent interest in new search interfaces, a yes/no question asks “Is the library currently considering a search interface for its collection that is separate from the ILS?” with a fill-in field for indicating products under consideration.
The survey includes two questions that aim to gauge interest in open source ILS products: a numerical rating that asks “How likely is it that this library would consider implementing an open source ILS?” and a fill-in text field for indicating products under consideration.
The survey concludes with a text box inviting comments.
View the survey. (This version of the survey does not accept or record response data.)
In order to correlate the responses with particular automation systems and companies, the survey links to entries in the lib-web-cats directory of libraries. Each entry in lib-web-cats indicates the automation system currently in use as well as data on the type of library, location, collection size, and other factors of potential interest. To fill out the survey, respondents first had to find their library in lib-web-cats and then press a button that launched the response form. Some potential respondents indicated that they found this process complex.
The link between the lib-web-cats entry and the survey automatically populated fields for the library name and current automation system and provided access to other data elements about the library as needed. The report on survey response demographics, for example, relies on data from lib-web-cats.
A number of methods were used to solicit responses to the survey. E-mail messages were sent to library-oriented mailing lists such as WEB4LIB and NGC4LIB. Where contact information was available in lib-web-cats, an automated script produced e-mail messages with a direct link to the survey response form for that library.
The survey attempted to limit responses to one per library. This restriction was intended to encourage respondents to reflect the broad perceptions of their institution rather than their personal opinions.
The survey instrument was created using the same infrastructure as the Library Technology Guides web site—a custom interface written in Perl, using MySQL to store the data with ODBC as the connection layer. Access to the raw responses is controlled through a user name and password available only to the author. Scripts were written to provide public access to the survey results in a way that does not expose individual responses.
In order to provide access to the comments without violating the stated agreement not to attribute individual responses to any given institution or individual, an additional field was created for “edited comments.” This field was manually populated with text selected from the “comments” text provided by the respondent. Any information that might identify the individual or library was edited out, with an ellipsis indicating the removed text. Comments that only explained a response or described the circumstances of the library were not transferred to the Edited Comments field.
To analyze the results, a few scripts were written to summarize, analyze, and present the responses.
In order to avoid making generalizations based on inadequate sample sizes, the processing scripts included a threshold variable that would only present results when the number of responses exceeded the specified value. The threshold was set to a value of 30.
For each of the survey questions that involve a numeric rating, a set of subroutines was created to calculate and display simple statistics.
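The kind of summary these subroutines produce can be sketched as follows. This is a minimal Python illustration only (the original scripts were written in Perl), assuming each question's responses arrive as a list of integer ratings on the 0-9 scale:

```python
from statistics import mean, median, mode
from collections import Counter

def summarize_ratings(ratings):
    """Summarize a list of 0-9 ratings: response count, mean, median,
    mode, and the distribution of responses across the ten-point scale."""
    counts = Counter(ratings)
    return {
        "count": len(ratings),
        "mean": round(mean(ratings), 2),
        "median": median(ratings),
        "mode": mode(ratings),
        # one slot per possible score, 0 through 9
        "distribution": [counts.get(score, 0) for score in range(10)],
    }

# Hypothetical ratings for a single product
print(summarize_ratings([7, 8, 8, 6, 9, 7, 8]))
```

The distribution list mirrors the “Response Distribution” column of the published reports, making it easy to see whether a middling mean hides a polarized set of responses.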
The “survey-report-by-category.pl” script processes each of the numerical ratings, displaying each of the statistical components listed above for each product that received responses above the threshold value. This report provides a convenient way to compare the performance of each ILS product for the selected question. The report sorts the statistics for each product in descending order of the mean. The report categories available correspond to the survey questions with numerical scale responses.
The “survey-product-report.pl” script provides the results for each of the ILS products mentioned in the responses. This report also provides the statistical components for each of the numeric questions. It also provides the percentage of yes responses to the two yes/no questions:
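The core of both reports can be sketched in a few lines. The following is an illustrative Python version (not the original Perl, and with hypothetical field names): group numeric ratings by product, omit products whose response count does not exceed the threshold, sort by mean in descending order, and compute yes-percentages for the yes/no questions:

```python
from statistics import mean

THRESHOLD = 30  # only report products whose response count exceeds this

def report_by_category(responses, category, threshold=THRESHOLD):
    """Mean rating per product for one numeric question, sorted highest
    first. `responses` is a list of dicts with hypothetical keys, e.g.
    {"product": "Polaris", "ils_satisfaction": 8, "migrating": "no"}."""
    by_product = {}
    for r in responses:
        by_product.setdefault(r["product"], []).append(r[category])
    rows = [(product, round(mean(scores), 2), len(scores))
            for product, scores in by_product.items()
            if len(scores) > threshold]
    return sorted(rows, key=lambda row: row[1], reverse=True)

def percent_yes(responses, question):
    """Percentage of 'yes' answers to a yes/no question."""
    answers = [r[question] for r in responses if question in r]
    return round(100.0 * answers.count("yes") / len(answers), 2) if answers else 0.0
```

Filtering before sorting means a product with a handful of enthusiastic (or furious) respondents cannot distort the comparative tables.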
Statistics related to the question: How satisfied is the library with your current Integrated Library System (ILS)?
(Table columns: Satisfaction Score for ILS, Response Distribution, Statistics)
Statistics related to the question: How satisfied is the library overall with the company from which you purchased your current ILS?
(Table columns: Satisfaction Score for Company, Response Distribution, Statistics)
Statistics related to the question: How satisfied is this library with this company's customer support services?
(Table columns: Satisfaction Score for ILS Support, Response Distribution, Statistics)
Statistics related to the question: How likely is it that this library will purchase its next ILS from this company?
(Table columns: Loyalty to Company Score, Response Distribution, Statistics)
Statistics related to the question: How likely is it that this library would consider implementing an open source ILS?
(Table columns: Interest Level in Open Source, Response Distribution, Statistics)
An interactive version of the statistical reports is available here, which includes the ability to view the responses for each of the ILS products, along with the redacted comments.
Polaris emerged as the system with the highest positive ratings. Libraries that use Polaris rated their system higher in all categories than any of the competing systems and were the least interested in open source alternatives. Only 1.56% of responding libraries indicated they were considering migrating to a new system.
The Library Corporation scored very highly with its Library.Solution customers but not nearly as well with those using Carl or Carl.X. While the number of responses for Carl/Carl.X was well below the threshold for inclusion in the main summary tables, the ratings and comments reflected mostly negative perceptions. About 12 percent of the libraries running Library.Solution that responded to the survey indicated interest in migrating to a new system; one third of Carl libraries indicated interest in changing systems.
Innovative Interfaces received strong positive ratings from its libraries, generally the third highest among the companies that produce higher-end automation systems. 327 libraries using Millennium responded to the survey—more than for any other product. While the numerical rankings might be characterized as lukewarm, the comments generally reflected satisfaction. Many complimented the functionality of the system; some complained about the cost.
Two products provided by Ex Libris received enough survey responses to be included in the main reports: ALEPH 500 and Voyager. Both systems received mediocre satisfaction scores, with Voyager (4.11) ranked a significant notch below ALEPH 500 (5.06). Comments about ALEPH included both strongly positive and strongly negative statements. The comments from libraries running Voyager tended to voice uncertainty about the corporate situation, with only a few complaints about the system itself. Libraries running Voyager indicated the strongest interest in migrating to an open source ILS.
The products of SirsiDynix, Unicorn and Horizon, received low satisfaction scores from libraries responding to the survey. Unicorn, the company’s flagship ILS, performed somewhat better than Horizon. 14% of libraries running Unicorn and about half of those with Horizon indicated interest in migrating to another system--not surprising considering SirsiDynix's decision not to develop that system into the future. Horizon libraries indicated high interest in open source ILS alternatives. The comments provided by libraries running Horizon voiced an extremely high level of frustration with SirsiDynix as a company and with its decision to discontinue Horizon. Many indicated distrust toward the company. The comments from libraries running Unicorn, the system SirsiDynix selected as the basis for its flagship Symphony ILS, also ran strongly negative—some because of issues with the software, some because of concerns with the company.
A large number of libraries running Winnebago Spectrum and Circulation Plus, both products of Follett Software Company, responded to the survey. These two systems were designed for, and are mostly used in, K-12 school libraries, though most of the libraries responding to the survey that used them were small public libraries. Both Circulation Plus and Winnebago Spectrum received remarkably favorable ratings considering that neither product receives ongoing development. While FSC continues to provide support, it promotes Destiny as its current flagship automation system. 51% of the libraries using Winnebago Spectrum and 34% of those using Circulation Plus indicated plans to migrate.
Concourse from BookSystems, another ILS frequently installed in small public libraries, received very favorable ratings, though the number of responses was too small for it to be represented in the main tables. Its mean satisfaction score of 7.92 topped even that of Polaris.
The seven responses from libraries using Apollo from Biblionix form a sample too small to support significant conclusions, though all of the responses and comments received indicated remarkably high satisfaction.
In the broadest view, the results of the survey give a snapshot of the perceptions that libraries hold of the companies from which they purchase their library automation systems. With a few exceptions, the relations are a bit chilly. The recent rounds of mergers and acquisitions have taken their toll. Survey comments reflect more uncertainty than trust in library automation vendors.
Many libraries complained about the cost of the automation products, which isn’t surprising given the continual budget constraints libraries face. Many comments indicated satisfaction with the products yet noted the high cost.
In many ways, the results of the survey speak for themselves. The statistical reports indicate clear differences. Some companies and products garner higher levels of satisfaction from the libraries that use them than others.
I trust that libraries will also take the results with the proverbial “grain of salt.” Knowing something about the perceptions of other libraries regarding a given product or company can be an important component in making automation decisions. But it shouldn’t be overemphasized. I worry that surveys like this one draw out the negative more than the positive. A survey provides an opportunity to vent against a vendor during a problematic episode, even when the relations with that vendor have been positive over the longer term.
I also hope that the companies mentioned in the survey will find these results useful and that the results provide some information on how they can make adjustments to improve the satisfaction of their library customers.