Perceptions 2009: An International Survey of Library Automation

by Marshall Breeding. January 22, 2010

An interactive version of the survey's statistical results is available.

Libraries make significant investments in technology in order to automate their operations and deliver information resources and services through their Web sites. For most libraries, the integrated library system (ILS) represents the most critical component of this technology infrastructure and can do the most to help or hinder a library in fulfilling its mission to serve its patrons and in operating efficiently. As libraries consider their automation strategies, such as moving to a new ILS, it's helpful to have as much data as possible to make an informed decision. One aspect of that data might involve some measure of the perceptions of the libraries that use those products regarding such things as the quality of the ILS, the company involved, and its customer support. In order to produce data that portrays some of the general perceptions that libraries have about these questions, I have conducted a major survey in each of the last three years. This survey records each library's satisfaction with its ILS and the company involved and probes levels of interest in open source ILS products, one of the major issues brewing in the industry. The survey aims not only to provide libraries with helpful information regarding the products in the field, but may also serve as a tool for the companies involved to glean information on areas of strength and weakness that will help them make any needed improvements.

Top survey findings

  • Products and companies focusing on smaller libraries and narrower niches generally receive higher perception scores than those that serve larger, more complex organizations and multiple types of libraries.
  • Apollo, a system adopted exclusively by small public libraries, topped the charts in perceived satisfaction with the ILS, the company, and its support, as well as in company loyalty, following the formula for success mentioned above. Most libraries adopting Apollo have migrated from abandoned products such as Winnebago Spectrum and Athena.
  • Libraries operating AGent Verso from Auto-Graphics and Polaris from Polaris Library Systems continue to receive extremely high scores, consistent with previous editions of this survey.
  • Companies and products serving large and complex library organizations and diverse library types receive a broader range of responses, and fall into a middle tier of rankings. Yet where they fall within this middle ground represents important differences. Millennium from Innovative Interfaces, Library.Solution from The Library Corporation, and Evergreen as supported by Equinox Software came out as very strong performers at the top of this middle tier.
  • Companies supporting proprietary ILS products generally receive higher satisfaction scores than companies involved with open source ILS products. Evergreen, primarily supported by Equinox Software, fell into the middle tier of satisfaction ratings. LibLime received especially poor marks in customer satisfaction; libraries implementing Koha independently gave themselves high ratings.
  • Except for the libraries already using an open source ILS, the survey reflected low levels of interest in open source, even among libraries that rate their satisfaction with their current proprietary ILS and its company as poor. Apart from libraries already running an open source ILS and those using Winnebago Spectrum and Athena, the mode score from libraries using proprietary ILS products was 0. These results fail to confirm a trend of broad-based interest in open source ILS products; rather, we observe a minority of early adopters voicing strong support.

General Information about the Survey

This report describes the results of a survey that I conducted to gather data regarding the perceptions of libraries toward their automation systems, the organizations that provide support, and the quality of support they receive. It also aims to gauge interest in open source library automation systems.

I conducted a similar survey in 2007 and 2008.

This year, I received 2,098 responses from libraries in 56 different countries. The countries most strongly represented include the United States (1,633 responses), the United Kingdom (80), Canada (128), and Australia (91). As with the general demographics of the lib-web-cats database, the respondents primarily come from libraries in English-speaking countries. Survey results were gathered between November 4, 2009 and January 11, 2010 (see the full demographic summary).

The survey attracted the most responses from libraries using Millennium (343), followed by Symphony (304), Horizon (192), Voyager (164), ALEPH 500 (133), Library.Solution (110), Polaris (92), and AGent VERSO (71). There were fewer than 70 responses for each of the other ILS products represented in the survey. Systems with fewer than 20 responses do not appear in the main statistical tables; their responses can be seen through the individual ILS Product Reports.

This article is an original publication of Library Technology Guides and is not slated to appear in any print publication. Please direct any comments or enquiries to the author.

This survey and its analysis reflect my ongoing interest in following trends in the library automation industry. It is designed to complement, and not replace, the annual Automation Systems Marketplace feature that I have written for Library Journal for the last seven years. The survey underlying the Library Journal article relies on information provided by the companies that offer library automation products and services. The survey that serves as the basis for this article collects data from the libraries themselves.

Survey Results

Statistics related to the question: How satisfied is the library with your current Integrated Library System (ILS)?

Satisfaction Score for ILS

Company | Responses | Distribution (response counts by ascending score; scores with no responses omitted) | Mode | Mean | Median | Std Dev
Apollo | 34 | 6, 10, 18 | 9 | 8.35 | 9 | 1.37
AGent VERSO | 71 | 1, 1, 5, 16, 27, 21 | 8 | 7.83 | 8 | 0.95
Polaris | 92 | 1, 1, 2, 1, 1, 13, 52, 21 | 8 | 7.79 | 8 | 0.83
Koha -- Independent | 26 | 4, 5, 10, 7 | 8 | 7.77 | 8 | 1.57
OPALS | 41 | 1, 1, 1, 1, 12, 7, 18 | 9 | 7.66 | 8 | 1.41
Atriuum | 55 | 1, 1, 3, 4, 13, 14, 19 | 9 | 7.53 | 8 | 0.13
Millennium | 343 | 1, 7, 10, 31, 32, 109, 102, 51 | 7 | 7.13 | 7 | 0.43
Library.Solution | 110 | 1, 2, 4, 3, 6, 4, 45, 26, 19 | 7 | 7.06 | 7 | 0.67
Evergreen | 49 | 1, 10, 5, 21, 6, 6 | 7 | 6.76 | 7 | 1.00
Circulation Plus | 38 | 2, 2, 1, 2, 2, 1, 3, 7, 9, 9 | 8 | 6.39 | 7 | 1.30
Spydus | 22 | 1, 1, 1, 1, 4, 7, 6, 1 | 7 | 6.36 | 7 | 1.28
ALEPH 500 | 133 | 2, 1, 1, 7, 7, 16, 27, 37, 31, 4 | 7 | 6.29 | 7 | 0.61
Virtua | 21 | 1, 1, 1, 5, 8, 4, 1 | 7 | 6.29 | 7 | 1.31
Koha -- LibLime | 49 | 5, 1, 4, 4, 9, 14, 9, 3 | 7 | 6.12 | 7 | 1.00
Symphony (Unicorn) | 304 | 3, 5, 5, 9, 16, 62, 56, 97, 44, 7 | 7 | 6.07 | 6 | 0.46
Horizon | 192 | 3, 5, 7, 7, 13, 24, 25, 69, 32, 7 | 7 | 6.06 | 7 | 0.51
Voyager | 164 | 8, 10, 9, 31, 39, 41, 22, 4 | 7 | 5.91 | 6 | 0.23
Athena | 22 | 1, 3, 2, 1, 1, 4, 4, 2, 4 | 6 | 5.73 | 6 | 0.64
Winnebago Spectrum | 33 | 3, 1, 3, 1, 5, 6, 3, 5, 5, 1 | 5 | 4.91 | 5 | 1.22



Statistics related to the question: How satisfied is the library overall with the company from which you purchased your current ILS?

Satisfaction Score for Company

Company | Responses | Distribution (response counts by ascending score; scores with no responses omitted) | Mode | Mean | Median | Std Dev
Apollo | 34 | 7, 27 | 9 | 8.79 | 9 | 1.37
AGent VERSO | 71 | 1, 2, 2, 5, 19, 42 | 9 | 8.31 | 9 | 1.07
OPALS | 41 | 1, 1, 1, 1, 5, 11, 21 | 9 | 7.90 | 9 | 1.41
Koha -- Independent | 22 | 1, 3, 4, 5, 9 | 9 | 7.82 | 8 | 1.92
Polaris | 92 | 1, 1, 1, 4, 4, 6, 48, 27 | 8 | 7.80 | 8 | 0.83
Atriuum | 55 | 1, 2, 1, 3, 2, 9, 16, 21 | 9 | 7.56 | 8 | 0.13
Library.Solution | 110 | 1, 2, 6, 4, 2, 7, 25, 33, 30 | 8 | 7.27 | 8 | 0.67
Circulation Plus | 38 | 1, 2, 1, 3, 3, 2, 7, 7, 12 | 9 | 6.74 | 8 | 1.30
Evergreen | 49 | 2, 1, 1, 12, 1, 15, 10, 7 | 7 | 6.63 | 7 | 1.00
Millennium | 342 | 1, 4, 13, 8, 17, 41, 48, 92, 71, 47 | 7 | 6.58 | 7 | 0.32
Virtua | 21 | 1, 1, 1, 1, 3, 7, 5, 2 | 7 | 6.48 | 7 | 1.31
Spydus | 22 | 1, 1, 3, 4, 8, 5 | 7 | 6.32 | 7 | 1.07
ALEPH 500 | 131 | 3, 1, 5, 4, 11, 21, 20, 35, 25, 6 | 7 | 6.04 | 7 | 0.52
Voyager | 162 | 1, 1, 7, 12, 17, 23, 35, 45, 19, 2 | 7 | 5.73 | 6 | 0.24
Koha -- LibLime | 49 | 2, 2, 4, 3, 3, 6, 7, 13, 7, 2 | 7 | 5.47 | 6 | 1.00
Symphony (Unicorn) | 303 | 5, 9, 27, 22, 22, 56, 48, 80, 29, 5 | 7 | 5.35 | 6 | 0.46
Athena | 22 | 2, 4, 1, 1, 2, 3, 2, 4, 3 | 2 | 5.27 | 6 | 0.64
Horizon | 190 | 4, 14, 19, 22, 21, 23, 25, 37, 17, 8 | 7 | 4.92 | 5 | 0.51
Winnebago Spectrum | 32 | 3, 2, 1, 5, 3, 4, 3, 5, 2, 4 | 3 | 4.88 | 5 | 1.06


Statistics related to the question: How satisfied is this library with this company's customer support services?

Satisfaction Score for ILS Support

Company | Responses | Distribution (response counts by ascending score; scores with no responses omitted) | Mode | Mean | Median | Std Dev
Apollo | 34 | 1, 5, 28 | 9 | 8.79 | 9 | 1.37
AGent VERSO | 70 | 1, 1, 1, 8, 15, 44 | 9 | 8.34 | 9 | 1.08
OPALS | 41 | 1, 1, 1, 1, 3, 10, 24 | 9 | 8.10 | 9 | 1.41
Koha -- Independent | 23 | 2, 2, 4, 6, 9 | 9 | 7.78 | 8 | 1.88
Polaris | 91 | 2, 1, 2, 1, 1, 17, 45, 22 | 8 | 7.68 | 8 | 0.84
Atriuum | 55 | 1, 1, 1, 1, 1, 1, 4, 5, 18, 22 | 9 | 7.56 | 8 | 0.13
Library.Solution | 110 | 1, 1, 4, 3, 1, 5, 12, 20, 31, 32 | 9 | 7.23 | 8 | 0.67
Circulation Plus | 38 | 3, 2, 1, 1, 1, 5, 10, 15 | 9 | 7.13 | 8 | 1.30
Millennium | 339 | 2, 2, 11, 15, 16, 28, 69, 83, 71, 42 | 7 | 6.53 | 7 | 0.38
Virtua | 21 | 2, 1, 2, 3, 5, 6, 2 | 8 | 6.52 | 7 | 1.31
Evergreen | 48 | 2, 2, 12, 2, 15, 7, 8 | 7 | 6.48 | 7 | 1.01
ALEPH 500 | 132 | 3, 2, 5, 4, 12, 25, 27, 23, 20, 11 | 6 | 5.90 | 6 | 0.52
Spydus | 22 | 1, 1, 3, 1, 2, 3, 5, 5, 1 | 7 | 5.82 | 7 | 0.85
Horizon | 190 | 3, 5, 10, 17, 14, 31, 25, 40, 28, 17 | 7 | 5.76 | 6 | 0.58
Voyager | 162 | 1, 3, 11, 15, 12, 20, 26, 44, 25, 5 | 7 | 5.72 | 6 | 0.16
Symphony (Unicorn) | 303 | 3, 12, 30, 22, 17, 54, 45, 67, 41, 12 | 7 | 5.45 | 6 | 0.52
Koha -- LibLime | 49 | 4, 1, 3, 5, 3, 5, 7, 9, 10, 2 | 8 | 5.35 | 6 | 1.14
Athena | 22 | 4, 1, 1, 2, 3, 2, 3, 3, 3 | 0 | 4.91 | 6 | 0.64
Winnebago Spectrum | 32 | 5, 1, 8, 1, 2, 3, 2, 7, 1, 2 | 2 | 4.06 | 4 | 0.35

Statistics related to the question: How likely is it that this library will purchase its next ILS from this company?

Loyalty to Company Score

Company | Responses | Distribution (response counts by ascending score; scores with no responses omitted) | Mode | Mean | Median | Std Dev
Apollo | 34 | 1, 1, 3, 29 | 9 | 8.76 | 9 | 1.03
AGent VERSO | 71 | 1, 1, 1, 9, 9, 50 | 9 | 8.45 | 9 | 1.07
OPALS | 41 | 2, 1, 1, 1, 10, 26 | 9 | 7.98 | 9 | 1.41
Koha -- Independent | 21 | 1, 2, 1, 1, 3, 13 | 9 | 7.81 | 9 | 1.96
Polaris | 91 | 3, 1, 1, 1, 5, 2, 5, 37, 36 | 8 | 7.68 | 8 | 0.84
Atriuum | 54 | 3, 1, 2, 1, 1, 1, 3, 10, 10, 22 | 9 | 7.09 | 8 | 0.14
Library.Solution | 109 | 4, 3, 6, 1, 1, 12, 4, 15, 22, 41 | 9 | 6.94 | 8 | 0.67
Evergreen | 49 | 1, 1, 1, 13, 1, 8, 11, 13 | 5 | 6.86 | 7 | 1.00
Millennium | 340 | 14, 10, 12, 15, 21, 42, 28, 56, 59, 83 | 9 | 6.33 | 7 | 0.49
Spydus | 21 | 2, 2, 2, 2, 4, 4, 5 | 9 | 6.24 | 7 | 1.09
ALEPH 500 | 132 | 10, 5, 5, 6, 7, 11, 14, 26, 31, 17 | 8 | 5.93 | 7 | 0.52
Circulation Plus | 38 | 7, 2, 2, 1, 2, 3, 3, 4, 14 | 9 | 5.63 | 7 | 1.30
Virtua | 20 | 4, 1, 2, 3, 3, 5, 2 | 8 | 5.55 | 7 | 0.00
Voyager | 160 | 11, 5, 5, 11, 14, 22, 20, 42, 17, 13 | 7 | 5.51 | 6 | 0.00
Symphony (Unicorn) | 301 | 32, 14, 12, 14, 22, 67, 31, 49, 44, 16 | 5 | 5.08 | 5 | 0.46
Koha -- LibLime | 48 | 12, 1, 1, 1, 9, 3, 8, 9, 4 | 0 | 4.90 | 6 | 1.01
Horizon | 188 | 26, 10, 12, 9, 22, 30, 16, 26, 22, 15 | 5 | 4.72 | 5 | 0.51
Winnebago Spectrum | 33 | 11, 1, 1, 4, 4, 2, 1, 2, 7 | 0 | 3.76 | 3 | 1.57
Athena | 22 | 8, 1, 3, 1, 4, 1, 1, 3 | 0 | 3.41 | 3 | 0.64


Statistics related to the question: Has the customer support for your ILS gotten better or worse in the last year?

Change in Customer Support Quality

Company | Responses | Distribution (response counts by ascending score; scores with no responses omitted) | Mode | Mean | Median | Std Dev
Apollo | 30 | 1, 1, 1, 8, 19 | 9 | 8.43 | 9 | 1.64
AGent VERSO | 64 | 2, 7, 2, 10, 10, 33 | 9 | 7.75 | 9 | 1.00
OPALS | 41 | 2, 9, 2, 5, 6, 17 | 9 | 7.15 | 8 | 0.78
Koha -- Independent | 21 | 1, 8, 2, 2, 8 | 5 | 6.95 | 7 | 1.96
Virtua | 20 | 1, 1, 1, 1, 1, 8, 2, 5 | 7 | 6.85 | 7 | 2.01
Polaris | 87 | 3, 1, 1, 3, 13, 12, 8, 29, 17 | 8 | 6.83 | 8 | 0.75
Atriuum | 53 | 1, 1, 1, 1, 14, 3, 7, 13, 12 | 5 | 6.74 | 7 | 0.55
Evergreen | 47 | 3, 3, 12, 5, 2, 16, 6 | 8 | 6.47 | 7 | 0.29
Library.Solution | 104 | 3, 2, 2, 3, 6, 26, 6, 19, 20, 17 | 5 | 6.26 | 7 | 0.49
Spydus | 21 | 1, 3, 6, 1, 4, 3, 3 | 5 | 6.14 | 6 | 0.87
Circulation Plus | 36 | 4, 1, 4, 9, 2, 4, 4, 8 | 5 | 5.75 | 6 | 1.17
ALEPH 500 | 123 | 2, 1, 2, 3, 14, 43, 19, 17, 11, 11 | 5 | 5.73 | 5 | 0.54
Millennium | 316 | 6, 3, 9, 8, 12, 136, 50, 48, 19, 25 | 5 | 5.65 | 5 | 0.28
Voyager | 153 | 2, 5, 8, 8, 18, 54, 17, 17, 18, 6 | 5 | 5.27 | 5 | 0.57
Symphony (Unicorn) | 292 | 7, 11, 19, 29, 22, 89, 35, 40, 30, 10 | 5 | 5.10 | 5 | 0.35
Horizon | 179 | 6, 10, 11, 9, 23, 59, 25, 16, 14, 6 | 5 | 4.88 | 5 | 0.60
Koha -- LibLime | 45 | 6, 3, 1, 6, 4, 7, 4, 7, 2, 5 | 5 | 4.62 | 5 | 0.75
Athena | 21 | 4, 3, 1, 2, 6, 1, 2, 2 | 5 | 3.90 | 5 | 0.65
Winnebago Spectrum | 32 | 7, 6, 2, 3, 8, 3, 1, 2 | 5 | 3.41 | 4 | 0.35


Statistics related to the question: How likely is it that this library would consider implementing an open source ILS?

Interest Level in Open Source

Company | Responses | Distribution (response counts by ascending score; scores with no responses omitted) | Mode | Mean | Median | Std Dev
Koha -- Independent | 20 | 1, 19 | 9 | 8.85 | 9 | 1.34
Evergreen | 43 | 1, 1, 1, 1, 39 | 9 | 8.42 | 9 | 1.37
Koha -- LibLime | 46 | 2, 1, 3, 1, 39 | 9 | 8.20 | 9 | 1.33
OPALS | 34 | 4, 1, 2, 4, 23 | 9 | 6.88 | 9 | 1.54
Winnebago Spectrum | 33 | 3, 3, 1, 4, 8, 4, 2, 8 | 5 | 5.36 | 5 | 1.57
Athena | 20 | 3, 3, 1, 1, 2, 1, 4, 5 | 9 | 5.35 | 7 | 0.00
Horizon | 187 | 28, 13, 20, 20, 17, 20, 18, 23, 7, 21 | 0 | 4.25 | 4 | 0.15
Voyager | 164 | 24, 19, 23, 11, 8, 23, 16, 19, 12, 9 | 0 | 3.97 | 4 | 0.70
Symphony (Unicorn) | 300 | 52, 34, 26, 25, 31, 40, 25, 20, 23, 24 | 0 | 3.92 | 4 | 0.12
Virtua | 20 | 5, 4, 1, 1, 3, 1, 1, 1, 3 | 0 | 3.70 | 4 | 2.01
ALEPH 500 | 131 | 30, 10, 11, 13, 12, 15, 16, 15, 3, 6 | 0 | 3.61 | 4 | 0.26
Millennium | 340 | 69, 47, 36, 27, 31, 41, 20, 29, 18, 22 | 0 | 3.51 | 3 | 0.11
Library.Solution | 110 | 24, 17, 11, 7, 9, 16, 9, 8, 4, 5 | 0 | 3.30 | 3 | 0.29
Circulation Plus | 37 | 8, 3, 6, 5, 4, 4, 1, 2, 1, 3 | 0 | 3.27 | 3 | 0.33
Atriuum | 53 | 15, 7, 4, 5, 3, 10, 2, 2, 2, 3 | 0 | 3.04 | 3 | 0.14
Apollo | 34 | 14, 4, 2, 4, 2, 3, 1, 2, 1, 1 | 0 | 2.35 | 1 | 0.17
Polaris | 90 | 27, 21, 13, 2, 6, 10, 4, 3, 3, 1 | 0 | 2.28 | 1 | 0.42
AGent VERSO | 71 | 29, 9, 12, 1, 5, 8, 2, 3, 1, 1 | 0 | 2.06 | 1 | 0.24
Spydus | 22 | 9, 3, 6, 2, 1, 1 | 0 | 1.50 | 1 | 0.00


An interactive version of the statistical reports is available, including the ability to view the responses for each of the ILS products, along with the redacted comments.


Most Positive Perceptions

This year a relatively new company, Biblionix, attracted the top satisfaction scores in the categories for ILS product, company, and support for its Apollo system. This ILS product, offered exclusively through software-as-a-service, targets small public libraries. The product has been implemented in only about 110 sites; 34 responded to the survey. The responses for Apollo were overwhelmingly positive; it was the only product to receive 9 as either the mode or median response. The comments offered effusive praise for the company, the product, the ease of migration, and the support. Biblionix sells Apollo exclusively to small public libraries. While the majority of its customers lie in its home state of Texas, it has recently found many client libraries throughout the Midwestern states of Oklahoma, Nebraska, Iowa, and Kansas, as well as Tennessee, Florida, and Maine (view map of Apollo sites).

The next tier of products also received excellent satisfaction scores, with differences too small to justify assigning an ordered ranking: AGent Verso from Auto-Graphics, Polaris from Polaris Library Systems, and Atriuum from Book Systems.

Polaris, which achieved top rankings in each category in the 2007 survey and in the ILS and company categories in 2008, scored even higher this year, but not high enough to overtake the chart-topping results given by Apollo libraries. AGent VERSO, supported by Auto-Graphics, has shown improvement in each of the three years of the survey.

Two open source products, OPALS (supported by Media Flex) and Koha (when implemented independently), also fell within this tier of excellent perceptions in all categories.

Note: Koha received high marks only from those libraries that implemented it independently. Those that relied on LibLime as their support provider gave much more negative ratings.

Most Negative Perceptions

It's not surprising that products that have been discontinued, such as Athena and Winnebago Spectrum from Follett Software Company, received the most negative scores. It's a bit more interesting to see libraries give low satisfaction rankings to products that do receive ongoing development, though they are not positioned as the company's flagship ILS. Both Horizon from SirsiDynix and Voyager from Ex Libris received low ILS satisfaction scores. Libraries using SirsiDynix Symphony gave almost exactly the same rankings as those using Horizon.

The customer support ratings given for LibLime's support for Koha were low, with only Winnebago Spectrum and Athena receiving worse marks. LibLime's customer support and company ratings have steadily declined over the three years of the survey.

ILS Satisfaction

As noted, libraries using Apollo from Biblionix seem to be almost unanimously delighted with their ILS. The mean satisfaction score of 8.35 surpasses those reported in the current or previous editions of the survey. The libraries using Apollo tend to be small and are moving away from long outdated systems.

The next tier of products reflects quite high satisfaction, all with mean scores higher than 7.0. These include AGent Verso (7.83), Polaris (7.79), Koha -- Independent (7.77), OPALS (7.66), Atriuum (7.53), Millennium (7.13), and Library.Solution (7.06).

While the most positive and the most negative rankings tend to attract the most attention, the real story lies in those that fell between the extremes. These are companies and products with generally solid support from their customer libraries, reflecting the realities of serving more complex organizations. Evergreen, Spydus, ALEPH 500, and Virtua fell into this middle range of ILS satisfaction, with mean scores between 6.76 and 6.29. Though there is room for improvement in the satisfaction ratings for each of these products, these systems serve larger libraries or consortia with more complex operational contexts where product bugs and support issues are more likely to surface.

Open Source Perceptions

The survey reflected interesting attitudes regarding open source integrated library systems. Libraries already running open source ILS products such as Koha, Evergreen, and OPALS naturally give highly positive responses to the question “How likely is it that this library would consider implementing an open source ILS?” These libraries have already implemented one. I notice that those using OPALS, an open source ILS primarily for K-12 schools, rated open source interest (6.99) much lower than those using Evergreen (8.42) or Koha (8.85 independent, 8.20 LibLime). It's striking how dramatically the responses turn less positive among all other libraries. As in previous years, libraries that give their current system positive ratings express less interest in open source. Yet even those registering dissatisfaction with their current ILS do not register especially high scores for open source interest. Of those currently using a proprietary ILS, only libraries using Winnebago Spectrum, one of the ILSs no longer receiving support from Follett Software Company, gave an interest rating above the midpoint (5.36). When collapsing responses across all libraries regardless of current ILS, the mean score of 3.91 does not seem to reflect widespread interest; the mode score for this question, that is, the response most frequently selected, was 0.

The survey separated the libraries using Koha according to whether they provide their own support or if they contract with one of the support companies such as LibLime. Libraries involved with other support firms including PTFS, Nusoft, and ByWater Solutions also responded, but not in sufficient numbers to be included in the summary tables. Libraries using Koha independently gave very high marks in all categories. One might either infer that self-support leads to higher satisfaction than working with one of the commercial support companies or that the marks are higher since they are essentially scoring themselves.


Comments on Selected Companies and Products

Biblionix provides Apollo exclusively to small public libraries and only as a hosted solution. As noted above, the company has found a winning formula that earns high praise from its customer libraries. In all areas the libraries that use Apollo gave superlative ratings, and the comments offered reflected the ease of installation and use of the product.

Polaris Library Systems, offering the Polaris ILS, continues to receive extremely positive rankings from its client libraries. The three-year comparison reflects slightly higher satisfaction scores for the ILS itself this year than in the previous two. The company's customer support satisfaction score was equal to last year's, but down just a bit from the stellar 8.11 mean ranking it received in 2007. In recent years Polaris has attracted some large municipal libraries as customers, which may not only put a bit more of a strain on the company's support personnel but also fits into the pattern we've observed, where more complex organizations tend not to give superlative scores.

Auto-Graphics, which offers the AGent VERSO ILS primarily to small public libraries through software-as-a-service, earned extremely positive scores in all categories. Libraries using AGent VERSO like the product and Auto-Graphics as a company, and are satisfied with the support they receive. Auto-Graphics has seen a steady improvement in satisfaction across each of the three years of this survey. Its customer support score (8.34) improved almost a full point above the 2007 value (7.46). Comments provided on the survey were overwhelmingly positive.

Library.Solution from The Library Corporation, used primarily by small to medium-sized public libraries, received very good responses in all categories. Its scores trended toward the top of the middle tier of responses for satisfaction with the ILS, the company, and support. Of the libraries using Library.Solution, 8.70 percent indicate interest in moving to a new automation system. Comments reflected mostly praise for the system and for its support.

Innovative Interfaces: More libraries using Millennium from Innovative Interfaces responded than for any other ILS. While not at the top of the charts, libraries gave this ILS favorable ratings, though there were some dissenting voices. Libraries using Millennium seem to appreciate the quality of the ILS (7.13) a bit more than the company itself (6.58) or the support they receive (6.53). Scores for Millennium across the three years of the survey have been remarkably consistent. Despite these generally strong satisfaction scores, 11.68 percent of responding libraries indicate they are considering changing to a new ILS. The comments offered vary from high praise to complaints about the cost of the system and its perceived closed architecture. Comments regarding support ran from hot to cold.

Ex Libris, supporting both ALEPH 500 and Voyager, deals with some of the world's largest and most complex libraries. Its ALEPH 500 customers gave responses generally in the middle tier of satisfaction in all of the categories. Among libraries using ALEPH 500, 11.11 percent indicated consideration of moving to a new ILS, while 45.19 percent registered interest in a new interface, good news for the company's ambitions for Primo. Scores for ALEPH 500 improved in each category across all three editions of the survey. Voyager did not fare quite as well on the survey. Its ILS satisfaction score (5.91) and customer support rating (5.72) were lower than those for ALEPH 500. Even though Ex Libris offers continuing support, 18.90 percent of these libraries express interest in moving to a new ILS.

SirsiDynix provides both Symphony and Horizon. Symphony stands as the company's strategic ILS going forward, but the company continues to develop and support Horizon. It also supports Dynix, but this system is no longer being developed and only a remnant of the once massive customer base of libraries using this product remains.

The ILS satisfaction scores for both Symphony and Horizon fell in the lower tier. Though the scores were infinitesimally close, it's remarkable that Horizon was rated as more satisfactory than Symphony. Given the status of Symphony as the company's flagship product, this result is startling. Both products received enough responses (Symphony = 305, Horizon = 191) to reinforce the validity of the results. Symphony sites did rate SirsiDynix as a company slightly higher than those with Horizon, which gave the company the second-worst ratings; only those using the abandoned Winnebago Spectrum product disliked their ILS company more. Relative to the other products, satisfaction ratings for Symphony were near the middle in 2007 but fall toward the lower bounds in this year's edition.

The 10 Dynix libraries that responded rated their satisfaction with the ILS about the same as those running Symphony (6.00), rated the company itself higher than libraries using either Horizon or Symphony (6.00), and rated customer support significantly higher (7.10). While the number of responses submitted from Dynix libraries falls below the 20 required to appear in the main summary tables, they are worth mentioning since they reflect perceptions similar to those of the company's ongoing products.

Comments submitted from Symphony sites were generally negative, but peppered with ones reflecting more satisfaction. Libraries using Horizon generally state or imply that they are considering options other than Symphony for their next system. These comments imply that Horizon libraries with strong intentions to move to Symphony have done so by now.

When comparing results across the three years of this survey, Symphony libraries' perceptions improved by a very small degree over those reflected in 2008. Satisfaction for Symphony, though in the lower tier, was higher in 2007 than in subsequent years. Support ratings are higher for 2009 (5.44) than for 2008 (4.91), but lower than 2007 (5.48). Symphony sites on average express quite low interest in open source alternatives, with a striking mode score of 0.


Details about The Survey

The survey instrument included five numerical ratings, three yes/no responses, two short response fields, and a text field for general comments. The numeric rating fields allow responses from 0 through 9. Each scale was labeled to indicate the meaning of the numeric selection.

Four of the numeric questions probe the level of satisfaction with and loyalty to the company or organization that provides the library's current automation system:

  • How satisfied is the library with your current Integrated Library System (ILS)?
  • How satisfied is the library overall with the company from which you purchased your current ILS?
  • How satisfied is this library with this company's customer support services?
  • How likely is it that this library will purchase its next ILS from this company?

A yes/no question asks whether the library is considering migrating to a new ILS, and a fill-in text field provides the opportunity to list specific systems under consideration. Another yes/no question asks whether the automation system currently in use was installed on schedule.


Given the recent interest in new search interfaces, a yes/no question asks “Is the library currently considering a search interface for its collection that is separate from the ILS?” along with a fill-in field to indicate products under consideration.

The survey includes two questions that aim to gauge interest in open source ILS products: a numerical rating that asks “How likely is it that this library would consider implementing an open source ILS?” and a fill-in text field for indicating products under consideration.

The survey concludes with a text box inviting comments.

View the survey. (This version of the survey does not accept or record response data.)

In order to correlate the responses with particular automation systems and companies, the survey links to entries in the lib-web-cats directory of libraries. Each entry in lib-web-cats indicates the automation system currently in use as well as data on the type of library, location, collection size, and other factors of potential interest. In order to fill out the survey, respondents first had to find their library in lib-web-cats and then press a button that launched the response form. Some potential respondents indicated that they found this process complex.

The link between the lib-web-cats entry and the survey automatically populated fields for the library name and current automation system and provided access to other data elements about the library as needed. The report on survey response demographics, for example, relies on data from lib-web-cats.

A number of methods were used to solicit responses to the survey. E-mail messages were sent to library-oriented mailing lists such as WEB4LIB, PUBLIB, and NGC4LIB. Invitational messages were also sent to many lists for specific automation systems and companies. Where contact information was available in lib-web-cats, an automated script produced e-mail messages with a direct link to the survey response form for that library.

The survey attempted to limit responses to one per library. This restriction was imposed to encourage respondents to reflect the broad perceptions of their institution rather than their personal opinions.

The survey instrument was created using the same infrastructure as the Library Technology Guides web site: a custom interface written in Perl, using MySQL to store the data and ODBC as the connection layer. Access to the raw responses is controlled through a user name and password available only to the author. Scripts were written to provide public access to the survey in a way that does not expose individual responses.

In order to provide access to the comments without violating the stated agreement not to attribute individual responses to any given institution or individual, an additional field was created for “edited comments.” This field was manually populated with text selected from the “comments” text provided by the respondent. Any information that might identify the individual or library was edited out, with an ellipsis indicating the removed text. Comments that only explained a response or described the circumstances of the library were not transferred to the edited comments field.

Statistics

To analyze the results, a few scripts were written to summarize, analyze, and present the responses.

In order to avoid making generalizations based on inadequate sample sizes, the processing scripts included a threshold variable that would only present results when the number of responses exceeded the specified value. The threshold was set to a value of 20.
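The threshold logic amounts to a simple filter over the per-product response lists. A minimal sketch in Python (the original reporting scripts were written in Perl; the response store and the Dynix entry here are illustrative, drawing on counts mentioned elsewhere in the article):

```python
# Hypothetical response store mapping each ILS product to its 0-9 ratings.
# Apollo's 34 responses and Dynix's 10 are counts taken from the article.
responses = {
    "Apollo": [7] * 6 + [8] * 10 + [9] * 18,  # 34 responses
    "Dynix": [6] * 10,                        # only 10 responses
}

THRESHOLD = 20  # minimum responses before a product appears in the main tables

reportable = {product: scores for product, scores in responses.items()
              if len(scores) >= THRESHOLD}

print(sorted(reportable))  # ['Apollo'] -- Dynix falls below the threshold
```

This is why Dynix, with its 10 responses, is discussed in the narrative but absent from the summary tables.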

For each of the survey questions that involve a numeric rating, a set of subroutines was created to calculate and display simple statistics.
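Those calculations can be sketched as follows, in Python rather than the original Perl. The sample data assumes Apollo's 34 ILS-satisfaction responses fall on scores 7, 8, and 9, an inference from the table that is consistent with the reported mean of 8.35:

```python
from collections import Counter
import statistics

def summarize(scores):
    """Compute mode, mean, and median for a list of 0-9 ratings."""
    mode = Counter(scores).most_common(1)[0][0]  # most frequently selected score
    return (mode,
            round(statistics.mean(scores), 2),
            statistics.median(scores))

# Apollo's ILS-satisfaction responses, assuming the counts 6, 10, and 18
# from the table above fall on scores 7, 8, and 9 respectively.
apollo = [7] * 6 + [8] * 10 + [9] * 18

print(summarize(apollo))  # (9, 8.35, 9.0)
```

The mode (9), mean (8.35), and median (9) match the Apollo row in the ILS satisfaction table.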

The “survey-report-by-category.pl” script processes each of the numerical ratings, displaying each of the statistical components listed above for each product that received responses above the threshold value. This report provides a convenient way to compare the performance of each ILS product for the selected question. The report sorts the statistics for each product in descending order of the mean. The report categories available correspond to the survey questions with numerical scale responses.

The “survey-product-report.pl” script provides the results for each of the ILS products mentioned in the responses. This report also provides the statistical components for each of the numeric questions, as well as the percentage of yes responses to the two yes/no questions.

[The text of this section mostly replicates what appeared in the 2007 version of this article. For each edition of the survey I followed the same methodology for collection and statistical analysis.]

Caveat

As I noted with last year’s survey, one should not read too much into the survey results. Responders to the survey provide their subjective impressions to fairly general questions. Although the survey instructions encourage responders to consider the broader institutional perceptions, it’s usually the case that multiple opinions prevail within any given library. While I believe that this survey does provide useful information about the experiences of libraries with their current integrated library systems and the companies that provide support, it should not be used as a definitive assessment tool.