Copyright (c) 2006 Information Today
Those of us who work in libraries focus much of our energies on providing access to information. The major changes we have seen in the last decade or so have taught us that we need to be flexible in the ways that we approach content delivery. The initial emergence of the Web brought enormous change, making electronic access to information commonplace; in many ways, it is now the dominant expectation. Even more recently, a new set of evolutionary shifts in electronic information, often dubbed Web 2.0, changed the mix further. This new vision of the Web emphasizes services, interactivity, and collaboration. Manifestations of Web 2.0 include wikis, blogs, Web services, and syndication. (For more information, see "What Is Web 2.0? Design Patterns and Business Models for the Next Generation of Software" by Tim O'Reilly at http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html.)
In practical terms, it's important to offer an array of ways to distribute information. Library users span all the generations of information distribution mechanisms. Some prefer to receive information in their e-mail inboxes, some get information as they need it through the Web, and others like it fed to them through a syndicated service such as RSS. Let us also be careful not to neglect those who prefer the printed page.
To the extent that those of us in libraries are involved in creating and distributing content, we need to take these factors into consideration if we want to reach the largest range of readers. This month, I'll talk about a couple of the resources I've helped develop and discuss the access options they use.
Library Technology Guides, a Web site that I created and maintain to provide access to information related to library automation, and Abzu, a guide to information on the Web related to the study of the ancient Near East, offer quite different types of information, but they share a similar technology platform: a content management system that I have been developing over the course of the last 6 or 7 years. Both consist of bibliographic and full-text information that is stored in a database and accessed through the Open Database Connectivity (ODBC) layer using SQL. Both rely on an interface programmed in Perl for search, retrieval, and presentation. The framework includes Web forms for creating and managing the content of the resource. In addition to these two sites that deal mostly with text-based information, a number of other resources that manage images, digital audio content, and digital video are also supported by this infrastructure.
The Abzu bibliographic database (http://www.etana.org/abzu) was created as part of an initiative at Vanderbilt University Library called ETANA, or Electronic Tools for Ancient Near East Archives, which spans a number of projects. One of the participants associated with ETANA, an archivist at the Oriental Institute of the University of Chicago, is the editor for Abzu. He scours the Web for new publications and other resources relevant to the field and posts them to the database. The original version of Abzu was a set of static Web pages. Under ETANA, the resource was transformed into a dynamic database-powered Web site, where the content can be retrieved and viewed in many different ways.
The Library Technology Guides (http://www.librarytechnology.org) Web site supports my interests in the technologies and companies involved in library automation. It consists of a number of interrelated databases, including a bibliographic database of citations and the full text of articles and press announcements, a directory of libraries and the automation products they use, and a directory of the companies and other organizations involved in producing library automation products.
Though quite different in terms of content, both of these resources face the same challenge of needing to deliver information through multiple vehicles. The content management systems of both Web sites have been developed with users' diverse needs in mind. Let's walk through some of the methods they employ.
Both Library Technology Guides and Abzu present information through dynamically generated browsable lists. Entries in any of the databases can be viewed by simply clicking through links provided in the interface. Abzu, for example, offers an A-Z list for browsing the entries by authors (see Figure 1); Library Technology Guides provides the ability to browse content through lists of subject categories (see Figure 2). Both provide a link to view recently added entries.
The current news link is one of the key features of Library Technology Guides, since it presents an up-to-date summary of recent developments in the field spanning all the major companies and organizations. When a user clicks on the link to see recent additions, the Web site creates the list dynamically by issuing a canned query to the database, which responds with the current information, sorted appropriately and delivered to the user's browser in XHTML formatted according to the presentation options specified in the cascading style sheet (CSS) linked to the resource. From the perspective of layered technical components, the search options specified in the link are passed to the Web server through the standard CGI (common gateway interface) as name/value pairs, which are parsed by a Perl script and translated into a database query statement expressed in SQL. The SQL statement flows through the ODBC layer to the underlying database, which executes the query and delivers the results back up the stack: through the ODBC layer to the Perl script, which then parses the fields of the records returned, wraps them in HTML tags, and delivers them to the user's Web browser. This general process reflects the basic programming techniques that underlie almost all database-driven Web sites.
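My implementation is written in Perl, but the translation of CGI name/value pairs into a parameterized SQL statement can be sketched in a few lines of Python. The field names and whitelist here are illustrative, not the actual schema of either site:

```python
# Sketch of the CGI-to-SQL step described above: parse the query
# string into name/value pairs, keep only expected field names, and
# build a parameterized SQL statement. Field names are hypothetical.
from urllib.parse import parse_qs

ALLOWED_FIELDS = {"subject", "company", "year"}  # assumed whitelist

def query_from_cgi(query_string):
    """Turn 'subject=RSS&year=2006' into SQL plus bind parameters."""
    pairs = parse_qs(query_string)
    clauses, params = [], []
    for field, values in pairs.items():
        if field not in ALLOWED_FIELDS:
            continue  # ignore unexpected parameters
        clauses.append(f"{field} = ?")
        params.append(values[0])
    where = " AND ".join(clauses) or "1=1"
    sql = (f"SELECT title, url FROM articles WHERE {where} "
           "ORDER BY date_added DESC")
    return sql, params
```

Binding the values as parameters rather than interpolating them into the SQL string is also what keeps a query like this safe from injection.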
The search options offered by both resources follow the same process, but instead of using a canned query embedded in a link, they allow the user to type a query in a search box or to select an option from a drop-down list. Both Abzu and Library Technology Guides provide an interface for searching that divides the result set into multiple pages, allowing the user to move through the results by clicking on links to move forward, backward, or to a specific page of results. Clicking on the brief view shown in the initial result page displays the full record of a particular entry. All of these features are standard interface techniques employed by Web search engines, library online catalogs, abstracting-and-indexing databases, and the like. This interface was designed according to the conventions that users have grown to expect when searching Web-based resources.
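The page-by-page navigation amounts to simple arithmetic over the result count. A minimal sketch (in Python rather than the sites' Perl, with an assumed page size) of computing the offset and the previous/next links:

```python
# Illustrative pagination: given a total hit count and a requested
# page, compute the slice of results to fetch and the prev/next/last
# page numbers used to build the navigation links.
def paginate(total_hits, page, per_page=20):
    last_page = max(1, -(-total_hits // per_page))  # ceiling division
    page = min(max(1, page), last_page)             # clamp to range
    offset = (page - 1) * per_page
    return {
        "offset": offset,           # e.g. for SQL LIMIT/OFFSET
        "limit": per_page,
        "prev": page - 1 if page > 1 else None,
        "next": page + 1 if page < last_page else None,
        "last": last_page,
    }
```

The offset and limit then feed straight into the database query, so only one page of records crosses the ODBC layer per request.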
These browse and search options serve only those users who actually visit the Web site. It's often the case that a person has an interest in a topic and prefers to receive information automatically. Library Technology Guides has long had a notification service that allows users to register their e-mail addresses and receive a monthly digest of new developments in the field. This e-mail notification relies on many of the same components as the search-and-browse interface. Instead of the query being launched interactively by a user, however, it is launched through a scheduled script using the "at" scheduling facility built into Windows Server (the equivalent of the "cron" scheduler native to UNIX systems). At the specified date and time, the operating system executes the Perl script, which sends a query to the database for all the entries added in the last month. The script formats the results in a text format appropriate to an e-mail message, with embedded links for each entry's corresponding full text. Once the text of the news digest has been formatted, the script performs a query against the subscriber database to get the e-mail addresses of the recipients. The script then generates a copy of the e-mail digest for each subscriber through a utility that processes outgoing messages (see Figure 3).
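The digest job itself is straightforward once the scheduler fires. A hedged Python sketch of the two steps the Perl script performs, with hypothetical function and field names standing in for the real ones:

```python
# Sketch of the scheduled digest: format the month's new entries as a
# plain-text message with embedded links, then hand one copy per
# subscriber to a mail-sending utility. All names are hypothetical.
def format_digest(entries):
    """entries: list of (title, url) tuples added in the last month."""
    lines = ["New additions this month:", ""]
    for title, url in entries:
        lines.append(f"* {title}")
        lines.append(f"  {url}")
    return "\n".join(lines)

def send_digest(entries, subscribers, send_mail):
    """send_mail is whatever utility processes outgoing messages."""
    body = format_digest(entries)
    for address in subscribers:          # one copy per recipient
        send_mail(to=address, subject="Monthly news digest", body=body)
```

Separating formatting from delivery means the same digest text can also be reused, say, for an archive page on the site.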
RSS has quickly risen to become one of the forms of distribution expected of a Web site. An increasing number of individuals want to receive content through some sort of RSS-enabled interface. RSS provides a mechanism for distributing content from your Web site as a set of XML documents to external systems. Once in RSS XML format, the content can easily be viewed in a number of environments ranging from RSS aggregators such as Bloglines to active bookmarks in Firefox to content services in portals such as Google and Yahoo! personalized pages. This process provides opportunities to expand a content service's audience and is very much part of Web 2.0.
From an implementation perspective, starting an RSS feed in a content management system varies only in small ways from delivering the same information in HTML. All the major blog applications and most content management systems have this capability. In my homegrown system, most of the work involved in creating an RSS feature was writing routines that formatted data in Resource Description Framework (RDF) and XML instead of XHTML and CSS. All of the components involved in querying the database in SQL and ODBC work just the same, with new code added to deliver the pages in XML. To get started, I studied the XML source for a number of RSS feeds to get a sense of the flavor of syntax most widely employed, since there are a number of different versions of RSS. Technical documentation on RSS can be found on a number of Web sites, including http://web.resource.org/rss/1.0/spec.
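To make the point that only the output routine changes, here is a minimal Python sketch of serializing the same database records as an RSS 1.0 (RDF-based) document. The channel and item structure follows the RSS 1.0 specification in outline only; a production feed carries more elements:

```python
# Sketch of the RSS output routine: the records that feed the HTML
# pages, wrapped in RSS 1.0 markup instead. escape() keeps &, <, and >
# from breaking the XML.
from xml.sax.saxutils import escape

def rss_item(title, link):
    return ('<item rdf:about="%s">\n'
            '  <title>%s</title>\n'
            '  <link>%s</link>\n'
            '</item>') % (escape(link), escape(title), escape(link))

def rss_feed(items):
    body = "\n".join(rss_item(title, link) for title, link in items)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"\n'
            '         xmlns="http://purl.org/rss/1.0/">\n'
            + body + "\n</rdf:RDF>")
```

The query and ODBC plumbing upstream of this function is untouched; only the final formatting step differs from the HTML path.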
XML programming requires considerable attention to detail. While Web browsers forgive almost all HTML errors and can display pages riddled with mistakes, most XML parsers abort when they encounter even a single error. Programmers must pay special attention to the correct syntax of the XML tags and to ensuring that only valid characters appear in the documents. Both Abzu and Library Technology Guides deliver content extracted from external sources that include special characters that need to be translated into ones that can be rendered as valid XML.
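The clean-up step can be illustrated with a short Python sketch: entity-escape the markup characters and drop the control characters that XML 1.0 forbids outright. This is one simple approach, not necessarily the exact translation my Perl routines perform:

```python
# Make arbitrary harvested text safe to embed in an XML document:
# escape &, <, > as entities, and strip control characters below
# U+0020 (except tab, newline, carriage return), which are illegal
# in XML 1.0 even when escaped.
from xml.sax.saxutils import escape

def xml_safe(text):
    text = escape(text)  # & < > become &amp; &lt; &gt;
    return "".join(c for c in text if c >= " " or c in "\t\n\r")
```

Since most XML parsers abort on the first error, running every externally sourced field through a filter like this is cheaper than debugging a feed that breaks only on certain records.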
Most librarians won't have to get involved in heavy-duty programming. I offer these examples to illustrate some of the details that take place behind the scenes and to show you some of the general characteristics you should look for in an information resource to give it flexibility of delivery options. For those of you who do Web programming, this discussion shows you how easy it is to layer in multiple delivery mechanisms for any given source of content. The Web will inevitably evolve further. It's important that the ways that we manage access to information resources respond accordingly.
Finally, it's important to emphasize that content delivery options are cumulative. While we may implement additional delivery mechanisms, it hardly ever means that any of the existing ones fall away. This fact may add complexity, but your work pays off in terms of an expanded audience and user satisfaction.
Computers in Libraries, Volume 26, Number 1 (Systems Librarian column)