Introduction to Art Image Access


Delivery


The investment in creating a digital image collection will be wasted if the chosen delivery method is ineffective. Successful delivery will depend on a number of elements, including user requirements, interface design, the consistency and quality of metadata, the choice of image-management or presentation software, and technical infrastructure. Almost all digital image collections are now distributed via the Web, even when they are intended for internal use. Beyond that common feature, delivery solutions vary greatly in complexity, performance, and cost. They include the option of contributing images and metadata to a collaborative initiative that offers material from several different institutions, which has the advantage of transferring the burden of providing access to a third party but the disadvantage of allowing minimal or no customization and requiring some surrender of control. It is also possible, and increasingly common, to offer a collection independently while simultaneously contributing it to one or more collective ventures, each targeting a different user group.

The simplest and least technologically demanding way of delivering material is to generate static Web pages, but this will not provide sufficient functionality for many institutions, which will require some level of dynamic, interactive interface. The interrogation of an image collection's metadata then becomes a central issue. There is no single, perfect delivery solution; the chosen system should take into account the audience, the size of the collection, the complexity of its metadata, the expected levels of demand and performance, and security and intellectual property requirements. Images and metadata may be contained in a single database or spread across a number of integrated systems. Some image-management systems, usually the less powerful desktop variety, are all-in-one solutions that include a Web publication module and largely predetermine how images can be shown and searched. Others, usually the more powerful solutions, do not come as a single package but require the separate selection of a search engine, an underlying database, and perhaps a Web delivery mechanism and other components; these allow greater flexibility and customization. However, the more idiosyncratic the solution, the greater the technical support it will require and the more difficult it will be to maintain in the long term. Standard systems, open architecture, and the documentation of all customization and integration are strongly recommended.
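As a concrete illustration of the simplest approach, generating static Web pages, the following Python sketch produces a stand-alone HTML page for each record in a collection. It is illustrative only: the record fields, file names, and page layout are all hypothetical, and a real collection would draw its metadata from an image-management system rather than a hard-coded list.

# A rough sketch, not a production tool: every field name, file
# name, and template detail here is a hypothetical illustration.
import html

records = [
    {"id": "obj001", "title": "Untitled Landscape", "creator": "Anonymous",
     "keywords": "landscape, oil, nineteenth century",
     "image": "images/obj001.jpg"},
    # ...one entry per image in the collection...
]

TEMPLATE = """<html>
<head>
<title>{title}</title>
<meta name="description" content="{title} by {creator}">
<meta name="keywords" content="{keywords}">
</head>
<body>
<h1>{title}</h1>
<p>Creator: {creator}</p>
<img src="{image}" alt="{title}">
</body>
</html>
"""

for record in records:
    # Escape metadata values so they display safely as HTML.
    safe = {key: html.escape(value) for key, value in record.items()}
    with open(record["id"] + ".html", "w", encoding="utf-8") as page:
        page.write(TEMPLATE.format(**safe))

Note the description and keywords meta tags in the template; as discussed below, populating such tags can also help external search engines discover a collection.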

Web delivery requires connecting the client's browser, a Web server, and an underlying database (the image-management system). A common method for achieving this is the use of CGI (Common Gateway Interface) scripts; alternatives include JSP, PHP, and the Microsoft-specific ASP. All of these involve queries being entered on the client via a Web page and executed on the Web server before results are passed back to the client browser. The choice of method depends on the database software, the complexity of the data, and the available programming skills.
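The following sketch shows the shape of such a script, using Python's standard cgi module and a SQLite database. The database file and the images table, with its title and creator columns, are hypothetical, and the same request-and-response pattern underlies JSP, PHP, and ASP.

#!/usr/bin/env python
# A minimal sketch of a CGI search script. The database file and
# the "images" table with its columns are hypothetical illustrations.
import cgi
import html
import sqlite3

form = cgi.FieldStorage()
term = form.getfirst("q", "")  # the query entered on the client's Web page

connection = sqlite3.connect("collection.db")  # hypothetical database file
rows = connection.execute(
    "SELECT title, creator FROM images WHERE title LIKE ?",
    ("%" + term + "%",),
).fetchall()

# The script runs on the Web server; its output is passed back
# to the client browser as an ordinary HTML page.
print("Content-Type: text/html\n")
print("<html><body><ul>")
for title, creator in rows:
    print("<li>" + html.escape(title) + " (" + html.escape(creator) + ")</li>")
print("</ul></body></html>")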

The search engine used in the delivery of any collection, whether included in a complete image-management solution or chosen separately, should be selected on the basis of its ability to fulfill identified needs, such as performing keyword, truncated-term, Boolean, or natural-language searches. It may also be important that the search engine be able to link to complementary resources, integrate a controlled vocabulary, or search across different data formats, different metadata schemas, or different repositories. Another feature to consider is whether the search engine supports an information retrieval protocol, such as Z39.50. Such protocols handle the communication between the client user interface and the search engine residing on the server and permit searching across multiple servers without the user having to learn the search syntax of each. (Z39.50 is not the only such technology, and it has been criticized as overly complex and not well adapted to the Web environment; however, it is well established in the library and archival community.)
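To make some of these search features concrete, the sketch below translates a very simple query syntax, supporting plain keywords, truncation with a trailing asterisk, and Boolean AND, into SQL against a hypothetical metadata table. A production search engine, or a retrieval protocol such as Z39.50, involves considerably more machinery, including inverted indexes, relevance ranking, and richer query parsing.

# A rough sketch of keyword, truncated-term, and Boolean (AND)
# searching over a hypothetical "images" table with a space-separated
# "keywords" column. Illustrative only.
import sqlite3

def to_sql(query):
    clauses, params = [], []
    for term in query.split(" AND "):   # Boolean conjunction only
        term = term.strip().lower()
        if term.endswith("*"):          # truncated term: prefix match
            params.append("% " + term[:-1] + "%")
        else:                           # plain keyword: whole-word match
            params.append("% " + term + " %")
        # Pad the field with spaces so word boundaries are crude but usable.
        clauses.append("' ' || lower(keywords) || ' ' LIKE ?")
    return "SELECT id, title FROM images WHERE " + " AND ".join(clauses), params

connection = sqlite3.connect("collection.db")  # hypothetical database file
sql, params = to_sql("portrait AND rembrandt*")
for row in connection.execute(sql, params):
    print(row)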

Note that two search-engine issues are significant for the successful delivery of images online: the ability of users to search the collection itself (discussed above) and the ability of external search engines such as Google and AltaVista to discover the site and collection. Sites may be registered with such search engines, and a site can also be optimized for external discovery by measures such as populating meta tags with appropriate descriptions and keywords. (Google is unusual in that it does not read meta tags, but only title tags and Web page text.) A number of metadata-harvesting initiatives have experimented with ways of making the deep metadata buried within collection systems more accessible to generalized search engines.

The technological capabilities of the user workstations accessing the Web (operating system, browser, internal memory, storage, display quality, networking capability, and speed) are sure to be highly variable. Access to digital image collections should therefore be tested using both more and less powerful modes of connecting to the Internet, and with Macintosh and IBM-compatible or Wintel personal computers running various browsers, because images do not display uniformly across platforms and different versions and types of browsers have their own idiosyncrasies. Remember that although it is not possible to control the quality or calibration of user monitors, it is possible to provide guidelines on optimal viewing parameters. Testing should also be used to analyze and refine the design of the access interface for maximum ease of use. Interface design should take accessibility for the disabled into account and aim to be as inclusive as possible; many funding agencies require adherence to federal guidelines on Web accessibility.