Now online in its initial release, the [Research Collections Viewer](http://www.getty.edu/research/collections/) offers a visual way to browse and search Getty’s archival collections. The Viewer aims to make it easier to see what we have in our research collections—rare primary source material such as artists’ papers, prints, and photographs—as well as contextual information such as related works by the same artist.

At Getty we’ve been digitizing archives on a major scale since 1997 (see [this guide to early photography of the Mediterranean](https://www.getty.edu/research/tools/guides_bibliographies/photography_greece/index.html) for one of our first forays), yet digital images and finding aids have always existed in separate systems, connected only through a carefully managed set of links.

The new Research Collections Viewer connects the finding aids with their associated digitized materials, enabling website users to browse archives in one place. Not only is this vastly more convenient, but it also preserves the archival organization of the materials—how they are grouped within the physical archive, for example by chronology or topic. This is a digital feature that researchers have long requested, but one that has been elusive due to the siloed data and systems typically used within libraries and archives.

In addition to connecting the finding aids with the digitized materials, the Research Collections Viewer connects archival materials *themselves* together by leveraging [Linked Open Data standards](https://linked.art/). Using the “Related Material” section, you can explore archives intuitively, investigating relationships between people, places, dates, and ideas.

For its initial launch, the Research Collections Viewer features access to information and many images from the correspondence of artist Sylvia Sleigh and critic Lawrence Alloway and from the Los Angeles photographs of artist Ed Ruscha.
We chose these archives because they presented unique challenges that will inform how we present other large and complex collections going forward.

Ruscha’s photographs of L.A. are particularly interesting because they feature geospatial metadata (the locations where each photo was taken), as well as extensive metadata generated via machine learning. This metadata allows you to search by street name or location, as well as by text visible in the photos (try “Bank of America” for a good example).

The Alloway and Sleigh correspondence, meanwhile, has extensive [crowdsourced transcriptions](https://www.zooniverse.org/projects/melissaagill/mutual-muses), which will be added to the Viewer in the coming months. The images in the Research Collections Viewer are delivered using the International Image Interoperability Framework (IIIF), which offers lightning-quick zoom capabilities. In the case of the Ruscha images, you’ll discover a viewing experience vastly more detailed than what is possible with the physical materials. You can zoom in deep enough to see the grain of the film negative.

Over the coming years, all archives from the Getty Research Institute’s special collections and Getty’s institutional archives will be added to the Research Collections Viewer. Each of Getty’s research collections is unique, and the team is moving carefully to ensure that every one is represented as fully and thoughtfully as possible.

Creating the first release of the Viewer was the work of five years and involved the combined efforts of archivists, librarians, metadata specialists, imaging experts, UX designers, software engineers, project managers (that’s me), and members of the research community who offered invaluable feedback along the way.

Work on the tool is ongoing. If you perform research with archives and have tried the Research Collections Viewer, we’d love to hear from you.
Submit comments via the “Feedback” link in the tool so we can hear how it’s working and what you’d like to see next.