In the world of research and its institutions, there is a useful if simplistic distinction between developing knowledge "for its own sake" and developing knowledge that answers a practical question. I believe conservation science falls into the second category, and so results must be judged by how well they answer practical questions. For climate control, my preferred formulation of the guiding question is this: "What are the risks to a collection from the various types and levels of incorrect relative humidity (RH), and what tools can be provided to practitioners for assessing these risks?"

addressing relative humidity

In several important areas, considerable work has been done that gets us much closer to answering the question I posed above. Here are three of them.

Mold Re-Reviewed

In 1994 I reviewed the literature on conditions supporting mold growth and published two summary curves—the RH and temperature combinations that allowed mold growth, and the grace period between the onset of those conditions and the appearance of mold. In 1999 these graphs entered the ASHRAE Handbook as tools for estimating mold risk. In his 2012 doctoral thesis, Thomas Strang added vast amounts of data to show that my line of safe conditions, while in general agreement with later building industry results, must be pushed a few percent RH lower for worst-case scenarios, as must my grace-period curve. Strang did not need to "do" mold research any more than I needed to in 1994; his research was the tedious and painstaking work of finding, compiling, and organizing a lot of data from disparate sources spanning decades. The goal was not simply to find the typical vulnerability of a collection, but, more important, to determine the highest reliably measured vulnerability. For all practical purposes of risk estimation, Strang's plots are definitive in laying out the parameters for RH with respect to mold. Users can only wish now for an Internet tool based on these plots!
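As a hint of what such a tool would do, the toy function below estimates a grace period from RH and temperature. Its thresholds and rates are invented placeholders of roughly the right shape—mold needs sustained high RH, and the grace period shrinks as RH and temperature rise—not the published Strang or ASHRAE curves.

```python
def days_to_mold(rh, temp_c):
    """Toy grace-period estimate (days) before visible mold.
    Thresholds and rates are illustrative placeholders, not the
    published Strang/ASHRAE curves."""
    if rh < 65 or temp_c < 2:
        return float("inf")  # effectively safe conditions
    # Grace period shrinks as RH and temperature rise above the threshold.
    rh_factor = (rh - 60) / 40          # grows toward 1 as RH nears 100%
    temp_factor = min(temp_c, 35) / 35  # growth fastest near 30-35 deg C
    return round(200 / (1 + 50 * rh_factor * temp_factor))

print(days_to_mold(55, 20))  # dry enough: no mold expected
print(days_to_mold(75, 25))  # damp: weeks of grace
print(days_to_mold(90, 30))  # very damp and warm: days of grace
```

A real tool would interpolate the measured curves rather than use a formula like this, but the interface—two readings in, one grace period out—would be the same.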

Magic for Archives

To those of us raised on the paper-and-film conservation science literature before 2000, the field seemed stuck in quibbling over how to use a century-old equation named after Swedish scientist Svante Arrhenius. Dependence on humidity was also mooted, and still is—Barry Knight in 2014 concluded that the available data is not good enough to settle which of several competing models is correct for the role of humidity in paper deterioration. For risk analysis purposes, however, these rivalries are not significant. We long ago reached the point of good-enough estimates to justify cool to cold storage for vulnerable material, to understand that low RH, while beneficial, was not necessary in addition to low temperature, and to calculate how operational parameters such as regular retrieval of archival material from a repository would compromise these benefits. The big hole in our advice was the absolute calibration of these lifetimes—and what we meant by lifetime.
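For readers who have not worked with it, the arithmetic behind such storage advice is short. The sketch below applies the Arrhenius equation with an activation energy of 100 kJ/mol—a round number of the right order for cellulose decay, chosen here purely for illustration—to estimate how much slower decay runs in cool storage.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def lifetime_multiplier(t_ref_c, t_store_c, ea_j_mol=100_000):
    """Arrhenius ratio of decay rates: how many times slower chemical
    decay runs at t_store_c than at t_ref_c, for activation energy ea.
    The default ea is an illustrative round number, not a measured value."""
    t_ref = t_ref_c + 273.15    # convert to kelvin
    t_store = t_store_c + 273.15
    return math.exp(ea_j_mol / R * (1 / t_store - 1 / t_ref))

# Cool storage at 10 deg C versus a 20 deg C reading room:
print(f"{lifetime_multiplier(20, 10):.1f}x longer life")
```

With this activation energy, a 10 °C drop slows decay roughly fourfold—which is why cool and cold storage dominate the advice for vulnerable archival material.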

Then a breath of fresh air. In 2008 Jana Kolar and Matija Strlič, with others, introduced a method to calculate the statistical distribution of object lifetimes in a library collection, based on a straightforward optical (spectroscopic) measurement, calibrated against a known collection. In 2009 they added a method based on sampling the library air. To us old fogies, it looked like black magic, but the approach had simply been borrowed from fields grappling with the same problem—finding reliable trends in variable populations of chemically complex things. These methods abandon the classical method of building a model of simplified reality based on carefully controlled experiments on carefully controlled bits of that reality. Like sociologists, these researchers observe complex reality itself, looking for correlations between the research findings on a certain topic—paper usability as it ages naturally, for example—and a suite of chemical measurements.

Once correlations have been discovered in a known population, the chemical measurements can be used to predict answers, such as the number of years left at the current temperature before books become too weak to handle. These methods come in mysterious flavors, like multivariate analysis, principal component analysis, and "-omics," but they all depend on the brute force of computer calculation, as well as on advances in portable tools that collect vast amounts of digitized chemical signatures.
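For the curious, the skeleton of such a calibration fits in a few lines. The sketch below is a toy: it invents a synthetic "collection" of spectra whose lifetimes are driven by two hidden chemical factors, extracts principal components, and regresses lifetime against them. Every number is fabricated for illustration; only the workflow mirrors the approach described.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "collection": 200 books, 50 spectral channels.
# Two hidden chemical factors drive both the spectra and the lifetime.
factors = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 50))
spectra = factors @ loadings + 0.1 * rng.normal(size=(200, 50))
lifetime = 50 + 20 * factors[:, 0] - 10 * factors[:, 1]  # years, invented

# Principal component analysis via SVD on the mean-centered spectra.
X = spectra - spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:2].T  # each book reduced to two component scores

# Calibrate: linear regression of known lifetime on the component scores.
A = np.column_stack([scores, np.ones(len(scores))])
coef, *_ = np.linalg.lstsq(A, lifetime, rcond=None)
predicted = A @ coef
r = np.corrcoef(predicted, lifetime)[0, 1]
print(f"correlation of predicted vs. known lifetime: {r:.2f}")
```

Once calibrated on a known population, the same regression can be turned around: measure the spectra of an uncharacterized book and read off an estimated lifetime—which is exactly the brute-force, correlation-first logic described above.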

Long ago, Robert Brill noted that unstable glass posed a dilemma for RH control and that even a Goldilocks RH—not too high for one damage mechanism, and not too low for another mechanism—could not resolve this dilemma perfectly. And David Scott clarified the multiple RH thresholds that a caretaker of bronzes needed to know. However, for iron, whose vulnerability can be compounded by the ubiquitous contaminant salt, there was no comprehensive overview of the RH decision until 2005, when David Watkinson and Mark Lewis of Cardiff University were asked the following: "Given capital and maintenance costs, what RH control should we use for our very big and very salty iron thing, the SS Great Britain?" Systematically measuring all the RH thresholds was only the first step; the second was answering the practical question: Which RH was most cost-effective? Their work exemplifies how to do our kind of research within the realities of museum budgets, time lines, teams, public outreach, and—increasingly important—sustainability.

fluctuations and mechanics

The beginning of useful mechanical modeling was Marion Mecklenburg's 1982 "hockey stick"-shaped plot of tension in a painting as the humidity changed from low to high and back again. The sparse literature prior to 1982 either contained vague appeals to terms such as rheology or was simply wrongheaded. In the late 1980s, in response to the many risk questions arising about traveling exhibitions, research institutions in Canada, the United States, and the United Kingdom joined forces for the Art in Transit project. This was not simply a conference but a structured set of articles plus a handbook (1991) with authorship assigned to appropriate staff—not to mention some strenuous vetting of draft lectures by a users' group. Reexamining climate specifications was not yet in our sights, but implicit in the project was a focus on reducing the worst hazards of transit and a recognition that the trough in the hockey stick plots represented a safe zone that might be wider than assumed.

While I was scouring technical journals back to the nineteenth century for my review in Art in Transit: Handbook for Packing and Transporting Paintings, it became clear to me that our field was woefully unaware of relevant work from even the obvious industries, let alone the less obvious. We confused our "special" profession with special science. Key insights of the 1980s had been made long ago: Humidity changes the elasticity of linseed oil paint (1920s); canvas, when stretched, tightens at high humidity, not low humidity (1920s); and linseed oil paints develop internal strain during curing (1950s). Not that a full understanding was available—in the 1980s the paint industry itself was still trying to understand paint failure. But equations describing stress in varnishes due to solvent curing and stress in paints from RH and temperature change were available. These were, in fact, the equations for our hockey stick plots.
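The hockey stick itself is easy to reproduce in miniature. The toy model below treats a fully restrained paint film whose stiffness rises as it dries; all coefficients are invented for illustration, not measured paint data, but the flat trough at moderate RH and the steep blade at low RH emerge from the same logic as those industrial equations.

```python
import numpy as np

# Toy restrained paint film: stress accumulates as it dries below
# its stress-free RH. Coefficients are invented for illustration.
def modulus(rh):
    """Stiffness (MPa) rises steeply as the film dries."""
    return 50 + 2000 * np.exp(-rh / 12.0)

def restrained_stress(rh_final, rh_start=70.0, alpha=5e-4):
    """Integrate E(RH) * alpha over the RH drop for a fully
    restrained film; alpha is shrinkage strain per %RH."""
    rh = np.linspace(rh_start, rh_final, 500)
    strain_steps = -alpha * np.diff(rh)       # shrinkage per RH step
    e_mid = modulus((rh[:-1] + rh[1:]) / 2)   # stiffness along the path
    return float(np.sum(e_mid * strain_steps))  # MPa of tension

for rh in (60, 40, 20):
    print(f"RH {rh}%: {restrained_stress(rh):.2f} MPa of tension")
```

Dropping from 70% to 60% RH produces little tension (the trough); dropping to 20% produces many times more (the blade), because the film both shrinks further and stiffens as it goes.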

The decade after Art in Transit was a bit of a lull. Data accumulated but not systematically. The necessary concepts of viscoelasticity and fatigue gained traction. Christina Young applied the research to designing a less vulnerable artists' support. Meanwhile, most of us in the first research wave were moved to other important tasks, and we were asked to draw the best advice we could from what we already knew.

The new millennium finally brought forth new players with new tools, such as the group led by Kozłowski, Bratasz, and Łukomski at the Jerzy Haber Institute in Krakow. They attacked the question of the climate response of panel paintings and polychromes not only energetically but also systematically, from the comprehensive measurement of expansion coefficients for many wood species, through acoustic emission and computer modeling, culminating in cyclic fatigue testing of real gesso on real wood. With this systematic research in hand, they concluded that panels with gesso will tolerate 15% RH cycles without cracking, even for the maximum number of humidity cycles possible in a century.

In the last few years, academics in university departments of mechanical engineering (Loughborough University in Leicestershire) and building physics (the Technical University of Eindhoven) have collaborated with museums to apply their state-of-the-art computer physics models. Such collaborations have produced promising nuggets, but the graduate student life cycle inhibits momentum in a topic; it is the interests of supervisors and their funders that sustain it. The task of assembling these nuggets into a practical whole will fall to our own community.

In his 2009 doctoral research, Eric Hagan resolved a key question for modelers: Could the strength of paints, as pigmentation and climate varied, be modeled within the same (viscoelastic) framework that was now well established for paint stiffness? His data for acrylic paints said clearly, "Yes." The full chain of response to fluctuations was now possible to model: from dimensional change and stiffness change to stress (the hockey stick curve) and, now, to fracture.

My own recent work uses a type of software developed for risk analysis, Analytica. It allows one to build up a model by linking bits and pieces of other models and available data, all within a user-friendly graphic called, appropriately enough, an influence diagram. At the same time, it permits one to simulate variability in all the factors considered by the model, such as variations in how artists mixed their paints, variations in wood strength, and variations in how objects were constructed—and to see how the possibility of fracture changes as these variables interact. These interacting variables are the crux of our climate advice dilemma.
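Readers without Analytica can taste the approach with a few lines of Monte Carlo simulation. Every distribution below is invented for illustration—none comes from my models or from measured collections—but it shows how interacting variabilities combine into a probability of fracture rather than a single yes-or-no answer.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # simulated objects

# Invented variability: how artists mixed their paint (strain the film
# tolerates before breaking), how much the substrate swells with RH,
# and how much of that movement the construction actually restrains.
strain_at_break = rng.lognormal(mean=np.log(0.004), sigma=0.4, size=n)
substrate_swelling = rng.normal(loc=0.003, scale=0.001, size=n).clip(0)
restraint = rng.uniform(0.2, 1.0, size=n)  # fraction of movement blocked

# An object fractures when the restrained strain imposed on its paint
# exceeds that paint's strain at break.
imposed_strain = substrate_swelling * restraint
p_fracture = float(np.mean(imposed_strain > strain_at_break))
print(f"simulated fraction of objects cracking: {p_fracture:.1%}")
```

The interesting outputs are not the single percentage but the sensitivities: rerun with a tighter restraint distribution, or a weaker tail on strain at break, and watch how the risk shifts—exactly the kind of question an influence diagram is built to answer.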

What's Next?

The science of climate risk evaluation—of potential damage from mold, metal corrosion, and chemical decay in archives—is already sufficiently accurate for institutional decisions. In our experience, the dominant uncertainties during risk analysis for these protection issues lie elsewhere: in the monitoring data (or the lack of it), in the inventory of vulnerable items (or the lack thereof), and in the estimates of value loss due to predicted damage. The science of mechanical risk from fluctuations cannot claim that yet.

Every year, exciting new tools for the analysis and measurement of mechanical phenomena appear, and hundreds of probably relevant articles are published in the industrial and materials literature. We have perhaps half a dozen people worldwide who are supported long-term to work on this issue, and maybe a dozen graduate students working on directly related theses.

We must accept the strengths of both research tactics: synthesis using models and controlled experiments, as well as correlational studies of actual collections. And we will have to be honest and ruthless with ourselves whenever paths of inquiry prove of limited value in answering the practical questions.

For the modeling approach, we need to address some major holes in our data. The most important deficiency, I think, concerns the effect of natural aging of the materials on their mechanical properties, especially strength. No number of studies on artificially aged samples will convince the users; we need to measure these properties on well-characterized, naturally aged samples. This means adopting microscopic mechanical analysis, one of the exciting new tools to appear in the last decade.

For the correlational approach, we need to engage the caretakers of collections as well as those who know the history of object fabrication. We need educated guesses from scientists about what measurements to collect to look for patterns, but we scientists also need to be open to the serendipitous suggestions of those with a feeling, a hunch, about what indicators tell them about an object's vulnerability, and we must find a way to code that, too, for the machine. Not that every hunch will stand up under careful scrutiny, any more than every scientist's model will—but when the best of both coincide in their predictions, we will have found our useful advice.

It is an exciting time for collaboration.

Stefan Michalski is a senior conservation scientist at the Canadian Conservation Institute.