Tools for Characterizing Materials

This article appears in Joseph D. Martin and Cyrus C. M. Mody, eds., Between Making and Knowing: Tools in the History of Materials Research, WSPC Encyclopedia of the Development and History of Materials Science, vol. 1. Singapore: World Scientific, 2020.

 

“If you can spray them then they are real” is the philosopher of science Ian Hacking’s pithy answer to the question of when we should believe in the existence of those microscopic entities we cannot see.(10 p23) Much history and philosophy of science has concerned the second half of Hacking’s slogan. Historians have investigated how scientists came to believe that things like electrons, neutrons, and photons are real. Philosophers have wondered what it means to build science around belief in entities that are unobservable, or only indirectly observable. But the first half of the quote hints at other, more rarely told stories.(7) When we spray electrons, or neutrons, or photons, how do we spray them? At what? To what end? The history and philosophy of science have said a great deal about the things we spray, but much less about how and why we spray them.

The fourth and final section of this volume concerns tools used to understand the properties of materials. Many of these tools emerged because it is possible to learn a great deal about substances by spraying things at them and then watching how those things are absorbed and reemitted, scattered and reflected, diffracted and refracted. Like a handful of pebbles that, when tossed into a well, send back sounds that can help ascertain the well’s depth and wetness, subatomic particles, when directed at materials, return information about the unseen properties of those materials. Even while some scientists were struggling to comprehend the basic properties and behavior of electrons, neutrons, photons, and other particles, another group seized on these same particles as the basis for new techniques to probe more complex matter. Almost all of the basic phenomena that interest physicists and chemists are relevant to characterizing materials. Physicists, who study heat, light, sound, force, motion, electricity, magnetism, and quantum behavior, can examine all of these as they behave in materials. Similarly, materials offer the chemist an equally productive means of exploring bonding, valence, affinity, reactions, and structure. Investigating why materials behave the way they do is therefore relevant both to finding new things to do with them, as we have seen in part 3 of this book, and to developing a better foundational knowledge of the world around us.

Scientists and engineers have been characterizing materials for hundreds of years. No era in this branch of study has been so lively, however, as the twentieth century, when a new understanding of the microstructure of matter, itself guided by inventive new tools, made it possible to explain easily observable phenomena like magnetism, electrical and thermal conductivity, hardness, and brittleness in terms of the arrangement and interactions of atoms and molecules. The tools described in this section are products of the fruitful give and take between better theoretical understanding of the microstructure of matter and laboratory techniques that generated new ways of seeing materials.

To understand the world that made these tools possible and gave them their purpose we must step back a little over a century to consider how they fit with two related developments. The first is the growing interest in radiation of all types in the late nineteenth century. New tools and detection techniques, such as discharge tubes and phosphorescing screens, made it clear that certain substances could be induced to emit some kind of energy, which could then be detected at a distance. We now understand those and other emissions as photons, or electrons, or protons, or neutrons, or helium nuclei, depending on the particles that compose them. But at the time they had names like “cathode rays,” “rays of positive electricity,” and “X-rays” (the last of which has stood the test of time). We also now understand many of them to be exceedingly hazardous, a realization that would lead to the safety regimes described in Amy Slaton’s contribution to part 1, but which arrived too late for many early experimenters.(30) These rays, which scientists sought to understand for their own sake at their own, unwitting peril in the late 1800s, would become the sprayed entities that made new tools to understand materials possible in the early 1900s.

Second, the 1910s and 1920s would see the advent of quantum theory. Before it would be possible to fully interpret the data generated when bombarding materials with microscopic particles, scientists would have to come to grips with the strange and counterintuitive way that matter and energy behave at very small scales. These two developments were closely connected. Studying the rays that had piqued scientists’ interest in the late nineteenth century generated a host of problems and questions that led to the development of quantum mechanics in the early twentieth, which in turn created the theoretical foundation that would allow a bounty of new tools based on bombarding matter with rays of some kind to flourish throughout the remainder of the twentieth century.

Both of these stories are standard fare in the history of science, where they are usually presented as a prologue to later developments in nuclear and high energy physics. As the study of rays evolved into the study of the particles that composed them, physicists embarked on a journey toward smaller and smaller length scales that took them inside the atom and then inside the nucleus. By the same token, the rise of quantum theory established the mathematical language that was necessary to talk about subatomic and sub-nuclear phenomena. But these early developments were just as important as precursors of materials research. Because of the tremendous influence, prestige, and visibility fields like nuclear and high energy physics gained later in the twentieth century, historians have naturally tended to look at earlier developments and see in them the seeds of this success. However, we should not be so quick to privilege this view over the perspective that shows these activities making equally important contributions to materials science. Early twentieth century physicists, such as Hans Bethe and Arnold Sommerfeld, often made contributions both to the quantum mechanics of single particles and to the characterization of complex matter.(11;19) The intellectual traditions we now call high energy physics and condensed matter physics were closely intertwined for much of the twentieth century, and so we must understand earlier contributions that formed the roots of one as giving rise to both.

Efforts to decipher the mysteries of cathode rays and X-rays were certainly prerequisite to later investigations of elementary particles. But they also led to valuable tools for investigating complex matter. Quantum mechanics was similarly critical for interpreting the information those tools returned. This introduction describes how late nineteenth and early twentieth century developments in physics and chemistry created the conditions that allowed the proliferation of tools for characterizing materials later in the twentieth century. The contributions that follow describe many of these tools, and collectively reinforce the point that the legacy of early microphysics and quantum theory is richer and more extensive than its evident contributions to nuclear and high energy physics.

 

The Days of Rays

The late nineteenth century was a heady time to be a scientist in Europe. The fin de siècle—French for “end of the century”—was not only an era of great political and economic change, it also brought exciting developments in the scientific understanding of matter. In the 1890s, it was still scientifically respectable to deny the existence of atoms.(1) By the early 1910s, it was possible to “see,” in a manner of speaking, atoms and their arrangement in materials with the help of new X-ray diffraction techniques.

Fig. 1. Sir William Crookes, depicted with his eponymous tube in Vanity Fair in 1903. Credit: Wikimedia Commons

How did scientists move from uncertainty about the microscopic world to confidently manipulating it in a search for new knowledge about materials? Important background conditions include nineteenth century imperialism and the Industrial Revolution. The pursuit of empire by the major European powers encouraged exploration and exploitation of natural resources, the construction of communication networks, and forms of travel that brought the growing scientific community together.(12) News traveled fast on railroads and steamships, and through telegraph cables, and new discoveries quickly generated excitement and elicited responses. After Wilhelm Conrad Röntgen discovered X-rays, he published his results in Würzburg in December 1895. The news reached the English journal Nature in January 1896. Science, the journal of the American Association for the Advancement of Science, reported Röntgen’s discovery in February, as did the French L’Éclairage Électrique.(8 p28) Industrialization encouraged scientific investigations of a host of materials, both new and old. When Marie Curie discovered the element radium in 1898, she did it by analyzing pitchblende, a uranium-rich mineral that was a byproduct of the Eastern European silver mining industry, which served the hunger in Europe for precious metals and took advantage of steam-powered mining equipment.(9)

But novel tools and techniques were perhaps the most important of all for the rise of the rays. Glassblowing and vacuum technology had become ubiquitous by the late 1800s (see contributions by Catherine Jackson in part 1 and Cyrus C. M. Mody in part 2). Through the mid-1800s, partially evacuated glass tubes, rigged with wires to allow an electrical current to be passed through them, became parlor curiosities for the strange luminous glow they would produce. In the 1870s, William Crookes (figure 1), working in Britain, transformed these “discharge tubes,” as they were known, into more flexible tools for studying electrical phenomena by vastly improving the vacuum that could be obtained in them, creating what became known in the English-speaking world as the Crookes tube. (In the Germanic world, similar apparatus was more commonly named for Johann Hittorf.)

Crookes tubes quickly became a favored tool for investigating electricity. “The phenomena attending the electric discharge through gases are so beautiful and varied that they have attracted the attention of numerous observers,” wrote the Cambridge physicist J. J. Thomson, who explained that “while the passage of this agent [electricity] through a metal or an electrolyte is invisible, that through a gas is accompanied by the most brilliantly luminous effects, which in many cases are so much influenced by changes in the conditions of the discharge as to give us many opportunities of testing any view we may take of the nature of electricity, of the electric discharge, and of the relation between electricity and matter.”(27 p189)

Fig. 2. A modern Crookes tube, before (top) a current is applied and after (bottom). The shadow cast by the Maltese cross provided evidence that the phosphorescence was produced by some type of emanation from the cathode. Credit: Wikimedia Commons

The Crookes tube was responsible for two discoveries that would enable a wide range of future tools for materials research, electrons and X-rays, and Thomson was responsible for the former. Before there was the electron, there were cathode rays. One of the first things Crookes observed when he began evacuating his tubes to produce higher and higher vacuums was a dark patch near the cathode in what had previously been a fully illuminated tube. At better and better vacuums, the dark spot grew, until the glow was reduced to a luminous spot at the end of the tube. Because this spot could be found directly opposite the cathode, or negative electrode, the phenomenon was presumed to be produced by rays emanating from the cathode, which the German physicist Eugen Goldstein named cathode rays (figure 2). 

The precise nature of cathode rays remained mysterious through the late nineteenth century. Experiments showed that cathode rays were deflected by magnets, and that two cathode rays mutually repelled each other, supporting the conclusion that they were negatively charged. Crookes’s own theory was that the gas molecules remaining in the tube—even the very high vacuums Crookes could produce were far from perfect—were ionized by the electrical current and, at very low pressures, could be accelerated across the tube without losing too much momentum through collisions with others. Thomson, however, was unsatisfied with this explanation.(27 p119) He measured the charge-to-mass ratio of cathode rays, which he judged to be incompatible with Crookes’s theory of ion beams, and speculated instead that cathode rays consisted of “corpuscles,” which he supposed to be component parts of atoms that also bore some responsibility for their chemical characteristics. For this leap, we credit Thomson with the discovery of the electron, a term that George Johnstone Stoney had proposed for an elementary unit of electricity in 1891.(22) 

Thomson proposed his “corpuscles” in 1897. Thirty years later, his son, George Paget Thomson, would pioneer electron diffraction techniques.(22 p1) At that point, by Hacking’s criterion, electrons were firmly real. More precisely, they were sufficiently understood to be used as the basis for new tools and techniques to characterize materials. The Crookes tube can therefore be regarded as the grandparent of the range of electron diffraction and microscopy techniques that have provided some of our most vivid images of materials at the atomic scale (figure 3). Tom Vogt and Pedro Ruiz-Castell offer further detail on these developments in their contributions to this section.

Fig. 3. On November 11, 1989, IBM researcher Don Eigler and his team used a custom-built scanning-tunneling microscope to spell out IBM with 35 xenon atoms. Credit: IBM News Room

A long and steady process of research into the nature of cathode rays yielded the electron; the discovery of X-rays was somewhat more abrupt. In November 1895, Wilhelm Röntgen stumbled upon X-rays when investigating the source of an unexpected glow on a phosphorescing screen—a chemically coated sheet that emitted a telltale luminescence in the presence of light or radiation, and which could be used to determine its presence, source, and intensity. The phosphorescence Röntgen observed appeared in the vicinity of an operating Crookes tube, which was covered by an opaque box. Röntgen was not the first to notice this phenomenon, but he was the first to become curious enough to investigate it systematically.

Fig. 4. Anna Bertha Ludwig’s hand, as seen by X-ray. Credit: The Wellcome Collection

Like any competent physicist of the 1890s, Röntgen had a good idea of what sorts of properties to look for when he suspected he was dealing with a new type of ray. After ascertaining that they originated from his discharge tube, and noticing that they were highly penetrative compared with normal light, he placed a number of objects between the source of the X-rays and the screen to determine what types of materials they were capable of penetrating and to what degree. He experimented with photographic plates as well as phosphorescing screens, producing the now-iconic image of his wife’s hand (figure 4). He subjected them to magnetic fields and determined that they did not undergo magnetic or electrostatic deflection. He attempted to produce interference, but observed no discernible patterns.(23 p147–50)

After working feverishly for several weeks, Röntgen sent a short letter, “On a New Type of Rays” (Ueber eine neue Art von Strahlen), to the Würzburg Medical Physics Society (Würzburger Physik.-medic. Gesellschaft) detailing his investigations.(25) Because the set of properties he observed did not align with any other type of emanation known at the time, Röntgen named them “X-rays” (X-Strahlen), highlighting their mysterious nature and the provisional state of his understanding. Further investigations, in particular those by Hermann Haga, Cornelis Wind, and Charles Barkla, indicated that X-rays exhibited behavior, such as diffraction and polarization, more commonly associated with light. By 1912, Max von Laue’s theoretical prediction that X-rays would behave like light waves with much shorter wavelength than visible light had been confirmed.(23 p148–49)

The medical applications of the new rays were the most immediately obvious. The ability to peer inside the body was a boon for doctors, who were beginning to standardize their practices and to embrace new technologies, of which X-rays were one. But many doctors were slow to adopt X-rays, in part because they had already developed other ways to view the interior of the body.(15) In contrast, X-rays represented a strikingly new possibility to peer into the sub-microscopic interior of matter.

Max von Laue’s astute prediction relied on the additional assumption that the wavelength of X-rays would be on the order of the spacing of an atomic lattice. If that were the case, he realized, X-rays should diffract through crystals—that is, they should behave like waves and produce interference patterns. Most naturally occurring, inorganic solids are crystals, which means that the atoms that make them up are spaced out regularly, in a recurring pattern. In the early twentieth century, physicists and chemists had begun to understand solids as crystals with just such a regular structure. The insight von Laue achieved was that an experiment demonstrating X-ray diffraction would, in one fell swoop, indicate both the wavelength of X-rays and the lattice spacing of crystals, and provide a flexible tool for investigating crystal structure.(5;6 p59–61) His lecture on the occasion of winning the 1914 Nobel Prize in Physics ended by pointing out the potential of X-ray diffraction for probing the nature of crystal lattices.(29)


Röntgen opened his report of his new discovery by writing:

“If one passes the discharges of a large induction coil through a Hittorf vacuum tube, or a sufficiently evacuated Lenard or Crookes tube or other similar apparatus, and the tube is covered by a thin, black cardboard, one sees that, in the completely darkened room, a paper screen coated with barium platino-cyanide located near the apparatus, will illuminate brightly at every discharge, whether the coated or the other side of the screen is facing the discharge apparatus. The fluorescence remains visible at a distance of 2 m from the apparatus.

“It is evident that the source of the fluorescence originates from the discharge apparatus and from nowhere else in the path.”

 

The Nobel committee would continue the X-ray theme in 1915, awarding the physics prize to the father-and-son team of William Henry and William Lawrence Bragg. With his collaborators, von Laue had observed the diffraction patterns from the passage of X-rays through crystals; the Braggs had figured out how to read them. Even relatively simple crystals, such as sodium chloride (NaCl), produced intricate diffraction patterns that could be difficult to interpret (figure 5). By figuring out how to reverse engineer these patterns to infer the atomic structure of crystalline solids, the Braggs opened new frontiers in the understanding of material structure.(13)
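The principle the Braggs worked out can be stated compactly in modern notation (the formula is added here for clarity; it does not appear in the sources cited above). X-rays reflecting off successive parallel lattice planes, spaced a distance $d$ apart, interfere constructively only at angles $\theta$ satisfying Bragg’s law:

```latex
% Bragg's law: constructive interference of X-rays reflected
% from parallel lattice planes with spacing d
n\lambda = 2d\sin\theta, \qquad n = 1, 2, 3, \ldots
```

Measuring the angles at which diffraction maxima appear, using X-rays of known wavelength $\lambda$, thus reveals the lattice spacings $d$—precisely the reverse engineering of diffraction patterns that the Braggs pioneered.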

As Robin Scheffler discusses in his contribution to this section, X-ray diffraction would eventually become a prominent tool in biological as well as in physical and chemical research. Many of the tools discussed here followed a similar trajectory. Originating as a way to arbitrate a specific question, they later became tools for materials research when scientists recognized their flexibility. Often, that flexibility became clear as new tools helped to drive new theoretical understanding. In the early twentieth century, that understanding came in the form of quantum physics, which proved critical for making sense of the data new tools were making available.

Fig. 5. Left: schematic depiction of a diffraction experiment from course notes written by Mildred Dresselhaus for her MIT course on solid state physics. Credit: Courtesy of Randall Richardson. Right: X-ray diffraction pattern of NaCl. Credit: Robert Eisberg and Robert Resnick, Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles. New York: Wiley; 1974, 68, Copyright © 1974 by John Wiley & Sons, Inc.

Materials and the Quantum Revolution

Optical dispersion, the tendency of material arrangements such as prisms and diffraction gratings to decompose white light into a spectrum, has been studied since the seventeenth century. Dispersion by glass prisms was, for example, one of the major subjects of Isaac Newton’s Opticks. It was not until the nineteenth and twentieth centuries, however, that optical spectra became intimately relevant to the study of matter. In the early nineteenth century, Joseph von Fraunhofer, whose superior prisms and diffraction gratings produced sharper spectra than any before, observed a series of dark lines in the spectrum of the sun, which came to be known as Fraunhofer lines (figure 6). Persistent efforts to understand these lines would lead to the quantum theory of matter and energy.

Fig. 6. Fraunhofer lines. Credit: Wikimedia commons

The realization that the dark Fraunhofer lines corresponded to bright bands of color in the spectra of light that substances gave off when they burned connected spectral lines to the study of matter. The dark lines in a spectrum of white light (an absorption spectrum) and the bright lines in a spectrum of colored light (an emission spectrum) were two sides of the same coin. Two German natural philosophers, Robert Bunsen and Gustav Kirchhoff, formed a collaboration that would cement the connection between absorption/emission spectra and the science of matter. They built their research program on new tools. An improved form of heating apparatus, now known as the Bunsen burner, permitted them to burn materials in such a way that they would give off clear spectra, and a new piece of optical equipment, the spectroscope (figure 7), made analyzing those spectra simpler and more precise.(20)

Fig. 7. Bunsen and Kirchhoff’s spectroscope. The light produced by the lamps or burners E and D is decomposed by the prism P. C is a lensed tube with a scale inserted at the end. The user observes the spectrum superimposed on the scale through the scope B. Source: (17).

Their results led to the identification of a number of new materials, and their methods established a line of research that would foundationally change the scientific understanding of matter. Through the late nineteenth century and into the early twentieth, dispersion was understood to result from the interaction between light and matter, with phenomena like emission and absorption lines explained by some kind of resonance, or co-vibration, between light and matter. Investigating the nature of this interaction, and thereby explaining emission and absorption spectra, was instrumental to early quantum theory.(16)

Quantum physics, therefore, was a child of the spectroscope. Niels Bohr developed his model of the atom in 1913 to explain the Balmer series, the visible portion of hydrogen’s emission spectrum. Spectral analysis was equally important to the realization that Bohr’s atomic model, which neatly described hydrogen, could not account for the structure of other atoms, and so motivated the development of a full-fledged quantum mechanics. As the American quantum physicist John Hasbrouck Van Vleck described the situation in 1928: “The chemist is apt to conceive of the physicist as some one who is so entranced in spectral lines that he closes his eyes to other phenomena.”(28 p493)
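The Balmer series that Bohr set out to explain follows a simple empirical formula, which his model reproduced by quantizing the electron’s orbits (the formula is given here in modern notation for clarity; $R_H$ is the Rydberg constant for hydrogen):

```latex
% Balmer series: wavelengths of hydrogen's visible spectral lines
\frac{1}{\lambda} = R_H \left( \frac{1}{2^2} - \frac{1}{n^2} \right), \qquad n = 3, 4, 5, \ldots
```

That a single constant and a pair of integers could generate every observed line was a striking regularity, and explaining why became one of the founding problems of quantum theory.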

Van Vleck’s quip, although it indicates the extent to which spectral data were critical for opening new lines of research, elides the extent to which the matter responsible for spectral lines was also a subject of physical fascination. One critical example is the Compton effect. In 1922, Arthur Holly Compton observed a correlation between the wavelength of radiation produced in X-ray scattering and the angle at which it was scattered. The change in wavelength implied a corresponding loss of energy, thus softening the scattered secondary rays. This correlation posed a problem from the classical wave theory perspective in which such effects would have been expected to disappear at the low intensities Compton was using, and he was not immediately sure how to explain his results. His crucial insight came when he recognized that the process could be treated as an inelastic collision between the scattering electron and the scattered X-ray quantum.(3 p221) His insight was possible because of apparatus descended from the Crookes tube that allowed him to generate X-rays with very precise wavelengths.

To many in the physics community, Compton scattering, and his explanation in terms of light quanta published in a 1923 Physical Review paper, provided the smoking gun needed to validate the reality of light quanta. Compton described the process like this: “Any particular quantum of x-rays is not scattered by all the electrons in the radiator, but spends all of its energy upon some particular electron. This electron will in turn scatter the ray in some definite direction, at an angle with the incident beam. This bending of the path of the quantum of radiation results in a change in its momentum. As a consequence, the scattering electron will recoil with a momentum equal to the change in momentum of the x-ray.”(4 p485) In the face of Compton’s strong experimental evidence, the bulk of the physics community finally accepted Albert Einstein’s 1905 light quanta hypothesis.(26)
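Compton’s treatment of the event as a collision, conserving energy and momentum between the X-ray quantum and a nearly free electron at rest, yields the wavelength shift now named for him (modern notation, added here for clarity):

```latex
% Compton shift: change in wavelength of an X-ray quantum
% scattered through angle theta by an electron of mass m_e
\Delta\lambda = \lambda' - \lambda = \frac{h}{m_e c}\,(1 - \cos\theta)
```

The shift depends only on the scattering angle $\theta$, not on the intensity of the beam, which is exactly the behavior the classical wave picture could not accommodate.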

As the Compton effect demonstrated, understanding light to be composed of individual quanta of electromagnetic energy was central to analyzing light-matter interactions. This is one of the reasons Einstein’s Nobel Prize came not for the theory of relativity, for which he is best remembered, but for his explanation of the photoelectric effect, which postulated light quanta. Quantum packets of light energy, and their interactions with the electrons confined within materials, would become the basis for a range of new spectroscopic techniques through the mid-twentieth century (see contributions to this section by Loïc Petitgirard, Catherine Westfall, Robin Scheffler, Indianara Lima Silva, and Benjamin Wilson).

Historians have often looked at quantum mechanics as a theory that was mostly worked out with appeal to elementary particles, such as the electron and the photon, and very simple systems such as the hydrogen atom, before it was applied to understand more complex physical and chemical systems. But recent scholarship has further shown how the investigation of materials was critical to the articulation of quantum mechanics itself. Concepts like tunneling, resonance, exchange, and others, which feature centrally in our contemporary understanding of quantum mechanics, especially its role in determining the properties of materials, were worked out in the late 1920s and early 1930s as the theory of quantum mechanics itself was maturing.(14)

Investigations of the quantum behavior of solids and molecules were in turn critical for interpreting the results returned by new tools. After World War II, neutron bombardment became a powerful way to understand materials (see Hallonsten and Kaiserfeld in this section). Neutrons, which are electrically neutral, are less likely to interact with or be deflected by charged electrons and can penetrate materials deeply, providing glimpses of properties and behavior that photons or electrons are likely to miss. But interpreting the results of neutron bombardment requires an understanding of quantum features of the solid state such as phonons—quantized, vibrational normal modes in a crystal lattice, which carry energy through it.(18 pIII-51–III-52)

Materials helped build quantum mechanics, and quantum mechanics in turn motivated a better understanding of materials. Tools for characterizing materials likewise evolved as part of a rich give-and-take between theory and experiment. Devices like the spectroscope generated the rich data set on spectral lines that made it possible for Niels Bohr and others to imagine what the internal structure of the atom might be like. The quantum theory that emerged from their speculations matured into quantum mechanics as physicists and chemists grappled with the challenges posed by understanding the quantum properties of materials. That understanding in turn opened up a range of new possibilities for using the various species of rays discussed above to further probe the structure, behavior, and properties of matter.

 

Making Materials Real

We began with Hacking’s suggestion that things are real if we can spray them, and considered in turn what scientists were spraying (photons, electrons, neutrons, and other particles) and at what they were spraying them (matter, predominantly regular solid crystals, which were instrumental to both developing and testing quantum theory, but later in the century including amorphous solids and biological matter as well). It remains to consider the broader purposes for which material characterization tools were deployed. As the twentieth century matured into middle age, so did the scientific understanding of materials. Hacking observed that we take the reality of unobservable entities seriously when we spray them in service of some other end; for the tools described here, understanding materials provided that end. But the things scientists sprayed were far from the only things whose reality was at issue.

A striking example comes, in fact, from the use of new tools for materials research to demonstrate an unreality. In 1912, just as X-ray diffraction was emerging as a technique and quantum theory was beginning to take hold, a new discovery shook the world of anthropology as well. Bone fragments discovered at Piltdown, in Sussex, England, pointed toward a new human ancestor, dubbed Piltdown Man. The bones—a partial skull, jaw, and some teeth—threatened to upend the standard understanding of human evolution. The find was so surprising, in fact, that some anthropologists immediately suspected an elaborate and sophisticated hoax.

Despite longstanding suspicions, it was not until 1953 that the hoax was demonstrated conclusively, and an electron microscope would play a critical role in its unmasking. Anthropologists Kenneth Oakley, of the British Museum of Natural History, and Joseph Weiner, of Oxford University, both fascinated with the four-decade-old case, joined forces to test the Piltdown artifacts using a range of methods that had emerged since 1912. These included inspection with an electron microscope, which showed the presence of organic material in the jawbone, indicating that it must be a modern specimen. Oakley and Weiner also used radiography, an X-ray imaging technique, to indicate the presence of modern dyestuffs in the bones. When combined with other physical and chemical investigative methods, these tests proved conclusively that the bones were both too young to be of evolutionary interest and most certainly the product of chicanery.(24)

Oakley and Weiner could make this determination in 1953, whereas the hoax could not be proven in 1912, because of the greater understanding of materials made possible by the characterization tools that emerged in the interim. The importance of spraying things, in other words, is not merely, or even mostly, that it lets us know that the things we are spraying are real. It also represents a level of control over materials that opens up new ways of knowing their structure, properties, and in some cases their histories. It was precisely that level of control that made tools of characterizing materials so integral to Cold War–era research and development. The development of the tools discussed in this section, indeed, entered its most urgent phase after World War II.

In 1951, the US National Research Council (NRC) formed a Materials Advisory Board (MAB) to evaluate how advances in materials research might address strategic bottlenecks, particularly in military development. In 1967, MAB issued a report, Characterization of Materials, which suggested: “Attempts to provide the superior materials that are critically needed in defense and industry are usually empirical and often wasteful of efforts and funds. That is so, chiefly because we do not yet have a fully developed science of materials that affords predictable and reliable results in devising and engineering new materials for specific tasks.” In order to achieve this goal, the report insisted that “a wider dissemination and a wider availability of the tools of characterization are needed.”(21 pI-ix, II-41) Successfully characterizing materials was essential to strategic development aims, and attention to the tools that made those characterizations possible was a central part of achieving those aims.

The tools described in the following essays were responsible—largely within the context of the Cold War and building on early twentieth-century developments in microphysics and quantum mechanics—for redefining what scientists mean when they talk about materials. Materials are often distinguished from other forms of matter because they can be or have been turned to human purposes. Nothing about that definition requires a robust scientific understanding of why the materials in question have properties that people find useful. The proliferation of tools for characterizing materials brought that knowledge within grasp. These tools helped fuse the many traditions of materials research into a new interdisciplinary field of materials science.(2) In doing so, they made knowledge of the inner workings of matter essential to the concept of materials as the class of substances that humans use to achieve their aims and desires.

 

References

1.     Bächtold M. Saving Mach’s view on atoms. Journal for General Philosophy of Science. 2010;41(1):1–19.

2.     Bensaude-Vincent B. The construction of a discipline: materials science in the United States. Historical Studies in the Physical and Biological Sciences. 2001;31(2):223–48.

3.     Brush SG. How ideas became knowledge: the light-quantum hypothesis 1905–1935. Historical Studies in the Physical and Biological Sciences. 2007;37(2):205–46.

4.     Compton AH. A quantum theory of the scattering of X-rays by light elements. Physical Review. 1923;21(5):483–502.

5.     Eckert M. Max von Laue and the discovery of X-ray diffraction in 1912. Annalen der Physik. 2012;524(5):A83–A85.

6.     Eckert M, Schubert H. Crystals, electrons, transistors: from scholar’s study to industrial research. New York: American Institute of Physics; 1990.

7.     Franklin A. The neglect of experiment. Cambridge, UK: Cambridge University Press; 1986.

8.     Glasser O. Wilhelm Conrad Röntgen and the early history of the Röntgen rays. San Francisco: Norman; 1993.

9.     Goldsmith B. Obsessive genius: the inner world of Marie Curie. New York: W. W. Norton; 2005.

10.  Hacking I. Representing and intervening. Cambridge, UK: Cambridge University Press; 1983.

11.  Hoddeson LH, Baym G. The development of the quantum mechanical electron theory of metals: 1900–28. Proceedings of the Royal Society of London Series A, Mathematical and Physical Sciences. 1980;371(1744):8–23.

12.  Hunt B. Pursuing power and light: technology and physics from James Watt to Albert Einstein. Baltimore: Johns Hopkins University Press; 2010.

13.  Hunter GK. Light is a messenger: the life and science of William Lawrence Bragg. Oxford: Oxford University Press; 2004.

14.  James J, Joas C. Subsequent and subsidiary? rethinking the role of applications in establishing quantum mechanics. Historical Studies in the Natural Sciences. 2015;45(5):641–702.

15.  Jamieson A. More than meets the eye: revealing the therapeutic potential of “light”, 1896–1910. Social History of Medicine. 2013;26(4):715–37.

16.  Jordi Taltavull M. Transmitting knowledge across divides: optical dispersion from classical to quantum physics. Historical Studies in the Natural Sciences. 2016;46(3):313–59.

17.  Kirchhoff G, Bunsen R. Chemical analysis by spectrum-observations—second memoir. Philosophical Magazine, suppl. 4. 1861;22(148):329–49.

18.  Materials Advisory Board. Characterization of materials. Washington, DC: National Academy of Sciences; 1967.

19.  Joas C, Eckert M. Arnold Sommerfeld and condensed matter physics. Annual Review of Condensed Matter Physics. 2017;8:31–49.

20.  Morris PJT. The matter factory: a history of the chemical laboratory. London: Science Museum; 2015.

21.  National Academy of Sciences—National Academy of Engineering. Characterization of materials. Publication MAB-229-M. Washington, DC: National Academies Press; 1967.

22.  Navarro J. A history of the electron: J. J. and G. P. Thomson. Cambridge, UK: Cambridge University Press; 2012.

23.  Nye MJ. Before big science: the pursuit of modern chemistry and physics. Cambridge, MA: Harvard University Press; 1996.

24.  Oakley KP, Weiner JS. Piltdown man. American Scientist. 1955;43(4):573–83.

25.  Röntgen WC. Ueber eine neue Art von Strahlen. Sitzungsberichten der Würzburger Physik.-medic. Gesellschaft. 1895;S1–9.

26.  Stuewer R. The Compton effect: turning point in physics. New York: Science History Publications; 1975.

27.  Thomson JJ. Recent researches in electricity and magnetism. Oxford: Clarendon; 1893.

28.  Van Vleck JH. The new quantum mechanics. Chemical Reviews. 1928;5:467–507.

29.  von Laue M. Concerning the detection of X-ray interferences. Nobel lecture, November 12, 1915. NobelPrize.org. https://www.nobelprize.org/nobel_prizes/physics/laureates/1914/laue-lecture.pdf.

30.  Womack JC. Uncertainty medicine: The development of radiation therapy, 1895–1925 [PhD dissertation]. Houston, TX: University of Houston; 2016.