Susan Hough is a research seismologist and author of “Richter’s Scale: Measure of an Earthquake, Measure of a Man,” published by Princeton University Press in 2007, and in paperback in 2016. To mark #NationalRichterScaleDay, the editorial team at International Journal of Geophysics has invited Dr Hough to discuss what people might not know about the supposedly well-known scale, and how it has evolved since its introduction in 1935.
Like many research seismologists, I’ve had more than a few conversations with the media and public over the course of my career, explaining that “we don’t use the Richter Scale anymore.”
Seismologists know the refrain: the scale, introduced by Charles Richter in 1935, provides a measure of the relative size of local earthquakes in Southern California. The original scale was based on recordings from a specific, now-outdated type of instrument, the Wood-Anderson torsion seismometer. Wood-Andersons cannot faithfully record the longer-period energy from earthquakes, at periods upwards of a few seconds. In effect they are tone-deaf to the booming low tones generated by large earthquakes. Thus the scale saturates at large magnitudes: above magnitude 6.5 or so, a classic Richter magnitude does not provide an accurate measure of the size of an earthquake.
Almost immediately after Richter’s seminal 1935 publication, further work was done, much of it by Richter himself in collaboration with Caltech colleague Beno Gutenberg, to expand the concept of the magnitude scale to be applicable in other areas, as well as for larger earthquakes. (In case anyone is wondering, it appears to have been Perry Byerly who first referred to a “Richter scale.”) This later work led to the surface wave scale, based on recordings of surface waves with dominant periods around 20 seconds, which does a better job of measuring the size of larger earthquakes. This scale provides good relative measurements of large earthquakes, but for earthquakes with magnitudes upwards of 8 it, too, saturates. Surface wave energy also depends strongly on focal depth, such that two earthquakes can have the same physical size but very different surface wave magnitudes.
The so-called double couple representation of earthquake dislocations – i.e., a mathematical representation of faulting involving a system of two opposing torques – was introduced in the 1950s in Russia, and a few years later in the west. This breakthrough led to the now-accepted physical measure of earthquake size, the seismic moment, introduced in the 1960s. The full tensor representation of seismic moment can be boiled down to the scalar moment, equivalent to rupture area times average slip times the rigidity (shear modulus; a measure of rock stiffness). A 1977 publication by Hiroo Kanamori introduced the moment-magnitude scale, derived from estimates of seismic moment. It is this scale that seismologists now widely view as the gold standard, able to measure accurately the size of the largest earthquakes. Certainly for large earthquakes, and increasingly for moderate events, global networks routinely report moment magnitudes.
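The relationship described above can be sketched in a few lines of code. This is an illustrative example, not part of the original post: the rupture dimensions below are hypothetical values chosen for demonstration, and the formula uses Kanamori’s (1977) moment-magnitude relation in its standard SI form, Mw = (2/3)(log10 M0 − 9.1), with M0 in newton-meters.

```python
import math

def scalar_moment(rigidity_pa, rupture_area_m2, avg_slip_m):
    """Scalar seismic moment M0 = rigidity * rupture area * average slip (N*m)."""
    return rigidity_pa * rupture_area_m2 * avg_slip_m

def moment_magnitude(m0_newton_meters):
    """Kanamori (1977) moment magnitude: Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# Hypothetical large rupture: 30 GPa rigidity, 1000 km^2 area, 2 m average slip.
m0 = scalar_moment(30e9, 1000e6, 2.0)  # 6e19 N*m
print(round(moment_magnitude(m0), 2))  # roughly Mw 7.1
```

Note how the constant in the relation was chosen so that moment magnitudes line up with the earlier magnitude scales for moderate earthquakes, which is exactly the anchoring to Richter’s original scale discussed below.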
The standard refrain from modern seismologists to the media and public is: we don’t use the Richter Scale anymore; we use the moment-magnitude scale. For decades I repeated this refrain myself, and shared my colleagues’ frustration at the media’s insistence on referring to the Richter scale.
In the course of researching my biography of Charles Richter, it dawned on me: maybe we seismologists are the ones who are getting it wrong. The introduction of seismic moment itself was a seminal contribution, a measure of earthquake size based on a physical representation of earthquakes. The moment-magnitude scale, however, is a construct designed to collapse scalar seismic moment into numbers that dovetail seamlessly with earlier formulations of the magnitude scale. In fact, all modern magnitude scales essentially do the same thing: they improve relative estimates of large earthquake size, but are always anchored by the scale that Charles Richter introduced in 1935.
Richter’s Scale has become so much a part of the lexicon, both general and scientific, that we tend to forget: The word “magnitude” had no meaning in seismology until Richter introduced it in 1935, borrowing the word from astronomy. (Richter’s 1935 publication has nearly 1000 citations according to Google Scholar. If, as it arguably should be, the paper were cited by every paper that refers to earthquake magnitude, the citation index would be off the charts.) Magnitude values themselves were always arbitrary, with no physical units attached. Magnitudes mean what Charles Richter defined them to mean: 0 for the smallest quake recordable under ordinary circumstances, 3 for a lightly felt shock, and so forth. In his paper, Richter even mentioned that the 1906 earthquake “may have been of magnitude 8 or perhaps larger,” noting that “seismograms […] would presumably not be representative of the total energy liberated.”
In effect, every magnitude scale used today provides a better estimate of earthquake size than Richter’s original formulation, but is based fundamentally on that formulation. The meanings of the numbers haven’t changed.
Maybe, at some level, the public hasn’t understood our explanations because the explanations haven’t made sense. The Richter Scale never went away; it’s been there all along, the critical foundation for virtually all later work on magnitude scales. Maybe we could dispense with a whole lot of confusion if we referred to modern magnitude estimates as what they are: equivalent Richter magnitudes.
The text in this blog post is by Susan Hough and is distributed under the Creative Commons Attribution License (CC-BY). Illustration by Hindawi and is also CC-BY.