December 2, 2023

Made to measure: why we can’t stop quantifying our lives | Science

If anything exemplifies the power of measurement in contemporary life, it is Standard Reference Peanut Butter. It’s the creation of the US National Institute of Standards and Technology (NIST) and sold to industry at a price of $1,069 for three 170g jars. The exorbitant cost is not due to rare ingredients or a complex production process. Instead, it is because of the rigour with which the contents of each jar have been analysed. This peanut butter has been frozen, heated, evaporated and saponified, all so it might be quantified and measured across multiple dimensions. When buyers purchase a jar, they can be certain not only of the exact proportion of carbohydrates, proteins, sugars and fibre in every spoonful, but of the prevalence – down to the milligram – of dozens of different organic molecules and trace elements, from copper and magnesium to docosanoic and tetradecanoic acid. Hardly an atom in these jars has avoided scrutiny and, as a result, they contain the most categorically known peanut butter in existence. It’s also smooth, not crunchy.

The peanut butter belongs to a library of more than 1,300 standard reference materials, or SRMs, created by NIST to meet the demands of industry and government. It is a bible of contemporary metrology – the science of measurement – and a testament to the importance of unseen measures in our lives. Whenever something needs to be verified, certified or calibrated – from the emission levels of a new diesel engine to the optical properties of glass destined for high-powered lasers – the SRM catalogue offers the standards against which checks can be made. Most items are mundane: concrete and iron for the construction trade; slurried spinach and powdered cocoa for food manufacturers. But others seem like ingredients lifted from God’s pantry: ingots of purified elements and pressurised canisters of gases, available in finely graded blends and mixtures. Some are just whimsical, as if they were the creation of an overly zealous bureaucracy determined to standardise even the most peculiar substances. Think: domestic sludge, whale blubber and powdered radioactive human lung, available as SRMs 2781, 1945 and 4351.

Each has a purpose, however. Domestic sludge, for example, is used as a reference by environmental agencies to check pollutant levels in factories. Standardised whale blubber helps scientists track the buildup of chemical contaminants in the ocean. Powdered lung, meanwhile, is used as a benchmark for human exposure to radioactive materials. It is both a byproduct of, and response to, cold war fears of nuclear annihilation. The samples were created from 70kg of human lung donated by employees of Los Alamos National Laboratory, the birthplace of the atomic bomb. Each donor had been exposed to radiation during their life and offered their bodies to science after death. Like many SRMs, the lungs had to be freeze-dried and pulverised into a fine powder to ensure homogeneity in each sample, which makes for some amusingly straight-faced lab notes. “Despite the sterilization process that ensured safe handling, it was more than unsettling to occasionally have bits of tissue sprayed on to the laboratory walls and us during the grinding stage, necessitating at least one necktie and lab coat change,” wrote the NIST researcher tasked with the grinding. “At one point, we even had a red ooze with bits of tissue floating in it making its way down the hall.” Such is the messy, unseen work of making measurements stick.

The purpose of all these materials is to offer clients “truth in a bottle” said Steve Choquette, director of the agency’s Office of Reference Materials (ORM). Enthusiastic and genial, it’s his job to ensure that customers can have total trust in the agency’s measurements. “We beat these things to death,” he said of the standards, noting that the quantities assigned to each material can ultimately be traced back to the metric system, which, handily, NIST also helps define. If Choquette has any questions on the finer points of measurement, he can “walk across the aisle and talk to the world’s experts”.

The SRMs themselves are stored in 25,000 sq ft of warehouse space equipped with various grades of freezers and containment areas for radioactive and hazardous material. Each sample falls into one of two camps, depending on its use: calibration or validation. Validation means using SRMs to ensure consistency in certain industry tests. Take, for example, SRM 1196a: the standard cigarette, yours for $446 for two cartons of 100, and used to test the flammability of fabrics and upholstery. Fires started by stray smoking materials are the leading cause of deaths by fire in the home in the US, noted Choquette, killing hundreds every year.


To reduce these hazards, there are various legal standards for flame-resistant fabrics and furnishings. But for those tests to be consistent across manufacturers, there needs to be a standardised cigarette to start each fire with. That’s SRM 1196a. The cigarettes are made by ordinary manufacturers and are no different to regular cigarettes, but their homogeneity – the predictability of their contents – has been tested and verified by NIST. That’s what the agency provides, says Choquette: sameness as a service. The result can save lives.

The other major use for SRMs is calibration, where the material provides a benchmark to verify other tests. Say you’re a food manufacturer who wants to check the nutritional values of your product. You can buy machines that test for the presence of certain molecules and compounds, but how do you know those tests are accurate? Well, you run them on SRMs bought from NIST, each of which has been precisely quantified by more thorough and expensive means. If your tests match NIST’s measurements, then you can trust your machinery.
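The calibration logic described above can be sketched in a few lines of code. This is an illustrative example only, not anything NIST publishes: the analyte names, values and uncertainties are hypothetical placeholders standing in for the certified figures on a real SRM certificate.

```python
# Hypothetical sketch of calibration checking: compare an instrument's
# readings on a reference sample against the certified values, accepting
# the instrument if each reading lands within an uncertainty band.

# Certified values for an imaginary reference sample, in mg per 100 g,
# paired with the certificate's stated uncertainty for each analyte.
# (Placeholder numbers, not real SRM certificate data.)
certified = {
    "copper":    (0.48, 0.05),
    "magnesium": (168.0, 6.0),
}

# Readings produced by the instrument under test on the same sample.
measured = {
    "copper":    0.51,
    "magnesium": 171.2,
}

def instrument_agrees(certified, measured, k=2.0):
    """Return True if every reading falls within k times the certified
    uncertainty of the certified value (a simple acceptance criterion)."""
    return all(
        abs(measured[name] - value) <= k * uncertainty
        for name, (value, uncertainty) in certified.items()
    )

print(instrument_agrees(certified, measured))  # True for these numbers
```

If any reading drifts outside its band, the machine fails the check and needs recalibrating before its results can be trusted.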

The samples themselves are perfectly edible. In 2003, the food critic William Grimes had the chance to taste NIST’s peanut butter. He noted that it didn’t offer the creamy flavours of most consumer brands; that it looked more like industrial paste than actual food; and that it was, all told, entirely average. It’s an assessment NIST must have been happy with.

The discipline of measurement developed for millennia before it could scrape out the bottom of a jar of peanut butter. Around 6,000 years ago, the first standardised units were deployed in river valley civilisations such as ancient Egypt, where the cubit was defined by the length of the human arm, from elbow to the tip of the middle finger, and used to measure out the dimensions of the pyramids. In the Middle Ages, the task of regulating measurement to facilitate trade was both privilege and burden for rulers: a means of exercising power over their subjects, but a trigger for unrest if neglected. As the centuries passed, units multiplied, and in 18th-century France there were said to be some 250,000 variant units in use, leading to the revolutionary demand: “One king, one law, one weight and one measure.”

A plaque showing the ‘standard metre’ measurement introduced during the French revolution in Paris, France. Photograph: PjrTravel/Alamy

It was this abundance of measures that led to the creation of the metric system by French savants. A unit like the metre – defined originally as one ten-millionth of the distance from the equator to the north pole – was intended not only to simplify metrology, but also to embody political ideals. Its value and authority were derived not from royal bodies, but scientific calculation, and were thus, supposedly, equal and accessible to all. Then as today, units of measurement are designed to create uniformity across time, space and culture; to enable control at a distance and ensure trust between strangers. What has changed since the time of the pyramids is that now they often span the whole globe.

Despite their abundance, international standards like those mandated by NIST and the International Organization for Standardization (ISO) are mostly invisible in our lives. Where measurement does intrude is via bureaucracies of various stripes, particularly in education and the workplace. It’s in school that we are first exposed to the harsh lessons of quantification – where we are sorted by grade and rank and number, and told that these are the measures by which our future success will be gauged.

Once we leave school and begin work, these tests reappear in the form of KPIs (key performance indicators) and OKRs (objectives and key results). In my own early career as a journalist, the value of my work was judged primarily by a pair of key statistics: the number of articles I wrote and the online page views they attracted. My peers and I were taught to value quantity over quality, a constant churn of clickable headlines. I’ve personally had to unlearn many of the lessons taught by these particular metrics.

The underlying principle – that any human endeavour can be usefully reduced to a set of statistics – has become one of the dominant paradigms of the 21st century. The historian of capitalism Jerry Z Muller calls it “metric fixation”, a ubiquitous concept that pervades not only the private sector, but also the less-quantifiable activities of the state, such as healthcare and policing.

“We live in the age of measured accountability, of reward for measured performance, and belief in the virtues of publicising those metrics through ‘transparency’,” writes Muller. And although, as he stresses, measurement itself is not a bad thing, “excessive measurement and inappropriate measurement” will distort, distract and destroy what we claim to value.

The roots of metric fixation can be traced back to the 19th century. Management, in the US particularly, was then emerging as a profession in its own right, rather than a proficiency learned by working inside an industry. A drive to rationalise the work of managers dovetailed with a transformation of industrial production itself. The US pioneered what became known as the “American system” of manufacturing, centred on the virtues of standardisation, precision and efficiency. Previously, the production of consumer goods had been the work of artisans who hand-crafted orders from start to finish. But with the advent of machines that could stamp, cut and mould many different components, manufacturing was turned into a series of rote tasks, with lower-skilled workers assembling products piece by piece. As one British engineer who toured US factories in the 1850s noted, wherever machinery could be used to replace manual labour, “it is universally and willingly resorted to”.

At the turn of the 20th century, this system was augmented further by two complementary concepts: scientific management and mass production. The latter is best encapsulated by the work of auto-maker Henry Ford, whose low-priced Model T reshaped not only industrial practice but American culture, helping create a prosperous middle class that defined itself by mass consumption. Ford claimed that his assembly lines, which kept workers static while material moved through their stations on conveyor belts, had been inspired by an aide’s visit to a Chicago slaughterhouse. There, the aide observed the opposite process: a “disassembly line” in which a row of butchers took apart pig carcasses, joint by joint, with each individual focusing on a single repetitive task.

This compartmentalisation of labour led to the scientific management movement, pioneered by efficiency-obsessed engineer Frederick Winslow Taylor, who advocated a set of working practices now known as Taylorism. Taylor and his followers observed labourers and broke down the flow of their work into constituent parts that could then be standardised. The aim, said Taylor, was to “develop a science to replace the old rule-of-thumb knowledge of the workmen”. Importantly, this also necessitated a transfer of knowledge – and a corresponding shift in power – from the labourers who carried out the work to the managers who oversaw it.

These kinds of controls are exercised not just in the workplace but also in institutions such as prisons, armies and schools. French philosopher Michel Foucault wrote about the “disciplinary society”, a world in which compliance is enforced via strictly defined norms. Prisoners are given uniforms and numbers, told when and where to eat and sleep, and live in the uncertain knowledge that they are being watched by unseen guards. Eventually, they internalise this authority and police their own behaviour, said Foucault. Compliance is achieved without overt brutality, but the aim is “not to punish less, but to punish better”. The work of measurement and standardisation is essential to this control.

Writing in the New York Times in 2010, the technology journalist Gary Wolf heralded our age of quantification. Using data to make decisions is now the norm in nearly all spheres of life, he wrote. “A fetish for numbers is the defining trait of the modern manager. Corporate executives facing down hostile shareholders load their pockets full of numbers. So do politicians on the hustings, doctors counselling patients and fans abusing their local sports franchise on talk radio.” Business, politics and science are all steered by the wisdom of what can be measured, said Wolf, and the reason why is obvious: numbers get results, making problems “less resonant emotionally but more tractable intellectually”. Only one domain has resisted the lure of quantification: “the cozy confines of personal life”. That, said Wolf, would soon change.

Standard reference materials from the US National Institute of Standards and Technology. Photograph: Mark Esser/NIST

Thanks to new technology – namely, the ability to digitise information, the ubiquity of smartphones and the proliferation of cheap sensors – humans now have historically unprecedented powers of self-measurement. At the turn of the 17th century, in order to better understand the workings of his metabolism, the Italian physician Santorio Santorio constructed a set of giant scales in which he could sit. Santorio would measure his weight constantly, particularly before and after meals and defecation. Today, we are rewarded with floods of comparable information with minimal effort. We can track our sleep, exercise, diet and productivity with apps and gadgets. We have become beacons of unseen measurement, emitting quantified data as heedlessly as uranium produces radiation.

For Wolf, the potential of this information is huge. “We use numbers when we want to tune up a car, analyse a chemical reaction, predict the outcome of an election,” he writes. “Why not use numbers on ourselves?” His article is the nearest thing to a manifesto for the Quantified Self movement: a loose affiliation of individuals whose pursuit of “self-knowledge through numbers” shows how far we have internalised the logic of measurement. The movement’s origins can be traced back to the 1970s, when enthusiasts cobbled together the clunky ancestors of today’s wearable tech. But the idea came to greater public attention after Wolf and fellow journalist Kevin Kelly coined the term “quantified self” in 2007 and founded a non-profit to proselytise their ideas.

Descriptions of the quantified self lend themselves to caricature, creating images of digital Gradgrinds obsessively pursuing the optimised life while their souls wither on the vine. And it’s true that many proponents of QS, as it is known, do nothing to dispel this image. They boast about shaving minutes off their day through rigorous self-surveillance, or discovering through sophisticated analyses that – surprise! – good sleep and regular exercise improve their mood. The quantified self is simply “Taylorism within”, writes the tech critic Evgeny Morozov, and another example of the “modern narcissistic quest for uniqueness and exceptionalism”.

Proponents of the movement defend it as a response to the “imposed generalities of official knowledge”. If quantification has turned the world into one-size-fits-all rules that do not fit the individual, why not create one’s own set of numbers that better capture the truth? They cite anecdotes of self-trackers whose chronic ailments – sleep apnoea, allergies and migraines – resisted the cures of mainstream medicine but yielded to their pattern-finding prowess. After months and years of diligent self-tracking, these individuals discover some previously hidden mechanism in their life, some food or habit that triggers their affliction, and make the changes necessary to live happily ever after. In this guise, the quantified self seems like an attempt to recapture the personal dimension of measurement; to resist the abstractions of statistics and tailor calculations to fit the contours of our lives.

In his 2010 article, Wolf said that a century ago we used psychoanalysis to unravel the mysteries of the self, relying on language and a culture of “prolix, literary humanism”. This, he implies, is not the world we live in today, so why rely on outdated methodologies? The question he never answers, though, is how the precision of numbers is supposed to match the complexity of language as a tool for self-exploration. One suspects this is a feature rather than a bug. By limiting the scope of self-investigation to what can be measured, practitioners are assured of finding answers. Supplicants on the therapist’s couch, meanwhile, have to return week after week to grapple with the inefficient complexities of language.

When I think about what measurement means in today’s society, how it’s used and misused, and how we internalise its logic, I often end up thinking about a single figure: 10,000 steps. It’s often cited as an ideal daily target for activity, and built into countless tracking apps and fitness programmes. Walk 10,000 steps a day, we’re told, and health and happiness awaits.

This number is presented with such authority and ubiquity you’d be forgiven for thinking it was the result of scientific enquiry, the distilled wisdom of numerous tests and trials. But no. Its origins are in a marketing campaign by a Japanese company called Yamasa Clock. In 1965, the company was promoting a then-novel gadget, a digital pedometer, and needed a snappy name for their new product. They settled on manpo-kei, or “10,000-steps meter”. But why was this number chosen? Because the kanji for 10,000 – and hence the first character in the product’s Japanese name, 万歩計 – looks like a figure striding forward with confidence. There was no science to justify 10,000 steps, it seems – just a visual pun.

An ant carrying a 1mm square microchip in its mandibles at Huddersfield University Precision Technology centre. Photograph: Reuters

If the 10,000 steps are an illusion, though, they are a useful one. Research into how many steps a day we should pursue offers more finely graded targets, suggesting 10,000 steps is too low for children and too high for many older adults. Still, it’s abundantly clear that any increased activity is good for us, and that people who do pursue a daily target of 10,000 steps show fewer signs of depression, stress and anxiety (even if they don’t hit that goal). In this light, the Quantified Self crowd seem to have a point: if you want to reach people, you need to speak in a language they understand.

When thinking about measurement in today’s world, the German sociologist Hartmut Rosa suggests it is characteristic of a particular 21st-century desire: to structure our lives through empirical observation, rendering our interests and ambitions as a series of challenges to overcome. “Mountains have to be scaled, tests passed, career ladders climbed, lovers conquered, places visited, books read, films watched, and so on,” he writes. “More and more, for the average late modern subject in the ‘developed’ western world, everyday life revolves around and amounts to nothing more than tackling an ever-growing to-do list.”

This mindset, says Rosa, is the result of centuries of cultural, economic and scientific development, but has been “newly radicalised” in recent years by digitalisation and the ferocity of unbridled capitalist competition. Measurement has been rightly embraced as a tool to better understand and control reality, but as we measure more and more, we encounter the limits of this practice and wrestle with its disquieting effects on our lives.

My interest in the history of metrology began as a simple curiosity about the origin of certain units of measurement. Why is a kilogram a kilogram, why an inch an inch? But these questions have a deeper resonance, too: if measurement is the mode by which we interact with the world, it makes sense to ask where these systems come from, and if there is any logic to them.

The answer I’ve found is that there isn’t any – not really. Or rather, there is logic, but, as with the 10,000 steps, it’s as much the product of accident and happenstance as careful deliberation. The metre is a metre because hundreds of years ago certain intellectuals decided to define a unit of length by measuring the planet we live on. As it happens, they made mistakes in their calculations, and so the metre itself is around 0.2mm short: a minute discrepancy that has nevertheless been perpetuated in every metre ever since. In other words: it is the way it is because we say it is. Measures, then, are both meaningful and arbitrary; iron guides in our lives that are malleable if we want them to be. If they don’t work – if they don’t measure up – then they too can be remade.

This is an edited extract from Beyond Measure: The Hidden History of Measurement by James Vincent, published by Faber on 2 June and available at guardianbookshop.com


This article was amended on 26 May 2022 to better explain a cubit; commonly defined as being the length of the adult arm, from elbow to the tip of the middle finger.