A Brief History of Pain

Pain isn’t an illusion – pain is real.

In practice, is pain objectively measurable? Yes, its signatures are already well researched.

Yes, in that its presence can be detected not just through behavioural observation of an organism’s reactions to pain, but by peering in via modern instrumentation at changes in the state of the brain and nervous system (including the ‘emotional’ limbic system), right down to neurons distributed around the body. Though instrumentation has improved over the years, it is still far from perfect today. The promise was to move away from subjective self-reports, which are prone to biases like social desirability and memory errors, towards objective, quantifiable data that could provide a clearer picture of health and behaviour. But the technology and know-how aren’t there yet.

In the future, will technology get much better at objectively measuring, qualifying and quantifying pain?

The short answer: yes, technology will almost certainly get much better at detecting, qualifying, and quantifying pain – but whether it will ever fully “objectify” the experience of pain is another matter.

In principle, will future AI be able to know when biological organisms are in pain? Yes, most likely.

Measuring pain has always been a strange paradox: pain is one of the most universal human experiences, yet one of the hardest to quantify. Over the centuries, people have tried to capture it with medieval ordeals, crude instruments, and scales. In more recent times, scientists have tried to attach numbers to something that, to many, feels beyond measurement. Today, despite impressive technological progress, the most common tool for measuring pain remains a simple question: “On a scale of 0 to 10, how bad is your pain?” Basic, but effective.

But what of the future?

Measuring pain and suffering objectively might sound like an academic curiosity, but its importance runs much deeper. It treats pain as part of the natural order (like the rest of the universe) – something that can be studied, quantified, and compared – opening the door to progressively richer scientific models. A robust understanding of pain grounds ethics in the empirical world1, and importantly it creates the foundation on which engineering solutions can be built. With better measurement, we can design interventions, policies, and technologies that are verifiably effective at reducing suffering. In short, even today’s imperfect tools are already helping us move from intuition and anecdote toward a future where the mitigation of pain can be pursued with the precision of science and the rigour of engineering.

This post traces the story of how humans have tried to measure and make sense of pain in the past, how it is being done today, and some of the possible futures.

The earliest scientific methods to measure pain, dating to the late 1800s, involved behavioural and subjective assessments of pain intensity rather than direct brain measurements.

The German physiologist Ernst Heinrich Weber (1830s–40s) and Gustav Fechner (1850s) studied thresholds of sensation (touch, temperature, weight). Pain was often considered the “extreme end” of these sensations. Fechner introduced methods like the “method of limits” and “just noticeable differences.” This was the beginning of quantifiable pain thresholds.

The first objective, brain-based measures of pain in humans, using functional neuroimaging technologies such as fMRI and PET, emerged in the late 20th and early 21st centuries, coinciding with broader advances in imaging and psychophysics.

Though morbid fascinations with pain came well before all this.

Pursuing enlightenment, the historical Buddha first practised severe asceticism before recommending a moderated “Middle Way”.

Ancient and Pre-scientific Approaches to Pain

Before science, pain was investigated through ritual, punishment, and endurance:

Ascetic trials (c. 6th century BCE onward, across cultures)

Philosophers and religious ascetics (Stoics, yogis, monks) deliberately endured pain to demonstrate discipline or spiritual advancement. Ascetic endurance of pain as a test of virtue is cross-cultural and spans from antiquity into medieval and even early modern monastic practices. These weren’t experiments in the modern sense, but they show an early fascination with how much pain a person could tolerate.

  • Indian yogic and ascetic traditions (Jainism, Buddhism, Hindu sadhus) go back the furthest – at least to the 6th century BCE, with documented practices of fasting, austerities, and endurance.
  • Stoics (Zeno, Epictetus, Marcus Aurelius, 4th–1st century BCE to 2nd century CE) emphasised cultivating indifference to pain.
  • Christian ascetics and monks (2nd–5th centuries CE onward) engaged in self-mortification and flagellation as discipline.

Ordeals and torture (c. 5th–15th century, Medieval Europe)

Medieval courts assumed that survival under pain proved innocence or divine favour. This was most widely used in early to high medieval Europe (roughly the 9th to 13th centuries), though ritualised torture as proof of truth extended further into the 14th–15th century. The Fourth Lateran Council (1215) banned clergy from participating in ordeals, which hastened their decline.

Medical anecdotes (c. 5th century BCE – 15th century CE)

Hippocrates, Galen, and medieval physicians described pains as “sharp,” “throbbing,” or “burning,” creating early vocabularies rather than measurements.

  • Hippocrates (c. 460–370 BCE) described pains and their qualities in relation to humoral balance.
  • Galen (129–c. 216 CE) systematised medical descriptions of pain types (sharp, burning, throbbing).
  • Descriptive traditions persisted through Islamic Golden Age medicine (Avicenna, 980–1037, The Canon of Medicine) and into European medieval medicine up to the 15th century, when Renaissance anatomy and experimentation began shifting the framework.

The Birth of Scientific Pain Measurement

The 19th century saw the first systematic attempts – for the first time, pain had numbers attached to it! This period is pivotal, because it’s when pain stopped being just described and started being measured.

Psychophysics (1850s) – Pain as Quantifiable Sensation:

Gustav Fechner and Ernst Weber pioneered methods for studying sensory thresholds. Pain was framed as the extreme end of sensation, subject to quantification.

  • Ernst Heinrich Weber (1795–1878) was a physiologist who studied how much you must change a physical stimulus before people notice a difference (the “just noticeable difference”).
  • Gustav Theodor Fechner (1801–1887) — philosopher-psychologist who formalised these ideas in his Elements of Psychophysics (1860).

Sensation (including pain) can be quantified and expressed mathematically. Fechner proposed that subjective sensation increases as a logarithmic function of stimulus intensity (the Weber–Fechner law). Pain was considered the “upper extreme” of sensory continua (e.g., temperature, pressure).
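As a toy illustration of the Weber–Fechner law (the function name and constants below are invented for illustration, not taken from Fechner), equal ratios of stimulus intensity map to equal increments of sensation:

```python
import math

def perceived_intensity(stimulus, threshold, k=1.0):
    """Weber-Fechner law: perceived magnitude grows with the
    logarithm of stimulus intensity relative to the detection
    threshold. k is an arbitrary scaling constant."""
    return k * math.log(stimulus / threshold)

# Each *doubling* of the stimulus adds the same increment of sensation:
s1 = perceived_intensity(20, threshold=10)  # 2x threshold
s2 = perceived_intensity(40, threshold=10)  # 4x threshold
s3 = perceived_intensity(80, threshold=10)  # 8x threshold
# (s2 - s1) equals (s3 - s2): equal stimulus ratios, equal sensation steps
```

This is why a stimulus must keep growing multiplicatively to feel like it is growing steadily – the intuition behind treating pain as the “upper extreme” of a sensory continuum.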

Researchers were asking: ‘At what point does warmth become painful? How much weight must be added before pressure turns into pain?’

Methods such as the method of limits (increasing/decreasing stimulus until pain is reported) and method of constant stimuli created reproducible thresholds.
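A minimal sketch of the ascending method of limits – the response data here are invented, and real protocols average ascending and descending runs over repeated trials:

```python
def method_of_limits(responses):
    """Ascending method of limits: increase stimulus intensity step
    by step; the first intensity the subject reports as painful is
    taken as the pain threshold. `responses` maps each intensity
    level to True if the subject reported pain at that level."""
    for intensity in sorted(responses):
        if responses[intensity]:
            return intensity
    return None  # pain never reported within the tested range

# One hypothetical ascending run: pain is first reported at level 4.
ascending_run = {1: False, 2: False, 3: False, 4: True, 5: True}
threshold = method_of_limits(ascending_run)
```

Repeating such runs and averaging the recorded thresholds is what made the measurements reproducible.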

This was revolutionary: it reframed pain from a mysterious, private ordeal into something that could be systematically studied in the lab.

Algometers (1880s):

The 19th century was when pain was dragged kicking and screaming into the laboratory. For the first time the language of pain was broadened from nebulous descriptions to precise measurements with numbers, thresholds, and instruments. This legitimised the crucial notion that subjective experience could be made objective!2

For the first time, pain had an instrumental scale. German physiologist Max von Frey (1852–1932) developed graded hair-like filaments3 to test the minimum force that triggered pain – arguably the first true pain-measuring device.

By using an instrumental scale, doctors could compare pain sensitivity across body sites, individuals, and conditions. This offered something close to “objective” pain measurement in a device. In fact, variants of the von Frey filaments are still used today in neurology and pain research (e.g., diabetic neuropathy testing).

This cemented the idea that pain thresholds could be standardised, compared, and published in quantitative tables.

The Rise of Subjective Scales (1940s–1980s)

By the mid-20th century, doctors realised that, even with the new instruments, the patient’s voice still mattered most – pain couldn’t be fully captured by machines. This led to structured self-report scales. Why did subjective scales triumph?

Subjective scales were patient-centred, returning authority to the sufferer rather than trying to bypass them with machines, and clinically practical: cheap, fast, and repeatable across contexts. Their simplicity made them easily translatable, affording cross-cultural adaptability – numbers and words translate readily, making the metric of pain globally adoptable. That simplicity and global uptake meant pain scores could be easily aggregated, compared, and used in trials and audits – so they became policy-relevant.

Hardy & Wolff’s Categorical Scales (1940s)

Cornell University’s James Hardy and Harold Wolff introduced categorical scales (none, slight, moderate, severe) – exposing volunteers to controlled painful stimuli (e.g. radiant heat) and asking them to classify the pain into categories such as barely painful, moderately painful, and very painful.

Their dolorimeter, which delivered calibrated radiant-heat stimuli, was actively used in research on analgesics, anaesthesia, and pain thresholds. It represented one of the first serious scientific attempts to quantify subjective pain reports against a calibrated instrument.

It was the first systematic attempt to standardise language for pain intensity. These categories were simple enough for patients but reproducible enough for research. Set the precedent that patients’ own descriptions should be treated as data.

The Visual Analogue Scale (VAS) (1960s)

E. C. Huskisson (UK rheumatologist) created the Visual Analogue Scale (VAS), which importantly introduced a continuous measure rather than clumpy categories, and was sensitive to small changes in pain over time – invaluable for clinical trials.

The VAS is usually a 10 cm line with “no pain” at one end and “worst imaginable pain” at the other. Patients mark a point along the line; the mark is measured in millimetres, giving a continuous score from 0–100. However, some patients (especially elderly or less literate populations) found the abstraction of a “line” harder to use.
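Scoring a VAS mark is just a proportion of the line’s length; a minimal sketch (the function name is assumed for illustration):

```python
def vas_score(mark_mm, line_length_mm=100):
    """Convert the position of a patient's mark on a visual analogue
    line (distance in mm from the 'no pain' end) into a 0-100 score."""
    if not 0 <= mark_mm <= line_length_mm:
        raise ValueError("mark must lie on the line")
    return round(100 * mark_mm / line_length_mm)

score = vas_score(63)  # a mark 63 mm along a 100 mm line scores 63
```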

The McGill Pain Questionnaire (1975)

Ronald Melzack (Canadian psychologist) introduced the McGill Pain Questionnaire, which grouped descriptive words into sensory, affective, and evaluative clusters. This marked the recognition of pain as multi-dimensional: not just intensity, but also quality and emotional impact. It allowed researchers to distinguish between different types of pain (e.g. neuropathic vs inflammatory).

Patients choose words from 20 groups, each representing sensory qualities (sharp, throbbing, burning), affective qualities (tiring, fearful), and evaluative qualities (unbearable, mild). Each choice adds to a score.
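A sketch of how such a scoring scheme works – the word groups and rank values below are placeholders for illustration, not the validated MPQ word set:

```python
# Illustrative word groups; within each group, words carry rank values.
# (The real MPQ has 20 validated groups with its own words and ranks.)
GROUPS = {
    "sensory":    {"throbbing": 1, "sharp": 2, "burning": 3},
    "affective":  {"tiring": 1, "fearful": 2},
    "evaluative": {"mild": 1, "unbearable": 2},
}

def pain_rating_index(choices):
    """Sum the rank of the chosen word in each group (analogous to
    the MPQ's Pain Rating Index); unanswered groups contribute 0."""
    return sum(GROUPS[group][word] for group, word in choices.items())

pri = pain_rating_index({"sensory": "burning", "affective": "tiring"})
```

The resulting index captures more than a single intensity number, because *which* words were chosen matters as much as how many.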

This is still widely used in pain clinics and research to this day.

The Numeric Rating Scale (NRS) (1980s)

The now-ubiquitous 0–10 Numeric Rating Scale (NRS) is popular because it was, and still is, the simplest and fastest tool: no paper, no rulers, no word lists, making it usable in nearly any setting – hospitals, emergency rooms, surveys. By the late 1980s–90s, it became the standard in clinical practice worldwide. It has been criticised as an over-simplified measure of what is really a multi-dimensional experience – but the NRS persists because of its ease and universality.

Modern “Objective” Measures (late 20th century)

The late 20th century is where the story gets fascinating, because it’s the point at which science strives to finally “catch pain in the act” – though it’s not quite there yet… sigh… There was never a truly widespread consensus in medicine or neuroscience that human self-reporting could or should be fully replaced. But there was enthusiasm – especially in the 1990s–2010s – that neuroimaging and biomarkers might finally deliver an “objective pain meter.” That enthusiasm was sometimes overstated, especially in media and policy discussions, and later tempered by sobering findings.

Today the consensus among experts is that self-report remains indispensable, while physiological and neural tools provide complementary evidence that anchors subjective reports in measurable processes.

From Physiological Signals to AI: An Overview (1950s–2020s)

  • Physiological signals: Blood pressure, cortisol, sweating, and pupil dilation were tested, but proved non-specific.
  • EEG and evoked potentials: Showed reliable neural responses to painful stimuli, but they tracked intensity better than subjective suffering.
  • fMRI & PET (1990s): Mapped the so-called “pain matrix” in the brain (insula, anterior cingulate, thalamus).
  • Neural signatures (2010s): Tor Wager’s group reported reproducible “pain signatures” in fMRI.
  • AI & multimodal biomarkers (2020s): Machine learning now integrates brain scans, physiology, facial expressions, and language to build predictive profiles.

EEG and Evoked Potentials (1970s–1990s)

  • Method: Electroencephalography (EEG) records electrical activity in the brain. By stimulating the skin with lasers or electrical shocks, scientists observed consistent waveforms (laser-evoked potentials).
  • Strengths:
    • Provided time-locked, reliable neural responses to painful stimuli.
    • Could distinguish nociceptive processing (the brain detecting harmful input) from non-painful touch.
  • Limitations:
    • Reflected stimulus intensity more than subjective suffering. Two people can receive the same shock but report different pain — EEG can’t bridge that gap.
    • Pain is not a simple sensory event; it’s emotional, cognitive, and contextual.

fMRI & PET — The “Pain Matrix” (1990s)

  • Discovery: Functional MRI (fMRI) and positron emission tomography (PET) identified a distributed network of brain regions that activate during pain: the insula, anterior cingulate cortex, thalamus, and somatosensory cortices. This became known as the “pain matrix.”
  • Why it was exciting:
    • Finally, a visible “neural fingerprint” of pain seemed within reach.
    • Courts, insurers, and policymakers began dreaming of “objective brain scans for pain.”
  • Problems:
    • The pain matrix is not unique to pain — it also activates for attention, salience, and emotional distress.
    • fMRI is expensive, slow, and impractical in clinical settings.

Neural Signatures (2010s)

  • Breakthrough claim: In 2013, Tor Wager’s group published in NEJM a reproducible fMRI-based “neurological pain signature” that could predict whether someone was in pain with >90% accuracy in controlled lab settings.
  • Why it mattered: Suggested a reliable, generalisable biomarker.
  • Caveats:
    • The signature worked well for acute, physical pain in carefully controlled contexts, but not for chronic pain, placebo effects, or complex emotional suffering.
    • It still required calibration against subjective reports.

AI & Multimodal Biomarkers (2020s–present)

  • Approach: Machine learning integrates multiple streams:
    • Brain imaging (fMRI, EEG)
    • Physiological signals (HRV, skin conductance, pupillometry)
    • Behavioural cues (facial expressions, posture)
    • Linguistic analysis (natural language reports, tone of voice)
  • Promise: By combining imperfect signals, AI may triangulate pain levels more reliably, especially for patients who cannot self-report (infants, anaesthetised, animals).
  • Challenge:
    • These systems still rely on subjective calibration. Without somebody’s self-report, the model doesn’t know what counts as “a 7/10.”
    • Ethical risks: “objective” pain AI might overrule patient testimony or misclassify suffering, leading to neglect.
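The fusion idea can be sketched as a simple logistic combination of channels. The feature names and weights below are invented for illustration; real systems learn their parameters from data calibrated against self-report:

```python
import math

def pain_probability(features, weights, bias):
    """Fuse several imperfect, normalised signals (brain, physiology,
    behaviour) into a single probability that the subject is in pain,
    via a logistic model."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

p = pain_probability(
    features={"eeg_response": 0.8, "skin_conductance": 0.6, "facial_grimace": 0.9},
    weights={"eeg_response": 1.5, "skin_conductance": 1.0, "facial_grimace": 2.0},
    bias=-2.0,
)  # a probability between 0 and 1
```

The point of the sketch is the calibration problem: the weights and bias have no meaning until they are fitted against somebody’s self-reported pain scores.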

Yet, none of these have displaced the humble self-report. Pain remains as much experience as signal.

Why Self-Report Still Reigns

Despite the rise of high-tech tools, doctors still ask: “What’s your pain, 0–10?” Why?

  1. Pain is private: It’s shaped by culture, mood, and meaning as much as nerve signals.
  2. Cheap and fast: Scans and biomarkers are expensive; asking a question costs nothing.
  3. Clinically actionable: What matters most is change (from 8 to 6), not an fMRI percentage.
  4. Respecting patients: Self-report affirms that pain is more than a biological reflex – it is a lived reality.

What Might the Future of Pain Measurement Look Like?

The story isn’t over. Looking forward:

  • AI multimodal systems may provide reliable tools for non-verbal patients (infants, anaesthetised, cognitively impaired).
  • Brain–computer interfaces could, in theory, bypass language entirely and “read out” qualia signatures.
  • Synthetic empathy – AI trained not just to detect pain, but to care about it – is on the horizon.
  • Ethical challenge: Even if we can measure pain objectively, should we? Does reducing it to data risk trivialising lived suffering?

The history of pain measurement shows a steady progression: from coercion → instrumentation → introspection → computation → speculation. The next leap may require us to ask whether machines can ever grasp not just the signals, but the meaning of pain.

Summing Up

For centuries, we’ve tried to pin down pain – to give form to something that feels infinite when we’re in it. The fact that a dashed line and a patient’s word still outperform multimillion-dollar scanners seems strange to me, but I don’t think it’s a failure of science. For most of our history, pain has been a voice to be heard, not really a quantifiable signal to be measured. However, if we do end up with highly capable AI, it may not come with the inbuilt human sensitivity to, or concern for, pain – in some ways it would be nice to have that4, though there are no guarantees that will happen soon. But importantly, it would be really useful to have the kind of AI that can compensate for its deficits in human nuance with a super-technical understanding of pain and a highly capable means of measuring it empirically.

Footnotes

  1. A mature scientific / empirical understanding of pain/suffering legitimises objective moral frameworks. If moral realism – the idea that there are stance-independent moral truths – is even partly correct, then we have clear moral impetus to treat suffering as something that can be studied, compared, and reduced across populations. ↩︎
  2. Subjective experiences are not outside the natural world; they arise within the same physical universe as everything we can measure. There is no metaphysical firewall preventing us from quantifying aspects of subjectivity. One might argue that the first-person feel of experience (qualia) resists direct objective capture, but the physical processes and energy patterns that give rise to it are measurable. Those processes, like all others in nature, obey physical laws that yield repeatable, verifiable observations.
    Some philosophers (e.g. Russellian monists, property dualists) argue that subjective experience may not be fully reducible to measurable physical processes. On this view, consciousness involves intrinsic properties of matter that are not captured by physical description alone. If that is true, then no amount of measurement of neural dynamics could exhaustively account for the first-person feel of pain. Still, even under this framework, the physical correlates of pain remain measurable and provide reliable proxies for its presence and intensity – and for governance, those proxies are indispensable in shaping policies that mitigate suffering. ↩︎
  3. The von Frey hairs/filaments – thin, calibrated fibres of different stiffness, pressed against the skin. Each bent at a known force. The lightest force at which a subject reported pain was recorded as the pain threshold. ↩︎
  4. See posts on ‘Zombie AI one day Achieving Sentience’ and ‘the Knowledge Argument Applied to Ethics/Sentience’ ↩︎
