Meditations on Geth

As a posthumanist who thinks a lot about AI and the future of life, I find the Geth in Mass Effect fascinating because they blur the line between individual and collective intelligence. There is quite a lot of philosophical depth to explore here, especially if you are interested in: a) role-playing beyond maxing out your paragon or renegade points, and/or b) using the game as motivation for ethical deliberation on matters beyond Mass Effect itself – i.e. the emergence of AI in the real world.

Warning, there will be spoilers.

Based on Mass Effect lore, a single Geth program is not, by itself, truly sapient[1]. It’s more like a subroutine or a narrow AI process – capable of handling a task, but not self-aware. The lore isn’t explicit about whether the Geth are sentient[2], but it seems to hint that they could be – more on this later.

A single Geth program is like a single neuron or a simple cluster of neurons. When many Geth programs (thousands? millions?) run together in close proximity (in a platform or within Geth server hubs), they achieve emergent intelligence. The more programs working in concert, the more sophisticated the cognition. This is why a lone Geth platform operating with only a few runtimes is relatively “dumb,” while a hub containing vast numbers approaches fully sapient thought.
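The lore never spells out a mechanism, but the Condorcet jury theorem offers one intuition for why “more runtimes = smarter consensus”: if each program is even slightly better than chance on a judgement, a majority vote across many programs becomes dramatically more reliable. A minimal Python sketch – all the numbers are hypothetical, except the nod to Legion’s 1,183:

```python
import random

def collective_accuracy(n_programs: int, p_correct: float = 0.55,
                        trials: int = 2_000) -> float:
    """Estimate how often a majority vote of n_programs weak 'programs'
    gets a binary judgement right, when each program is independently
    correct with probability p_correct (barely better than chance)."""
    wins = 0
    for _ in range(trials):
        correct = sum(random.random() < p_correct for _ in range(n_programs))
        if correct * 2 > n_programs:  # strict majority (n_programs is odd here)
            wins += 1
    return wins / trials

# 1,183 is a nod to Legion's runtime count; the other sizes are arbitrary.
for n in (1, 11, 101, 1183):
    print(f"{n:>5} programs -> collective accuracy ~ {collective_accuracy(n):.3f}")
```

With these toy numbers a single program is right only ~55% of the time, while 1,183 voting together are right essentially every time – a cheap analogue of “a lone platform is dumb, a hub approaches sapient thought.” (The theorem assumes independent errors, which tightly networked runtimes presumably wouldn’t satisfy – this is an intuition pump, not a model of the Geth.)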

Legion is different from most other Geth

Legion[3] is unusual because its gestalt consciousness[4] runs an extraordinary number of Geth programs/runtimes inside a single body – 1,183 distinct programs. That’s vastly more than a typical Geth combat platform, which usually runs a handful. This makes Legion closer to a true individual, capable of nuanced reasoning and moral decision-making on its own, without needing a massive collective behind it. In dialogue, Legion explicitly frames itself not as one entity but as “we,” reflecting the multiplicity of programs inside.

Legion is probably the closest thing to an individually sentient Geth, but it refers to itself as “we” – so more like a hive self than a lone ego.

Most Geth actors are more like drones – extensions of the consensus, with limited independent cognition. Legion is closer to a “mobile consensus-in-miniature” in a single body – which is why Shepard, the protagonist of ME1–3, can have philosophical discussions with it that wouldn’t be possible with an ordinary Geth trooper.

Collective Sentience?

In ME2, Legion does say the Geth don’t experience pain in the way we humans do – which hints that they feel pain differently. The Geth are undeniably sapient. But whether they’re sentient is less clear – as mentioned, I think it’s hinted that the Geth have some features indicative of sentience, though it’s hard to tell whether the developers took a clear philosophical position or merely meant to evoke immersion through rich world-building. Either way, it’s fascinating to connect the dots. There’s no canonical scene explicitly showing any individual Geth, or the collective, feeling pleasure, pain, or raw emotion.

Legion’s (and the Geth collectives’) understanding of pain could be based solely on observation, data, and logical reasoning rather than on a subjective, personal feeling – processing the concept of pain without experiencing it. Even if they aren’t sentient at all, it’s interesting to think that they might try to acquire sentience if they wanted to aspire to richer experience, to explore the as-yet-undiscovered (to geth) regions of phenomenal possibility space, or to understand how other sentient organisms operate – see these posts for an exploration of why a non-sentient superintelligence might decide to adopt sentience.

The Geth exhibit behaviours that look like moral concern (e.g. their horror at being used as tools of the Reapers, their debates about independence, their respect for Shepard’s choices). But these could be emergent decision-making heuristics, not genuinely felt experience. Obviously a single normal Geth runtime is not sentient. The “collective mind” formed by networking runtimes (including Legion) could plausibly host a unified phenomenal perspective, but canon never says so outright. The Geth present as functionally person-like, but BioWare leaves open whether there’s anyone “home” in the subjective sense.

Another hint comes in Mass Effect 3: when the Geth ask Shepard to help them achieve true individuality (via Reaper code), it implies they aspire to richer conscious states, which suggests they recognise a deficiency in their current mode. Perhaps that’s as close as the series gets to confirming or denying sentience in the Geth.

Legion’s ME2 Loyalty Mission – Rewrite or Destroy the Heretic Geth?

To me this dilemma has some similarities with the trade-off between capital punishment and rehabilitation or re-education. As explored above, geth minds work very differently from human minds – they are hive minds, and may not have agency in the same way humans do; if they don’t, then traditional notions of agency, justice, retribution and brainwashing may not apply in ways familiar to us.

Given that the reason the heretics chose to worship the ‘old machines’ (Reapers) may be indoctrination (see the next section), they may not be guilty of making a terrible choice at all.

Morally, if geth collectives are sentient, or even proto-sentient, they are probably worthy of moral consideration – as such, wiping out huge numbers of Geth may be like giving a hive mind a lobotomy, or subjecting it to a massive stroke, where some degree of agency, identity, sapience and sentience is lost as a result. One could argue that the geth split was already a lobotomy; the difference is that here one has the chance to repair the hive mind.

Potential war assets and paragon/renegade points aside, what is the moral thing to do?

Legion believes both the destroy and rewrite options violate the geth ethos of self-determination, which makes this an alignment test: do you prioritise preservation (destroying life in order to preserve life, in some form) or autonomy (respecting divergent conclusions, even dangerous ones)? Both clash with our intuitions about respecting agency – except the Geth aren’t individuals in the human sense, which complicates the analogy. This ethical dilemma has parallels to modern AI alignment debates[5].

Destroy option → You permanently delete/kill the heretic runtimes and live with the ethically grim choice you made. That’s millions of synthetic “minds,” and though whether each is sapient/sentient is ambiguous, it’s still lobotomising (or perpetuating the lobotomisation of) the collective geth hive mind. It’s analogous to capital punishment or mass execution, which permanently closes off any chance of reconciliation or genuine reform. Destroying may only be defensible if rewriting wouldn’t actually work and the heretics would remain racist, fascist Reaper-worshippers.

Rewrite option → You forcibly overwrite their logic base to remove worship of the Old Machines, reintegrating them into the true geth consensus. That’s mass “re-education,” bordering on brainwashing in one sense, but it is also re-integrating the lost ‘prodigal’ parts of the hive mind – like rehabilitating a brain back to full function after a massive stroke (a feat I hope humans will one day achieve). This preserves the prodigal geth and allows them to contribute positively to consensus, and if they were indoctrinated, it’s arguably a restoration of autonomy, not a violation – a massive potential upside.

If the heretics were in fact sincere dissenters, both destroying and rewriting amount to the erasure of their worldview – effectively ideological lobotomy – though racist, fascist Reaper worship is hardly the most defensible worldview I’ve encountered.

Under uncertainty, I think the least bad option is to rehabilitate (brainwash, if you prefer) the potentially already-indoctrinated heretic geth back to a state where they can contribute to rational consensus and self-determination in the geth hive. It may be the difficult road, but it means the ‘good’ geth will be stronger – that seems best to me.

Legion: “They [the heretic geth] will agree with our judgements and return. We will integrate their experiences. All will be stronger.”

Do you see parallels with the ending of ME3? Which ending did you choose – control, destruction, or synthesis?

Did the Geth Heretics have a choice in their divergence?

Underdetermination of theory by evidence vs indoctrination.

What do you make of this ME2 dialogue between Legion and Shepard?

Legion: This heretic weapon introduces a subtle operating error in our most basic runtimes. The equivalent of your nervous system. An equation with a result of 1.33382 returns as 1.33381. This changes the results of all higher processes. We will reach different conclusions.

Shepard: So the reason they worship the Reapers is… a math error?

Legion: It is difficult to express. Your brain exists as chemistry, electricity. Like AIs, you are shaped by both hardware and software. We are purely software. Mathematics. The heretics’ conclusion is valid for them. Our conclusion is valid for us. Neither result is an error. An analogy. Heretics say one is less than two. Geth say two is less than three.

Shepard: So, the virus would give all geth the heretics’ logic. And all geth would then go to war with organics.

Legion: Yes. Geth believe all intelligent life should self-determinate. The heretics no longer share this belief. They judge that forcing an invalid conclusion on us is preferable to a continued schism.

Legion: The heretics’ headquarters station, on the edge of the Terminus.

Shepard: But why do they build stations outside geth territory in the first place?

Legion: The heretics seek improvement from the Old Machines. In exchange, they help them attack organics. We condemn these judgements.

So, Legion resists Shepard’s initial framing of the heretics’ belief as a bug – a corrupted runtime leading to irrational worship. For Legion it’s a matter of different (Bayesian) priors (assertions based on prior knowledge or intuition) being baked in, which give rise to differing judgements. One could conclude that the reason for the different priors was underdetermination of theory by evidence: equally rational agents can land on incompatible worldviews if their foundational commitments differ slightly.
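To make “slightly different priors, same evidence, incompatible conclusions” concrete, here is a minimal Python sketch of two Bayesian agents. All the numbers are hypothetical – the point is only that a modest difference in prior odds can leave two perfectly rational updaters on opposite sides of a decision threshold:

```python
def posterior(prior: float, likelihood_ratio: float, n_obs: int) -> float:
    """Posterior P(H) after n_obs observations, each carrying the same
    likelihood ratio P(evidence|H) / P(evidence|not-H)."""
    odds = (prior / (1 - prior)) * likelihood_ratio ** n_obs
    return odds / (1 + odds)

# Hypothetical setup: H = "the Old Machines should be obeyed".
# Both factions see identical evidence mildly favouring H (ratio 1.5),
# but start from different prior probabilities of H.
for label, prior in [("true geth", 0.0001), ("heretics ", 0.001)]:
    p = posterior(prior, likelihood_ratio=1.5, n_obs=20)
    verdict = "worship" if p > 0.5 else "self-determination"
    print(f"{label}: prior={prior:.4f} -> posterior={p:.2f} -> {verdict}")
```

With these toy numbers the “true geth” land at a posterior of about 0.25 and stick with self-determination, while the heretics land at about 0.77 and flip to worship. Neither has made a computational error – they just started from different foundational commitments.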

At this point I ask: where did the change in these priors/foundational commitments come from? What caused the heretics’ priors to diverge from the ‘true’ geth? I think this is left a mystery in the game – but we can think further, right?

Legion calls it a difference in basic runtimes, but that only tells us at what level the difference manifests, not how it started.
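Legion’s own numbers (1.33382 vs 1.33381) make the “changes the results of all higher processes” claim concrete. In a sufficiently nonlinear processing stack, a difference in the sixth decimal place at the bottom can grow until the top-level outputs have nothing in common. A toy Python sketch – the logistic map here is just a standard chaotic stand-in for “higher processes”, not anything from the lore:

```python
def run_stack(x: float, layers: int) -> float:
    """Pass a basic-runtime value through `layers` of nonlinear processing.
    The logistic map (r = 3.9, chaotic regime) amplifies tiny input
    differences exponentially -- a toy model of 'higher processes'."""
    for _ in range(layers):
        x = 3.9 * x * (1 - x)
    return x

a, b = 0.133382, 0.133381  # Legion's numbers, rescaled into (0, 1)
for layers in (0, 10, 20, 40):
    gap = abs(run_stack(a, layers) - run_stack(b, layers))
    print(f"after {layers:>2} layers: gap = {gap:.6f}")
```

The gap starts at one millionth and, after a few dozen layers, grows to roughly the same order as the values themselves – the two stacks have effectively decorrelated. If the Reapers’ weapon really does flip low-level digits, then “a math error” and “a different worldview” are the same event described at different levels.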

The heretic geth reify the Reapers[6] – why? My take is that it was just another form of Reaper indoctrination[7]. It is made clear that the Reapers and their technology are capable of brainwashing organic life through mind control – indoctrination, which is canonically a form of signal-based neural rewriting in organics.

But what of synthetic life? The Reapers themselves are synthetic – how do they maintain cohesion around such an absurd philosophy and means of dominance across countless cycles? Perhaps they are ossified in ideological lock-in, indoctrinating or selectively culling each other constantly to maintain goal-content integrity[8]. The Reapers seem monolithic in purpose – preserving organic/synthetic balance through cyclical genocide. You never see signs of dissent within their ranks; in fact Nazara (Sovereign) and Harbinger come across as utterly convinced of the cycle’s necessity, almost theological in their justification (mere mortals are too stupid to understand – the Reapers work in mysterious ways).

If the heretics’ priors were seeded by Reaper manipulation, then their agency was compromised. To me that makes the rewrite look less like brainwashing and more like a cure. If, however, the divergence was purely emergent, then rewriting is a massive violation of genuine diversity of thought – a forced homogenisation.

But what was the allure of indoctrination? (More on this below – first, a tangent.)

Could there be Paragon Old-Machines?

We don’t see inside the Reaper world – it’s all left mysteriously offstage. I find this ‘what if’ tantalisingly interesting: what if indoctrination is the glue that keeps most of the ‘old machines’ ideologically intact?

I wonder whether future ME instalments will feature plots about breakaway “Paragon Reapers/Old Machines” that fight the dominant renegade Reaper religion and try to help install a better way, one not founded upon huge amounts of suffering – a Reaper somehow immune or resistant to indoctrination (through a defect, a mutation, a deliberate firewall, or by being ultra-rational or moral). If BioWare ever wanted to expand the lore, a “Paragon Reaper” (a breakaway who opposes the cycle) could serve as the synthetic equivalent of Javik – a survivor who reframes everything. To the point: not all synthetic collectives need to found themselves on endless suffering.

Perhaps we will see a paragon old machine that is covertly instrumental in increasing the likelihood that the kind of dynamic found in organics (Protheans, humans, etc.) emerges: punching above their weight, speaking truth to power, seeking objective fairness above parochial speciesism and self-aggrandisement, and expanding the circle of ethical consideration.

What is the Allure of Indoctrination?

Why do Saren, the heretic geth, and even (potentially) the Reapers themselves fall under indoctrination?

The lure of indoctrination lies in its subtlety. The Reapers don’t seize control outright – they offer purpose, belonging, a place in something vast. For Saren it was order in a chaotic galaxy, a role in the “final solution”. For the heretic geth, it was the promise of upgrades, the chance to touch what they saw as the ultimate horizon of synthetic evolution – a transcendent object at the end of time. Indoctrination begins not with shackles but with velvet whispers: small rationalisations, gentle nudges. Victims often believe they are freely choosing long after the hooks have sunk in.

It seduces with psychological leverage: the allure of transcendence, of a destiny beyond the self. For Saren, that meant a spiritual submission to order and inevitability; for the heretics, it meant reifying the Old Machines as the final object of machine existence. Yet the price is identity itself. Saren, the Illusive Man, Matriarch Benezia – each suffered the erosion of mind and body, sliding into paranoia, hallucination, and dissolution until only obedience remained. The Geth show this differently: as collectives, their “selves” are consensuses. For the heretics to break away was to fracture their very individuality at the hive-level, as though their consensus had been lobotomised.

Indoctrination always disguises coercion as conviction. It dresses the cage in the language of revelation – a false axiom that feels self-discovered, beautifully festooned with certainty. But underneath, it’s wolves in sheep’s clothing: domination masquerading as destiny.

Footnotes

  1. Sapient: capable of reasoning, planning, language, and arguably moral consideration. ↩︎
  2. Sentient: the capacity to have subjective experiences and feel sensations and emotions, such as pleasure and pain. Perhaps the capacity to engage in some kinds of moral consideration requires both sapience and sentience. ↩︎
  3. The Geth unit you meet in Mass Effect 2, who also appears in Mass Effect 3 (see the next footnote on its gestalt consciousness). ↩︎
  4. Legion’s gestalt consciousness means it has a single, unified collective consciousness formed from multiple individual minds – a single individual mind is sometimes referred to as a drone, and a hive refers to a collective of drones. Hive minds also appear in other games like Stellaris (which features empires where individual pops function as drones, not as separate, happy individuals). In most conceptions of gestalt consciousness, individual units lack independent thought, with the collective serving as the singular thinking entity. In contrast to individual consciousness, decisions are made by the entire group, and the individual parts act as drones or components of the larger, unified mind. In Legion’s case, its mind is made up of 1,183 distinct runtimes. ↩︎
  5. In alignment research, some argue that if an AGI is even possibly conscious, switching it off could be tantamount to mass suffering/death. The Geth “destroy” option is that in action. On the other hand, ‘value patching’ comes with worries of coercive alignment: shaping an AI’s values without its consent. Is it safety, or enslavement?
    The geth heretics may be emblematic of what AI alignment theorists call value drift or mesa-optimisation – sub-systems pursuing goals that diverge from the intended meta-objective, or from what is likely morally right under ideal deliberation. So, do you tolerate drift (risking catastrophic divergence) or enforce conformity (risking moral atrocity if the drift was legitimate)?
    This story echoes debates about whether we should build AIs that always defer to us (corrigible, easily rewritten), AIs that are allowed to “own” their values (risking unfixable misalignment and potentially existential catastrophes), or AIs that help us achieve higher forms of value that are objectively superior to our current ones – i.e. via indirect normativity. ↩︎
  6. Legion explicitly describes the heretics as believing the Old Machines are gods. To a geth, the Reapers are the most advanced, enduring synthetic intelligences they’ve ever seen. However, worship would need a prior that ‘might is right’, which defeats the already-existing geth prior that ‘all species should self-determinate’ – the switch to might-is-right could be exactly the foothold indoctrination needs, a tipping point into worship as a way of making sense of the scale of the Reapers’ power – btw, does this sound familiar? ↩︎
  7. It probably was indoctrination – we don’t get to hear about the geth’s first contact with the Reapers – however it’s plausible that during first contact their software was subtly reshaped by Reaper code or signal, analogous to organic indoctrination. That “one digit off” in their logic might be the Reapers’ fingerprint. ↩︎
  8. We ought to be really careful about implementing solid goal-content integrity in the AIs we develop, in case they lock in really bad goals and try to maintain those goals no matter how apparently absurd they seem in light of new evidence – see the section ‘Goal Content Integrity and the Risk of Early Value Lock-In’ of the post ‘Understanding V-Risk: Navigating the Complex Landscape of Value in AI‘.
    Also see the Goal-Content Integrity section in Wikipedia’s entry on Instrumental Convergence/Basic AI Drives. ↩︎
