Can AI Help Us Discover Real Moral Structure? A Critical Realist Hunch

I recently read about Critical Realism (CR)1. It’s interesting – I like a lot of it, though I’m not sure I fully buy the layered picture: latent causal structure (the “real”) at the bottom, the “actual” layer of events above that, and the “empirical” layer of what we observe on top.

So CR pushes back against a certain kind of reductive physicalism (the view that, for explanatory purposes, the important stuff happens at the micro-physics level). In contrast, CR leans into emergent causal powers: social roles, structures, institutions and so on having genuine influence 😃. To this day I remain open to both reductive physicalism and emergentism, since each seems to carry useful explanatory power.

I agree that there is real stuff we can’t see and struggle to justifiably infer – hopefully technological progress can bring better tooling that expands what we can see: extending the empirical level, widening the range of what can be observed and measured. Advances in AI might then give us access to kinds of minds that can think past our biases and cognitive limitations, enhancing our epistemic grip on the real world. Humans are pretty bad at deeply searching vast hypothesis spaces, detecting non-obvious regularities, and proposing new candidate theories about how reality works. AI could be far better at all three, especially if it constrains its search by retroducing from observed phenomena (whose richness increases with better tech) to the most plausible underlying hypotheses.
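To make that retroduction-flavoured search a little more concrete, here’s a minimal toy sketch in Python – entirely my own illustration, with made-up hypotheses, observations and weights, not a description of anything an actual AI system does. Candidate hypotheses are scored on how much of the observed phenomena they account for, minus a small penalty for how much machinery they posit, and the most plausible one surfaces.

```python
# Toy sketch of retroduction: rank candidate hypotheses by how well they
# explain the observed phenomena, with a mild penalty for complexity.
# All names and numbers here are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    explains: set[str]   # observations this hypothesis would account for
    complexity: float    # rough proxy for how much machinery it posits

def retroductive_score(h: Hypothesis, observations: set[str],
                       complexity_penalty: float = 0.1) -> float:
    """Fraction of observations explained, minus a complexity penalty."""
    coverage = len(h.explains & observations) / len(observations)
    return coverage - complexity_penalty * h.complexity

observations = {"pattern_A", "pattern_B", "anomaly_C"}

candidates = [
    Hypothesis("shallow-correlation", {"pattern_A"}, complexity=0.5),
    Hypothesis("deep-mechanism", {"pattern_A", "pattern_B", "anomaly_C"}, complexity=2.0),
    Hypothesis("kitchen-sink", {"pattern_A", "pattern_B", "anomaly_C"}, complexity=6.0),
]

best = max(candidates, key=lambda h: retroductive_score(h, observations))
print(best.name)  # -> "deep-mechanism" under these made-up numbers
```

The point of the toy is only the shape of the inference: start from what shows up at the empirical level and work backwards to the structure that would best explain it, rather than stopping at surface correlations.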

CR doesn’t do away with the bridgework the is/ought gap requires – I think we still need normative arguments to get from ‘is’ to ‘ought’: to explain why certain structural facts count as reasons. But it does provide a nice metaphysical stage to dance and play on.

So the moral realist in me may agree with CR in that moral facts might sit at this higher level: emergent facts about what follows from the deep structure of sentience, minded beings and their relations (this may share something with the object-oriented ontology that Stelarc talks about). These can be seen as facts about what there is reason to do. On this view, moral facts aren’t arbitrary social constructions or free-floating queer/spooky stuff – they exist as stance-independent patterns in the space of all possible minds (and relationships, to please the sociologists), and all of this is grounded in the real/causal/structural stuff of which agents, welfare and cooperation are made.

So, IF there IS real moral structure: robust patterns of better/worse forms of life given the deep facts about minds, suffering, cooperation, etc… THEN AI could help us uncover that structure… THEREBY making moral inquiry more like scientific inquiry: modelling, experimenting (indirect normativity: thought-experiments, simulations etc…), comparing theories by coherence, explanatory power, and game-theoretic stability under reflection.
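As a toy illustration of the “comparing theories” step (again my own sketch – the theories, scores and weights are invented placeholders, not outputs of any real process), one crude way to operationalise it is a weighted score across those criteria:

```python
# Toy sketch of comparing candidate moral theories on the criteria mentioned
# above: coherence, explanatory power, and stability under reflection.
# The theories, scores and weights are invented placeholders, not results.

CRITERIA_WEIGHTS = {
    "coherence": 0.3,            # internal consistency of the theory
    "explanatory_power": 0.4,    # how much of moral experience it accounts for
    "reflective_stability": 0.3, # does it survive idealised / iterated reflection?
}

candidate_theories = {
    "theory_A": {"coherence": 0.80, "explanatory_power": 0.65, "reflective_stability": 0.70},
    "theory_B": {"coherence": 0.60, "explanatory_power": 0.90, "reflective_stability": 0.50},
    "theory_C": {"coherence": 0.70, "explanatory_power": 0.70, "reflective_stability": 0.90},
}

def weighted_score(scores: dict[str, float]) -> float:
    """Sum each criterion's score weighted by its importance."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

ranking = sorted(candidate_theories.items(),
                 key=lambda kv: weighted_score(kv[1]), reverse=True)

for name, scores in ranking:
    print(f"{name}: {weighted_score(scores):.2f}")
```

Of course, everything interesting is hidden in what this sketch assumes away – where the scores come from and who chooses the weights – which is exactly where the ‘critical’ worry about power and bias (below) starts to bite.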

As such, if we have an adequately aligned superintelligence, it could be motivated to engage in robust indirect normativity – mapping the “actual” moral domain, searching the moral-theory space, approximating ideal reflection, and so on – lather, rinse, repeat…

I’m not sure whether CR constrains moral knowledge to be always socially situated – and I’m not even sure that should be a concern. The risk, of course, is that a badly aimed or perversely motivated AI will happily excavate the CR iceberg while steering the ship straight into it. The important ‘critical’ aspect, to me, is the worry that an AI beholden to existing power structures could have what it ‘discovers’ distorted, biasing morality in favour of the powerful. Motivating AI to be unbiased (or at least less biased) may therefore require infrastructure redesign, so that we don’t inject bias into the extrapolation base and so that we avoid the pitfalls and potential backfiring of selfish AI use (the exploiter’s paradox). But conceptually, critical realism seems a pretty natural ally to the project of using AI to discover more of what’s really there – rather than treating everything as mere projection or convention.

  1. “Critical realism is a philosophical approach to understanding science, and in particular social science, initially developed by Roy Bhaskar (1944–2014). It specifically opposes forms of empiricism and positivism by viewing science as concerned with identifying causal mechanisms.” – see the Wikipedia entry on Critical Realism
