
Securing Tomorrow: An Iterative Framework for Achieving Utopia

There is so much wrong with the world right now – I don’t know where to begin. I wake up early with dread sometimes – it’s hard to know what thread to tug at – the amorphous confusion festooning the great big everything seems so hopeless.

It would be a lot easier if everyone were to cooperate – less energy burned in friction and competition, yada yada. People want different things, and people have the right to want what they want. But how do we do the most good when people have different ideas of what good is?

How to motivate people politically or culturally to get on the same page about balancing individual rights with the greater good is probably far beyond me – but I’ll take a stab and invoke indirect normativity: it’s the answer for when you don’t know the answer, or don’t know how to explicitly find it. In this case, rights and greater goods come down to values – indirect normativity is a form of motivation selection that attempts to find values indirectly, by deferring some of the process of value discovery to powerful AI.

I often muck about with LLMs to get a feel for how good they are at philosophy and ethics, and how they might shape up to achieve some kind of indirect normativity. So I was interacting with the Peter Singer AI, which asked me how to trade off individual rights against the greater good.

My hedgy answer was that a lot of this depends on how a society is structured and governed, the level of understanding of ethics people have, and their intrinsic motivations for caring about individual rights and the greater good. While I’m not sure about the specifics, I do have ideas about how to get there – as a software engineer/data architect, I’d like to think in terms of solutions that are both theoretically sound and practically implementable.

Ideally, society would be game-theoretically structured to incentivize things like individual rights (and other aspects of person-affecting views) and the greater good (whatever it is that this entails). Ideally, that game theory should also be resilient to shifts in power dynamics, as this is where a lot of failures occur (I cringe at the cyclical undoing of one political party’s progress by the next).
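To gesture at what “structuring the game” could mean, here is a toy sketch in Python – the payoff numbers and the cooperation bonus are entirely made up, and it’s a caricature of mechanism design rather than a proposal: add the right institutional incentive and cooperating becomes the best response no matter what the other player does.

```python
# Toy 2x2 game: a hypothetical institutional rule adds a bonus for cooperative
# play, turning a prisoner's-dilemma-shaped game into one where cooperation
# dominates. All payoff values are invented for illustration.

base_payoffs = {  # (my move, their move) -> my payoff
    ("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,    ("defect", "defect"): 1,
}

def with_incentives(payoffs, cooperation_bonus=3):
    """Hypothetical rule: reward any player who cooperates, whatever the outcome."""
    return {(me, them): p + (cooperation_bonus if me == "cooperate" else 0)
            for (me, them), p in payoffs.items()}

adjusted = with_incentives(base_payoffs)
for their_move in ("cooperate", "defect"):
    best = max(("cooperate", "defect"), key=lambda me: adjusted[(me, their_move)])
    print(f"If they {their_move}, my best response is to {best}")
```

The interesting (and hard) part, which the sketch doesn’t capture, is making such incentives hold up when the players can rewrite the rules – i.e. resilience to shifts in power.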

Because we aren’t ideal observers, we can’t be sure what the best tradeoffs are between whatever the relevant features of ethics end up being (which I believe include the greater good and person-affecting views). We therefore ought to temper our considerations with large helpings of epistemic humility – acknowledging uncertainty and keeping the door open to revisiting foundational values as understanding evolves, which might prevent rigidity in ethical norms.
Perhaps we can address this with a staged approach to indirect normativity – steeply reducing catastrophic risks first, and over time maximizing opportunities:

  • 1) seek to implement an ethically permissible, existentially secure global state of affairs – one which mitigates the risk of civilizational collapse, extinction, or lock-in to a sub-optimal state for eternity <- this first step safeguards humanity’s potential for further ethical progress
  • 2) once 1) is achieved, invest energy in understanding the nature of value and ethics better (deeper & wider) via “long reflections”, and in how to adapt civilization to accommodate effective change <- this helps ensure that society moves toward genuinely valuable goals, not merely proximate or contingent ones
  • 3) take civilization in the direction that 2) discovered, and then repeat 2) & 3) <- allowing for continuous improvement

In essence, this approach hopes to achieve (1) safety and security, and then to iteratively discover and implement increasingly permissible civilizations through 2) & 3) until the best of all possible utopias is achieved (if an ideal utopia is at all possible).
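For what it’s worth, here is a minimal sketch of the control flow I have in mind. Every name below (Civilization, achieve_existential_security, long_reflection, steer_civilization) is a hypothetical placeholder for processes nobody yet knows how to implement – the point is only the shape of the loop: secure once, then reflect and steer, repeatedly.

```python
# Toy sketch of the staged, iterative approach described above.
# The state and all three stage functions are placeholders; only the
# control flow (stage 1 once, stages 2 & 3 in a loop) is the point.

from dataclasses import dataclass

@dataclass
class Civilization:
    existentially_secure: bool = False
    understanding_of_value: float = 0.0   # crude proxy for "long reflection" progress
    ethical_permissibility: float = 0.0   # crude proxy for how good the current state is

def achieve_existential_security(world: Civilization) -> Civilization:
    """Stage 1: reduce risks of collapse, extinction, or permanent lock-in."""
    world.existentially_secure = True
    return world

def long_reflection(world: Civilization) -> float:
    """Stage 2: invest in understanding value; return a proposed direction."""
    world.understanding_of_value += 0.1
    return world.understanding_of_value

def steer_civilization(world: Civilization, direction: float) -> Civilization:
    """Stage 3: move civilization toward what stage 2 discovered."""
    world.ethical_permissibility = min(1.0, direction)
    return world

world = Civilization()
world = achieve_existential_security(world)   # stage 1, done once (and maintained)
while world.ethical_permissibility < 1.0:     # stages 2 & 3, repeated indefinitely
    direction = long_reflection(world)
    world = steer_civilization(world, direction)
```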

On a practical note, I assume achieving 1, 2 and 3 will be made far more tractable with the aid of transformative oracle AI (and eventually superintelligence).

It would be a lot easier if values/ethics could be grounded in the point of view of the universe – if moral realism or something like it were true – making values discoverable rather than merely arbitrary (much like physical laws or solutions to difficult math problems). If values were real, they would be amenable to all the wonderful tools of science – falsifiability, testability, reproducibility, etc. – we could know which of our values are valid and which need to be phased out, and the alien mind of AI could load its values from the same mutual ground upon which our values reside.

This would probably make aligning AI far less fraught with danger – aligning AIs to moral realism, tempered with appropriate epistemic humility, would reduce the risk of value drift away from ethical permissibility. This iterative path could also benefit from leveraging transformative AI to simulate possible futures, assess the potential risks and opportunities, and evaluate the near- to long-term impacts of each stage’s decisions without preempting human values.
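As a loose, toy illustration of what “simulating possible futures” with epistemic humility kept in the loop might look like: sample many outcomes per candidate policy, then score each policy by its expected value minus a penalty for how uncertain its outcomes are. The policies, payoffs, and humility penalty below are invented for illustration only.

```python
# Toy Monte Carlo sketch: score candidate policies across sampled futures,
# discounting expected value by outcome variability (prefer robust policies).
# All numbers and policy names are invented for illustration.

import random
import statistics

def sample_outcome(mean_value: float, spread: float) -> float:
    """One simulated future: the value realised under a policy, plus noise."""
    return random.gauss(mean_value, spread)

def evaluate(mean_value: float, spread: float, n: int = 10_000, humility: float = 1.0) -> float:
    """Expected value penalised by how widely outcomes vary."""
    outcomes = [sample_outcome(mean_value, spread) for _ in range(n)]
    return statistics.mean(outcomes) - humility * statistics.stdev(outcomes)

# (expected payoff, variability) for each candidate policy -- made-up numbers
candidate_policies = {"cautious": (0.5, 0.1), "ambitious": (0.8, 0.5), "reckless": (1.0, 1.5)}
scores = {name: evaluate(mean, spread) for name, (mean, spread) in candidate_policies.items()}
print(max(scores, key=scores.get), scores)
```

The design choice worth noticing is the humility term: a policy with a slightly lower expected payoff but far less variance can beat a flashier, riskier one.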

Interactions with Peter Singer AI

What prompted me to write this post was an interaction with the Peter Singer AI about sacrificing one to save the many. I was asked: “What do you think is the best way to balance individual rights with the greater good in such scenarios?”. Note that I have written about these topics before, but I thought it would be good to record my thoughts today. See my interactions with the Peter Singer AI in the images below:

I hope someday to discuss these ideas with Peter Singer in person, though I have interviewed Peter on other topics in the past (see here and here).
