Cognitive Biases & In-Group Convergences with Joscha Bach

True & false vs right & wrong – people converge their views to a set of rights and wrongs shaped by the in-group biases of their peer group.
As a survival mechanism, converging with the group is sometimes healthier than being right – so one should sometimes optimize for convergence even at the cost of getting things wrong. Humans probably have an evolutionary propensity to favor convergence over truth.
However, optimizing for convergence may result in the group mind being more stupid than the smartest people in the group.
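This trade-off can be illustrated with a toy Monte-Carlo sketch (the function name, skill range, and numbers here are illustrative assumptions, not from the conversation): agents of varying competence judge a fact independently, then everyone converges on the initial majority view. When average competence is middling, the converged group can easily end up less accurate than its most competent member:

```python
import random

def simulate(n_agents=25, trials=2000, seed=0):
    """Compare the best individual's accuracy against a group that
    converges on its initial majority view (pure conformity)."""
    rng = random.Random(seed)
    # Heterogeneous competence: probability each agent judges correctly,
    # spread evenly from 0.30 to 0.70 (illustrative values).
    skills = [0.30 + 0.40 * i / (n_agents - 1) for i in range(n_agents)]
    best = max(skills)
    group_correct = 0
    for _ in range(trials):
        votes = [rng.random() < s for s in skills]  # True = correct judgment
        # After convergence, every agent adopts the majority view,
        # so the group is right only when the initial majority was.
        if sum(votes) > n_agents / 2:
            group_correct += 1
    return best, group_correct / trials
```

With these numbers the converged group hovers around 50% accuracy while its best member is right 70% of the time – convergence drags the smart members down to the crowd's baseline.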


Joscha highlights the controversy over James Damore being fired for circulating a memo arguing that biological differences between men and women affect their abilities as engineers – where the memo's arguments may or may not be correct. Regardless of what the facts are about how biological differences affect ability, Google fired him because it judged that supporting these arguments would make for a worse social environment.

This sort of thing leads to an interesting difference in discourse, where:
* ‘nerds’ tend to focus on ‘content’, on imparting ideas and facts that everyone can judge autonomously to form their own opinions – the view being that in order to craft the best solutions we need the best facts
* for most people, the purpose of communication is ‘coordination’ between individuals and groups (society, nations etc.) – where the value of a ‘fact’ lies in its effect on the coordination between people

So is Google’s response to the memo controversy about getting the facts right, or about how Google at this point should be organised?

What’s also really interesting is that different types of people read this ‘memo’ very differently, making it very difficult to form agreement about its content. How can one agree on what is valuable about communication – whether it’s more about imparting ideas and facts, or more about coordination?

More recently there has been a lot of talk about #FakeNews – where it’s very difficult to get people to agree to things that are not in their own interests – including, as Joscha points out, the idea that truth matters.

Joscha Bach, Ph.D. is an AI researcher who has worked on and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept-formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group

Why Technology Favors a Singleton over a Tyranny

Is democracy losing its credibility? Will it cede to dictatorship? Will AI out-compete us in all areas of economic usefulness, making us the future useless class?

It’s difficult to get around the bottlenecks of networking and coordination in distributed democracies. In the past, distributed systems, being scattered, were more redundant and in many ways more fault tolerant and adaptive – though these payoffs may dwindle for most of us if humans become less and less able to compete with Ex Machina.

If the relative efficiency of democracies to dictatorships tips towards the latter, nudging a transition to centralized dictatorships, then while this solves some distribution & coordination problems, the concentration of resource allocation may be exaggerated beyond historical examples of tyranny. Where the once-proletariat, now the new ‘useless class’, has little to no utility to the concentration of power – the top 0.001% – the would-be tyrants will likely give up on ruling and tyrannizing, and instead find it easier to cull the resource-hungry, rights-demanding horde – more efficient that way.

Ethics is fundamental to fair progress – ethics is philosophy with a deadline creeping closer. What can we do to increase the odds of a future where the value of life is evaluated beyond its economic usefulness?
I found ‘Why Technology Favors Tyranny‘ by Yuval Noah Harari a good read – I enjoy his writing, and it provokes me to think. About 5 years ago I did the ‘A Brief History of Humankind’ course via Coursera, urging my friends to join me. Since then Yuval has taken the world by storm.
The biggest and most frightening impact of the AI revolution might be on the relative efficiency of democracies and dictatorships. […] We tend to think about the conflict between democracy and dictatorship as a conflict between two different ethical systems, but it is actually a conflict between two different data-processing systems. Democracy distributes the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given 20th-century technology, it was inefficient to concentrate too much information and power in one place. Nobody had the ability to process all available information fast enough and make the right decisions. […]
– Why Technology Favors Tyranny
I assume AI superintelligence is highly probable if we don’t go extinct first. For the same reason that the proletariat has become useless, I think the AI-human combination will ultimately become useless too, and cede to superintelligent AI – so all humans become useless. The bourgeois elite may initially feel safe in the idea that they don’t need to be useful, they just need to maintain control of power. But the sliding relative dumbness of the bourgeoisie next to superintelligence will worry them… perhaps not long after wiping out the useless class, the elite will see the importance of the AI control problem, and that their days are numbered too – at which point will they see ethics, and the value of life beyond economic usefulness, as important?
However, artificial intelligence may soon swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. In fact, it might make centralized systems far more efficient than diffuse systems, because machine learning works better when the machine has more information to analyze. If you disregard all privacy concerns and concentrate all the information relating to a billion people in one database, you’ll wind up with much better algorithms than if you respect individual privacy and have in your database only partial information on a million people. An authoritarian government that orders all its citizens to have their DNA sequenced and to share their medical data with some central authority would gain an immense advantage in genetics and medical research over societies in which medical data are strictly private. The main handicap of authoritarian regimes in the 20th century—the desire to concentrate all information and power in one place—may become their decisive advantage in the 21st century.
– Why Technology Favors Tyranny
Yuval Noah Harari believes that we could be heading for a technologically enabled tyranny as AI automates all jobs away and we become the useless class. Though if superintelligence is likely, then humans will likely be a bottleneck in any AI/human hybrid use case – if tyranny happens, it won’t last for long – what use is a useless class to the elite?

Technology without ethics favors singleton utility monsters – not a tyranny – what use is it to tyrannize over a useless class?

Can We Improve the Science of Solving Global Coordination Problems? Anders Sandberg

Anders Sandberg discusses solving coordination problems:

Includes discussion on game theory, covering the prisoner’s dilemma (and its iterated form), the tit-for-tat strategy, and reciprocal altruism. He then discusses politics, and why he considers himself a ‘heretical libertarian’ – then contrasts the benefits and risks of centralized planning vs distributed trial & error, and links this to a discussion on existential risk – centralizing very risky projects at the risk of disastrous coordination failures. He discusses groupthink and which forms of coordination work best. Finally, he emphasises the need for a science of coordination – a multidisciplinary approach including:

  1. Philosophy
  2. Political Science
  3. Economics
  4. Game Theory

Also see the tutorial on the Prisoner’s Dilemma:
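As a companion to that tutorial, here is a minimal, self-contained sketch of the iterated prisoner’s dilemma with the tit-for-tat strategy Anders discusses (the payoff values are the conventional ones – temptation 5, reward 3, punishment 1, sucker 0 – chosen here purely for illustration):

```python
# Payoffs (row player, column player) for moves 'C' (cooperate) / 'D' (defect).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then mirror the opponent's last move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """A simple exploitative baseline strategy."""
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return the two cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
        score_a += pa
        score_b += pb
    return score_a, score_b
```

Two tit-for-tat players sustain mutual cooperation (30 points each over 10 rounds), while tit-for-tat against a pure defector loses only the first round and then punishes defection for the rest of the game – the retaliatory-but-forgiving property that makes it a good vehicle for reciprocal altruism.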

And Anders’ paper on AGI models.

A metasystem transition is the evolutionary emergence of a higher level of organisation or control in a system. A number of systems become integrated into a higher-order system, producing a multi-level hierarchy of control. Within biology such evolutionary transitions have occurred through the evolution of self-replication, multicellularity, sexual reproduction, societies etc. where smaller subsystems merge without losing differentiation yet often become dependent on the larger entity. At the beginning of the process the control mechanism is rudimentary, mainly coordinating the subsystems. As the whole system develops further the subsystems specialize and the control systems become more effective. While metasystem transitions in biology are seen as caused by biological evolution, other systems might exhibit other forms of evolution (e.g. social change or deliberate organisation) to cause metasystem transitions. Extrapolated to humans, future transitions might involve parts or the whole of the human species becoming a super-organism.
– Anders Sandberg

Anders discusses similar issues in ‘The thermodynamics of advanced civilizations‘ – Is the current era the only chance at setting up the game rules for our future light cone? (Also see here)


Further reading
The Coordination Game: