While the world wakes up to the huge potential future impacts of AI, how will national worries about other nations gaining ‘AI Supremacy’ affect development?
Especially development in AI ethics and safety?
Claire-AI is a new European confederation.
Self-described as:
A Competitive Vision
Their vision reveals a fear that Europe may lose a race to achieve AI Supremacy, and this is worrisome – seen as a race between tribes, AI development could become a race to the bottom of the barrel of AI safety and alignment.
However, in terms of investment in talent, research, technology and innovation in AI, Europe lags far behind its competitors. As a result, the EU and associated countries are increasingly losing talent to academia and industry elsewhere. Europe needs to play a key role in shaping how AI changes the world, and, of course, benefit from the results of AI research. The reason is obvious: AI is crucial for meeting Europe’s needs to address complex challenges as well as for positioning Europe and its nations in the global market.
The FAQ page also reflects this sentiment:
Claire-AI’s vision of Ethics
There is mention of ‘humane’ AI – but this is not described in detail anywhere on their site.
What is meant by ‘human-centred’?
So, what are their goals?
Strong AI, when achieved, will be extremely powerful, because intelligence is powerful. Over the last few years interest in AI has ramped up significantly, with new companies and initiatives sprouting like mushrooms. Greater competitiveness and attention focused on AI development in a race dynamic to achieve ‘AI supremacy’ will likely result in Strong AI being achieved sooner than experts previously expected, and may weaken the motivation to take precautionary measures.
This race dynamic is good reason to research how we should strategically cope with global coordination problems in AI safety, as well as the race's possible impact on an intelligence explosion.
Humanity has a history of falling into Hobbesian traps – since the first-mover advantage conferred by Strong AI could be overwhelming compared to other economic investments, a race to achieve such a powerful general-purpose optimiser could result in military arms races.
What could be done to mitigate against an AI arms race?