## Is Infinity Real?

There is an interesting discussion at Quanta, “Solution: ‘Is Infinity Real?’” – is infinity a real physical phenomenon outside our models? Max Tegmark doesn’t think so: while admitting it is indisputably useful in mathematical models of physics, he believes that nothing is truly continuous – including space and time.
Would an infinitely X* phenomenon be amenable to observational evidence? Perhaps not – and if so, we could never observe even one instance of infinity, making it difficult to assign a likelihood that infinity exists in the territory rather than merely as a convenient approximation in our maps.
Max also believes there are good philosophical reasons to ditch infinity, and that there are pitfalls in assuming infinity in mathematical models. Four points should be understood (they are detailed in the linked Quanta article):
1. The map is not the territory.
2. Infinity is valid in mathematical models and can be very useful.
3. In the physical world, there are compelling practical and philosophical reasons to reject infinity as a default assumption.
4. There will be limiting cases where the mathematical infinity assumption and the physical absence of infinity result in different answers.

Finite models are proposed to replace infinite solutions for a few mathematical puzzles: Hilbert’s hotel, the 100, 200, 300 Triangle, and the Elliptical Pool Table.
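To illustrate the first of those puzzles: Hilbert’s hotel has infinitely many rooms, all occupied, yet can still take a new guest by moving every guest from room n to room n + 1, freeing room 1. A minimal sketch in Python – necessarily over a finite prefix of rooms, which is itself a nod to the article’s point that our computations are always finite:

```python
def reassign(room: int) -> int:
    """Hilbert's hotel shift map: send the guest in `room` to `room + 1`."""
    return room + 1

# Check the map on the first few rooms: it is injective (no two guests
# land in the same room) and room 1 is never an image, so it is vacated
# for the new guest.
rooms = range(1, 11)
new_rooms = [reassign(r) for r in rooms]

assert len(set(new_rooms)) == len(new_rooms)  # no collisions on this prefix
assert 1 not in new_rooms                     # room 1 is now free
print(new_rooms)  # [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```

The paradoxical feel comes entirely from the infinite room count: for any finite hotel the shift map would evict the guest in the last room, which is exactly the kind of limiting-case divergence between infinite models and finite reality that Tegmark highlights.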
“So the bottom line is: Infinity is permissible in mathematics applied to physics because it makes things convenient and tractable in most cases. However, we must be alert for limiting cases where our models are bound to fail, and we will then need to apply different methods.”

*X could represent huge, small, powerful, etc.

I had a discussion about this with a friend, Adam Karlovsky, and I was surprised when this article came up on my radar – it’s an interesting read. We discussed the possibility that infinite randomness would produce an infinite number of copies of Adam – doing an infinite number of things. He said that at one stage this thought kept him up at night. I have had my doubts about the realism of infinity.
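The intuition behind that conversation can be stated precisely: if each of infinitely many independent regions contains a particular configuration with some fixed probability p > 0, then (by the second Borel–Cantelli lemma) infinitely many regions contain it, almost surely. A minimal simulation with a hypothetical p and a finite sample standing in for the infinite case:

```python
import random

random.seed(0)

p = 0.001            # hypothetical per-region probability of a "copy"
n_regions = 1_000_000  # finite stand-in for infinitely many regions

# Count how many simulated regions happen to contain the configuration.
copies = sum(random.random() < p for _ in range(n_regions))
print(copies)  # expected value is n_regions * p = 1000
```

However many regions we sample, the count keeps growing roughly linearly with n_regions – the simulation suggests, but of course cannot prove, the almost-sure infinite recurrence that only holds in the genuinely infinite case.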

So what do you think?


## Should We Fear or Welcome the Singularity? Nobel Week Dialogue 2015 – The Future of Intelligence

Should science and society welcome ‘the singularity’ – the hypothetical moment when artificial intelligence surpasses human intelligence?
The discussion has been growing for decades, institutes dedicated to solving AI friendliness have popped up, and more recently the ideas have found popular advocates. Superintelligent machines could certainly help solve classes of problems that humans struggle with – but if not designed well, they may cause more problems than they solve.

Is the question of fear or hope in AI a false dichotomy?

Ray Kurzweil

While Kurzweil agrees that AI risks are real, he argues that we already face risks involving biotechnology – I think Kurzweil believes we can solve the biotech threat and other risks through building superintelligence.

Stuart Russell believes that a) we should be exactly sure of what we want before we let the AI genie out of the bottle, and b) it’s a technological problem, in much the same way as the containment of nuclear fusion is a technological problem.

Max Tegmark says we should both welcome and fear the Technological Singularity – we shouldn’t just bumble into it unprepared. All technologies have been double-edged swords; in the past we learned from mistakes (e.g. with out-of-control fires), but with AI we may only get one chance.

Harry Shum says we should be focusing on what we believe we can develop with AI in the next few decades. We find it difficult to talk concretely about AGI, and most of the public’s fears are around killer robots.

Maggie Boden

Maggie Boden relays an audience question: how will AI cope with our lack of development in ethical and moral norms?

Stuart Russell answers that machines will have to come to understand what human values are. If the first pseudo-general-purpose AIs don’t grasp human values well enough, one may end up cooking its owner’s cat – and that could irreparably tarnish the AI and home-robot industries.

Kurzweil adds that human society is getting more ethical – it seems that statistically we are making ethical progress.

Max Tegmark

Max Tegmark brings up that intelligence is defined by the degree of ability to achieve goals – so if we are building highly intelligent AI, we can’t ignore the question of what goals to give the system. We need AI systems to understand what humans really want, not just what they say they want.

Harry Shum says that the important ethical questions for AI systems today concern data and user privacy.

Panelists: Harry Shum (Microsoft Research EVP of Tech), Max Tegmark (Cosmologist, MIT), Stuart Russell (Prof. of Computer Science, UC Berkeley) and Ray Kurzweil (Futurist, Google Director of Engineering). Moderator: Margaret Boden (Prof. of Cognitive Science, Uni. of Sussex).

This debate is from the 2015 edition of the meeting, held in Gothenburg, Sweden on 9 December.