
Why did Sam Altman join OpenAI as CEO?

Sam Altman leaves his role as president at Y Combinator and joins OpenAI as CEO – why?

Elon Musk co-founded OpenAI to ensure that artificial intelligence, especially powerful artificial general intelligence (AGI), is “developed in a way that is safe and is beneficial to humanity”. It’s an interesting bet, because AGI doesn’t exist yet, and the tech industry’s forecasts about when AGI will be realised span a wide spectrum, from relatively soon to perhaps never.

We are trying to build safe artificial general intelligence. So it is my belief that in the next few decades, someone – some group of humans – will build a software system that is smarter and more capable than humans in every way. And so it will very quickly go from being a little bit more capable than humans, to something that is like a million, or a billion times more capable than humans… So we’re trying to figure out how to do that technically, make it safe and equitable, share the benefits of it – the decision making of it – over the world… – Sam Altman

Sam and others believe that developing AGI is a large project and won’t be cheap – it could require upwards of billions of dollars “in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers”. OpenAI was once a non-profit org, but it recently restructured as a for-profit with caveats. Sam tells investors that the specifics of how return on investment will work in the short term aren’t clear, though: ‘Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.’

So, first create AGI and then use it to make money… But how much money?

Profit is capped at 100x the investment – excess profit beyond that goes to the rest of the world. 100x is quite a high bar, no? The thought is that AGI could be so powerful it could…

“maybe capture the light cone of all future value in the universe, and that’s for sure not okay for one group of investors to have.”
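To make the cap concrete, here is a minimal sketch of the arithmetic, assuming a simple ‘investors are paid first, up to the cap’ rule. The function name and the dollar figures are hypothetical illustrations – the actual OpenAI LP structure has far more detail than this.

```python
# Minimal sketch of the "capped profit" arithmetic with hypothetical numbers.
# The 100x multiple comes from OpenAI's public statements; everything else
# here is an illustrative assumption, not the real OpenAI LP terms.

def split_returns(invested: float, total_returns: float, cap_multiple: float = 100.0):
    """Split total returns between investors (up to the cap) and the non-profit."""
    cap = invested * cap_multiple                  # maximum the investors may receive
    to_investors = min(total_returns, cap)         # investors are paid first, up to the cap
    to_nonprofit = max(total_returns - cap, 0.0)   # everything above the cap flows onward
    return to_investors, to_nonprofit

# Hypothetical example: a $10M investment and $5B of eventual returns.
investors, nonprofit = split_returns(10e6, 5e9)
print(f"Investors:  ${investors:,.0f}")   # $1,000,000,000 (the 100x cap)
print(f"Non-profit: ${nonprofit:,.0f}")   # $4,000,000,000 (the excess above the cap)
```

Even under this toy model, the cap only binds once returns exceed 100x – which is the point of the ‘light cone of all future value’ quote above: only an outcome that extreme would push meaningful value past the investors.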

If we take the high standards of the Future of Humanity Institute* for due diligence in pursuing safe AI – are these standards being met at OpenAI? While Sam seems to have some sympathy for the arguments for these standards, he seems to believe it’s more important to focus on the societal consequences of superintelligent AI. Perhaps convincing key players of this in the short term will help incubate an environment where it’s easier to pursue strict safety standards for AGI development.

I really do believe that the work we are doing at OpenAI will not only far eclipse the work I did at YC, but any of the work anyone in the tech industry does… – Sam Altman

See this video (from approximately the 25:30 mark onwards).

 

* See Nick Bostrom’s book ‘Superintelligence’.

Elon Musk on the Future of AI

Elon Musk discusses what he sees as the best of the available alternative AI futures: one in which advanced AI technology is democratized, so that no one company has complete control over it – it could become a very unstable situation if powerful AI is concentrated in the hands of a few.
Elon also discusses improving the neural link between humans and AI – because human bandwidth is so slow – and believes that merging with AI would solve the AI control problem.
OpenAI seems to have a good team, and as a 501(c)(3) non-profit it (unlike many non-profits) does have a sense of urgency about increasing the odds of a friendly AI outcome.

Transcript of the section of the interview where Elon Musk discusses Artificial Intelligence:
Interviewer: Speaking of really important problems, AI. You have been outspoken about AI. Could you talk about what you think the positive future for AI looks like and how we get there?
Elon: Okay, I mean I do want to emphasize that this is not really something that I advocate or this is not prescriptive. This is simply, hopefully, predictive. Because you will hear some say, well, like this is something that I want to occur instead of this is something I think that probably is the best of the available alternatives. The best of the available alternatives that I can come up with, and maybe someone else can come up with a better approach or better outcome, is that we achieve democratization of AI technology. Meaning that no one company or small set of individuals has control over advanced AI technology. I think that’s very dangerous. It could also get stolen by somebody bad, like some evil dictator or country could send their intelligence agency to go steal it and gain control. It just becomes a very unstable situation, I think, if you’ve got any incredibly powerful AI. You just don’t know who’s going to control that. So it’s not that I think that the risk is that the AI would develop a will of its own right off the bat. I think the concern is that someone may use it in a way that is bad. Or even if they weren’t going to use it in a way that’s bad but somebody could take it from them and use it in a way that’s bad, that, I think, is quite a big danger. So I think we must have democratization of AI technology to make it widely available. And that’s the reason that obviously you, me, and the rest of the team created OpenAI was to help spread out AI technology so it doesn’t get concentrated in the hands of a few. But then, of course, that needs to be combined with solving the high-bandwidth interface to the cortex.
Interviewer: Humans are so slow.
Elon: Humans are so slow. Yes, exactly. But we already have a situation in our brain where we’ve got the cortex and the limbic system… The limbic system is kind of a…I mean, that’s the primitive brain. That’s kind of like your instincts and whatnot. And the cortex is the thinking upper part of the brain. Those two seem to work together quite well. Occasionally, your cortex and limbic system will disagree, but they…
Interviewer: It generally works pretty well.
Elon: Generally works pretty well, and it’s like rare to find someone who…I’ve not found someone who wishes to either get rid of the cortex or get rid of the limbic system.
Interviewer: Very true.
Elon: Yeah, that’s unusual. So I think if we can effectively merge with AI by improving the neural link between your cortex and your digital extension of yourself, which already, like I said, already exists, just has a bandwidth issue. And then effectively you become an AI-human symbiote. And if that then is widespread, where anyone who wants it can have it, then we solve the control problem as well; we don’t have to worry about some evil dictator AI because we are the AI collectively. That seems like the best outcome I can think of.
Interviewer: So, you’ve seen other companies in their early days that start small and get really successful. I hope I never get asked this on camera, but how do you think OpenAI is going as a six-month-old company?
Elon: I think it’s going pretty well. I think we’ve got a really talented group at OpenAI.
Interviewer: Seems like it.
Elon: Yeah, a really talented team and they’re working hard. OpenAI is structured as a 501(c)(3) non-profit. But many non-profits do not have a sense of urgency. It’s fine, they don’t have to have a sense of urgency, but OpenAI does because I think people really believe in the mission. I think it’s important. And it’s about minimizing the risk of existential harm in the future. And so I think it’s going well. I’m pretty impressed with what people are doing and the talent level. And obviously, we’re always looking for great people to join in the mission.

The full interview is available in video/audio and text format at Y Combinator as part of the How to Build the Future series: https://www.ycombinator.com/future/elon/
