Why did Sam Altman join OpenAI as CEO?

Sam Altman leaves his role as president of Y Combinator and joins OpenAI as CEO – why?

Elon Musk co-founded OpenAI to ensure that artificial intelligence, especially powerful artificial general intelligence (AGI), is “developed in a way that is safe and is beneficial to humanity.” It’s an interesting bet, because AGI doesn’t exist yet, and the tech industry’s forecasts about when AGI will be realised span a wide spectrum, from relatively soon to perhaps never.

We are trying to build safe artificial general intelligence. So it is my belief that in the next few decades, someone, some group of humans, will build a software system that is smarter and more capable than humans in every way. And so it will very quickly go from being a little bit more capable than humans, to something that is like a million, or a billion times more capable than humans… So we’re trying to figure out how to do that technically, make it safe and equitable, share the benefits of it – the decision making of it – over the world… – Sam Altman

Sam and others believe that developing AGI is a large project and won’t be cheap; it could require upwards of billions of dollars “in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers”. OpenAI was once a non-profit organisation, but it recently restructured as a for-profit with caveats. Sam tells investors that the specifics of how return on investment will work in the short term aren’t clear, though ‘Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.’

So, first create AGI, and then use it to make money… But how much money?

Profit is capped at 100x the investment – excess profit beyond that goes to the rest of the world. 100x is quite a high bar, no? The thought is that AGI could be so powerful it could…

“maybe capture the light cone of all future value in the universe, and that’s for sure not okay for one group of investors to have.”

If we take the high standards of the Future of Humanity Institute* for due diligence in pursuing safe AI, are these standards being met at OpenAI? While Sam seems to have some sympathy for the arguments for these standards, he seems to believe it’s more important to focus on the societal consequences of superintelligent AI. Perhaps convincing key players of this in the short term will help incubate an environment where it’s easier to pursue strict safety standards for AGI development.

I really do believe that the work we are doing at OpenAI will not only far eclipse the work I did at YC, but any of the work anyone in the tech industry does… – Sam Altman

See this video (from approximately the 25:30 mark onwards).


* See Nick Bostrom’s book ‘Superintelligence’
