Sam Altman leaves his role as president of Y Combinator and joins OpenAI as CEO – why?
Elon Musk co-founded OpenAI to ensure that artificial intelligence, especially powerful artificial general intelligence (AGI), is “developed in a way that is safe and is beneficial to humanity” – it’s an interesting bet, because AGI doesn’t exist yet, and the tech industry’s forecasts about when AGI will be realised span a wide spectrum, from relatively soon to perhaps never.
Sam and others believe that developing AGI is a large project that won’t be cheap – it could require upwards of billions of dollars “in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers”. OpenAI was once a non-profit organisation, but it recently restructured as a for-profit with caveats. Sam tells investors that the specifics of how return on investment will work in the short term aren’t clear, though: ‘Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.’
So, first create AGI and then use it to make money… But how much money?
Profit is capped at 100x investment – any excess profit goes to the rest of the world. 100x is quite a high bar, no? The thought is that AGI could be so powerful it could…
“maybe capture the light cone of all future value in the universe, and that’s for sure not okay for one group of investors to have.”
If we take the high standards of the Future of Humanity Institute* for due diligence in pursuing safe AI – are these standards being met at OpenAI? While Sam seems to have some sympathy for the arguments behind these standards, he appears to believe it’s more important to focus on the societal consequences of superintelligent AI. Perhaps convincing key players of this in the short term will help incubate an environment where it’s easier to pursue strict safety standards for AGI development.
See this video (from approximately the 25:30 mark onwards).
* See Nick Bostrom’s book ‘Superintelligence’