Some scientists, such as Stephen Hawking and Stuart Russell, believe that if advanced AI someday gains the ability to re-design itself at an ever-increasing rate, an unstoppable "intelligence explosion" could lead to human extinction. Business magnate Elon Musk characterizes AI as humanity's biggest existential threat. OpenAI's founders structured it as a non-profit free of financial stockholder obligations, so that they could focus its research on creating a positive long-term human impact.[4]
OpenAI states that "it's hard to fathom how much human-level AI could benefit society," and that it's equally difficult to comprehend "how much it could damage society if built or used incorrectly".[4] Research on safety cannot be postponed: "because of AI's surprising history, it's hard to predict when human-level AI might come within reach."[5] OpenAI states that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible..."[4] Co-chair Sam Altman expects the decades-long project to surpass human intelligence.[6]
Vishal Sikka, the CEO of Infosys, stated that an "openness" where the endeavor would "produce results generally in the greater interest of humanity" was a fundamental requirement for his support, and that OpenAI "aligns very nicely with our long-held values" and with Infosys's "endeavor to do purposeful work".[7] Cade Metz of Wired suggests that corporations such as Amazon may be motivated by a desire to use open-source software and data to level the playing field against corporations like Google and Facebook, which own enormous supplies of proprietary data. Altman states that Y Combinator companies will share their data with OpenAI.