Hi Elon,
I was interviewed for a Wired article on OpenAI, and the fact checker sent me some questions. Wanted to sync with you on two in particular to make sure they sound reasonable / aligned with what you’d say:
Would it be accurate to say that OpenAI is giving away ALL of its research?
At any given time, we will take the action that is likely to most strongly benefit the world. In the short term, we believe the best approach is giving away our research. But longer-term, this might not be the best approach: for example, it might be better not to immediately share a potentially dangerous technology. In all cases, we intend to give away all the benefits of our research, and we want those benefits to accrue to the world rather than to any one institution.
Does OpenAI believe that getting the most sophisticated AI possible in as many hands as possible is humanity’s best chance at preventing a too-smart AI in private hands that could find a way to unleash itself on the world for malicious ends?
We believe that using AI to extend individual human wills is the most promising path to ensuring AI remains beneficial. This is appealing because if there are many agents with about the same capabilities, they could keep any one bad actor in check. But I wouldn’t claim we have all the answers: instead, we’re building an organization that can both seek those answers and take the best possible action regardless of what the answer turns out to be.
Thanks!
- gdb