OpenAI, which set off the artificial intelligence (AI) craze late last year, has announced that it is forming a dedicated team to manage the risks of superintelligent AI. Such a model would compete directly with humans and would quite likely far surpass human performance in many areas. OpenAI describes superintelligent AI as smarter than even the most gifted humans and capable of excelling in several domains at once. The company believes such a model could arrive before 2030.
OpenAI explains that “superintelligence will be the most powerful technology ever invented by humanity, and could help solve many of the world’s most pressing problems.” At the same time, the company warns that such technology, if not properly controlled, could become dangerous and could even lead to the extinction of humanity. That is why OpenAI is now assembling a team of specialists tasked with managing the risks posed by superintelligent AI.
The company has also announced that it will devote 20% of its currently available computing power to developing an “automated alignment researcher”. Such a system would, in theory, help OpenAI ensure that a superintelligence is safe to use and aligned with human values.
This OpenAI announcement comes at a time when governments around the world are considering how to regulate the AI industry. The company’s CEO, Sam Altman, has said that AI regulation is “vital” and that OpenAI wants to work with policymakers on the issue.
2023-07-07 06:30:00