The rapid development of artificial intelligence (AI) brings with it a range of risks of varying severity, which both companies and governments are currently trying to identify and address. Against this backdrop, OpenAI has announced the creation of a team dedicated to studying and mitigating the "catastrophic risks" associated with AI.
OpenAI said the new team will "track, assess, predict and protect" against potentially major AI-driven threats, including nuclear ones. The company explains that the team will work to mitigate "chemical, biological, radiological" and other dangers. Other risks singled out for particular attention include AI's capacity to deceive people, as well as cybersecurity threats.
The company believes that frontier AI models, which will be far more capable than today's state-of-the-art systems, can benefit all of humanity. At the same time, it acknowledges that such models may also create greater risks, which must be recognized and addressed. OpenAI notes that the team will also develop and maintain a policy describing how the company evaluates and monitors its AI models.
Notably, Sam Altman, head of OpenAI, has previously warned that AI could pose catastrophic risks. He has also suggested that governments should treat artificial intelligence as seriously as nuclear weapons.
2023-10-29 07:00:00