Sam Altman, chief executive of the American company OpenAI, developer of the artificial intelligence chatbot ChatGPT, believes that content created with artificial intelligence technologies will in the future outnumber content produced by humans.
Speaking at a conference organized by the American chip giant Intel, Altman said that societies must prepare to face the negative consequences of the spread of artificial intelligence before they occur.
Altman expects the negative effects of artificial intelligence on elections to become apparent in the near future. His statements come as fabricated video footage, produced with so-called deepfake technology, has already appeared ahead of the Indian elections due to be held later this year.
He added that the risks associated with the use of artificial intelligence in cyber warfare and biological weapons must be taken seriously.
He said that artificial intelligence will not be an entirely rosy story, but that it will be one whose advantages outweigh its disadvantages.
With the help of artificial intelligence, it is possible to accelerate the pace of scientific research, cure diseases, and improve education, according to Altman, who added that it is difficult now to imagine the scale of improvement the future could bring thanks to the new technology.
At the same time, he said, the state must play a strong regulatory role in new technologies, especially now, while artificial intelligence models are still relatively weak.
Altman said that a company like OpenAI should not secretly develop artificial intelligence that is smarter than humans and then release it to the world. Society and its institutions must have time to gradually adapt to developments.
It is noteworthy that ChatGPT, which was launched in late 2022 and sparked strong enthusiasm about artificial intelligence, is trained on huge amounts of data and can write texts at a linguistic level close to human writing, as well as write program code and summarize information.