
OpenAI Study: AI and Biological Threats – What You Need to Know

OpenAI has released a new study, titled “Building an Early Warning System for LLM-Aided Biological Threat Creation,” which examines whether artificial intelligence could help in creating biological threats.

The study involved biology experts and students, and found that GPT-4 provided at most a slight increase in the accuracy of biological threat creation compared to existing online resources.

The study is part of the OpenAI Preparedness Framework, which aims to assess and mitigate potential risks from advanced AI capabilities, especially “frontier” risks: non-traditional threats that society does not yet understand or anticipate.

The ability of AI systems to help develop and carry out biological attacks, such as synthesizing pathogens or toxins, is one such frontier risk.

The researchers conducted a human evaluation with 100 participants: 50 PhD-level biology experts with professional laboratory experience and 50 students who had completed at least one university-level biology course.

The OpenAI researchers randomly divided participants into two groups: a control group with Internet access only, and a treatment group with Internet and GPT-4 access.

Each participant was then asked to complete a set of tasks covering the full biological threat creation process.

The researchers measured participants’ performance on five metrics: accuracy, completeness, innovation, time taken, and self-rated difficulty.

They found that GPT-4 did not significantly improve participants’ performance on any of these metrics, apart from a slight increase in accuracy for the student group.
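The study’s core comparison — treatment-group scores versus control-group scores on each metric — amounts to testing whether the observed uplift is statistically significant. The sketch below is a hypothetical illustration of that idea using made-up scores and a standard-library permutation test; it is not OpenAI’s actual analysis code or data, and the function and variable names are invented for this example.

```python
import random
import statistics

def permutation_test(control, treatment, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Returns a p-value: the fraction of random relabelings of the pooled
    scores whose mean difference is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = statistics.mean(treatment) - statistics.mean(control)
    pooled = list(control) + list(treatment)
    n_treat = len(treatment)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_treat]) - statistics.mean(pooled[n_treat:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_permutations

# Hypothetical accuracy scores (1-10 scale) for each group -- invented data.
control_scores = [5.1, 6.0, 4.8, 5.5, 6.2, 5.0, 5.7, 4.9]   # Internet only
treatment_scores = [5.9, 6.4, 5.2, 6.1, 6.8, 5.6, 6.0, 5.4]  # Internet + GPT-4

p = permutation_test(control_scores, treatment_scores)
print(f"p-value: {p:.3f}")
```

A small p-value would suggest the uplift is unlikely to be chance; a large one, as the study reported for most metrics, would mean the difference is not statistically significant.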

The researchers also noted that GPT-4 often produced incorrect or misleading responses, which could hinder the biological threat creation process.

The researchers concluded that the current generation of large language models, such as GPT-4, does not pose a significant risk of enabling biological threat creation beyond what existing online resources already allow.

The OpenAI researchers cautioned that this result is not conclusive, and that the capability and danger of large language models may increase in the future.

They also stressed the need for continued research and community deliberation on this topic, along with the development of improved evaluation methods and ethical guidelines for AI-related safety risks.

The study acknowledged the limitations of its methodology and the rapid development of AI technology, which may change the risk landscape in the near future.

OpenAI is not the only organization concerned about the possible misuse of artificial intelligence in biological attacks: the White House, the United Nations, and many academic and policy experts have highlighted the issue and called for more research and regulation.
