Microsoft, a $10 Billion-Plus Investor in OpenAI, Pitched Its AI for Military Use: Details of the Collaboration with the US Department of Defense Revealed

The revelation came a few months after OpenAI quietly lifted its ban on the use of its technologies in military operations, a change the company never announced to the media but which emerged in internal presentation materials reported by The Intercept.

Microsoft has invested more than $10 billion in OpenAI, and the two companies have become closely associated in recent coverage of generative artificial intelligence. Microsoft’s presentation materials, titled “Generative AI with DoD Data,” give a general account of how the Department of Defense could make use of machine learning tools and OpenAI technologies, including the ChatGPT bot and the DALL-E image generator, in tasks ranging from document review to help with equipment maintenance.

The Microsoft documents come from a large cache of materials presented at a US Department of Defense training seminar on “AI literacy” put on by a US Air Force unit in Los Angeles in October 2023. The event featured a range of other presentations as well.

The publicly available files appeared on the website of Alethia Labs, a nonprofit consulting firm that helps the federal government with technology, and were discovered by The Intercept reporter Jack Poulson. Alethia Labs has been working closely with the Pentagon to help it rapidly fold artificial intelligence technologies into its arsenal, and since last year the firm has been under contract with the Department’s main artificial intelligence office.

One page of Microsoft’s presentation lists several common federal uses of OpenAI technology, including military ones. A bullet point headed “Advanced Computer Vision Training” reads: “Combat Management Systems: Using DALL-E models to create images for training combat management systems.”

As the name suggests, a combat management system is a command-and-control software suite that gives military commanders an overall picture of the situation on the battlefield, allowing them to coordinate elements such as artillery fire, target identification for airstrikes, and troop movements on the ground. The reference to computer vision training suggests that images generated by DALL-E could help the Pentagon’s computers “see” the battlefield better, a particular advantage for identifying and destroying targets.

The presentation files give no further detail on exactly how DALL-E would be used in combat management systems, but training those systems could involve using DALL-E to supply the Pentagon with “synthetic training data”: imaginary, artificially generated scenes that closely resemble real ones.

For example, large numbers of fake aerial photographs of landing strips or columns of tanks produced by DALL-E could be fed to military software designed to spot enemy targets on the ground, with the aim of improving the software’s ability to recognize such targets in the real world.

In an interview last month with the Center for Strategic and International Studies, Captain M. Xavier Lugo of the US Navy described a military application for exactly the kind of synthetic data DALL-E can produce, suggesting that such fabricated images could be used to train drones to better see and recognize the world below them.

The US Air Force is currently working on an advanced combat management system as part of a larger, multibillion-dollar Department of Defense project called Joint All-Domain Command and Control (JADC2), which aims to network the entire US military: expanding communications between the branches, enabling data analysis powered by artificial intelligence, and ultimately improving warfighting capability.

Through the project, the Department envisions a future in which Air Force drone cameras, Navy warship radars, Army tanks, and troops on the ground rapidly exchange data about the enemy in order to destroy it more effectively. On April 3, US Central Command revealed that it had already begun using elements of the project in the Middle East.

Beyond the ethical objections, the effectiveness of the approach itself is in question. “It is known that a model’s accuracy and its ability to process data correctly deteriorate every time it is trained on AI-generated content,” said Heidy Khlaaf, a machine learning safety engineer who previously worked with OpenAI.

Khlaaf added that DALL-E’s images are far from accurate and do not reflect physical reality even when fine-tuned on battlefield inputs: if these models cannot even reliably generate the correct number of human limbs or fingers, she asked, how can they be trusted to be accurate about the details of a real field presence?

Microsoft said in an emailed statement that while it had pitched the US Department of Defense on using DALL-E to train its battlefield software, the work had not begun. The company continued: “This is an example of potential use cases based on conversations with customers about what generative AI can offer.”

For her part, OpenAI spokesperson Liz Bourgeois said her company had no part in Microsoft’s pitch and that it had not sold any tools or technologies to the Department of Defense. She said: “OpenAI’s policies prohibit the use of our tools to develop or use weapons, harm others, or destroy property.”

Brianna Rosen, a researcher in technology ethics at the University of Oxford, said: “It is not possible to build a combat management system in a way that does not contribute, at least indirectly, to harming civilians.” Rosen, who served on the National Security Council during President Barack Obama’s administration, explained that OpenAI’s technologies could just as easily be used to help people as to harm them, and that using them for the latter is a political choice by any government.

“Unless companies like OpenAI obtain written assurances from governments that they will not use the technology to harm civilians, assurances that would likely not be legally binding anyway, I see no way for these companies to state with confidence that the technology will not be used, or misused, in ways that have harmful effects,” Rosen said.

