
Minister: Apple’s ChatGPT integration could pose security risk

Apple’s plans to integrate ChatGPT into its own operating system could pose a security risk, the Minister of Economic Affairs, Beljaarts, said in response to questions from the PVV. The party had submitted parliamentary questions after Elon Musk announced that he would ban Apple devices from all his companies if Apple integrates ChatGPT at the OS level.

“Do you share Mr Musk’s view that the integration of OpenAI at the OS level of Apple products, as announced by Apple at the WWDC (Worldwide Developers Conference), could pose a potential security risk to these devices? If not, why not?”, PVV MP Valize wanted to know. “Integrating third-party products into Apple products, as well as into products from similar parties, is a common process. This integration process can entail security risks, and potential vulnerabilities generally increase as the integration becomes more in-depth,” the minister responded.

According to Beljaarts, these risks are heightened further in the specific case of AI integration within an operating system, such as the integration of OpenAI at the OS level. “This form of integration requires access to sensitive data and core components of the OS, which goes beyond conventional integrations. Apple has indicated in public communications that data is exchanged with external parties in this process.” The tech company has stated that it will guarantee privacy and security, but technical details are not yet known, the minister adds.

Valize also asked whether the potential security risks are undesirable for the government. “In general, security risks arising from the integration of third-party products into an operating system can be undesirable for the government. For example, the risk of improper sharing and processing of smartphone users’ data is undesirable. At the same time, measures are possible to mitigate these risks,” the minister replied.

Beljaarts states that for the use of generative AI applications, such as Apple’s, the provisional government position on the use of generative AI by government organizations applies. “This states that before an AI application is deployed, it must undergo a Human Rights and Algorithms Impact Assessment (IAMA) and a Data Protection Impact Assessment (DPIA).” These assessments are then submitted to the Chief Information Officer (CIO) and the Data Protection Officer (FG). “Through these measures, potential security risks are identified in time and can be addressed effectively,” the minister explains.
