
Three questions and answers: Shadow AI has arrived in companies

Shadow IT is considered a major problem in many companies: employees use tools such as Google Drive, OneNote, or Slack without the approval of management or the IT department. Now a new phenomenon is emerging, shadow AI, bringing risks for compliance, data protection, and copyright.


Especially in Germany, many companies are critical of AI assistants for developers, as a recent study has shown. This increases the risk that employees will resort to shadow AI on their own. We spoke to Peter Guagenti, President and Chief Marketing Officer of the AI coding assistant Tabnine, about the situation in German companies.

(Image: Peter Guagenti)

Peter Guagenti is an experienced business strategist and entrepreneur with a proven track record in strategy, product development, and driving the growth of technology startups through to exit. With a strong commitment to privacy and ethical standards, he has successfully guided companies through AI adoption while ensuring security, privacy, and compliance.

How widespread is the phenomenon of shadow AI in German development departments? Does it play a role?

Shadow AI probably plays a role in almost every corporate development department today. The tools are advancing so quickly and so forcefully that they have usually found their way in before companies have even finished researching them. For example, since the beginning of this year, more than ten percent of new users of our product have come from Germany. That puts Germany in second place worldwide, behind the USA. Many are most likely using our AI code assistant for company projects without their employer having issued a corresponding policy.

We saw this kind of unsanctioned adoption of products and tools long before shadow AI. If companies refuse to adopt the new technologies their employees want, employees will find a way to gain access themselves. This is what happened with the advent of the internet, open source, mobile devices, the cloud, and other innovations. However, AI tools amplify some of the risks we’ve seen with other technologies: loss of privacy, inadvertent leakage of confidential company data, potential misuse of intellectual property, copyright infringement, and inaccuracies in the tools’ output. With this in mind, shadow AI is far more challenging than shadow IT ever was.

By the way, this is not a purely German phenomenon. It can be observed in many countries around the world. Its actual prevalence is largely uncertain.

Can companies even tell if their developers are using unauthorized AI tools?

There are tools that can help companies detect the use of unauthorized technologies, but they provide only a small glimpse into what is happening. And even when they work, detection comes after the fact: sensitive information may already have been disclosed in an unwanted way.
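To illustrate both the approach and its limits, here is a minimal sketch of one such detection check, assuming an inventory script may read a developer's local VS Code extensions directory. The watchlist of extension IDs is illustrative, not exhaustive, and this covers only a single editor and a single vector, which is exactly why such tooling offers only a glimpse.

```python
# Minimal sketch: scan the local VS Code extensions directory for known
# AI assistant extensions. The watchlist below is an illustrative example
# and would need to be maintained over time; it is not exhaustive.
from pathlib import Path

# Hypothetical watchlist of AI coding assistants ("publisher.name", lowercase).
AI_EXTENSION_WATCHLIST = {
    "github.copilot",
    "tabnine.tabnine-vscode",
    "codeium.codeium",
}


def find_ai_extensions(extensions_dir: Path = Path.home() / ".vscode" / "extensions"):
    """Return installed extension folders whose ID matches the watchlist.

    VS Code stores extensions as folders named "<publisher>.<name>-<version>".
    """
    hits = []
    if not extensions_dir.is_dir():
        return hits
    for entry in extensions_dir.iterdir():
        if not entry.is_dir():
            continue
        # Strip the trailing "-<version>" to recover "<publisher>.<name>".
        ext_id = entry.name.rsplit("-", 1)[0].lower()
        if ext_id in AI_EXTENSION_WATCHLIST:
            hits.append(entry.name)
    return hits


if __name__ == "__main__":
    for name in find_ai_extensions():
        print(f"AI assistant extension found: {name}")
```

Even a fleet-wide rollout of a check like this sees nothing of browser-based chatbots or tools on private devices, which is Guagenti's point about detection being a weak long-term strategy.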

Communication and education are far better long-term strategies to keep unauthorized AI products and tools out of the enterprise. One important step is to involve developers from the beginning. This includes: identifying tools, evaluating them, and actually purchasing them – all in line with the enterprise’s data security, privacy, and compliance requirements.

As an employer, embrace AI and find out what problems your developers and administrators are trying to solve with it. What features and tools are they missing? Then offer solutions to those problems! The educational task is to better understand the AI tools themselves. This includes questions like: Where is your internal data stored and shared? How do you make sure the answers are good enough? What do you know about how the AI system was trained? Are you sure you are not violating data protection regulations?

In short, accept that shadow AI is already in your organization. It is very unlikely that you can stop its adoption. So now is the time to take control and displace shadow AI with better, approved AI products, communication, and education.

In our opinion, the AI boom can hardly be stopped. How should companies that remain skeptical of AI tools best respond?

All CIOs and CEOs are now called to action: develop your own standards and requirements for where and how you want to use AI tools, and for which data may and may not be shared. In particular, be prepared to restrict the use of AI providers that do not meet your standards. To do that, however, you must offer your employees a suitable alternative.
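As an editorial illustration of what such standards can look like once made machine-checkable, here is a minimal sketch of an allowlist policy in Python, for instance as the basis of a procurement review or an egress-proxy rule. The provider names, the `AIProviderPolicy` fields, and the data classes are hypothetical examples, not any real product's API.

```python
# Minimal sketch of an AI usage policy expressed as code: an allowlist of
# vetted providers plus per-provider rules about what data may be shared.
# All vendor entries and rule fields here are hypothetical examples.
from dataclasses import dataclass


@dataclass(frozen=True)
class AIProviderPolicy:
    name: str
    approved: bool           # passed the company's security/compliance review
    allow_source_code: bool  # may proprietary source code be sent to it?
    allow_customer_data: bool


# Example policy table a CIO's office might maintain.
POLICIES = {
    "vetted-assistant": AIProviderPolicy("vetted-assistant", True, True, False),
    "public-chatbot":   AIProviderPolicy("public-chatbot", False, False, False),
}


def is_request_allowed(provider: str, data_class: str) -> bool:
    """Check a planned AI request against the policy table.

    data_class is one of "source_code", "customer_data", or "other".
    Unknown providers are denied by default.
    """
    policy = POLICIES.get(provider)
    if policy is None or not policy.approved:
        return False
    if data_class == "source_code":
        return policy.allow_source_code
    if data_class == "customer_data":
        return policy.allow_customer_data
    return True


# Usage: is_request_allowed("vetted-assistant", "source_code") returns True,
# while anything sent to "public-chatbot" is blocked entirely.
```

The deny-by-default stance for unknown providers mirrors Guagenti's advice: restrict what fails your standards, but pair the restriction with an approved alternative.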

Companies should know the data on which generative AI models are trained and whether that data is acceptable for their use cases. They also need to ensure that the AI technologies they choose include clear and appropriately enforced terms of use and meet their company's specific privacy expectations.

On the other hand, developers must maintain responsible control over the tools they use. This is especially true when the technology landscape is evolving rapidly – as in the field of AI. This includes: avoiding vendor lock-in, demanding transparency and maintaining flexibility. These are important steps to future-proof IT and AI investments.

Mr. Guagenti, thank you very much for the answers!

In the series "Three Questions and Answers", iX aims to get to the heart of today's IT challenges, whether from the perspective of the user in front of the PC, the manager, or the administrator in their everyday work. Do you have suggestions from your daily practice or that of your users? Whose tips on which topics would you like to read in short, concise form? Then feel free to write to us or leave a comment in the forum.

(who)
