
How companies prevent shadow AI

The benefits of AI are tempting, but when employees use AI tools without oversight, problems can quickly arise for companies. Chris Noon, Director of Commercial Intelligence and Analytics at Dropbox, explains how companies can prevent internal data from leaking through shadow AI and instead ensure that the entire workforce benefits from the new technology.

71% of employees were already using AI tools before their managers knew about it. But a fragmented, individual approach to introducing generative AI into existing processes divides the company: on one side are the employees who use the latest technology without communicating with their colleagues or managers; on the other are the hesitant non-users who lag behind their supposedly more advanced colleagues.

Without a holistic strategy and training in how to use new AI tools, employees who experiment with the technology on their own can quickly become frustrated because they fail to gain useful insights or accurate results. These initial negative experiences also carry a significant risk that employees will abandon AI tools altogether.

Unauthorized use endangers sensitive data

A recent study by Veritas found that 31% of respondents admitted to sharing potentially sensitive information with generative AI tools. Business accounts for AI products typically include agreements ensuring that corporate data is not used to train AI models. Personal accounts, however, which are often used in shadow AI, typically lack these agreements. This means that any corporate data shared through a personal account could inadvertently be used to train the AI model.

Securing company data should therefore always be a top priority. Moreover, serious consequences can arise when employees use these powerful tools without guidance or judgment. AI tools are still prone to erroneous or inaccurate results and even “hallucinations.” Relying on such output without questioning it can lead to wrong decisions and potential legal or financial consequences for the company.

AI strategy that sets rules but also invites experimentation

To address these challenges, organizations should pursue a coordinated AI strategy. It is important that IT teams identify trusted vendors and agree on clear terms for handling sensitive data. Working with vendors that have solid AI principles, including rules for data security and data breach prevention, will minimize cyber risks and legal liabilities. For organizations with sufficient resources, developing a customized AI solution by leveraging existing large language models is also a viable option. This creates powerful AI that fits seamlessly into the organization’s data ecosystem and processes, increasing productivity and freeing up time for strategic tasks.

To get the most out of their AI investments, companies should also develop a comprehensive program that continuously educates employees on best practices for integrating AI into their daily work. This will ensure that all employees can reap the benefits of AI technology. On every team there is an “early tech adopter” whose curiosity and passion put them ahead of colleagues who are more hesitant to experiment. Such employees can work with their IT teams to become AI champions within the company, sharing learnings, best practices, and insights with colleagues and fostering a collaborative learning environment.

Combining ethics and innovation

Within the confines of the company’s AI strategy, automating routine tasks can help employees improve their performance and free up time to focus on the work that brings the most value to the company. However, it is important to remember that AI should not be used as a replacement for human intelligence. AI can now automate numerous tasks and generate large amounts of content in seconds, but employees still need to apply their own critical thinking. If they have not actually read the text generated by the AI, or really thought through the problem they are trying to solve, they will only face bigger problems down the line. That is why, despite all the AI euphoria, companies need to keep thinking about the long-term ethical and social impact of AI on the workforce, while also ensuring that AI complements human skills in a balanced way.
