A recent study reveals that half of the workforce uses Shadow AI tools.
- These tools are not authorised by their companies, raising security concerns.
- Despite potential bans, 46% of employees would continue using personal AI tools.
- AI usage is expected to reach 90% of workers soon, increasing risk exposure.
- Current business strategies may not adequately address these rapidly evolving challenges.
A recent study by Software AG finds that 50% of employees use tools classified as 'Shadow AI': AI tools that are not officially sanctioned or issued by their organisations. Despite their unofficial status, these tools are clearly valued, as nearly half of the workforce said they would continue using them even if their employers explicitly banned them.
Steve Ponting, Director at Software AG, commented on the evolving role of AI in the workplace: "If 2023 was a year of experimentation, 2024 will be defined as the year that GenAI took hold." His remarks underscore the growing importance of AI tools, which are expected to be used by 90% of the workforce in the near future. The primary drivers of adoption are time savings, easier task execution, and productivity gains.
However, the spread of 'Shadow AI' raises critical concerns about cybersecurity, data leakage, and regulatory compliance. Although employee awareness of these risks is high (72% acknowledge cybersecurity threats and 70% recognise data governance challenges), relatively few take the necessary precautions: only 27% run security scans, and just 29% review data usage policies.
The study also points to a gap between the tools IT departments provide and those employees feel they need: 53% of employees prefer personal AI tools for the independence they offer, while 33% cite inadequate provisioning by their IT teams as a key reason for their choice. This suggests companies may need to reassess their AI tool offerings and align them more closely with employee needs.
Moreover, frequent AI users reportedly manage risk better than their less experienced counterparts. J-M Erlendson, Global Evangelist at Software AG, argues that organisations should implement more comprehensive training programmes to close this gap, stressing that the coming era of widespread AI use demands robust preparation to prevent operational chaos, particularly among less adept users.
Addressing the rapid spread of Shadow AI requires immediate, well-structured strategies to mitigate significant business risks.
