The use of AI and machine learning in enterprises has increased more than 3,000 percent in the past year, according to a report by Zscaler. This explosive growth is accompanied by concerns about the impact of AI on cybersecurity: companies block roughly six out of every ten AI transactions over fears of data breaches and unauthorized access.
According to the Zscaler ThreatLabz 2025 AI Security Report, which is based on insights from more than 536 billion AI transactions, enterprise adoption of AI technologies has grown exponentially over the past year. In total, companies sent approximately 4,500 terabytes of data to AI tools.
The ChatGPT contradiction
The numbers surrounding ChatGPT are particularly striking. On the one hand, it is the most widely used AI tool, with a 47.1 percent market share; on the other, it is the application companies block most often. This contradiction is largely explained by the AI policies companies have put in place. Many organizations still lack such policies, but where they do exist, ChatGPT is often designated as a tool employees are not permitted to use. Other frequently blocked tools include Grammarly, Microsoft Copilot, QuillBot, and Wordtune.
That ChatGPT retains such a substantial market share despite being blocked so often can likely be attributed to business subscriptions. Thanks to the tool's brand recognition and the availability of business plans from OpenAI, a growing number of companies have formalized its use within their organizations, which gives them greater control over the privacy of shared data.
Keeping up
“While AI offers tremendous potential for innovation and efficiency, it also brings new risks,” Zscaler researchers write. “Companies and cybersecurity leaders must effectively navigate the rapidly evolving AI landscape to harness its revolutionary potential while mitigating risks and defending against AI-powered attacks.”
The report highlights the risks associated with agentic AI and China’s open-source DeepSeek, which enable attackers to scale their operations. While these developments are not inherently negative, they do introduce significant security concerns.
Agentic AI, for example, is a development built on trust: AI-driven platforms analyze and respond to threats based on historical data and pre-trained models, and this level of automation allows systems to handle routine tasks independently. That same trust becomes a liability, however, when the underlying models make mistakes or are manipulated. It is therefore more prudent to view AI as an additional tool for cybersecurity rather than a comprehensive solution.