TELUS Digital Survey Reveals Enterprise Employees Are Entering Sensitive Data Into AI Assistants More Than You Think

TELUS Digital survey reveals security gaps in employee use of generative AI in enterprises (Graphic: Business Wire)

57% of enterprise employees who use generative AI at work admit to entering sensitive information into publicly available GenAI assistants, exposing critical security gaps in enterprise AI usage

Findings reveal a critical need for enterprise AI solutions that prioritize security, data sovereignty, and compliance to mitigate shadow AI risks

VANCOUVER, British Columbia, February 26, 2025--(BUSINESS WIRE)--Nearly seven out of 10 (68%) enterprise employees who use generative AI (GenAI) at work say they access publicly available GenAI assistants such as ChatGPT, Microsoft Copilot or Google Gemini through personal accounts, and more than half (57%) have admitted to entering sensitive information into them. The findings come from a new survey by TELUS Digital Experience (TELUS Digital) (NYSE and TSX: TIXT), a global technology company whose proprietary GenAI platform, Fuel iX™, is built with data sovereignty at its core—allowing organizations to give employees access to GenAI while keeping company data safe. Fuel iX provides enterprises with the flexibility, control, and compliance safeguards needed to integrate AI securely and responsibly.

Many employees who bring their own AI (BYOAI) to work are inputting confidential information into public GenAI assistants, creating potential security and compliance risks. The widespread use of public GenAI tools is fueling the rise of "shadow AI," unsanctioned AI use that keeps these risks hidden from IT and security managers.

Surveyed employees admitted to entering the following types of information into publicly available GenAI assistants:

  • Personal data, such as names, addresses, emails and phone numbers (31%).

  • Product or project details, including unreleased information and prototypes (29%).

  • Customer information, including names, contact details, order history, chat logs, emails, or recorded calls (21%).

  • Confidential company financial information, such as revenue, profit margins, budgets, or forecasts (11%).

This happens even though nearly a third (29%) of employees acknowledge their companies have policies prohibiting them from inputting company, client or other sensitive information into GenAI assistants.

Despite the risks, many surveyed employees indicated that their companies are falling short in providing the information and training needed to use GenAI safely:

  • Only 24% of employees said their company requires mandatory AI assistant training.

  • 44% said their company does not have AI guidelines or policies in place, or they don’t know if their company does.

  • 50% said they are not sure if they're adhering to their company’s AI guidelines.

  • 42% said there are no repercussions for not following their company’s AI guidelines.