The CISO Brief
Insights

How to Mitigate the Hidden Risks of Generative AI at Work

By Staff Writer | July 7, 2025 | 4 min read


GenAI is here to stay. The organizations that thrive will be those that understand its risks, implement the right safeguards, and empower their employees to harness it safely and responsibly.

For many people, generative AI (GenAI) began as personal experimentation in homes and on personal devices. Now, however, AI has become deeply ingrained in workplace habits, creating productivity gains, but also exposing organizations to significant security gaps. Sensitive company data, inadvertently or otherwise, regularly finds its way into public AI systems, leaving IT and cybersecurity leaders scrambling to respond.

Once proprietary data is processed by a public AI tool, it may become part of the model’s training data, serving other users down the line. For example, in March 2023, a multinational electronics manufacturer was reported to have experienced several incidents of employees entering confidential data, including product source code, into ChatGPT. Generative AI applications, such as large language models, are designed to learn from interactions. No company wants to train public AI apps with proprietary data.

Faced with the risk of losing trade secrets or other valuable data, many organizations defaulted to blocking access to GenAI applications outright. Blocking appears to stem the flow of sensitive information into unsanctioned platforms, but in practice it proves ineffective and simply drives risky behavior underground, creating a growing blind spot known as “Shadow AI.” Employees find workarounds: using personal devices, emailing data to private accounts, or even taking screenshots to upload outside of monitored systems.

Worse, by blocking access, IT and security leaders lose visibility into what is really happening without actually managing the underlying data security and privacy risks. The move also stifles innovation and the productivity gains AI can deliver.

A strategic approach to tackling AI risks

Effective mitigation of the risks posed by employee use of AI requires a multifaceted approach focused on visibility, governance, and employee enablement.

The first step is obtaining a complete picture of how AI tools are being used across your organization. Visibility enables IT leaders to identify patterns of employee activity, flag risky behaviors (such as attempts to upload sensitive data), and evaluate the true impact of public AI app usage. Without this foundational knowledge, governance measures are destined to fail because they won’t address the real scope of employee interactions with AI.
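As a minimal sketch of what such visibility might look like in practice, the snippet below summarizes per-user traffic to known public GenAI endpoints from a web-proxy log export. The domain list, the CSV column names (`user`, `host`), and the log format are all illustrative assumptions; a real deployment would draw on a continuously updated SaaS application catalog and the telemetry of your secure web gateway.

```python
import csv
from collections import Counter

# Illustrative list of public GenAI endpoints; a real deployment would
# pull this from an up-to-date SaaS application catalog.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def summarize_genai_usage(log_path):
    """Count requests per user to known GenAI domains, reading a proxy-log
    CSV with 'user' and 'host' columns (column names are assumptions)."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] in GENAI_DOMAINS:
                usage[row["user"]] += 1
    return usage
```

Even a rough summary like this surfaces who is using which AI apps and how often, which is the baseline any governance policy needs.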

Developing tailored policies is the next critical step. Organizations should avoid blanket bans, and instead, policies should emphasize context-aware controls. For public AI applications, you might implement browser isolation techniques that allow employees to use these apps for general tasks without being able to upload certain types of company data. Alternatively, employees can be redirected to sanctioned, enterprise-approved AI platforms that deliver comparable capabilities, ensuring productivity without exposing proprietary information. While some roles or teams may require nuanced access to specific apps, others may warrant stronger restrictions.
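A context-aware policy of this kind can be sketched as a simple decision function. The role names, app tiers, and verdicts below ("allow", "isolate", "redirect", "block") are purely illustrative assumptions, not any vendor's actual policy engine; the point is that the decision depends on who is asking, which app they are using, and what they are trying to do.

```python
# Hypothetical sanctioned enterprise AI platform (name is an assumption).
SANCTIONED_APPS = {"enterprise-ai.example.com"}

def policy_decision(role, app, is_upload):
    """Return a verdict for a user's GenAI request based on role, target
    app, and whether the action involves uploading data."""
    if app in SANCTIONED_APPS:
        return "allow"            # approved platform: full access
    if role == "engineering" and is_upload:
        return "block"            # source code must not leave the org
    if is_upload:
        return "redirect"         # steer uploads to the sanctioned platform
    return "isolate"              # browser isolation: prompting only, no uploads
```

For example, an engineer uploading a file to a public app is blocked outright, while a marketer browsing the same app is placed in an isolated session that permits prompts but not uploads.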

To prevent misuse, organizations should enforce robust data loss prevention mechanisms that identify and block attempts to share sensitive information with public or unsanctioned AI platforms. Since accidental disclosure is a leading driver of AI-related data breaches, enabling real-time DLP enforcement can be a safety net, reducing the potential for harm to the organization.
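At its simplest, such a DLP check is pattern matching against outbound prompts. The rules below (a card-number shape, an AWS-style access key prefix, a confidentiality marker) are illustrative assumptions; production DLP relies on far richer detectors such as exact-data matching, document fingerprinting, and ML classifiers.

```python
import re

# Illustrative DLP rules; real systems use much richer detectors.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def scan_prompt(text):
    """Return the names of DLP rules the prompt violates; empty = clean."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]
```

A gateway enforcing these rules inline can block or quarantine the request before the sensitive text ever reaches a public model.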

Finally, employees must be educated about the inherent risks of AI and the policies designed to mitigate them. Training should emphasize practical guidance (what can and cannot be done safely using AI) alongside clear communication about the consequences of exposing sensitive data. Awareness and accountability go hand in hand with technology-driven protections to complete your defense strategy.

Balancing innovation and security

GenAI has fundamentally changed how employees work and organizations function, offering transformative opportunities alongside notable risks. The answer isn’t to reject this technology but to embrace it responsibly. Organizations that focus on visibility, deploy thoughtful governance policies, and educate their employees can achieve a balance that fosters innovation while protecting sensitive data.

The goal shouldn’t be to choose between security and productivity; it’s to create an environment where both coexist. Organizations that successfully achieve this balance will position themselves at the forefront of a rapidly evolving digital landscape. By mitigating the risks of Shadow AI and enabling safe, productive AI adoption, enterprises can turn GenAI into an opportunity rather than a liability, future-proofing their success in the process.

To learn more, visit zscaler.com/security

This article is a contributed piece from our partner, Zscaler.
