Data privacy in the age of generative AI

March 27, 2025

Companies are rapidly adopting generative AI tools such as ChatGPT, Gemini, and Copilot to automate tasks, make quicker decisions, and gain efficiency in various areas. Their use has spread naturally to marketing, legal, and customer service departments, making these solutions part of day-to-day operations.

However, along with the gains, there is growing concern about the privacy of the data shared with these models. Protecting this information is no longer solely an IT responsibility; it has become a strategic issue for the business. Without proper governance, using AI can lead to privacy violations, regulatory problems, and serious damage to the company’s reputation. The challenge is now clear: take advantage of AI without losing the trust of those who rely on the organization’s data.

A statistic that should set off alarm bells

A recent Harmonic report highlights how easily sensitive data can be overlooked. According to the study, 8.5% of employee-entered prompts in generative AI tools contained confidential information, including customer records, credentials, financial data, contracts, and payroll details.

Most of these incidents weren’t malicious. In many cases, employees used AI to review legal documents, summarize reports, or draft communications, unaware that they were exposing protected data to external platforms. The risk is amplified when using public models, where the content submitted may be stored, analyzed, or even used to train future model versions.

This raises a red flag for any organization handling regulated data or operating under frameworks like GDPR, CCPA, or HIPAA. Privacy breaches — intentional or not — can trigger serious consequences, from fines and audits to loss of client confidence.

Practical risks for business

Uncontrolled use of AI can lead to:

  • Exposure of sensitive data in cloud-based platforms, often without encryption or access logs.
  • Lack of governance over the content shared with AI tools, making it hard to audit or contain misuse.
  • Use of company data to train third-party models, depending on the terms of service accepted by users.

IBM reinforces that these risks go beyond cybersecurity. They directly impact compliance, brand reputation, and ESG commitments, making AI governance a strategic issue that demands executive-level attention.

Data privacy is a strategic issue

Data privacy has evolved from a technical requirement to a strategic pillar. In a context where trust drives customer retention and long-term value, protecting sensitive information is not optional — it’s part of the business model.

Companies are implementing stricter controls to reduce exposure, such as blocking public AI tools, deploying private models on internal infrastructure, and building custom solutions that offer visibility and control. These decisions reflect a broader shift toward balancing innovation with accountability.
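
One way to realize the second option is to point existing client code at a model served on the company’s own infrastructure. The sketch below is a minimal illustration, assuming a self-hosted model exposed through an OpenAI-compatible endpoint (servers such as vLLM and Ollama offer this); the URL, token, and model name are placeholders, not a real configuration.

    # Minimal sketch: send prompts to a privately hosted model instead of a
    # public service. Assumes an OpenAI-compatible endpoint on internal
    # infrastructure (e.g., vLLM or Ollama); URL, token, and model name are
    # placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://llm.internal.example.com/v1",  # internal endpoint, not the public API
        api_key="internal-gateway-token",                # issued and logged by the company's gateway
    )

    response = client.chat.completions.create(
        model="company-private-model",  # placeholder model name
        messages=[{"role": "user", "content": "Summarize this contract clause: ..."}],
    )
    print(response.choices[0].message.content)

Because every request passes through infrastructure the company controls, prompts never leave its perimeter, and access logs and retention policies stay under its own governance rather than a third party’s.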

Regardless of the approach, the goal remains to protect what is critical without slowing progress.

How data culture supports secure innovation

Technology alone won’t solve this problem, since users’ lack of awareness remains one of the main points of vulnerability. That’s why more and more companies are investing in training, awareness programs, and clear AI usage policies.

Some practical actions include:

  • Avoid entering any identifiable data into public models
  • Use only tools with adequate privacy controls
  • Create specific flows for using AI with anonymized data (see the sketch after this list)
  • Ensure that suppliers respect data use and retention rules
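
For the anonymized-data flow in particular, a lightweight preprocessing step can strip identifiable data before a prompt ever leaves the company. The sketch below is a minimal illustration; the regex patterns are purely illustrative, and production flows typically rely on dedicated DLP or NER tooling for broader coverage.

    # Minimal sketch of an anonymization step run before a prompt is sent to
    # an AI tool. The patterns below are illustrative only; real deployments
    # typically use dedicated DLP or NER tooling.
    import re

    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),       # email addresses
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US SSN format
        (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),  # card-like digit runs
    ]

    def anonymize(prompt: str) -> str:
        """Replace identifiable data with neutral placeholders."""
        for pattern, placeholder in REDACTIONS:
            prompt = pattern.sub(placeholder, prompt)
        return prompt

    safe = anonymize("Contact jane.doe@acme.com about card 4111 1111 1111 1111.")
    print(safe)  # Contact [EMAIL] about card [CARD_NUMBER].

A step like this keeps the useful structure of the prompt while removing the fields that would turn an AI query into a privacy incident.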

These practices reduce risks and show maturity in how the organization deals with innovation and governance.

Moving forward with AI without giving up responsibility

As generative AI becomes more integrated into business operations, moving forward with a clear understanding of risk, value, and responsibility is essential. Key questions can help guide strategic decisions:

  • Which areas are using AI, and for what purposes?
  • Is there transparency around the type of data being entered into these models?
  • Does the organization control how this data is stored, accessed, and used?
  • Are there clear internal policies on the safe use of AI?
  • Is there a defined response plan for data exposure incidents?

Asking these questions helps align technology, compliance, and business objectives. It ensures that AI supports the organization in a sustainable, secure, and legally compliant way — with data privacy at the foundation of trust.

A responsible path to innovation

At Luby, we combine expertise in data, engineering, and AI to help companies adopt emerging technologies with security and control. We work closely with our partners to implement solutions that prioritize data privacy from day one, aligning performance, regulation, and long-term value.

Talk to our team to explore how to apply generative AI responsibly without losing speed or compromising what matters most.
