Securing Public GenAI Apps and LLMs in Your Organisation

GenAI and LLMs are powerful tools, but they come with new risks that classic security tools can’t handle.


Public generative AI applications (GenAI apps) like ChatGPT, Microsoft Copilot, and Google Gemini are now widely used by employees in many organisations. Almost 70% of enterprise workers use these tools to save time, improve productivity, and boost creativity.

But while GenAI apps bring many benefits, they also introduce new security and compliance risks. These tools work differently from traditional software: their answers can be unpredictable, they may store or process data in ways that are hard to trace, and they sometimes invent information outright. Providers' data protection policies are often unclear or incomplete, especially regarding how user data is handled.

In this blog post, we explain why traditional security methods are not enough for GenAI and large language models (LLMs), and how to improve security when using them inside an organisation.

 

Why Classic Security Doesn’t Work for GenAI

GenAI tools are a new type of software. Users can ask them anything, and the tools can give highly varied answers. This flexibility creates several security challenges:

1. Lack of visibility

Most security tools cannot see how GenAI is used. They only allow or block websites without understanding the content. For example, network filters may not see if someone is sending personal data or company secrets into an AI chatbot.

2. User-driven data exposure

Unlike standard software, GenAI tools accept almost any input—such as internal documents, personal data, or source code. Traditional data loss prevention (DLP) systems often miss this, especially when users copy-paste information into a web chat.

3. Incorrect or misleading outputs

LLMs sometimes generate wrong or misleading content. This creates a risk that employees may trust and reuse information that is factually incorrect or even harmful.

4. No protection from AI-specific threats

Users or attackers can craft prompts that trick AI systems into giving unsafe responses (so-called prompt injection or jailbreaking). Traditional antivirus or firewall tools don't detect this kind of attack.

5. Weak identity and access controls

Employees may use public GenAI apps with personal accounts. In those cases, company data may not be protected under corporate agreements. This risk is not covered by standard identity or access policies.

6. User friction

Security tools that are too strict can discourage AI use altogether—or worse, cause employees to bypass controls by using personal devices or accounts.

 

Best Practices for Securing GenAI in the Enterprise

To use GenAI apps safely in your organisation, your security and compliance teams should go beyond traditional tools. Here’s how:

1. Set clear policies, training, and governance

Create clear rules for how AI can be used:

  • What tools are allowed?
  • What data must not be shared?
  • What types of outputs must be checked by experts?
  • Who is responsible for monitoring and enforcement?

Make sure these policies are aligned with EU regulations like GDPR, NIS 2, DORA, and the AI Act. Provide regular training to staff and set up an internal AI governance group to keep policies up to date.

Also, check if your GenAI provider offers business controls (such as a “no training” guarantee for enterprise data). Add this to your policies and help users understand how to activate these features.

2. Gain visibility into GenAI usage

You need to understand how GenAI is being used:

  • Are users entering personal data or sensitive company information?
  • Which tools are popular?
  • What kind of content is being created?

Use monitoring tools that go beyond simple website logs and can analyse actual prompts and replies. This helps you identify both risks and value-creating use cases.
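
As an illustration only, the sketch below assumes a JSON-lines export from such a monitoring tool (the file name genai_prompts.jsonl and the user/app/prompt fields are hypothetical) and shows how it could be summarised to see which tools are popular and how often sensitive-looking content appears:

    import json
    import re
    from collections import Counter

    # Hypothetical patterns for "sensitive-looking" prompts; tune to your own data.
    SENSITIVE = re.compile(
        r"\b(confidential|internal only|iban|password)\b|"
        r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}",   # e-mail addresses
        re.IGNORECASE,
    )

    def summarise(log_path: str) -> None:
        """Count GenAI usage per app and flag prompts that look sensitive."""
        per_app = Counter()
        flagged = Counter()
        with open(log_path, encoding="utf-8") as fh:
            for line in fh:
                event = json.loads(line)   # {"user": ..., "app": ..., "prompt": ...}
                per_app[event["app"]] += 1
                if SENSITIVE.search(event.get("prompt", "")):
                    flagged[event["app"]] += 1
        for app, total in per_app.most_common():
            print(f"{app}: {total} prompts, {flagged[app]} flagged as potentially sensitive")

    summarise("genai_prompts.jsonl")  # hypothetical log file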

3. Protect personal and sensitive data

Set up automatic checks for risky prompts, such as when someone pastes contracts, personal data, or source code into a chat. Use your existing data classification tools or dictionaries to help detect this.

A good approach is to show a pop-up that warns users when they try to enter sensitive information. AI tools should also block dangerous or manipulative prompts, like those that try to bypass safety rules.
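
A minimal sketch of such a prompt check, assuming a small illustrative dictionary of patterns (a real deployment would plug in the organisation's own classification dictionaries and DLP rules instead):

    import re

    # Illustrative detectors; replace with your organisation's own dictionaries.
    DETECTORS = {
        "e-mail address": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
        "IBAN-like number": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
        "source code": re.compile(r"\b(def |class )|#include"),
        "jailbreak attempt": re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    }

    def check_prompt(prompt: str) -> tuple[str, list[str]]:
        """Return ('allow' | 'warn' | 'block', matched detector names)."""
        hits = [name for name, pattern in DETECTORS.items() if pattern.search(prompt)]
        if "jailbreak attempt" in hits:
            return "block", hits      # manipulative prompts are stopped outright
        if hits:
            return "warn", hits       # show the user a pop-up before sending
        return "allow", hits

    decision, reasons = check_prompt("Summarise this contract for jan.novak@example.com")
    print(decision, reasons)  # -> warn ['e-mail address']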

4. Control AI responses

Some GenAI tools might give unsafe or confusing answers. Set rules for what kind of content is allowed:

  • If a tool is not meant for software coding, block code responses.
  • Add a reminder before users accept legal or technical content from AI without expert review.

Adapt the rules based on the team’s role—what’s safe for a marketing team might not be safe for legal or engineering use.
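
One possible way to express such role-based output rules, sketched with placeholder team names and policies:

    import re

    CODE_BLOCK = re.compile(r"```|#include|\b(def |class |public static void)\b")
    REVIEW_NOTE = ("Reminder: have an expert review AI-generated legal or "
                   "technical content before using it.")

    # Placeholder policy: which teams may receive code, and which always get a review note.
    POLICY = {
        "marketing":   {"allow_code": False, "review_note": False},
        "legal":       {"allow_code": False, "review_note": True},
        "engineering": {"allow_code": True,  "review_note": False},
    }

    def filter_response(team: str, response: str) -> str:
        rules = POLICY.get(team, {"allow_code": False, "review_note": True})
        if not rules["allow_code"] and CODE_BLOCK.search(response):
            return "[Blocked: code responses are not enabled for this team.]"
        if rules["review_note"]:
            return response + "\n\n" + REVIEW_NOTE
        return response

    print(filter_response("marketing", "Here is a script:\n```python\nprint('hi')\n```"))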

5. Safely connect GenAI to your internal data

AI is more useful when it can access company data—but this must be done carefully. First, identify which data is useful for AI, and which must be kept private. Use content-based filtering, not just file types or sensitivity labels.

Modern tools can use similarity searches to recognise important documents like contracts or CVs in real time. Guide users by connecting them to approved internal tools (e.g., “Let’s use Copilot for this document”).
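
As a rough sketch of the similarity idea, using TF-IDF from scikit-learn as a stand-in for the embedding models real products use (the reference snippets and threshold are made up):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Made-up reference snippets describing document types you want to recognise.
    REFERENCES = {
        "contract": "this agreement is entered into between the parties liability termination clause",
        "cv":       "curriculum vitae work experience education skills references date of birth",
    }

    vectorizer = TfidfVectorizer().fit(REFERENCES.values())
    reference_matrix = vectorizer.transform(REFERENCES.values())
    labels = list(REFERENCES.keys())

    def classify_paste(text: str, threshold: float = 0.2):
        """Return the document type a pasted text most resembles, or None."""
        scores = cosine_similarity(vectorizer.transform([text]), reference_matrix)[0]
        best = scores.argmax()
        return labels[best] if scores[best] >= threshold else None

    print(classify_paste("Work experience: 5 years as a data analyst. Education: MSc."))  # -> cv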

6. Enforce corporate authentication

Make sure employees use GenAI apps with their work accounts, not personal ones. Where possible, use Single Sign-On (SSO) to link GenAI usage to corporate identity.

This allows safer access, better activity tracking, group-based policies and easier integration into incident response.
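
Where an egress gateway can see the session's identity token, a check along these lines is possible. The sketch assumes an OpenID Connect ID token and the PyJWT library; the IdP URL, audience, and corporate domain are placeholders:

    import jwt  # PyJWT
    from jwt import PyJWKClient

    JWKS_URL = "https://idp.example.com/.well-known/jwks.json"   # placeholder IdP
    CORPORATE_DOMAIN = "@example.com"                            # placeholder domain

    def is_corporate_session(id_token: str) -> bool:
        """Verify the ID token against the IdP and check it belongs to a work account."""
        signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(id_token)
        claims = jwt.decode(
            id_token,
            signing_key.key,
            algorithms=["RS256"],
            audience="genai-gateway",   # placeholder audience
        )
        return claims.get("email", "").lower().endswith(CORPORATE_DOMAIN)

    # A request to a GenAI app without a valid corporate token would then be
    # redirected to SSO login instead of being passed through.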

7. Monitor and log usage

Store prompt and response logs, carefully and in line with data protection law, to support policy enforcement, compliance audits, and trust in AI systems.

Tag AI-generated content with IDs that link back to who created it and what the original prompt was. This helps check accuracy and reduce errors over time.
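
One simple way to create such a traceable ID, sketched with illustrative record fields and log file name:

    import hashlib
    import json
    from datetime import datetime, timezone

    def tag_ai_output(user: str, app: str, prompt: str, response: str) -> dict:
        """Build a log record whose content_id ties the output back to its prompt and author."""
        created_at = datetime.now(timezone.utc).isoformat()
        content_id = hashlib.sha256(
            "|".join([user, app, created_at, prompt]).encode("utf-8")
        ).hexdigest()[:16]
        record = {
            "content_id": content_id,
            "user": user,
            "app": app,
            "created_at": created_at,
            "prompt": prompt,
            "response": response,
        }
        # Append to the audit log; retention and access must follow your data protection rules.
        with open("genai_audit.jsonl", "a", encoding="utf-8") as fh:
            fh.write(json.dumps(record) + "\n")
        return record

    print(tag_ai_output("jan.novak", "ChatGPT", "Summarise Q3 results", "...")["content_id"])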

 

To use GenAI and LLMs safely, organisations must combine modern technical controls, strong policies, and user education. Securing GenAI is not just about risk reduction—it’s about enabling trust, insight, and responsible innovation. Done right, it allows teams to safely explore new AI-powered ways of working, helping the entire organisation grow.

CREAPLUS experts in AI and cybersecurity can help your organisation stay safe while using GenAI—so you get the benefits of AI, without the risks.