There's no doubt that public AI tools like ChatGPT, Claude and others are incredibly powerful. They can draft emails, write code, summarise long documents and even help with creative ideas. We absolutely encourage you to explore these tools to make your work easier and more innovative. However, you'll know from our Public Generative AI Usage Policy that we have a very strict rule: you must never enter any sensitive, confidential or personal information into these public services.
This isn't about being difficult or standing in the way of progress. It's a critical measure to protect our company, our customers and ourselves. So, let's break down exactly why this rule is in place.
Think of using a public AI service like disposing of something in the ocean. You might throw a bottle into the water, and chances are it will sink to the bottom and never be seen again. But you can't be sure. It could get caught in a current and wash up on a beach halfway across the world. A fisherman might pull it up in their net. You simply lose control over it and you have no idea where it will end up or who might find it.
Pasting sensitive information into a public AI tool is a lot like that. Once you hit 'enter', you've sent our data to a third-party company's servers, and we lose control. We can't know for sure what happens next.
How 'Our' Data Becomes 'Their' Data
One of the biggest risks is that many of these AI services use the data you provide to train their models. This is how they get smarter and more accurate. While it might seem harmless, it means our confidential information, be it strategic plans, financial figures or internal discussions, could become part of the model's underlying knowledge.
What are the real-world implications? It's not that a competitor can simply ask the AI, "What is Pepkor's Q4 strategy?" and get a straight answer. It's more subtle and unpredictable. For example, a completely unrelated user from another company could ask the AI to draft a report, and a sentence or a specific data point from the information you entered could be woven into their generated text. Our confidential data could inadvertently leak out, piece by piece, to anyone.
The POPIA Problem
Beyond the risk of data exposure, there's a major legal reason for our policy. The Protection of Personal Information Act (POPIA) governs how we handle the personal information of our customers, staff, suppliers, etc. It requires us to have a formal, legally binding Data Processing Agreement with any third party that handles this data on our behalf.
We simply do not have these agreements in place with public AI companies.
This means that if an employee pastes any Personally Identifiable Information (PII) into one of these tools, we are in direct violation of POPIA. This exposes the company to significant legal and financial risk, not to mention the reputational damage and the breach of the trust our customers place in us.
So, What Can We Use?
This doesn't mean you can't use AI with company data. Our policy makes a clear distinction between public services and the enterprise AI tools available within our approved platforms, like Google Workspace and Microsoft 365. These enterprise services have been vetted by our security and legal teams, and we have the necessary data processing agreements in place with them. They are built with privacy and security at their core, ensuring our data remains our data. We still prefer that you don't put real customer data into Gemini or Copilot, as some risks remain and sound judgement is needed, but these tools are approved for other sensitive company data, such as financial or strategic information.
The goal isn't to stop you from using these brilliant new technologies. It's to make sure we use them smartly and safely. By understanding the risks and sticking to the policy, we can all innovate responsibly while protecting the information that's entrusted to us.
If you have any questions, please don't hesitate to reach out to the InfoSec Team.