A practical guide to the risks, the rules, and the real opportunities of working with AI.
Artificial Intelligence is transforming how teams work, decide, and deliver results; it's now part of daily business. Drafting documents, summarising meetings, analysing data, responding to clients: AI accelerates workflows across almost every profession and industry.
As AI becomes more capable, data security concerns become more urgent. Could your data be used to train the AI models your business relies on? What happens to client data processed by a third-party model? How does your duty of care change in this new landscape?
These are practical questions that organisations are navigating right now, often without clear guidance. This article aims to provide a grounded and honest perspective, clarifying where the key risks exist, what privacy law and contractual obligations demand, and why the opportunities ahead remain compelling.
How AI is reshaping the data security landscape
Understanding AI's risks means knowing how it differs from conventional software.
Unlike traditional business tools that process data in controlled systems or cloud environments, most AI tools transmit content you input to the provider's servers. This creates new data flows that many organisations haven't fully mapped, meaning the usual security assumptions may no longer hold true.
That said, AI is also one of the most powerful instruments available for strengthening security. AI-driven threat detection systems can identify abnormal access patterns and flag suspicious behaviour across enormous datasets, at a speed and scale no human team could replicate. For organisations that use it well, AI can make their data posture significantly more robust.
The challenge, and the opportunity, is that AI changes the rules of the game in both directions simultaneously. The same technology that can protect your data can also expose it if deployed without appropriate care.
When AI comes to you: the hidden risk of embedded AI
Most conversations about AI data security focus on the tools people actively choose - ChatGPT, Copilot, Gemini. But for many organisations, the more immediate risk isn't the AI their people are seeking out. It's the AI being quietly built into the software they already use.
Microsoft Copilot is now part of Office 365. Salesforce has added AI to its CRM. Adobe, HubSpot, and other enterprise software providers are adding AI features to existing products, often as part of standard upgrades. Practice management systems used by law firms, accounting practices, and healthcare providers are following suit.
This risk is distinct, and in some ways more significant, because it's largely invisible.
When an employee opens ChatGPT, they make a conscious choice. When AI is embedded into a familiar and trusted tool - an email client, CRM, or document management system - data flows to AI in the background. Employees may not realise the AI layer exists.
Several specific risks follow:
Existing agreements may not cover AI. Providers periodically update their terms of service, and continued use of the platform constitutes acceptance. Your existing vendor terms may simply predate the AI feature entirely.
AI features are sometimes enabled by default. When a provider launches an AI upgrade, it doesn’t always request opt-in. Data may flow to an AI model before anyone has reviewed whether that's appropriate, given the information the platform holds.
Employees may be unaware of embedded AI. They may not realise that meeting summaries or CRM AI assistants send content to third-party AI. Awareness of data flow is essential, and cannot be assumed with embedded AI.
Audit trails are more complex. Standalone AI tools keep records of submissions; embedded AI makes data flows harder to trace and evidence, which matters considerably if a breach occurs.
The practical implication is to extend your AI review process beyond the obvious tools. For every platform that holds sensitive data, ask: has this provider introduced AI features, are they active in our environment, and do our current terms adequately govern how that data is handled?
Using standalone AI tools: what you need to know
When people deliberately choose to use an AI tool - submitting a prompt, uploading a document, or asking for analysis - a different but equally important set of risks applies. This section covers what organisations and individuals need to understand before sharing information with standalone AI platforms.
The risks: what can go wrong
The most common source of AI-related data risk isn't a sophisticated cyberattack. It's an employee doing something entirely reasonable, without realising the implications. This applies to both standalone and embedded AI tools and highlights specific issues when people choose to submit information to AI.
Most consumer AI tools use submitted content to help train and improve their models. This means sensitive information - a client's financial position, a legal matter, a personnel issue -could become part of an AI training dataset. While direct exposure of specific inputs is uncommon, the principle remains: data given to consumer AI tools may not stay private.
The professional implications are significant. Consider:
- A lawyer seeking drafting help or research support via a consumer AI tool may inadvertently expose privileged client information, potentially breaching legal professional obligations.
- An accountant or HR manager feeding a spreadsheet into an AI assistant to reformat or analyse data may be sharing salary figures, performance records, or commercially sensitive numbers with a third-party system.
- A healthcare provider using AI for notes or patient correspondence may be handling personal health information in ways that fall outside the purposes for which it was collected.
There is also the issue of AI hallucination: the tendency of AI models to produce confident but inaccurate outputs. This is well-documented across major AI platforms. Acting on incorrect AI-generated information - particularly in legal, financial, or medical contexts - can create real and serious liability.
Does upgrading to a paid account reduce data security risks?
Yes, but the difference is more specific than most people realise, and it's easy to overestimate the protection a paid subscription actually provides.
On free consumer tiers of major AI platforms, the terms are typically straightforward: your inputs may be used to train and improve the model, data retention is at the provider's discretion, and there is no meaningful contractual protection around how your information is handled. There is no Data Processing Agreement available, no security certification commitment, and no recourse if something goes wrong.
On paid plans, particularly at an enterprise or API level, the key difference is that the terms include a Data Processing Agreement. The DPA establishes the provider's commitments around data retention, training data exclusions, geographic processing boundaries, and security standards. The DPA is the mechanism through which legal protections become real.
A paid account alone does not automatically deliver those protections. What's included varies significantly between providers and between plan tiers. Before using any AI tool with sensitive information, organisations must verify:
- Whether a Data Processing Agreement (DPA) is available on their specific plan, and whether it has been signed.
- Whether their data is excluded from model training, and whether that exclusion is automatic or requires an explicit opt-out.
- Where data is processed and stored, and whether that meets their legal or regulatory requirements.
- A clear zero data retention policy, or explicit limits on how long data is retained, and under what circumstances it can be accessed by the provider.
- Whether the provider holds third-party security certifications, such as SOC 2 Type II or ISO 27001.
- Whether the provider is transparent about subprocessors - the third-party services used within their own infrastructure.
Enterprise agreements from major AI providers (including those from OpenAI, Microsoft, Google, and Anthropic) do typically offer these protections at their enterprise tiers. The key is to review the data processing agreement, not just the pricing page.
Consider your obligations to clients when introducing AI
Privacy legislation in New Zealand (the Privacy Act 2020), Australia (the Australian Privacy Principles under the Privacy Act 1988), and across many other jurisdictions (including the EU's GDPR) requires that personal information is only used for the purposes for which it was collected, or for directly related purposes. Feeding client data into a third-party AI tool is, in most interpretations, a new purpose.
Without explicit consent, a contractual provision, or a legitimate interest that has been properly assessed, sharing client personal information with an AI platform may constitute a breach of your privacy obligations. It may also breach confidentiality obligations that exist independently of privacy law, particularly in regulated professions such as law, medicine, and financial advice.
The practical steps organisations should consider:
- Review and update client engagement letters and terms of service to reference the use of AI tools in service delivery.
- Establish an internal policy on which categories of information may be used with AI, and under what conditions.
- Ensure any AI tools used to process client data are covered by a Data Processing Agreement that meets the requirements of applicable law.
- Seek legal advice if operating in a regulated industry or handling particularly sensitive categories of personal information.
It’s worth noting that this is an evolving area. Regulatory guidance on AI and privacy is developing quickly, and what constitutes best practice today may become a formal requirement tomorrow. Getting ahead of this now is considerably less costly than addressing a breach after the fact.
Security risks that apply to all AI
Beyond the data privacy questions, AI introduces several new attack surfaces that security teams and business leaders should be aware of.
Prompt injection
Malicious actors can embed instructions within documents, emails, or web pages that manipulate AI agents into taking unintended actions, including leaking data, bypassing controls, or executing harmful commands. As AI agents become more autonomous and capable, this attack vector is growing in significance.
Shadow AI
Employees adopting AI tools independently, without IT or security sign-off, create unmonitored data flows outside the organisation’s control. This is one of the fastest-growing risk categories currently facing organisations. A 2023 survey by Salesforce found that 28% of employees use generative AI at work, and of those, 55% use tools not approved by their employer.
Vendor risk
Your AI provider’s security posture is as important as your own. A breach at their end can expose your data even if your internal systems are fully secured. Vendor security reviews and contractual protections are, therefore, a critical part of AI risk management.
It’s also worth acknowledging that many of the traditional threats - phishing, social engineering, account compromise - are being supercharged by AI. Deepfake audio and video, highly personalised phishing emails, and AI-generated impersonation attacks are becoming more convincing and more common.
A 2023 survey by Salesforce found that 28% of employees use generative AI at work, and of those, 55% use tools not approved by their employer.
What else should businesses be mindful of?
AI governance frameworks
Organisations that haven’t yet established a clear AI policy are increasingly at risk - not just of security incidents, but of inconsistent use that creates legal and reputational exposure. A basic AI governance framework should cover: which tools are approved for use, how data should be classified before being used with AI, and what sign-off is required for new AI tools.
Contractual gaps
Many supplier contracts and client agreements were written before AI was a commercial reality. A review may reveal gaps in confidentiality clauses, data ownership provisions, and indemnity arrangements that need to be addressed.
Regulatory direction
Globally, AI regulation is accelerating. The EU AI Act, which came into force in 2024, is the most comprehensive AI regulatory framework yet enacted, and its influence is being felt far beyond Europe. New Zealand is actively monitoring this space, and while domestic AI-specific regulation remains limited, existing privacy, consumer, and sector-specific laws already impose meaningful obligations. Organisations that build good data governance habits now will be considerably better placed as formal requirements develop.
Insurance and liability
Standard business insurance policies may not cover AI-related incidents. It is worth reviewing coverage with your broker and considering whether cyber liability insurance adequately addresses the specific risks that AI introduces.
The opportunity is real, so is the competitive advantage
It would be easy to read through a list of AI risks and conclude that caution is the only reasonable response. But the organisations that will thrive in the next decade are those that engage with AI thoughtfully — not those that avoid it entirely.
AI done well, with clear governance, appropriate tooling, and informed teams, gives businesses access to analytical power and operational efficiency that was simply out of reach five years ago. It can identify patterns in data that humans would miss, automate high-volume, low-value tasks, and surface insights that drive better decisions. Applied to security itself, it can detect threats earlier and respond faster than any manual process.
The goal isn’t to be afraid of AI. It’s to be informed. Informed organisations ask better questions of their vendors, set clearer expectations with their teams, and make evidence-based decisions about AI adoption.
That’s a significant competitive advantage - and one that’s available to any organisation willing to engage seriously with the questions this technology raises.
At DATAMetrics, we help businesses make sense of their data. Increasingly, that includes navigating where AI fits in safely and strategically. We believe this is an important conversation, and we’d welcome the opportunity to have it with you.
Disclaimer: This article is intended as general information only and does not constitute legal or professional advice. Organisations should seek independent advice appropriate to their circumstances.



