Insights
Privacy enforcement in Australia is tightening dramatically. The Office of the Australian Information Commissioner (OAIC) received a record 1,113 data breach notifications in 2024, a 25% surge on the previous year, and 2025 is on track to exceed that figure, underscoring both rising risk and an increasingly assertive regulator.
Meanwhile, sweeping reforms to the Privacy Act bolster enforcement powers, introduce a statutory privacy tort (effective 10 June 2025), and mandate greater transparency around AI and automated decision-making.
Record-Breaking Breaches and Rising Risk
- Notifiable Data Breaches scheme – Since 2018, businesses have been required to report eligible data breaches. In 2024 alone, 1,113 breaches (more than 20 per week) were notified, up 25% from 2023, with malicious or criminal attacks making up 69% of notifications, 61% of which were cyber security incidents.
- High-risk sectors – Health providers (20%) and government agencies (17%) dominate breach figures, underscoring the critical need for stronger defences.
- Sensitive data at risk – Personal, health, and identity information are most frequently compromised. The OAIC warns this trend is likely to continue and is urging businesses to act now.
OAIC Armed with Fresh Powers
- Privacy Act reforms (Privacy and Other Legislation Amendment Act 2024), with the statutory tort in force from 10 June 2025:
- Introduce a statutory tort for serious invasions of privacy; individuals can now sue for emotional harm without proving economic loss.
- Expand enforcement tools, enabling infringement notices (up to AUD 330,000) and civil penalties (up to AUD 3.3 million for mid-tier breaches; up to the greater of AUD 50 million or 30% of adjusted turnover for serious or repeated breaches, as sketched after this list).
- Mandate transparency around automated decision-making, and introduce anti-doxxing offences and a Children’s Online Privacy Code.
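For a sense of how the top tier scales, below is a minimal Python sketch of the “greater of” formula for serious or repeated breaches. The statutory formula also includes a three-times-the-benefit limb, applied where the benefit obtained can be valued; the function name and inputs here are illustrative, not drawn from any official calculator.

```python
def max_serious_penalty(benefit_aud: float | None, adjusted_turnover_aud: float) -> float:
    """Theoretical ceiling for a 'serious or repeated' interference with
    privacy: the greater of AUD 50 million, three times the value of any
    benefit obtained, or, where that benefit cannot be determined,
    30% of adjusted turnover for the relevant period."""
    if benefit_aud is not None:
        alternative = 3 * benefit_aud
    else:
        alternative = 0.30 * adjusted_turnover_aud
    return max(50_000_000.0, alternative)

# Example: AUD 400m adjusted turnover, benefit not quantifiable:
# max(50m, 0.30 * 400m) = AUD 120m theoretical exposure.
print(f"AUD {max_serious_penalty(None, 400_000_000):,.0f}")  # AUD 120,000,000
```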
High-Profile Regulatory Actions
- Medibank proceedings – The OAIC commenced civil penalty action in June 2024 over Medibank’s massive 2022 breach affecting 9.7 million Australians, alleging failure to take “reasonable steps” to secure sensitive personal information. The theoretical maximum penalty has been reported at more than AUD 21 trillion (the per-contravention maximum multiplied across all 9.7 million affected individuals), symbolic of how seriously non-compliance is now taken.
- Meta’s AUD 50 million settlement – The OAIC secured an enforceable undertaking from Meta over the 2018 Cambridge Analytica scandal, including a AUD 50 million payment program, the largest payment dedicated to privacy compensation in Australia. This sends a clear signal that global tech giants must comply.
- ACL (Australian Clinical Labs Limited) – The OAIC launched civil proceedings over a breach affecting around 223,000 people, alleging inadequate security and delayed reporting, a rare but telling use of newly enhanced powers.
AI Under the OAIC Microscope
- In late 2024 and early 2025, the OAIC released two key AI guidance notes:
- For AI product users – Obligations apply to personal data both input to and produced by AI systems. Privacy policies must be updated, and human oversight must be baked in.
- For AI developers – Extra caution urged in training generative AI models. Data must be lawfully collected, accurate, and purpose-aligned under the APPs (Australian Privacy Principles; see Note 1).
- Best practice reminders include avoiding use of public generative AI tools for sensitive data and conducting Privacy Impact Assessments before deployment; a sketch of one such oversight control follows this list.
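To make “human oversight baked in” concrete, here is a minimal, hypothetical Python sketch of an output gate that holds back AI-generated text appearing to contain personal information until a human reviewer signs off. The two regex patterns, function names, and gating logic are illustrative assumptions, not OAIC-prescribed controls; a production system would use a dedicated PII classification service.

```python
import re

# Illustrative patterns only; real personal-information detection would
# use a dedicated classification service, not two regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "au_mobile": re.compile(r"(?:\+?61|0)4\d{8}\b"),
}

def personal_info_hits(ai_output: str) -> list[str]:
    """Return the categories of apparent personal information found."""
    return [name for name, rx in PATTERNS.items() if rx.search(ai_output)]

def release(ai_output: str, reviewer_approved: bool = False) -> str:
    """Gate AI-generated text: hold anything that looks like personal
    information until a human reviewer has signed off."""
    hits = personal_info_hits(ai_output)
    if hits and not reviewer_approved:
        raise PermissionError(f"Held for human review: {hits}")
    return ai_output

# release("Contact the patient at jane@example.com")
#   -> PermissionError: Held for human review: ['email']
```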
What are the Implications for Australian Organisations?
- Legal and financial exposure – Civil penalty and lawsuit risks have skyrocketed. Breaches of the APPs now carry tangible legal consequences, not just reputational damage.
- Operational urgency – Data mapping, security audits, impact assessments and fast breach response aren’t optional; they are business-critical.
- AI accountability – Any use of AI involving personal data must comply with APP requirements and broader transparency reforms.
- Public trust – Customers now expect accountability. Fines and legal actions grab headlines, but loss of trust is the costliest fallout.
What You Must Do to Build Privacy Resilience
Reinforce Your Cyber Defences
- Upgrade security controls, threat detection, and incident response.
- In sectors like health or government, assume persistent threat activity and plan for rapid breach identification and containment.
Embed Privacy Governance
- Implement a robust privacy framework – Privacy Impact Assessments (PIAs), APP audits, cross-border controls, and contractual safeguards with vendors.
- Adopt regular training on breach handling, consent, and AI privacy.
Prepare for New Powers
- Monitor changes to enforcement notices and the new statutory privacy tort.
- Review policies related to automated decision-making and AI, ensuring transparency and consent mechanisms are visible and accessible; a sketch of a disclosure register follows this list.
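As one way to operationalise the automated decision-making transparency requirement, here is a hypothetical Python sketch of an internal register that backs the privacy-policy disclosure of automated decisions. The field names and rendering logic are illustrative assumptions, not a structure prescribed by the Act.

```python
# Hypothetical register entry backing a privacy policy's disclosure of
# automated decision-making (ADM). Field names are illustrative only.
ADM_REGISTER = [
    {
        "decision": "credit-limit adjustment",
        "personal_information_used": ["repayment history", "income band"],
        "significant_effect": True,   # legally or similarly significant
        "human_review_available": True,
    },
]

def disclosure_lines(register: list[dict]) -> list[str]:
    """Render plain-language disclosure lines for the privacy policy."""
    return [
        (
            f"We use automated systems to make {entry['decision']} decisions "
            f"using {', '.join(entry['personal_information_used'])}. "
            "You may request human review of any such decision."
        )
        for entry in register
        if entry["significant_effect"]
    ]

print("\n".join(disclosure_lines(ADM_REGISTER)))
```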
Incident Preparedness
- Update your breach response plans and communication protocols.
- Automate detection, documentation, and reporting aligned with NDB (Notifiable Data Breaches) scheme timelines, including the 30-day window for assessing a suspected breach; a minimal tracking sketch follows this list.
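As an illustration of automating that clock, the minimal Python sketch below tracks the 30-day assessment window and flags open assessments approaching the deadline. The class, field names, and five-day warning threshold are hypothetical; notification to the OAIC must still follow as soon as practicable once a breach is assessed as eligible.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Under the NDB scheme, a suspected eligible data breach must be
# assessed within 30 calendar days of the entity becoming aware of it.
ASSESSMENT_WINDOW_DAYS = 30

@dataclass
class SuspectedBreach:
    incident_id: str            # hypothetical internal identifier
    aware_date: date            # date the entity became aware
    assessment_complete: bool = False

    @property
    def assessment_deadline(self) -> date:
        return self.aware_date + timedelta(days=ASSESSMENT_WINDOW_DAYS)

    def days_remaining(self, today: date) -> int:
        return (self.assessment_deadline - today).days

def needs_escalation(breaches: list[SuspectedBreach], today: date,
                     warn_within_days: int = 5) -> list[SuspectedBreach]:
    """Return open assessments that are overdue or close to the deadline."""
    return [
        b for b in breaches
        if not b.assessment_complete and b.days_remaining(today) <= warn_within_days
    ]

if __name__ == "__main__":
    register = [
        SuspectedBreach("INC-001", date(2025, 7, 1)),
        SuspectedBreach("INC-002", date(2025, 7, 20)),
    ]
    today = date(2025, 7, 28)
    for b in needs_escalation(register, today):
        print(f"{b.incident_id}: assessment due {b.assessment_deadline} "
              f"({b.days_remaining(today)} days remaining)")
```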
Manage Reputation and Trust
- Be transparent: clear breach disclosures under NDB build trust.
- In high-risk sectors, consider proactive public communication and compensation mechanisms.
This Northwick Cybersecurity thought leadership piece explores how Australia’s privacy landscape has fundamentally shifted. Businesses, from startups to enterprises, face elevated expectations, tougher rules, and real-world consequences. Privacy must now be integrated as a core business function, driven from the boardroom, embedded in every process, hardened in technology, and championed as a competitive differentiator. (www.northwickcyber.com)
Northwick Cybersecurity delivers comprehensive protection for businesses by combining advanced threat detection, proactive risk management, and strategic security consulting. Our services cover everything from vulnerability assessments and penetration testing to incident response and compliance support, ensuring enterprises stay resilient against evolving cyber threats. We focus on safeguarding critical infrastructure, securing cloud environments, and implementing robust governance frameworks, all tailored to meet your unique needs.
Note 1 – The Australian Privacy Principles (APPs) are a set of 13 principles under the Privacy Act 1988 (Cth) that govern how Australian organisations (and some overseas entities) handle personal information. They cover areas such as:
- Lawful and fair collection of personal data
- Accuracy and security of the data
- Purpose limitation (data should only be used for the purpose it was collected for)
- Transparency and access rights for individuals
For AI developers, this means any data used to train generative AI models must comply with these principles, especially around consent, purpose alignment, and avoiding misuse of personal information.