28 May 2025
Key takeaways
As part of the Digital Economy Live series, Clayton Utz hosted a live interview with Carly Kind, Australia’s Privacy Commissioner, moderated by privacy partner Steven Klimt. The session explored the evolving regulatory posture of the OAIC and the role of privacy as both a human right and a strategic business imperative. Topics included AI governance, reform priorities, biometrics, data breach response, the OAIC's enforcement strategy, and what "good" privacy practice looks like in a modern digital enterprise.
Commissioner Kind drew on her legal, human rights, and international policy background to offer a regulator’s perspective that emphasises strategic privacy leadership, ethical innovation, and practical regulatory engagement.
1. Privacy as both a right and a strategic asset
Kind positioned privacy as both a foundational human right and a strategic asset.
“First and foremost, I would say privacy is a right and should be protected for its inherent value.”
But she also emphasised that in a digital economy, strong privacy practices offer competitive advantage and resilience.
“Privacy is highly tied up with security… it's clearly in entities’ best interests to ensure robust privacy practices, because that's going to mitigate or limit the risk of or damage from cyber attack and data breaches.”
2. Reframing compliance: Proportionate, contextual, and governance-led
Kind reiterated that the Privacy Act is, for the most part, adaptable and provides entities with an open, flexible framework. The APPs give businesses space for context-driven decision-making.
“I think actually the APPs are quite flexible… They don't make any value judgment about what is a legitimate function and activity of an organisation.”
Rather than mechanical compliance, Kind urged entities to use internal governance and proportionality assessments to guide decision-making, particularly around reasonableness, necessity, and fairness.
3. Redefining consent: Choice, agency, and the “privacy paradox”
Kind addressed the disconnect between public attitudes and behaviours, often referred to as the privacy paradox, where individuals express concern about privacy but routinely accept intrusive data practices.
“Trying to derive something about people’s intentions and values based on choices they have no choice but to make… is problematic.”
She argued that many digital services are effectively mandatory for participation in modern life, stripping users of meaningful agency.
“People don’t really have a choice but to enrol and subscribe in a range of digital services… they’re key entry points to participate in the digital economy.”
For businesses, this raises important questions about how meaningful user choice and consent really are in digital environments, particularly for children and vulnerable users.
4. Consent: a concept under strain, and under review
Kind raised concerns that consent, while essential in some contexts, has become less meaningful in practice, particularly in digital environments where users are often overwhelmed with complex, unreadable terms.
“Consent has become essentially meaningless through the proliferation of these multi-page consent documents that nobody reads.”
She noted that consent is only explicitly required under the Privacy Act for sensitive information, and that the broader framework relies more on notice and reasonableness.
Kind confirmed that the OAIC is currently developing new guidance, including a discussion paper on consent in the context of biometrics.
She also acknowledged that opt-out consent may be appropriate in some cases, even for sensitive information, provided it is truly freely given, informed, specific, and voluntary.
“In theory…if we can establish that it meets the requisite threshold, then yes.”
Organisations should re-evaluate whether their current consent models are meaningful, particularly in contexts involving children, biometrics, and AI-driven systems. Regulatory scrutiny of consent mechanisms is likely to intensify.
5. Privacy reform is already reshaping expectations
With the first tranche of privacy reforms now enacted, Kind highlighted two areas of immediate focus:
- Children’s Online Privacy Code, under development
- Transparency around automated decision-making, taking effect in late 2026
In respect of the new requirement for entities to state in their privacy policies when they use automated decision-making programs that significantly affect the rights and interests of individuals, Kind observed that many entities currently lack good visibility of when and where they use automated decision-making systems internally. In response to the new requirement, the OAIC expects:
“that that requirement should catalyse… internal work with regulated entities to identify which systems are likely to be caught.”
Kind encouraged organisations to proactively map data holdings, identify AI and automated decision points, and clean up legacy data, noting that reform will increasingly require clear line-of-sight to systems and data assets.
6. Privacy reform: a fair and reasonable test
Looking ahead to Tranche 2 reforms, Kind discussed the proposed "fair and reasonable" test as a pivotal development. While still under consultation, this overarching standard could act as a "pub test" for lawful data use.
“My sense is that it would act as a kind of pub test, a smell test… if the public were to find out we’re doing this, would it cause a scandal or would it not?”
She acknowledged businesses' desire for clarity on the meaning of "fair and reasonable", but noted that until jurisprudence develops, a degree of uncertainty in the application of the law is inevitable.
“We would need some application and enforcement situations. We’d need judicial interpretation and over time we’d get a finer and finer sense of exactly what the requirement is.”
7. AI Governance: Value chain risks and visibility gaps
Kind urged organisations to be cautious when deploying AI tools, particularly generative AI, noting they are still experimental and carry complex, often opaque supply chains.
“If the AI tool you're using isn't designed and trained in-house or locally hosted, I think you have to be really aware of the value chain risks.”
She flagged that organisations may be using or disclosing personal information without knowing it, due to upstream AI model training and foreign hosting arrangements.
“You may have been said to have used personal information that went in preparation of the material… that’s where we’re seeing risk.”
Rather than relying on traditional forms of regulatory intelligence, such as complaints data, the OAIC is considering direct engagement with organisations to survey and map how AI tools are being used.
8. Breach response: Timeliness and pragmatism
Kind noted that timely notification remains the main compliance gap in breach responses.
“The main shortcoming we’re seeing in breach response is timeliness… entities perceive a trade-off between certainty and promptness.”
She confirmed that, from the perspective of the Privacy Act, ransomware attacks where systems are encrypted but there is no evidence of access or exfiltration may not always meet the Notifiable Data Breaches (NDB) threshold, particularly if there’s no risk of harm to individuals.
“If there’s nothing the individual could potentially do in that instance, it seems like it’s less likely to reach that threshold.”
The OAIC continues to triage notifications, prioritising incidents that involve sensitive information, vulnerable cohorts, or signs of systemic non-compliance. It adopts a risk-based approach to initiating further inquiries.
“We do preliminary assessment as to whether or not a certain number of risk factors are met and then escalate as required.”
Kind clarified that while the 30-day notification obligation is important, the OAIC’s approach focuses on cooperation and proportionality.
She acknowledged the resource intensity of full investigations, particularly those assessing compliance with APP 11 (reasonable security steps), and confirmed that the OAIC focuses its enforcement posture on matters where investigation is likely to drive meaningful change.
“The challenge comes in investigating the root causes… that’s a pretty resource-intensive process.”
9. Enforcement priorities: Emerging tech and systemic harm
With expanded powers, the OAIC signalled a more proactive enforcement posture, particularly in relation to:
- AI and emerging technologies;
- Digital identity and biometrics;
- Systemic and egregious harms, especially involving vulnerable individuals; and
- Well-resourced entities expected to meet higher compliance standards.
“We're very interested in adopting a more robust enforcement posture… where there's persistent, egregious or systemic privacy harms.”
The OAIC is also actively testing APP 3.5's requirement that collection be "lawful and fair" to extend the practical reach of the current law.
“We think [APP 3.5] may do quite a lot of heavy lifting.”
10. Biometrics: High risk, high scrutiny
The Commissioner confirmed the OAIC is developing guidance on biometrics, particularly around facial recognition in retail and consumer settings.
“I think we’re going to see an explosion with wearable technology… and a whole range of new applications of biometrics.”
When discussing whether consent can be achieved by displaying prominent signage indicating facial recognition technology is in use, Kind noted that the type of public place matters. Signage might be sufficient in a luxury store, she suggested, but she questioned whether meaningful consent can be obtained in essential service contexts.
11. Credit reporting: OAIC re-engaging with enforcement
Kind acknowledged the complexity and co-regulatory nature of the credit reporting system, and noted that the OAIC is seeking to take a more active role as the privacy regulator in this area.
“Our priority this year is to step up our presence as the privacy regulator of the credit framework.”
Key concerns include:
- Delays in correcting credit records
- Lack of compliance with the 30-day correction timeframe
- Poor consumer experience with dispute resolution
She also acknowledged stakeholder concerns around soft enquiries and opaque credit scoring, but noted the OAIC will await government decisions on structural reform before acting on those issues.
12. A strategic, outcomes-focused regulator
Reflecting on her first year in the role, Kind emphasised that regulatory effectiveness depends on engagement, education, and proportionate enforcement.
“One of the biggest insights has certainly been around just what a complex system regulatory enforcement is.”
The OAIC will continue evolving how it writes determinations and guidance, aiming to explain what “good” looks like while improving transparency and education.
“The more our guidance and determinations can be informed by the reality of what organisations are dealing with, the better.”
This session was delivered as a live interview with Commissioner Carly Kind as part of our Digital Economy Live series. To view the full conversation on demand, please visit our Digital Economy Live webpage.
If you would like to explore how these developments affect your business, or how to prepare for upcoming reforms, please get in touch with our Privacy Team.