
Understanding Data Privacy Frameworks and AI Governance


I'm a technologist in love with almost all things tech, from my daily job in the Cloud to my Master's in Cybersecurity and the journey all along.

Published by Roberto on October 19, 2025

Introduction

The rapid advancement of Artificial Intelligence (AI) has brought transformative changes to various industries. However, it has also raised significant concerns regarding data privacy, ethical use, and accountability. In response, a complex landscape of data privacy frameworks and AI governance models has emerged to address these challenges. This document provides a comprehensive overview of key frameworks, including international standards from ISO, the European Union's General Data Protection Regulation (GDPR) and AI Act, the United States' NIST AI Risk Management Framework, Australia's privacy and AI governance frameworks, and the principles of Responsible AI. It aims to clarify their scope, requirements, and practical implications for organizations developing or deploying AI technologies.

International Organization for Standardization (ISO) Frameworks

The International Organization for Standardization (ISO) has developed a suite of standards to provide a structured approach to managing AI and data privacy. These standards are designed to be globally applicable and can be adopted by organizations of all sizes and sectors.

ISO/IEC 42001:2023 - AI Management Systems

Published in December 2023, ISO/IEC 42001 is the world's first international standard for an Artificial Intelligence Management System (AIMS) [1]. It provides a framework for establishing, implementing, maintaining, and continually improving an AIMS. The standard is designed for any organization that provides or uses AI-based products or services and aims to ensure the responsible development and use of AI systems.

According to ISO, the standard "specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems" [1].

Key objectives of ISO/IEC 42001 include:

  • Risk Management: Providing a structured approach to identifying, assessing, and mitigating risks associated with AI.

  • Ethical Considerations: Addressing ethical challenges such as fairness, transparency, and accountability in AI systems.

  • Innovation and Governance: Balancing the drive for AI innovation with the need for robust governance and control.

ISO/IEC 42001 is complemented by several other standards that address specific aspects of AI and data privacy:

  • ISO/IEC 27001:2022 - Information Security Management Systems (ISMS): This is the leading international standard for information security. It provides a systematic approach to managing sensitive company information, including personal data. In the context of AI, it is crucial for protecting the data used to train and operate AI models.

  • ISO/IEC 23894:2023 - AI — Guidance on risk management: This standard provides guidance on managing AI-related risks, aligning with the broader principles of risk management outlined in ISO 31000.

  • ISO/IEC 27091 - AI Privacy Protection: This standard, currently in the draft stage, will provide specific guidance on protecting privacy in AI systems.

General Data Protection Regulation (GDPR)

The General Data Protection Regulation (GDPR) is a landmark data privacy and security law from the European Union (EU) that came into effect on May 25, 2018 [2]. It is widely regarded as one of the strictest privacy and security laws in the world, imposing obligations on any organization that targets or collects data related to people in the EU, regardless of the organization's location.

The GDPR grants data subjects a range of rights over their personal data, including the right to be informed, the right of access, the right to rectification, the right to erasure, and rights related to automated decision-making and profiling. These rights have significant implications for the development and use of AI systems, particularly those that rely on personal data.

GDPR and AI

The GDPR's principles directly impact how AI systems are designed and deployed. Key considerations include:

  • Lawful Basis for Processing: Organizations must have a valid lawful basis for processing personal data, such as explicit consent from the data subject.

  • Data Protection by Design and by Default: Privacy considerations must be integrated into the design of AI systems from the outset.

  • Automated Decision-Making: The GDPR provides specific rights for individuals in relation to automated decision-making, including the right to obtain human intervention and to contest the decision.

EU AI Act

The EU AI Act is the world's first comprehensive legal framework for AI [3]. It follows a risk-based approach, categorizing AI systems based on their potential risk to health, safety, and fundamental rights. The regulation imposes different requirements for each risk category:

  • Unacceptable Risk: AI systems that pose a clear threat to the safety, livelihoods, and rights of people will be banned. This includes systems that manipulate human behavior to circumvent users' free will.

  • High-Risk: AI systems identified as high-risk will be subject to strict obligations before they can be put on the market. These include requirements for risk assessment, data quality, transparency, human oversight, and cybersecurity.

  • Limited Risk: AI systems with limited risk will have specific transparency obligations. For example, when using chatbots, users should be aware that they are interacting with a machine.

  • Minimal Risk: The vast majority of AI systems fall into the minimal risk category. The AI Act allows the free use of such applications, such as AI-enabled video games or spam filters.
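As a rough illustration (not legal advice), the Act's tiered structure can be sketched as a lookup from risk tier to obligation checklist. The tier names and obligation lists below are condensed from the bullet points above, and the `obligations_for` helper is a hypothetical construct:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited (e.g., behavioral manipulation)
    HIGH = "high"                  # strict pre-market obligations
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # free use (e.g., spam filters, video games)

# Illustrative obligations per tier; the Act itself defines these in far more detail.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: ["risk assessment", "data quality controls",
                    "transparency documentation", "human oversight", "cybersecurity"],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation checklist for a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.LIMITED))
# ['disclose to users that they are interacting with AI']
```

The point of the sketch is the shape of the regulation: obligations attach to the tier, so classifying a system correctly is the first compliance step.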

The AI Act entered into force on August 1, 2024, and its provisions are being phased in gradually: most obligations apply from August 2, 2026, with some extending into 2027.

NIST AI Risk Management Framework (AI RMF)

The U.S. National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework (AI RMF) to provide a voluntary framework for managing the risks associated with AI [4]. The AI RMF is designed to be flexible and adaptable to different contexts and is intended to be used by organizations of all sizes and sectors.

The framework is organized around four core functions:

  1. Govern: This function focuses on establishing a culture of risk management and ensuring that the organization has the necessary policies, processes, and structures in place to manage AI risks.

  2. Map: This function involves identifying the context in which an AI system will be used and mapping out the potential risks and benefits.

  3. Measure: This function focuses on developing and implementing metrics and methods for assessing AI risks.

  4. Manage: This function involves prioritizing and acting on the risks identified and assessed in the previous functions.
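As a sketch of how the four functions might translate into tooling, the toy risk register below tags each step with the function it serves. The class names, the severity-times-likelihood score, and the threshold are assumptions for illustration, not part of the NIST framework:

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register."""
    description: str
    context: str            # Map: where and how the system is used
    severity: int = 0       # Measure: 1 (low) .. 5 (critical)
    likelihood: int = 0     # Measure: 1 (rare) .. 5 (frequent)
    mitigation: str = ""    # Manage: the chosen response

@dataclass
class RiskRegister:
    """Govern: the register itself is an organizational accountability artifact."""
    owner: str
    risks: list[AIRisk] = field(default_factory=list)

    def map_risk(self, description: str, context: str) -> AIRisk:
        risk = AIRisk(description, context)
        self.risks.append(risk)
        return risk

    def measure(self, risk: AIRisk, severity: int, likelihood: int) -> int:
        risk.severity, risk.likelihood = severity, likelihood
        return severity * likelihood  # simple score used for prioritization

    def manage(self, threshold: int = 6) -> list[AIRisk]:
        """Return risks whose score meets the threshold, highest first."""
        flagged = [r for r in self.risks if r.severity * r.likelihood >= threshold]
        return sorted(flagged, key=lambda r: r.severity * r.likelihood, reverse=True)

register = RiskRegister(owner="AI Governance Board")
r = register.map_risk("Training data contains personal information", "Customer chatbot")
register.measure(r, severity=4, likelihood=3)
print([x.description for x in register.manage()])
# ['Training data contains personal information']
```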

Responsible AI

Responsible AI is a governance framework that guides the ethical, transparent, and accountable development and use of AI. Many organizations have developed their own Responsible AI principles. A prominent example is Microsoft's six principles for Responsible AI [5]:

  1. Fairness: AI systems should treat all people fairly.

  2. Reliability and Safety: AI systems should perform reliably and safely.

  3. Privacy and Security: AI systems should be secure and respect privacy.

  4. Inclusiveness: AI systems should empower everyone and engage all people.

  5. Transparency: AI systems should be understandable.

  6. Accountability: People should be accountable for AI systems.

These principles are often operationalized through internal governance structures, tools, and practices.

Australian Data Privacy and AI Governance Framework

Australia has developed a comprehensive approach to data privacy and AI governance that combines legislative requirements with voluntary ethical frameworks. Unlike the European Union's prescriptive regulatory approach, Australia has adopted a principles-based model that emphasizes flexibility and adaptability while maintaining robust privacy protections.

Privacy Act 1988 and Australian Privacy Principles

The Privacy Act 1988 is the principal piece of Australian legislation protecting the handling of personal information about individuals [6]. The Act was introduced to promote and protect the privacy of individuals and to regulate how Australian Government agencies and organizations handle personal information. It applies to Australian Government agencies and organizations with an annual turnover of more than $3 million, as well as certain other organizations such as health service providers and credit reporting bodies.

At the heart of the Privacy Act are the 13 Australian Privacy Principles (APPs), which serve as the cornerstone of Australia's privacy protection framework [7]. The APPs are principles-based law, which provides organizations with flexibility to tailor their personal information handling practices to their business models and the diverse needs of individuals. They are also technology-neutral, allowing them to adapt to changing technologies—a critical feature in the rapidly evolving AI landscape.

The 13 APPs govern standards, rights, and obligations around four key areas: the collection, use, and disclosure of personal information; an organization or agency's governance and accountability; the integrity and correction of personal information; and the rights of individuals to access their personal information. A breach of an Australian Privacy Principle constitutes an "interference with the privacy of an individual" and can lead to regulatory action and significant penalties.

The 13 Australian Privacy Principles in Detail

Understanding each of the 13 APPs is essential for organizations operating in Australia, particularly those developing or deploying AI systems. The following provides detailed explanations with practical examples, including specific AI-related considerations.

APP 1 — Open and Transparent Management of Personal Information ensures that APP entities manage personal information in an open and transparent way, including maintaining a clearly expressed and up-to-date privacy policy. For example, an e-commerce website must publish a privacy policy explaining what personal information it collects (name, address, payment details), how it uses that information (processing orders, marketing), and who it shares it with (delivery partners, payment processors). In the AI context, a company deploying an AI chatbot must update its privacy policy to explain that customer queries are processed by AI systems, what data is collected during interactions, how long conversation logs are retained, and whether the data is used to train AI models.

APP 2 — Anonymity and Pseudonymity requires APP entities to give individuals the option of not identifying themselves or using a pseudonym, with limited exceptions. A hospital conducting a patient satisfaction survey must allow respondents to complete it anonymously unless identification is necessary for follow-up. For AI applications, a mental health AI chatbot service should allow users to access support anonymously without requiring account creation or personal identification, unless clinical safety requires identification.
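One common way to honor pseudonymity while still allowing follow-up is a keyed pseudonym. The sketch below uses an HMAC, assuming the organization keeps the key separate from the survey data; the key value and the truncation length are illustrative choices:

```python
import hashlib
import hmac

SECRET_KEY = b"org-held-secret"  # assumption: stored separately from the data store

def pseudonym(identifier: str) -> str:
    """Derive a stable pseudonym; re-identification requires the key holder."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# The same person always maps to the same pseudonym, which enables follow-up
# without storing the real identity alongside survey responses.
print(pseudonym("patient-42") == pseudonym("patient-42"))  # True
```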

APP 3 — Collection of Solicited Personal Information outlines when an APP entity can collect personal information that is solicited, applying higher standards to sensitive information. An employer can collect an applicant's work history and qualifications because this is reasonably necessary for assessing job suitability, but collecting religious beliefs would require consent and clear justification. In AI contexts, this principle has critical implications: when AI systems generate or infer personal information—including images, profiles, or predictions about individuals—this constitutes a collection of personal information and must comply with APP 3 requirements. An AI-powered recruitment tool analyzing facial expressions during video interviews would be collecting sensitive information requiring explicit consent and clear justification.

APP 4 — Dealing with Unsolicited Personal Information requires organizations to determine whether unsolicited information could have been collected under APP 3, and if not, to destroy or de-identify it as soon as practicable. If a company receives medical records sent to the wrong email address, it must securely delete the information and notify the sender. For AI systems, if an AI tool scraping public websites inadvertently captures personal health information from a forum post, the organization must identify this unsolicited sensitive information and delete it.

APP 5 — Notification of the Collection of Personal Information requires entities to notify individuals about the collection of their personal information, including the entity's identity, purposes of collection, who the information will be disclosed to, and whether it will be sent overseas. When a user creates an account on a streaming service, the service must notify them that it's collecting their email, payment details, and viewing preferences for account management and content recommendations. A company using AI to analyze employee emails for productivity insights must notify employees that their email content is being collected and analyzed by AI systems, the purpose of the analysis, who will access the results, and how long data will be retained.

APP 6 — Use or Disclosure of Personal Information permits use or disclosure only for the primary purpose of collection, with secondary uses requiring consent or being within reasonable expectations and related to the primary purpose. A pharmacy cannot use prescription information to send marketing materials for vitamins without consent, as this is a secondary purpose not directly related to dispensing medication. In AI contexts, a company collecting customer service chat logs to resolve inquiries (primary purpose) may use these logs to train an AI model for improving customer service if this is a related secondary purpose properly disclosed, but using the data to train an AI model sold to third parties would require explicit consent.

APP 7 — Direct Marketing permits use of personal information for direct marketing only if certain conditions are met, including that the individual would reasonably expect it or has consented, and an opt-out mechanism must be provided. Sensitive information cannot be used for direct marketing without consent. An online bookstore can send promotional emails to customers who purchased books if this was disclosed at purchase and an unsubscribe option is provided. A company using AI to personalize marketing emails must ensure customers consented to marketing communications and must provide a clear opt-out mechanism, with the AI system configured to respect opt-out preferences.

APP 8 — Cross-border Disclosure of Personal Information requires entities to take reasonable steps to ensure overseas recipients will comply with the APPs before disclosing personal information overseas. An Australian company storing customer data on servers in the United States must ensure the cloud provider has appropriate privacy safeguards equivalent to the APPs. This is particularly critical for AI applications: an Australian organization using an AI service hosted overseas (such as a large language model API) must ensure that personal information sent to the AI provider is protected according to APP standards, especially when using public AI tools that may store or use data for model training.

APP 9 — Adoption, Use or Disclosure of Government Related Identifiers prohibits organizations from using government identifiers (like Medicare numbers or driver's license numbers) as their own identifiers except in limited circumstances. A private hospital can use a patient's Medicare number when billing Medicare (authorized by law) but cannot use it as the patient's hospital ID number. An AI-powered identity verification system can process government ID numbers to verify identity when legally required, but the organization cannot store or use these numbers as primary customer identifiers.

APP 10 — Quality of Personal Information requires entities to take reasonable steps to ensure personal information is accurate, up-to-date, complete, and relevant. A bank must ensure credit information reported to credit agencies is accurate, as inaccurate information could significantly impact an individual's ability to obtain credit. For AI systems, organizations using AI to make decisions based on personal information (such as loan approvals or hiring recommendations) must ensure the training data and input data are accurate and current, as AI models trained on outdated or inaccurate information may produce unreliable results that could harm individuals.

APP 11 — Security of Personal Information requires reasonable steps to protect personal information from misuse, interference, loss, unauthorized access, modification, or disclosure, and to destroy or de-identify information no longer needed. A medical clinic must implement secure storage for patient records (encrypted digital storage or locked filing cabinets) and restrict access to authorized personnel. Organizations using AI systems must ensure that personal information used to train or operate AI models is encrypted, access-controlled, and protected from unauthorized disclosure. When AI models are no longer needed, any personal information embedded in the models must be securely destroyed or de-identified—particularly important for AI systems that may inadvertently memorize training data.
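A minimal sketch of the destroy-or-de-identify obligation, assuming a hypothetical one-year retention policy and a simple record schema:

```python
from datetime import datetime, timedelta

RETENTION_PERIOD = timedelta(days=365)  # assumption: a one-year retention policy

def due_for_destruction(records: list[dict], now: datetime) -> list[dict]:
    """Flag records holding personal information that is past retention and
    no longer needed for a permitted purpose (destroy or de-identify)."""
    return [
        r for r in records
        if now - r["collected_at"] > RETENTION_PERIOD and not r["still_needed"]
    ]

now = datetime(2025, 10, 19)
records = [
    {"id": 1, "collected_at": datetime(2023, 1, 5), "still_needed": False},
    {"id": 2, "collected_at": datetime(2025, 9, 1), "still_needed": False},
    {"id": 3, "collected_at": datetime(2022, 6, 1), "still_needed": True},
]
print([r["id"] for r in due_for_destruction(records, now)])  # [1]
```

In practice this kind of sweep would feed a secure-deletion or de-identification pipeline rather than a print statement.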

APP 12 — Access to Personal Information requires entities to provide individuals with access to their personal information upon request, generally within 30 days, unless specific exceptions apply. A patient requesting copies of their medical records must receive them within a reasonable timeframe. In AI contexts, an individual requesting access to the personal information an AI-powered credit scoring system holds about them, including the factors that influenced their credit score, must receive this information in an understandable format, with explanations of what data was used and how the AI system used it to generate the score.

APP 13 — Correction of Personal Information requires entities to take reasonable steps to correct personal information to ensure it is accurate, up-to-date, complete, relevant, and not misleading. If an individual requests correction, the entity must respond within a reasonable period. An individual discovering their credit report contains an incorrect default listing can request correction from the credit reporting agency, which must investigate and correct the error if substantiated, then notify all organizations to whom the incorrect information was disclosed. For AI systems, if an AI-powered background check service has incorrect employment history information, the organization must investigate, correct the error if valid, notify any employers who received the incorrect information, and consider whether the AI model needs retraining to prevent perpetuating the error.

Recent Privacy Law Reforms

Australia's privacy landscape underwent significant transformation with the passage of the Privacy and Other Legislation Amendment Act 2024, which received Royal Assent on December 10, 2024 [8]. This landmark legislation implements 23 reforms from the Privacy Act Review Report and represents the most substantial update to Australian privacy law in decades. Most provisions came into effect immediately upon Royal Assent, with some major changes scheduled for later implementation.

One of the most significant reforms is the introduction of a statutory tort for serious invasions of privacy, which elevates privacy from a regulatory matter to a personal right. This means that individuals who suffer serious invasions of privacy can now seek damages directly through the courts, rather than relying solely on regulatory enforcement by the Office of the Australian Information Commissioner (OAIC). The reforms also include enhanced enforcement powers for the OAIC and substantially increased penalties for privacy breaches, reflecting the growing recognition of privacy as a fundamental right in the digital age.

AI and Privacy: OAIC Guidance

Recognizing the unique privacy challenges posed by AI systems, the Office of the Australian Information Commissioner released two comprehensive guidelines on October 21, 2024 [9]. These guidelines—"Guidance on privacy and the use of commercially available AI products" and "Guidance on privacy and developing and training generative AI models"—provide practical direction for organizations navigating the intersection of AI and privacy law.

The OAIC guidance establishes several critical principles for AI systems. First, privacy obligations apply to any personal information input into an AI system, as well as to output data generated by AI where it contains personal information. When AI systems generate or infer personal information—including images, profiles, or predictions about individuals—this constitutes a collection of personal information under Australian Privacy Principle 3 and must comply with all associated requirements. This means that even artificially generated information, such as AI hallucinations or deepfakes, is considered personal information if it relates to an identified or reasonably identifiable individual.

The guidance emphasizes that organizations must conduct thorough due diligence before adopting commercially available AI products. This includes assessing whether the product has been tested for its intended uses, how human oversight can be embedded into processes, the potential privacy and security risks, and who will have access to personal information input or generated when using the product. Importantly, the OAIC recommends as a matter of best practice that organizations should not enter personal information, and particularly sensitive information, into publicly available generative AI tools due to the significant and complex privacy risks involved.
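One practical response to this recommendation is to redact likely personal information before a prompt ever leaves the organization. The patterns below are illustrative only; real PII detection needs dedicated tooling, and names (like "Jane" here) slip straight through a regex-only approach:

```python
import re

# Illustrative patterns only; not a complete or reliable PII detector.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE_AU": re.compile(r"\b0[2-478](?:[ -]?\d){8}\b"),
    "MEDICARE": re.compile(r"\b\d{4}[ -]?\d{5}[ -]?\d\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with typed placeholders before text leaves the org."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com about her claim."
print(redact(prompt))  # Contact Jane at [EMAIL] about her claim.
```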

Australia's AI Ethics Framework

In November 2019, the Australian Government published Australia's Artificial Intelligence Ethics Principles, establishing a voluntary framework to guide the responsible design, development, and implementation of AI [10]. Unlike the EU's regulatory AI Act, Australia's approach is entirely voluntary and designed to be aspirational, complementing rather than substituting existing regulations and practices.

The framework comprises eight core principles that ensure AI is safe, secure, and reliable:

Human, societal and environmental wellbeing requires that AI systems benefit individuals, society, and the environment throughout their lifecycle. This principle emphasizes that AI system objectives should be clearly identified and justified, with encouragement for systems that address areas of global concern such as the United Nations' Sustainable Development Goals.

Human-centred values ensures that AI systems respect human rights, diversity, and the autonomy of individuals. The principle establishes that machines should serve humans, not the other way around, and that AI systems should enable an equitable and democratic society while protecting fundamental freedoms.

Fairness mandates that AI systems be inclusive and accessible and should not involve or result in unfair discrimination against individuals, communities, or groups. This principle pays particular attention to vulnerable and underrepresented groups and requires measures to ensure compliance with anti-discrimination laws.

Privacy protection and security requires AI systems to respect and uphold privacy rights and data protection while ensuring the security of data. This includes proper data governance and management throughout the AI lifecycle, appropriate data anonymization, and identification of security vulnerabilities.

Reliability and safety ensures that AI systems reliably operate in accordance with their intended purpose throughout their lifecycle. Systems should be reliable, accurate, and reproducible as appropriate, and should not pose unreasonable safety risks.

Transparency and explainability establishes that there should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI and can find out when an AI system is engaging with them. This includes providing reasonable justifications for AI system outcomes and information about key factors used in decision-making.

Contestability requires that when an AI system significantly impacts a person, community, group, or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system. This includes sufficient access to information and inferences drawn by the algorithm.

Accountability ensures that those responsible for different phases of the AI system lifecycle are identifiable and accountable for the outcomes. This includes mechanisms for responsibility and accountability both before and after design, development, deployment, and operation, with appropriate levels of human control or oversight.

The principles are designed to be flexible and context-dependent. Not every principle will be relevant to every use of AI, and the framework explicitly recognizes that many businesses use AI systems (such as email filtering or accounting software) that may not require comprehensive analysis against all principles.

Voluntary AI Safety Standard

Building on the AI Ethics Principles, the Australian Government released the Voluntary AI Safety Standard (VAISS) on September 5, 2024 [11]. This standard provides practical, actionable guidance for organizations developing, procuring, and deploying AI systems through ten voluntary guardrails that apply across the AI supply chain.

The ten guardrails establish a comprehensive framework for safe and responsible AI:

Guardrail 1 (Accountability) creates the foundation for an organization's use of AI by requiring the establishment, implementation, and publication of accountability processes. This includes designating an overall owner for AI use, developing an AI strategy, and ensuring appropriate training.

Guardrail 2 (Risk Management) requires organizations to establish and implement a risk management process to identify and manage AI-specific risks throughout the system lifecycle.

Guardrail 3 (Data Governance) focuses on protecting AI systems and implementing data governance measures to ensure data quality, security, and appropriate use.

Guardrail 4 (Testing) mandates testing of AI systems before deployment and throughout their lifecycle to ensure they operate as intended.

Guardrail 5 (Human Control) requires maintaining appropriate human oversight and enabling human intervention when necessary.

Guardrail 6 (User Disclosure) ensures users are informed when interacting with AI systems and understand the capabilities and limitations of those systems.

Guardrail 7 (Processes for Impacted People) establishes processes for people impacted by AI decisions, enabling contestability and redress.

Guardrail 8 (Transparency) requires providing clear information about AI systems to enable understanding of how they work.

Guardrail 9 (Diversity, Inclusion and Fairness) commits organizations to ensuring AI contributes to safe, fair, and sustainable outcomes by defining organizational responsibilities, documenting goals, and preventing unwanted bias and discrimination.

Guardrail 10 (Security) mandates implementing security measures throughout the AI lifecycle to protect against adversarial attacks and misuse.

The Voluntary AI Safety Standard is designed to work in conjunction with the AI Ethics Principles and provides a practical implementation pathway. While currently voluntary, the standard may form the basis for future mandatory requirements, particularly for high-risk AI applications. The Australian Government is currently consulting on proposals to introduce mandatory guardrails for AI in high-risk settings, which would create a tiered regulatory approach similar to the EU AI Act but tailored to the Australian context.

National Framework for Assurance of AI in Government

To demonstrate leadership in responsible AI use, Australian federal, state, and territory governments jointly released the National Framework for the Assurance of Artificial Intelligence in Government on June 21, 2024 [12]. This framework establishes cornerstones and practices of AI assurance specifically for government use of AI systems, positioning government as an exemplar under the broader safe and responsible AI agenda.

The framework implements Australia's AI Ethics Principles in the government context and provides practical guidance for government agencies on how to assure AI systems throughout their lifecycle. It emphasizes the importance of transparency, accountability, and public trust in government AI applications.

Australian Approach: Principles-Based Flexibility

Australia's approach to data privacy and AI governance represents a middle path between the prescriptive regulatory frameworks of the European Union and the more fragmented approach in the United States. The Australian model emphasizes principles-based regulation that provides flexibility for innovation while maintaining strong privacy protections and ethical guardrails.

This approach has several distinctive characteristics. First, there is currently no AI-specific legislation in Australia, unlike the EU AI Act. Instead, existing laws—including the Privacy Act, consumer protection legislation, and anti-discrimination laws—apply to AI systems. Second, the framework relies heavily on voluntary compliance with ethical principles and safety standards, though this may evolve toward mandatory requirements for high-risk applications. Third, the technology-neutral nature of the Australian Privacy Principles ensures that privacy protections adapt to technological change without requiring constant legislative updates.

For organizations operating in Australia, this means that compliance requires a thorough understanding of how existing privacy obligations apply to AI systems, combined with voluntary adoption of ethical principles and safety standards. The OAIC's recent guidance provides critical clarity on these obligations, particularly regarding the treatment of AI-generated information as personal information and the restrictions on using personal information in publicly available AI tools.

Comparative Analysis

The following table provides a comparative overview of the key frameworks discussed in this document:

| Feature | ISO/IEC 42001 | GDPR | EU AI Act | NIST AI RMF | Responsible AI (Microsoft) |
| --- | --- | --- | --- | --- | --- |
| Type | International Standard | EU Regulation | EU Regulation | Voluntary Framework | Corporate Principles |
| Focus | AI Management Systems | Data Protection | AI Systems | AI Risk Management | Ethical AI Development |
| Scope | Global | EU Data Subjects | EU Market | Global (Voluntary) | Corporate-Specific |
| Approach | Management System | Rights-Based | Risk-Based | Risk Management | Principles-Based |
| Enforcement | Certification | Fines | Fines | Voluntary | Internal Governance |

Australian Framework in Context

| Feature | Australian Privacy Act | Australian AI Ethics Principles | Voluntary AI Safety Standard |
| --- | --- | --- | --- |
| Type | Federal Legislation | Voluntary Principles | Voluntary Standard |
| Focus | Data Protection | Ethical AI Development | Safe AI Implementation |
| Scope | Organizations >$3M turnover + Government | All AI Developers/Users | AI Supply Chain |
| Approach | Principles-Based | Ethics-Based | Guardrails-Based |
| Enforcement | Regulatory + Tort | Voluntary | Voluntary (Mandatory Proposed) |

Conclusion

The landscape of data privacy frameworks and AI governance is complex and rapidly evolving. Organizations must navigate a variety of international standards, regulations, and principles to ensure the responsible and lawful use of AI. The frameworks discussed in this document—from the comprehensive management system approach of ISO/IEC 42001 to the rights-based framework of the GDPR, the risk-based approach of the EU AI Act, the voluntary guidance of the NIST AI RMF, Australia's principles-based privacy and AI governance model, and the ethical principles of Responsible AI—provide a roadmap for organizations to build and deploy AI systems that are not only innovative but also trustworthy, transparent, and accountable.

For organizations operating in Australia, the combination of the Privacy Act's robust privacy protections, the AI Ethics Principles' ethical guidance, and the Voluntary AI Safety Standard's practical guardrails creates a comprehensive framework that balances innovation with responsibility. The recent OAIC guidance on AI and privacy provides critical clarity on compliance obligations, particularly regarding the treatment of AI-generated information and the use of personal information in AI systems. As Australia continues to refine its approach—potentially introducing mandatory guardrails for high-risk AI applications—organizations should proactively adopt the voluntary standards and ethical principles to ensure they are well-positioned for future regulatory developments.

References

[1] ISO/IEC 42001:2023 - AI management systems
[2] What is GDPR, the EU's new data protection law?
[3] EU AI Act: first regulation on artificial intelligence
[4] AI Risk Management Framework
[5] Responsible AI Principles and Approach
[6] The Privacy Act
[7] Australian Privacy Principles
[8] Privacy and Other Legislation Amendment Act 2024
[9] Guidance on privacy and the use of commercially available AI products
[10] Australia's AI Ethics Principles
[11] Voluntary AI Safety Standard
[12] National Framework for Assurance of AI in Government
