AI Implementation Under APRA CPS 234: A Practical Guide for Australian Banks
Australian banks are under mounting pressure to deploy artificial intelligence across lending, fraud detection, customer service, and risk management. But unlike firms in less regulated industries, financial institutions must navigate a dense web of prudential standards that govern how technology is adopted, operated, and governed.
APRA's prudential framework was not written with AI in mind, yet its requirements apply directly to AI systems. Getting this wrong is not a theoretical concern. APRA has shown it will act decisively when institutions fail to meet their obligations, as demonstrated by its enforcement actions following the CBA and Medibank incidents.
This guide provides a practical, implementation-focused approach to deploying AI within APRA's framework, with particular focus on CPS 234 (Information Security), CPS 230 (Operational Risk Management), and CPS 220 (Risk Management).
Why AI Creates Unique Compliance Challenges for Banks
Traditional software systems are deterministic. Given the same input, they produce the same output. AI models, particularly machine learning and large language models, behave differently. They learn from data, evolve over time, and can produce outputs that are difficult to explain or predict.
This creates several compliance challenges that banks must address head-on:
- Model opacity: Many AI models operate as "black boxes," making it harder to demonstrate the transparency and explainability that regulators expect.
- Data dependency: AI systems are only as reliable as the data they consume. Poor data governance translates directly into model risk.
- Dynamic behaviour: Models can drift over time as underlying data patterns change, meaning a model that was compliant at deployment may not remain so.
- Third-party complexity: Many banks rely on external AI vendors, cloud platforms, and pre-trained models, each introducing supply chain risk that must be governed.
- Attack surface expansion: AI systems introduce new vectors for adversarial attacks, data poisoning, and prompt injection that traditional security controls may not address.
Understanding these unique characteristics is the first step toward building a compliance framework that actually works.
CPS 234: Information Security Obligations for AI Systems
CPS 234 requires APRA-regulated entities to maintain an information security capability commensurate with the size and extent of threats to their information assets. AI systems are information assets, and the data they consume, process, and produce must be protected accordingly.
Classifying AI as an Information Asset
Under CPS 234, banks must identify and classify information assets, including those managed by related parties and third parties. For AI, this means classifying:
- Training datasets and feature stores
- Model weights, parameters, and configuration files
- Inference outputs and decision logs
- Prompt templates and system instructions (for LLM-based systems)
- Model monitoring and performance data
Each of these must be assessed for confidentiality, integrity, and availability requirements, then protected with controls proportionate to their classification.
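To make this concrete, the register entries above can be sketched as a small data structure. This is an illustrative sketch only: the `AIAsset` and `Rating` names, and the convention that overall criticality is the highest of the three CIA ratings, are assumptions for the example, not part of CPS 234 itself.

```python
from dataclasses import dataclass
from enum import Enum

class Rating(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AIAsset:
    """One entry in the bank's AI information-asset register (illustrative)."""
    name: str
    asset_type: str           # e.g. "training_dataset", "model_weights", "prompt_template"
    confidentiality: Rating
    integrity: Rating
    availability: Rating

    @property
    def criticality(self) -> Rating:
        # Assumed convention: overall criticality is the highest CIA rating.
        return max((self.confidentiality, self.integrity, self.availability),
                   key=lambda r: r.value)

weights = AIAsset("credit-model-v3 weights", "model_weights",
                  Rating.HIGH, Rating.HIGH, Rating.MEDIUM)
print(weights.criticality)  # Rating.HIGH
```

A structured register like this also makes it straightforward to report, per CPS 234, which controls apply to which classification tier.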
Security Controls for AI Environments
CPS 234 requires controls that protect information assets commensurate with their criticality and sensitivity. For AI systems, this translates to:
Access management
- Role-based access to training data, model registries, and inference endpoints
- Separation of duties between model development, validation, and deployment
- Privileged access management for data science and ML engineering teams
- Audit logging of all access to models and training data
Data protection
- Encryption at rest and in transit for all training data and model artefacts
- Data masking and tokenisation for sensitive fields used in model training
- Secure handling of personally identifiable information (PII) in feature engineering
- Controls preventing training data from leaking into model outputs
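One common pattern for the masking and tokenisation controls above is deterministic, keyed tokenisation: the same customer always maps to the same token, so joins across feature tables still work, but the raw identifier never enters the training pipeline. A minimal sketch, assuming a secret key held in a vault or HSM (the hard-coded key here is a placeholder for illustration only):

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-vaulted-key"  # placeholder: fetch from a secrets manager in practice

def tokenise(value: str) -> str:
    """Keyed, deterministic tokenisation: the same input always yields the
    same token, without exposing the raw identifier."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def mask_record(record: dict, pii_fields: set[str]) -> dict:
    """Replace PII fields with tokens before the record enters feature engineering."""
    return {k: tokenise(v) if k in pii_fields else v for k, v in record.items()}

row = {"customer_id": "C-1042", "name": "Jane Citizen", "balance": 5200}
masked = mask_record(row, pii_fields={"customer_id", "name"})
# 'balance' survives untouched; identifiers are replaced with stable tokens
```

Because the tokenisation is keyed, rotating the key invalidates all tokens at once, which is useful when a contract with a third-party AI provider ends.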
Vulnerability management
- Regular security assessments of AI/ML platforms and infrastructure
- Monitoring for adversarial attacks, data poisoning, and model extraction attempts
- Patch management for ML frameworks, libraries, and dependencies
- Penetration testing that specifically covers AI attack vectors
CPS 234 Compliance Checklist for AI Systems
Use this checklist when deploying any AI system within an APRA-regulated entity:
- [ ] AI information assets identified and classified per the bank's classification scheme
- [ ] Security controls implemented commensurate with asset criticality
- [ ] Access controls enforce least privilege and separation of duties
- [ ] All access to training data, models, and outputs is logged and auditable
- [ ] Encryption applied to data at rest and in transit across the AI pipeline
- [ ] Vulnerability assessments cover AI-specific attack vectors
- [ ] Incident response plans updated to address AI-specific scenarios
- [ ] Board and senior management informed of material AI security risks
- [ ] Third-party AI providers assessed against CPS 234 requirements
- [ ] Testing program includes AI-specific security testing at least annually
CPS 230: Operational Risk and AI Resilience
CPS 230 (Operational Risk Management), which took effect on 1 July 2025, introduces strengthened requirements for managing operational risk, business continuity, and material service providers. AI systems fall squarely within its scope.
Operational Risk Framework for AI
Banks must maintain an operational risk management framework that identifies, assesses, manages, and monitors operational risks. For AI, this means:
Risk identification
- Catalogue all AI models in production, including their purpose, data inputs, and business criticality
- Assess model risk based on the potential impact of incorrect outputs
- Identify single points of failure in AI pipelines and infrastructure
- Map dependencies between AI systems and critical business processes
Risk assessment
- Evaluate the likelihood and impact of model failures, including gradual degradation and sudden failure
- Consider scenarios where AI outputs are systematically biased or incorrect
- Assess concentration risk where multiple business processes depend on the same AI platform or vendor
- Quantify the financial and reputational impact of AI-related incidents
Risk controls
- Implement model performance monitoring with defined thresholds and alert mechanisms
- Establish fallback procedures for when AI systems are unavailable or producing unreliable outputs
- Maintain human-in-the-loop processes for high-stakes decisions
- Document and regularly test business continuity plans for AI system outages
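The first two controls above, threshold-based monitoring and a defined fallback trigger, can be sketched as follows. The two-tier warn/fail structure and the specific AUC thresholds are assumptions chosen for illustration; real thresholds would come from the model's validation report.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MetricThreshold:
    name: str
    warn_below: float   # breach raises an alert for investigation
    fail_below: float   # breach activates the documented fallback process

def evaluate(metrics: dict[str, float],
             thresholds: list[MetricThreshold],
             alert: Callable[[str], None],
             activate_fallback: Callable[[str], None]) -> str:
    """Return 'ok', 'warn' or 'fallback' and fire the matching hooks."""
    status = "ok"
    for t in thresholds:
        value = metrics[t.name]
        if value < t.fail_below:
            activate_fallback(f"{t.name}={value:.3f} below {t.fail_below}")
            return "fallback"
        if value < t.warn_below:
            alert(f"{t.name}={value:.3f} below {t.warn_below}")
            status = "warn"
    return status

thresholds = [MetricThreshold("auc", warn_below=0.80, fail_below=0.70)]
status = evaluate({"auc": 0.76}, thresholds, alert=print, activate_fallback=print)
# status == "warn": alert fired, model stays in service pending review
```

The key design point for CPS 230 is that the fallback trigger is defined in advance and tested, not improvised during an incident.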
Business Continuity for AI Systems
CPS 230 requires entities to maintain credible business continuity plans. For AI systems, this means thinking carefully about what happens when things go wrong:
- Degraded mode operations: How does the business process function if the AI system is unavailable? Manual fallback procedures must be documented, tested, and understood by operational staff.
- Model rollback: Can you revert to a previous model version quickly if a new deployment introduces errors? Your ML pipeline needs versioning, automated rollback capabilities, and clear criteria for when rollback is triggered.
- Data pipeline resilience: If upstream data feeds fail, how does the AI system behave? Design for graceful degradation rather than silent failure.
- Recovery time objectives: Define and test RTOs for AI systems based on their business criticality. An AI model powering real-time fraud detection has very different recovery requirements than a batch reporting model.
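The rollback requirement above is easiest to reason about as a versioned registry where deployment and reversion are explicit operations. The sketch below is a deliberately minimal in-memory stand-in for a real registry such as MLflow's; the `models:/fraud/N` URIs are illustrative.

```python
class ModelRegistry:
    """Minimal sketch of versioned deployment with rollback. A production
    pipeline would record versions immutably in a real model registry."""

    def __init__(self) -> None:
        self._versions: list[str] = []   # artifact URIs, oldest first
        self._live_index: int | None = None

    def deploy(self, artifact_uri: str) -> None:
        self._versions.append(artifact_uri)
        self._live_index = len(self._versions) - 1

    def rollback(self) -> str:
        """Revert to the previous version; raise if nothing to fall back to."""
        if self._live_index is None or self._live_index == 0:
            raise RuntimeError("no earlier version available")
        self._live_index -= 1
        return self._versions[self._live_index]

    @property
    def live(self) -> str:
        return self._versions[self._live_index]

reg = ModelRegistry()
reg.deploy("models:/fraud/1")
reg.deploy("models:/fraud/2")   # new deployment starts misbehaving
reg.rollback()
print(reg.live)                 # models:/fraud/1
```

The point of the sketch is the invariant: rollback is a pre-tested, single-step operation with clear trigger criteria, not a redeployment scramble.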
CPS 220: Risk Management and AI Governance
CPS 220 requires regulated entities to have a risk management framework that is appropriate to the size, business mix, and complexity of the institution. AI adoption increases complexity, and the risk management framework must evolve accordingly.
Board and Senior Management Responsibilities
APRA expects boards and senior management to understand and oversee the risks associated with AI:
- Risk appetite: The board should define the institution's appetite for AI-related risks, including model risk, data risk, and concentration risk on AI vendors.
- Reporting: Regular reporting to the board on AI model performance, incidents, and emerging risks. This should not be buried in technology reports but surfaced as a distinct risk category.
- Competency: Boards need sufficient understanding of AI to provide effective challenge. This may require AI literacy programmes for directors or the appointment of advisors with relevant expertise.
- Accountability: Clear assignment of accountability for AI risk, typically through the Chief Risk Officer or a dedicated AI risk function, with appropriate escalation paths.
Model Risk Management Framework
While APRA has not issued a standalone model risk management standard (unlike the Federal Reserve's SR 11-7), the principles embedded in CPS 220 and APRA's broader supervisory guidance make clear that model risk must be managed rigorously. A robust model risk management framework should include:
Model inventory and tiering
- Maintain a comprehensive inventory of all AI/ML models, including their purpose, owner, data sources, and deployment status
- Tier models based on materiality: a credit decisioning model that affects millions of dollars in lending requires more rigorous oversight than an internal chatbot
- Review the inventory regularly, at least quarterly, to capture new models and retire deprecated ones
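A tiering rule like the one described above can be encoded so it is applied consistently across the inventory. The materiality thresholds below are assumptions invented for the example, not APRA guidance; each bank would calibrate its own.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    owner: str
    purpose: str
    annual_exposure_aud: float   # financial materiality of decisions the model drives
    customer_facing: bool

def tier(record: ModelRecord) -> int:
    """Illustrative tiering rule; thresholds are assumptions for the example.
    Tier 1 attracts the most rigorous oversight."""
    if record.annual_exposure_aud >= 100_000_000:
        return 1
    if record.annual_exposure_aud >= 1_000_000 or record.customer_facing:
        return 2
    return 3

credit = ModelRecord("credit-decisioning", "Retail Risk",
                     "home-loan approvals", 250_000_000, True)
chatbot = ModelRecord("hr-chatbot", "People Tech",
                      "internal policy Q&A", 0, False)
assert tier(credit) == 1 and tier(chatbot) == 3
```

Codifying the rule makes tier assignments auditable and repeatable at each quarterly inventory review.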
Model development standards
- Documented development methodology covering data selection, feature engineering, model selection, training, and validation
- Mandatory bias testing across protected attributes before deployment
- Explainability requirements proportionate to model risk tier
- Code review and version control for all model code and configuration
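The mandatory bias testing above needs a concrete metric. One of the simplest is the demographic parity gap: the spread in approval rates across groups defined by a protected attribute. It is one of several fairness metrics, and the 0.05 tolerance below is an assumed illustration, not a regulatory threshold.

```python
def approval_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    """Largest difference in approval rate between any two groups."""
    rates = [approval_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# 1 = approved, 0 = declined, grouped by a protected attribute
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 1],   # 50% approved
}
gap = demographic_parity_gap(outcomes)
assert gap == 0.25
release_blocked = gap > 0.05   # True: flag for review before deployment
```

Whichever metric is chosen, the test belongs in the deployment gate so a breach blocks release rather than merely generating a report.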
Independent validation
- Models above a defined risk threshold must be independently validated before deployment
- Validation should cover conceptual soundness, data quality, performance benchmarking, and stress testing
- Validators must be independent of the development team, with appropriate expertise and authority to block deployment
Ongoing monitoring
- Continuous monitoring of model performance against defined metrics
- Drift detection for both data drift (changes in input distributions) and concept drift (changes in the relationship between inputs and outputs)
- Regular back-testing and champion-challenger analysis
- Defined thresholds that trigger re-validation or model retirement
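For the data-drift half of the monitoring described above, a widely used statistic is the Population Stability Index (PSI), which compares the distribution of a feature or score at training time against the live distribution. The sketch below uses the common industry rule of thumb (below 0.1 stable, 0.1 to 0.25 investigate, above 0.25 significant drift); those cut-offs are convention, not a regulatory threshold.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            # number of edges <= x gives the bin index (0..bins-1)
            counts[sum(e <= x for e in edges)] += 1
        n = len(sample)
        # small floor avoids log(0) for empty bins
        return [max(c / n, 1e-4) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [i / 100 for i in range(100)]           # scores seen at training time
drifted = [min(1.0, x + 0.3) for x in baseline]    # live scores shifted upward
assert psi(baseline, baseline) < 0.01
assert psi(baseline, drifted) > 0.25
```

PSI above the defined threshold would then trigger the re-validation or retirement pathway described above, closing the loop between monitoring and governance.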
Third-Party Risk: Governing AI Vendors and Platforms
Most banks will not build every AI capability in-house. Cloud AI services, pre-trained models, and specialist AI vendors are part of the landscape. CPS 234 and CPS 230 both impose requirements on how third-party risk is managed.
Due Diligence for AI Vendors
When engaging AI vendors, banks should assess:
- Data handling practices: Where is data stored and processed? Does the vendor use customer data to train their own models? What happens to data when the contract ends?
- Model transparency: Can the vendor explain how their models work? Can they provide documentation on training data, model architecture, and known limitations?
- Security posture: Does the vendor meet the bank's CPS 234 requirements? Have they been independently assessed (e.g., SOC 2 Type II, ISO 27001)?
- Operational resilience: What are the vendor's SLAs, and do they align with the bank's business continuity requirements under CPS 230?
- Concentration risk: How many critical processes depend on this vendor? What happens if the vendor fails or is acquired?
Data Platform Considerations
The choice of data platform has significant implications for APRA compliance. Platforms like Databricks offer several capabilities that directly support prudential requirements:
- Unity Catalog provides centralised governance across all data and AI assets, supporting the classification and access control requirements of CPS 234
- MLflow enables model versioning, experiment tracking, and model registry capabilities that underpin model risk management
- Delta Lake provides ACID transactions, audit history, and data lineage, supporting the data integrity and auditability requirements across CPS 234 and CPS 230
- Attribute-based access control enables fine-grained permissions that enforce the least-privilege principle required by CPS 234
- Audit logging captures a comprehensive trail of data access, model training, and deployment activities for regulatory reporting
When evaluating data platforms for AI in a regulated banking environment, the ability to demonstrate governance, lineage, and auditability is not optional. It is a regulatory requirement.
Third-Party AI Risk Checklist
- [ ] Vendor due diligence completed covering data handling, security, and resilience
- [ ] Contractual provisions address data ownership, model transparency, and audit rights
- [ ] Vendor's security controls assessed against CPS 234 requirements
- [ ] Material service provider obligations under CPS 230 addressed in the contract
- [ ] Exit strategy documented, including data retrieval and model portability
- [ ] Concentration risk assessed across all AI vendor relationships
- [ ] Ongoing monitoring programme established for vendor performance and risk
- [ ] Incident notification requirements defined and agreed with the vendor
Practical Implementation Framework
Bringing all of this together, here is a phased approach to implementing AI within APRA's prudential framework.
Phase 1: Foundation (Months 1 to 3)
Governance setup
- Establish an AI governance committee with representation from risk, technology, legal, and business
- Define AI risk appetite and policy, aligned with the board-approved risk management framework
- Create an AI model inventory template and begin cataloguing existing models
Platform preparation
- Assess and select a data platform that supports regulatory requirements (governance, lineage, access control, audit logging)
- Implement role-based access controls and data classification for AI workloads
- Establish secure development environments for data science teams
Risk assessment
- Conduct an AI-specific risk assessment covering information security, operational risk, and model risk
- Identify gaps against CPS 234, CPS 230, and CPS 220 requirements
- Develop a remediation plan with clear owners and timelines
Phase 2: Build and Validate (Months 3 to 6)
Model risk management
- Implement model development standards and validation processes
- Deploy model monitoring infrastructure covering performance, drift, and bias
- Establish independent model validation capability, either internal or through a qualified third party
Security hardening
- Implement AI-specific security controls (adversarial testing, data poisoning detection, access logging)
- Update incident response procedures for AI-specific scenarios
- Conduct penetration testing covering AI attack vectors
Third-party governance
- Complete due diligence for all AI vendors against CPS 234 and CPS 230 requirements
- Update contracts to include AI-specific provisions (data handling, model transparency, audit rights)
- Establish ongoing monitoring for material AI service providers
Phase 3: Scale and Mature (Months 6 to 12)
Operational integration
- Integrate AI risk reporting into the enterprise risk management framework
- Establish regular board reporting on AI model performance and risk
- Embed AI governance into existing change management and approval processes
Continuous improvement
- Conduct regular reviews of the AI governance framework against evolving APRA guidance
- Participate in industry forums and APRA consultations on AI governance
- Benchmark against international standards (e.g., NIST AI Risk Management Framework, EU AI Act principles)
Culture and capability
- Deliver AI risk awareness training to first, second, and third lines of defence
- Build AI literacy at board level through targeted education programmes
- Establish communities of practice across data science, risk, and compliance teams
Key Takeaways for Banking Leaders
APRA's prudential framework provides a solid foundation for governing AI, but it requires thoughtful interpretation and deliberate implementation. The banks that get this right will be the ones that treat AI governance not as a compliance burden, but as a competitive advantage.
Here is what matters most:
1. Start with your existing framework. CPS 234, CPS 230, and CPS 220 already provide the scaffolding. Extend and adapt rather than building from scratch.
2. Classify and tier your AI models. Not all AI carries the same risk. Focus your most rigorous controls on models that drive material business decisions.
3. Invest in your data platform. The right platform makes governance, lineage, and auditability achievable rather than aspirational. Platforms like Databricks that offer integrated governance through Unity Catalog can significantly reduce the compliance burden.
4. Govern your vendors as rigorously as your internal teams. Third-party AI introduces risks that must be managed with the same discipline as internal model development.
5. Build capability across the organisation. AI governance is not just a technology or risk function responsibility. It requires literacy and engagement from the board through to front-line operations.
How Get AI Ready Can Help
Get AI Ready works with Australian banks and financial institutions to design and implement AI governance frameworks that satisfy APRA's prudential requirements while enabling genuine innovation.
As a Databricks Delivery Partner, we bring deep expertise in building compliant data platforms with integrated governance, lineage, and security controls. Our team includes former advisors to major Australian financial institutions who understand the intersection of technology, risk, and regulation.
Whether you are deploying your first production AI model or scaling an existing programme, we can help you navigate the prudential landscape with confidence.
Contact us to discuss your AI governance requirements, or take our AI Readiness Diagnostic to understand where your organisation stands today.