Your Industry Body asks you “Are you using AI?”. Your response…
- Ian Youngman

The intersection of artificial intelligence and professional standards isn't just coming – it's here, and it's complex.
If you haven't been asked about your AI usage yet by your industry body, you will be soon. The accounting profession is grappling with how to harness AI's transformative potential while maintaining the ethical standards and compliance obligations that define our profession.
Over the past couple of years, our three major accounting bodies – CA ANZ, CPA Australia, and the IPA – have been increasingly vocal about AI's implications for practice. Their message is clear: AI presents enormous opportunities, but with them come significant professional and legal obligations. Below, we provide a summary table, followed by further detail, of each professional body's public comments on AI use by firms in public practice.
The Professional Standards Landscape
Summary of Industry Body Warnings and Guidance
| Body | APES 110 (Code of Ethics) | APES 320 (Quality Control) | Privacy Act concerns | AI opportunities noted |
| --- | --- | --- | --- | --- |
| CA ANZ | Members must ensure AI outputs meet professional competence requirements and maintain professional scepticism | Quality control procedures must address AI tool validation and human oversight requirements | Client data protection obligations when using cloud-based AI services; consent requirements | Enhanced efficiency in routine tasks, improved analysis capabilities, better client insights |
| CPA Australia | Due care obligations require understanding AI limitations; members remain responsible for all work produced | Documentation requirements for AI-assisted work; need for appropriate supervision and review | Data sovereignty issues with offshore AI providers; breach notification obligations | Automation of compliance tasks, enhanced data analytics, improved risk assessment |
| IPA | Emphasis on maintaining professional judgement and not outsourcing decision-making to AI systems | Smaller practices need scalable quality control measures for AI implementation | Particular focus on small practice vulnerabilities in data security | Cost-effective access to advanced analytical tools for small practices |
The APES 110 Challenge: Professional Competence in the AI Age
APES 110's fundamental principles don't change because we're using AI – if anything, they become more critical. The Code's requirements around professional competence and due care mean you need to:
Understand your tools: You can't maintain professional competence while using AI systems you don't understand. This doesn't mean becoming a data scientist, but it does mean understanding your AI tool's limitations, training data, and potential biases.
Maintain professional scepticism: AI outputs must be subject to the same professional judgement you'd apply to any other source of information. The 'black box' nature of many AI systems makes this challenging but not optional.
Take ownership: You remain responsible for all work product, regardless of how it was generated. AI is a tool, not a decision-maker.
APES 320: Quality Control in an AI World
Quality control becomes significantly more complex when AI enters your workflow. APES 320's requirements for appropriate supervision, review, and documentation don't disappear – they evolve:
Documentation requirements: How do you appropriately document AI-assisted work? What audit trail is required? These are live questions that practices are solving in real time.
Review procedures: Traditional file reviews may need updating to address AI-generated content and the different risk profiles it creates.
Training and competence: Staff competence requirements now include understanding the AI tools they're using and their limitations.
Privacy Act Implications: The Data Challenge
Perhaps the most immediate practical challenge is Privacy Act compliance. When you upload client data to AI platforms, several obligations are triggered:
Consent and disclosure: Many AI platforms involve data processing by third parties, potentially offshore. This requires appropriate client consent and disclosure.
Data security: You remain responsible for client data security, even when using third-party AI services. Due diligence on AI providers' security measures becomes essential.
Breach notification: If an AI platform suffers a data breach involving your client data, you may have notification obligations under both the Privacy Act and professional standards.
Your Response Strategy
When your industry body asks about your AI usage, your response should demonstrate:
- Awareness of professional obligations and how they apply to AI usage
- Processes for ensuring AI-assisted work meets professional standards
- Controls for data protection and privacy compliance
- Understanding of your AI tools' capabilities and limitations
- Documentation of your AI governance approach
Next Steps
If you would like to trial a safe, secure and effective AI platform for Australian accountants in public practice that has been designed with APES 110 and APES 320 in mind, please contact us at info@elfworks.ai.
Elfworks is the most accurate AI Tax platform in Australia, designed to transform the way accounting firms work. By harnessing multiple enterprise-grade AI models and over 70,000 Australian tax documents, including legislation, case law, ATO rulings, and private guidance, Elfworks delivers fast, reliable, and bias-free advice. The platform streamlines research, automates routine tasks, and builds firm-wide knowledge through hyper-personalised CPD, freeing accountants to focus on high-value client work while ensuring compliance and quality at every step.
Trusted by over 200 Australian accounting firms, Elfworks turns complex workflows into efficient, accurate, and profitable processes.
Please click here for Elfworks pricing information.