
AI Dangers for Accountants #2 – Shadow AI and APES 320

Each year, the job of an Australian accountant in public practice becomes harder. Client demands and client messes, endless compliance, legislative uncertainty and strange ATO interpretations are just a few of the challenges facing you and your direct reports (if you can find staff, that is). So, when you discover a use of AI that speeds up a single or recurring task, it is very natural to adopt it as part of your way of operating.


This scenario plays out every day on countless desks across Australia, remote and central. Yet it likely meets the definition of Shadow AI – an activity that increases the risk of breaching professional and ethical obligations for members of CPA Australia, CA ANZ and the IPA.

Shadow AI is likely happening in your firm as you read this. This is not a good-news post ☹.

 

What is Shadow AI?

Shadow AI refers to the unauthorised and/or unmonitored use of AI tools, systems or applications within an organisation, without the knowledge, approval or oversight of management. It typically arises when individuals find a use for AI that helps them work better, and it tends to creep into an organisation over time. Even if your firm formalises an AI solution (such as Copilot), shadow AI will creep in when staff go outside the formalised solution because of its deficiencies.

For example, Copilot is powered by earlier versions of OpenAI's GPT models (Microsoft being a significant shareholder in OpenAI, the developer of ChatGPT), and while Copilot is a great tool for navigating and managing information in the Microsoft realm, it is simply not capable of handling complex tax questions. If you don't believe me, give it a company's revenue figures over a couple of years and ask for the company's Base Rate Entity status and maximum dividend franking rate for those years and the next. You may then want to ask it, "How many 'r's are in the word strawberry?" 😊
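To see why that Base Rate Entity question trips up general-purpose AI, here is a simplified Python sketch of the reasoning involved. All names are mine, and the logic is deliberately incomplete: the real test also requires that no more than 80% of assessable income be base rate entity passive income, and "aggregated turnover" includes connected and affiliated entities – both omitted here.

```python
# Simplified sketch of the Base Rate Entity (BRE) rate logic the post
# alludes to. Omits the 80% passive-income test and turnover aggregation.
BRE_TURNOVER_CAP = 50_000_000  # aggregated turnover threshold, 2018-19 onward

# Legislated BRE company tax rate by income year (year ending 30 June)
BRE_RATE = {2019: 0.275, 2020: 0.275, 2021: 0.26}

def company_tax_rate(year, aggregated_turnover):
    """Rate for the income year, assuming the passive-income test is met.
    Only valid for income years 2018-19 onward."""
    if aggregated_turnover < BRE_TURNOVER_CAP:
        return BRE_RATE.get(year, 0.25)  # 25% from 2021-22 onward
    return 0.30

def max_franking_rate(year, prior_year_turnover):
    """Maximum franking uses the 'corporate tax rate for imputation
    purposes': the rate that would apply THIS year if turnover equalled
    the PRIOR year's turnover - the cross-year twist that trips up AI."""
    return company_tax_rate(year, prior_year_turnover)
```

For instance, a company with $10m turnover each year pays 26% tax in 2020-21 but can only frank 2021-22 dividends at 25% – exactly the kind of prior-year/current-year interaction a chatbot tends to blur.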


The point is, formalised solution or not, your staff have likely already gone elsewhere for enhanced AI solutions, and they will continue to do so as new and exciting AI tools become commonplace.

 

Why Shadow AI risks a breach of APES 320

Since 1 July 2023, when APES 320 was reissued with a shift in focus from quality control to quality management, firms providing non-assurance services have been required to establish a comprehensive System of Quality Management (SOQM) that identifies and addresses quality risks associated with service delivery. The SOQM must help prevent lapses in quality – a real challenge when staff are actively seeking out time-saving solutions from the growing pool of AI offerings.



To dive into the detail of the tension between shadow AI in Australian accounting firms and APES 320, here are some of the key risks:

  • Undocumented processes: APES 320 requires that all workflows contributing to the quality of services be properly documented and monitored. Shadow AI tools, being outside official workflows, are typically undocumented, which contravenes the requirement for clear policies and procedures.

  • Risk of quality failures: The standard requires firms to identify and address risks to service quality. Shadow AI usage can introduce risks such as data inaccuracies, confidentiality breaches or ethical concerns that the firm is unaware of, thereby failing to meet its quality management obligations. For example, AI models are prone to hallucinate (i.e., make things up) when asked complex queries. Is the output being checked for accuracy? Are staff even aware of how wrong AI can be?

  • Monitoring and control deficiencies: APES 320 requires ongoing monitoring of all systems that contribute to service delivery quality. Shadow AI is inherently excluded from such monitoring, creating a gap in compliance.

  • Non-compliance with ethical standards: APES 320 aligns with APES 110 (Code of Ethics), which emphasises transparency, accountability and integrity. Undisclosed or unauthorised use of AI undermines these principles and may compromise the firm's ethical standing.

 

How to use AI without breaching APES 320

Ideally, AI usage within a firm should be formalised as part of the System of Quality Management (SOQM), and staff should be educated on the formal solution and the risks of stepping outside this framework. Yet the question must then be asked: are we locking our staff into a single AI solution? Will they feel restricted by it and grow to resent it?


At Elfworks, we’ve built an AI platform for accountants in public practice that is constantly evolving with customer feedback and the strategic deployment of the best AI models on the market: Grok, ChatGPT, Gemini, Claude and Llama (to name a few). We are not locked into an older version of a prior frontrunner. Our custom-built platform is designed with APES 320 compliance in mind: it tags every AI usage to the client and allows firm leaders to pull an APES 320 compliance report at the click of a button.


The report includes bot usage, AI model research usage and AI draft advice usage. It even tracks the use of our custom-built ATO Private Ruling database, which attaches a relevance score to search output.

If you would like a full, free, no-obligation trial of an AI platform designed from the ground up with APES 320 compliance in mind, please contact us at info@elfworks.ai or phone me on 0418 902 440.


Thank you for reading. In the next blog, I’ll take a look at AI Dangers for Accountants #3 – Data Privacy.


Please click here for Elfworks pricing information.
