
AI Dangers for Accountants #3: Privacy Breaches

Updated: Jun 19

Sharing client data with AI models without appropriate consideration and safeguards risks breaching privacy obligations under the Privacy Act 1988 (Cth) and the Australian Privacy Principles (APPs), the professional obligations of member bodies (CA, CPA and IPA), as well as the client confidentiality requirements set out in the Tax Agent Services Act 2009.


Non-compliance carries significant penalties (up to $50 million for corporations), reputational damage, and loss of client trust. Thankfully, there is a way to use LLMs as part of your System of Quality Management that upholds your client data and privacy obligations.



Use of free Large Language Models and privacy risks

Free versions of Large Language Models (LLMs), such as the basic ChatGPT tiers, pose significant privacy risks because of how they operate:

  • Cloud-based processing: Queries and embedded client data are sent to the provider's servers, often located overseas.

  • Data retention and training: The terms of service for free LLMs commonly allow providers to retain user inputs and outputs, and often to use them to train future models. This means client data submitted in prompts can become part of the model's training data.

  • Limited security: Free versions lack enterprise-grade security features like robust encryption, access controls, or audit trails.

  • Third-party involvement: Data may be shared with subcontractors or cloud partners, further complicating compliance.


When an accountant inputs client data into a free LLM query (e.g., "Summarise Client X's tax return: [insert data]"), that data becomes subject to these inherent risks, potentially violating privacy obligations.


Mitigation strategies

Accountants must avoid using free LLMs for any sensitive client data. Instead, consider:

  • Utilising enterprise-grade LLM subscriptions that offer data isolation, no user data training, and compliance assurances.

  • Anonymising or de-identifying data before inputting it into any external tool (a minimal sketch of this step follows the list).

  • Conducting thorough due diligence on any LLM provider's privacy policy, security measures, and terms of service.

  • Considering on-premises or private cloud LLM solutions to retain control over data location.

  • Developing internal AI governance policies prohibiting the use of free tools for client data. This can be done as part of your System of Quality Management.
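
To make the de-identification point concrete, here is a minimal, hypothetical Python sketch of what a pre-submission scrubbing step might look like. The patterns and the deidentify helper are illustrative assumptions rather than a production-ready tool: a real implementation would need a far broader ruleset and human review, since regular expressions cannot reliably catch free-text names or addresses.

```python
import re

# Illustrative, hypothetical patterns only (not exhaustive): Australian
# Business Numbers (11 digits), Tax File Numbers (9 digits), emails and
# Australian phone numbers. ABN is listed first so the shorter TFN
# pattern cannot match inside an ABN (dicts preserve insertion order).
PATTERNS = {
    "ABN": re.compile(r"\b\d{2}[ -]?\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
    "TFN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),
}

def deidentify(text: str) -> str:
    """Replace likely identifiers with placeholder tokens before the
    text leaves your environment. Names and addresses still require
    manual review or a dedicated NER tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = deidentify(
    "Summarise the return for the client with TFN 123 456 789, "
    "contact jane@example.com, mobile 0412 345 678."
)
print(prompt)
# Summarise the return for the client with TFN [TFN REDACTED],
# contact [EMAIL REDACTED], mobile [PHONE REDACTED].
```

A scrubbing step like this sits naturally inside an internal AI governance policy: staff run client text through the approved tool before any prompt is composed, and anything the tool cannot confidently redact is withheld pending review.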


Utilising enterprise-grade Large Language Models (LLMs) significantly reduces these risks by safeguarding your clients' data. By integrating a platform with security and privacy at its core into your System of Quality Management, you can maintain compliance with APES 320 and relevant privacy legislation while effectively addressing privacy and data-related issues.


For a free trial of a platform powered by enterprise-grade LLM subscriptions—featuring data isolation, no user data training, and robust compliance assurances—visit www.elfworks.ai or contact us at info@elfworks.ai for more details.


