
AI Adoption – Working with a lying sycophant

Updated: Oct 15

It sounds harsh, but bear with me. A person who tells you whatever you want to hear might be called a “sycophant” or a “yes man”. A person who makes things up is typically known as a “liar” or a “fantasist”. Every major AI model on the market exhibits the traits of a lying sycophant. That’s the bad news. The good news is they have tremendous potential, they respond well to feedback and, when instructed well, can deliver great outcomes.



If you are a leader in an Australian accounting firm and are currently baulking at incorporating AI into your workflow practices, you’re not alone. We are now seeing weekly reports of AI failures across the accounting and legal professions, all stemming from the tendency of AI models to hallucinate. Here are some of the high(low)lights:

  1. According to Damien Charlotin’s AI Hallucination Database (www.damiencharlotin.com), hallucinations crept into 28 Australian court case submissions between January and September 2025, ranging from incorrect citations to complete fabrication of legal precedents. One of these errors led to a personal costs order against a lawyer;

  2. The hallucination-riddled Deloitte Report for the Federal Government continues to make headlines, with Deloitte admitting to AI use, refunding part of the fee for the report and suffering reputational damage;

  3. The UK Financial Reporting Council warned in June 2025 of AI’s impact on Big 4 audit quality, in particular the risk of hallucinations finding their way into financial reports.

 

Combine AI optimism with deep skepticism

After close to two years testing AI models, we know their output can only be trusted once it is encased in significant guardrails. Here is a summary of the root problems with AI models that make those guardrails necessary:

  • They are not like us: Despite the inference that AI is artificial ‘human’ intelligence, these models do not think like us. They learn patterns from their training and internet data and then use those patterns to predict the next word of their output (the toy sketch after this list shows the principle).

  • They are too eager to please: AI models tend to make things up when they cannot find something directly on point; you rarely see them say “I don’t know”. They fabricate because they are trying to please the user.

  • They do not self-check: On its own, an AI model is not attached to a database of information that can validate its output; such a database needs to be built and the model directed towards it. Models can access the web, but the web is not always an accurate resource either.
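
To make the pattern-prediction point concrete, here is a toy sketch in Python. Production models are nothing like this internally (they use neural networks over billions of parameters and operate on tokens rather than whole words), but the principle is the same: the next word is chosen because it is statistically likely, not because it is true.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then always emit the most frequent follower. The output is
# a plausible continuation, not a checked fact.
corpus = "the tax is due the tax is payable the return is due".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most common word seen after `word` in the corpus."""
    if word not in followers:
        return "<unknown>"
    return followers[word].most_common(1)[0][0]

print(predict_next("tax"))  # "is" -- a learned pattern
print(predict_next("is"))   # "due" -- chosen by frequency, not truth
```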


In response to the above problems, we have developed the following guardrails.

Existing Features

  1. Stepped process: Before the AI models attempt to answer a query, we ask them to tell the platform what they think is relevant to answering it. The Elfworks platform then checks this list against our own stored databases of legislation, case law and ATO releases to confirm that each item on the planned research list actually exists. Any item that doesn’t exist is wiped from the research list. Only then do we let the models attempt an answer using their vetted list of documents (first sketch below).

  2. Multi-model validation: We have four AI models at work on the platform: ChatGPT, Grok, Gemini and Claude. They each have a go at answering the question with the hallucination-swept list of documents, then read and challenge each other’s answers before compiling a ‘Consensus’ view and any viable ‘Alternative’ views (second sketch below).

  3. ATO Legal Database Validation: We have a separate database storing 29,000+ Private Rulings released since 2008. The platform goes back to the original query, finds the five most relevant Private Rulings on point and compares these with the Consensus view (third sketch below).
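
First, a minimal sketch of the stepped-process check, assuming a simple set-based store. The real Elfworks databases and their interfaces are not public; known_authorities and the citation strings below are hypothetical stand-ins.

```python
# Hypothetical store of authorities known to exist. In production this
# would be the stored legislation, case law and ATO release databases.
known_authorities = {
    "ITAA 1997 s 8-1",
    "TR 2021/1",
}

def sweep_research_list(planned: list[str]) -> list[str]:
    """Keep only items that exist in the stored database; anything the
    model invented is dropped before it can be used in an answer."""
    return [item for item in planned if item in known_authorities]

model_plan = ["ITAA 1997 s 8-1", "TR 2099/9", "TR 2021/1"]  # middle item fabricated
vetted = sweep_research_list(model_plan)
print(vetted)  # ['ITAA 1997 s 8-1', 'TR 2021/1'] -- the fabrication is gone
```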
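
Second, a sketch of the multi-model round-trip. ask_model is a placeholder standing in for real vendor API calls to ChatGPT, Grok, Gemini and Claude; the actual Elfworks orchestration and prompts are its own and not shown here.

```python
MODELS = ["chatgpt", "grok", "gemini", "claude"]

def ask_model(model: str, prompt: str) -> str:
    # Placeholder: substitute the vendor's real API call here.
    return f"[{model} response to: {prompt[:40]}...]"

def consensus_answer(question: str, vetted_docs: list[str]) -> dict:
    context = "\n".join(vetted_docs)
    # Round 1: each model drafts an answer from the swept documents only.
    drafts = {m: ask_model(m, f"Answer using only:\n{context}\n\nQ: {question}")
              for m in MODELS}
    # Round 2: each model reads and challenges the other models' drafts.
    critiques = {m: ask_model(m, "Critique these answers:\n" +
                              "\n".join(d for peer, d in drafts.items() if peer != m))
                 for m in MODELS}
    # Round 3: compile a 'Consensus' view plus any viable 'Alternative' views.
    summary = ask_model(MODELS[0],
                        "Compile a Consensus view and any Alternative views:\n"
                        + "\n".join(drafts.values()) + "\n"
                        + "\n".join(critiques.values()))
    return {"drafts": drafts, "critiques": critiques, "consensus": summary}
```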
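
Third, a sketch of the Private Rulings comparison step. The production system presumably matches against the full 29,000+ ruling store with proper text embeddings; this toy version uses plain bag-of-words cosine similarity just to show the shape of the step, and the rulings entries are invented for illustration.

```python
from collections import Counter
import math

# Invented mini-store standing in for the 29,000+ Private Rulings.
rulings = {
    "PBR A": "deductibility of home office running expenses",
    "PBR B": "capital gains on sale of main residence",
    "PBR C": "home office occupancy expenses for employees",
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def top_rulings(query: str, k: int = 5) -> list[str]:
    """Return the k rulings most similar to the query text."""
    q = Counter(query.lower().split())
    scored = sorted(rulings,
                    key=lambda r: cosine(q, Counter(rulings[r].lower().split())),
                    reverse=True)
    return scored[:k]

print(top_rulings("are home office expenses deductible"))
# ['PBR A', 'PBR C', 'PBR B'] -- the home-office rulings rank first
```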

Coming Features

  1. Live citation links in Elfworks output: This month (October 2025), we will introduce live links within the Elfworks output, making it much easier to check the output references against real legislation, case law and ATO releases.

 

Contact us for a free trial

If you would like to trial a platform designed to help Australian accountants in public practice with research, advice drafting and a range of productivity tools, all with built-in databases and validation steps to weed out hallucinations, please contact us at info@elfworks.ai.

 
 
 
