AI model bias – The danger and the treatment.
- Ian Youngman
- Oct 30
- 3 min read
“Human beings are poor examiners, subject to superstition, bias, prejudice, and a profound tendency to see what they want to see rather than what is really there.” - M. Scott Peck
The quote above applies as much to AI models as it does to their human creators.
Knowledge of human bias (and human error) is so well ingrained that our professional disciplines have inbuilt checks and balances against bias and error; think of the scientific peer review process and the multi-tier review process in accounting and law firms. AI models are created by humans and trained on information created by humans, and are therefore prone to bias. This raises the question: if AI models are incorporated into professional services workflows, how can we combat this bias risk?

What is bias within AI models?
Bias within AI models refers to systematic, unfair or skewed outputs that favour certain outcomes, groups or perspectives over others, often reflecting imbalances in the data, design or deployment of the AI model. If we consider the potential bias risks within Australian tax questions, here are two clear areas that jump out:
Recency bias risk: Most major AI models have knowledge cut-offs; they are trained on data up to a certain point in time. This explains why a new release of a model has that relevant, up-to-date feel that gradually fades until the next release of the same model.
Source dominance bias: The most prevalent and readily available data become the ‘truth’ to the AI model. Consider the volume of data published by the ATO on various topics, which emphasises compliance and the ATO view of the world. Compare this to the private, more nuanced positions professionals arrive at by applying the rules to specific client scenarios.
Both of these bias risks can be mitigated by applying the multi-tier review process.
“Are you sure it’s not deductible? Can’t you argue…”
Perhaps the most concerning bias risk is the risk of ‘agreeable mirroring’, or sycophancy. If a professional works with the same AI model on the same question over multiple interactions, there is a risk the AI model will eventually agree with them. These AI models, by their nature, want to please you, the user. Think about the recent AI fail by Deloitte: the AI model was so eager to please the Deloitte user that it made up a fictional book by a real professor, as well as a very relevant but fake quote from an actual Federal Court case. Dangerous stuff.
Multi-model validation – Part of the bias treatment
The major protection against the risk of AI model bias is your firm’s multi-tier review process working on the model output and the human additions to it. The next best protection is multi-model validation: four different models (each with their own inherent biases) all working independently, and then together, on a problem to arrive at a consensus position plus any viable alternative positions.
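To make that idea concrete, here is a minimal sketch in Python of what a multi-model validation step might look like. The model names and the query_model helper are placeholder assumptions for illustration only, not Elfworks’ actual implementation:

```python
from collections import Counter

# Placeholder for calling a hosted model; in practice this would wrap each
# vendor's own client library. The signature is an assumption for this sketch.
def query_model(model: str, question: str) -> str:
    raise NotImplementedError("wire up the relevant vendor SDK here")

# Four independent models, each with its own training data and biases.
MODELS = ["model-a", "model-b", "model-c", "model-d"]

def multi_model_validate(question: str) -> dict:
    # Step 1: each model answers independently, so no model can anchor on
    # (or agreeably mirror) another model's position.
    answers = {m: query_model(m, question) for m in MODELS}

    # Step 2: tally the distinct positions. A real system would first
    # normalise answers (e.g. map each to "deductible" / "not deductible").
    tally = Counter(answers.values())
    consensus, votes = tally.most_common(1)[0]

    return {
        "consensus": consensus,
        "support": f"{votes}/{len(MODELS)} models",
        # Minority positions are kept as viable alternatives for human review.
        "alternatives": [a for a in tally if a != consensus],
    }
```

The key design point is that the models answer independently before any consensus step is taken; that independence is what counters the ‘agreeable mirroring’ risk described above.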
Multi-model validation becomes even more important in small firms and for sole traders who have limited or no capacity within the business for a second human review.
Next Steps
The next step is to experience the benefits of an AI-enhanced application with bias-mitigating multi-model validation, created to help Australian accountants in public practice with research, advice drafting, client email responses and a wide array of productivity tools. If you would like a free trial of the Elfworks platform, please contact them at info@elfworks.ai.
Please click here for Elfworks pricing information.