Using AI in Recruitment: What can go wrong and how to reduce risks

Across Australia, jobseekers are increasingly encountering AI as part of the recruitment process, from AI-driven interviews to automated screening tools and algorithmic assessments. What was once a niche, specialised use has rapidly become mainstream—particularly among large employers seeking to process high-volume applications.

But as adoption accelerates, so too do concerns about fairness, transparency, and the human experience of recruitment. Recent media reporting highlights the growing unease among candidates and experts alike, with several high-profile cases underscoring the risks.  However, there is also research to support the positive outcomes of using AI, particularly in reducing conscious and unconscious bias in recruiting.

This blog unpacks the potential harms of AI-enabled recruitment and offers practical recommendations for organisations looking to deploy these tools responsibly, in a way that encourages and supports applicants rather than putting off your potential new hires.

When AI Rejects You

A recent story involving a young Australian jobseeker, Jamie, who was rejected by AI for an entry-level role at Woolworths has been viewed on TikTok over 100,000 times.  The online application mentioned “AI may be involved” in the hiring process but provided little more detail on exactly how AI would be used. After completing two AI-based video interviews, Jamie claims he was rejected without ever speaking to a human. His experience echoes a broader trend: candidates feeling dismissed, dehumanised, or confused by automated hiring systems.  The use of AI in these cases can be like “a black box deciding your future.”

This story is not unique. Reports indicate that other major Australian organisations (which certainly have the revenue to support a well-resourced HR function) are increasingly relying on AI-driven interviews and assessments to manage large applicant pools.

While efficiency is the goal, the impact on job applicants – particularly young or otherwise vulnerable applicants for low-level positions – is becoming harder to ignore.

What are the harms?

Algorithmic Discrimination

Research has shown that AI tools can “enable discrimination” against marginalised groups. This is particularly the case where AI is used in the recruitment context. Algorithms trained on historical hiring data may replicate past biases, for example, favouring certain genders, appearances, accents, educational backgrounds, or demographic markers.

Studies and investigative reporting have documented the following indicators of biases:

  • Lower scores for candidates with non-Anglo names
  • Disadvantages for neurodivergent applicants in video-based assessments
  • Accent-based bias in voice analysis tools
  • Disparate outcomes for older workers

This is not hypothetical. Discrimination by recruitment algorithms is now widely recognised as a real and present problem.

However, other studies suggest that AI can help counteract bias in the recruitment process. This research showed that throughout the job recruitment process women believe artificial intelligence assessments reduce bias, while men fear it removes an advantage. According to the study: “Women were significantly more likely to complete their applications when they knew AI would be involved, while men were less likely to apply.”

A second experiment as part of the same research focused on the behaviour of 500 tech recruiters. It found that when recruiters knew the applicant’s gender, they consistently scored women lower than men. However, this bias completely disappeared when the applicant’s gender was hidden. When recruiters had access to both the AI score and the applicant’s gender, there was also no gender difference in scoring.

According to the researchers: “This finding shows us they use AI as an aid and anchor – it helps remove the gender bias in assessment.”

Opaque Decision-Making

Candidates often receive no explanation for why they were screened out. AI-based video interviews, in particular, provide little insight into:

  • What traits were assessed
  • How responses were scored
  • Whether facial expressions, tone, or background noise influenced the outcome

This lack of transparency undermines trust and makes it difficult for organisations to demonstrate fairness or compliance. Again, this is particularly problematic when interacting with young or otherwise vulnerable applicants for low-level positions.

Absence of humans in the hiring experience

Removing humans from the process is particularly harmful in recruitment. As well as discouraging diverse applicants, it can create the perception of unfairness and ultimately damage the employer brand – just ask the 100,000 viewers of Jamie’s TikTok video whether they are likely to engage in the Woolies recruitment process.

When jobseekers feel they are being judged by a machine, without knowing the criteria used for the judgement and with no right of appeal or opportunity to put forth additional information or relevant circumstances, it is not surprising that they report feeling alienated by AI-only recruitment processes. And of course trust erodes quickly.

Over-collection of personal information

AI hiring tools collect and analyse far more data than most candidates understand. It’s not just what is submitted with the application but also information derived from the video, which might include:

  • Facial data
  • Voice recordings
  • Micro-expressions
  • Behavioural patterns
  • Social media activity

These data types are highly sensitive and can reveal information about health, disability, ethnicity, or emotional state—often without explicit consent or awareness.

For organisations, this creates significant privacy risks (particularly around transparency and purpose of collection), security issues (including retention and deletion), and potential regulatory exposure.

Unvalidated or Pseudoscientific Assessments

As well, it is not clear how well information like micro-expressions and behavioural patterns can be interpreted by AI algorithms to make decisions about a candidate’s suitability for a position.

Some AI tools claim to infer personality traits, emotional intelligence, or job suitability from facial movements or vocal patterns. Many of these claims lack scientific validation and are highly unreliable in their output.

Using unproven tools in hiring decisions exposes organisations not only to the risk of privacy breaches but also to poor hiring outcomes and reputational harm.

How can organisations reduce the risks posed by AI in hiring?

The following are some steps that organisations can take to reduce some of the risks from the use of AI in recruiting:

  1. Conduct a Privacy and AI Impact Assessment Before Deployment: Map the data flows, identify privacy and AI risks, and identify and document mitigations. This is essential for demonstrating compliance and responsible governance. Particular privacy risks to think about are providing notice (including details of how AI will be used in the assessment process), the purpose of all data being collected, and how long that data should be retained (particularly for unsuccessful candidates).
  2. Ensure Human Oversight: AI should support—not replace—human decision-making. One of your ‘must-haves’ for almost any AI implementation is ensuring that there is a ‘human in the loop’: always provide candidates with a clear pathway to request human review, and use AI as an aid, not as a replacement for human decision-making in the recruitment process.
  3. Demand Transparency From Vendors: As part of your review of any AI tool, you should have some understanding of how the tool works and how it was developed (to satisfy yourself about any risks that may be built into the tool).  To do this, make sure that you have access to clear vendor documentation on:
  • How the model works
  • What data it was trained on
  • What data it uses
  • Validation and bias-testing that has been done and the results of those tests
  • Data retention and deletion practices (within the AI tool)

If a vendor cannot explain their system, it should not be used.

  4. Minimise Data Collection: Collect only what is necessary and proportionate. Avoid tools that rely on analytics that may not be accurate or reliable. For this reason, be careful about using a tool that incorporates:
  • Facial analysis
  • Emotion recognition
  • Personality inference from video
  5. Obtain Valid Consent Where Needed: Depending on what information is being collected, consent may be required. It can be challenging to demonstrate that consent is truly voluntary in recruitment situations – where the individual may feel they have no real option to say no to the use of AI tools as part of the process. For consent to be truly voluntary, candidates should be given the option to refuse to participate in the AI-managed process. For example, they should have the right to opt for an interview with a human, and there should be no negative inference drawn from the selection of that option.
  6. Communicate Clearly With Candidates: Transparency is of utmost importance when using AI as part of your recruitment practices. Before asking a candidate to interact with AI, you must explain:
  • How AI is used as part of that process
  • What data is collected
  • How decisions are made
  • How candidates can seek review

This explanation should be made as overtly as possible, e.g. in a carefully and appropriately worded privacy notice made directly available to the candidate before they submit an application. Including details in a Privacy Notice that is merely linked to the application form, without clearer signposting, may not be sufficient.

Remember, transparency builds trust and ultimately better outcomes. As the research discussed above suggests, there are groups who may feel that AI will assist with managing discrimination.

  7. Test for Bias Regularly: Conduct ongoing fairness testing across different demographic groups (including those with particular vulnerabilities). Make sure you document those results and take corrective action when disparities appear.
  8. Strengthen Governance: Establish internal policies, oversight committees, and vendor management frameworks to ensure AI systems remain safe, fair, and compliant.
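The fairness testing in step 7 can start very simply: compare selection rates across demographic groups and flag large gaps. The sketch below is a minimal, hypothetical example (the group names and outcomes are invented, not real data), and the 0.8 threshold is the “four-fifths” rule of thumb from US employment guidance, used here only as an illustrative trigger for further investigation, not an Australian legal standard.

```python
from collections import defaultdict

# Hypothetical shortlisting outcomes: (demographic_group, was_shortlisted).
# In practice this would come from your recruitment tool's records.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Return the shortlisting rate for each demographic group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in records:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Compare each group's selection rate to the highest-rate group."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

rates = selection_rates(outcomes)
ratios = adverse_impact_ratios(rates)
# Flag groups whose ratio falls below the four-fifths (0.8) threshold.
flagged = [g for g, r in ratios.items() if r < 0.8]
print(rates)    # per-group shortlisting rates
print(flagged)  # groups showing a disparity worth investigating
```

A flagged group is a prompt for human review of the tool and the data it was trained on, not an automatic verdict of discrimination; disparities can have multiple causes, and step 7’s requirement to document results and take corrective action still applies.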

Building a Fairer Future for AI-Enabled Hiring

AI can absolutely support better hiring outcomes—but only when deployed with care, transparency, and strong governance. The recent stories emerging from Australian jobseekers are a reminder that efficiency cannot come at the expense of fairness or dignity.

Organisations that take a proactive, ethical approach will not only reduce risk—they will build trust with candidates and strengthen their employer brand.

About the author: Jodie is one of Australia’s leading privacy and security experts and the Founder of Privacy 108 Consulting.