You’re rolling out AI for staff rostering, performance monitoring, or recruitment screening. You probably haven’t thought about worker consultation requirements. But if your employees are covered by modern awards or enterprise agreements—and most are—you need to consult them before you deploy AI that affects how they work.
The stakes are high. Between 2016 and 2019, the Robodebt scheme raised more than half a million inaccurate Centrelink debts through automated income averaging. The Australian Government repaid more than $751 million in unlawfully claimed debts and paid $112 million in compensation to roughly 400,000 people. The Royal Commission called it a massive failure of public administration.
This guide is part of our comprehensive series, Understanding Australia’s National AI Plan and Its Approach to AI Regulation. For detailed context on the plan’s three-pillar framework and how it positions Australia as an Indo-Pacific AI hub, see our foundational overview. The National AI Plan’s “spread the benefits” pillar focuses on workforce development, the APS AI Plan 2025 sets out consultation standards for government agencies, and forthcoming AI-specific WHS guidance will address psychosocial risks from surveillance and monitoring.
This guide walks you through the compliance frameworks, consultation processes, and safeguards you need to implement workplace AI without creating legal headaches or destroying employee trust. For broader legal compliance requirements for workplace AI, including Privacy Act ADM provisions and Consumer Law obligations, see our detailed compliance guide.
What AI Uses in the Workplace Require Worker Consultation?
Most employees are covered by modern awards or enterprise agreements that mandate consultation when major changes are likely to have a significant effect on them. These obligations cover AI and automated decision-making.
Here’s what triggers mandatory consultation:
AI for rostering and scheduling: Automated shift allocation, workload balancing, or schedule optimisation affects when people work.
Performance monitoring systems: Keystroke logging, productivity tracking, output measurement, or quality scoring monitor how people work.
Recruitment and hiring tools: Resume screening, candidate ranking, interview analysis, or selection algorithms determine who gets hired.
Work allocation systems: Task assignment algorithms, job distribution tools, or workload management platforms control what work people do.
Performance evaluation: Automated appraisals, rating systems, or promotion recommendation engines affect career progression.
The EU AI Act explicitly lists AI used for recruiting, screening, selection, and performance evaluation as high risk. Australian employment law delivers similar protection through existing industrial relations frameworks.
Union representatives have been pushing for a stronger regulatory framework and greater worker voice in AI adoption. Assistant Treasury Minister Dr Andrew Leigh said workers must be “partners in shaping how AI is deployed, not passive recipients of decisions made in corporate boardrooms.”
The APSC will issue a Circular setting out clear standards for consultation on AI-related workplace changes in the Australian Public Service. While this applies directly to government agencies, it establishes a best-practice model that private sector organisations would do well to follow.
Notice the pattern—consultation requirements aren’t about productivity tools like spell checkers or code completion. They’re about systems that make decisions about people or monitor what people do.
How Do You Conduct Worker Consultation About AI Implementation?
Genuine consultation means giving employees and their unions a real opportunity to influence the decision before you make it. That means engaging before you commit to a vendor or approach, providing complete information about the AI system, and actually responding to the concerns raised.
Here’s how to do it properly:
Start early: Consult before you commit to a vendor or implementation approach. Once you’ve committed funds or made board promises, you’ve boxed yourself in. You can’t meaningfully respond to worker concerns if you’ve already locked in your approach.
Explain what you’re proposing: What AI system are you considering? What will it do? How will it affect work? What decisions will it make or inform? What data will it collect and process?
Provide time to respond: Don’t spring this on people in a 30-minute meeting. Give them time to understand what you’re proposing, talk it through with colleagues, and formulate concerns.
Involve union representatives: If your workplace has union members, their representatives must be part of the consultation. Agency Consultative Committees in the APS enable inclusive and representative input from employees and unions.
Document the process: Keep records of what information you provided, when consultation occurred, what concerns were raised, and how you addressed them. A structured register makes this auditable; see the sketch at the end of this section.
Actually respond to concerns: Consultation is hollow if feedback disappears into a void. Only 35% of Australian workers feel AI implementation has been transparent and well communicated, and almost two thirds are left in the dark about their organisation’s AI implementation strategy.
This represents a fundamental trust failure between organisations and their workforce. And building trust is not just about compliance—it’s about confidence.
For APS agencies, the consultation standards ensure employees have a voice in how AI is introduced, what problems can be solved with AI, and where it’s likely to have significant impact. The consultation must be ongoing and genuine, grounded in existing obligations and mechanisms.
If you’re in the private sector, look at what the APS is doing. Their frameworks give you a template for what “meaningful consultation” actually looks like in practice.
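To make the documentation step concrete, here is a minimal sketch of a structured consultation register in Python. The field names and JSONL storage are illustrative assumptions, not a statutory template; the point is that concerns and responses sit side by side, so you can later show that you actually responded.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ConsultationRecord:
    """One entry in an internal AI consultation register (illustrative fields)."""
    ai_system: str                   # e.g. "rostering optimiser"
    session_date: str                # ISO date of the consultation session
    participants: list[str]          # employees, union reps, management
    information_provided: list[str]  # documents and briefings shared beforehand
    concerns_raised: list[str]       # what workers actually said
    responses: list[str]             # how each concern was addressed, or why not

def append_to_register(record: ConsultationRecord,
                       path: str = "consultation_register.jsonl") -> None:
    """Append the record as one JSON line, building a chronological log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_to_register(ConsultationRecord(
    ai_system="rostering optimiser",
    session_date=date.today().isoformat(),
    participants=["ops team", "union delegate", "HR lead"],
    information_provided=["vendor briefing pack", "draft rollout plan"],
    concerns_raised=["shift allocation transparency"],
    responses=["agreed to publish the allocation criteria before go-live"],
))
```

Because each entry pairs concerns with responses, the register doubles as evidence of genuine consultation if the process is later challenged.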
What Does Workplace Health and Safety Guidance Say About AI?
WHS legislation plays a role in safeguarding employees from AI-related risks. The specifics will arrive with the forthcoming AI-specific WHS guidance, but the principles are already clear from existing obligations and recent state-level reforms. For complete coverage of workplace AI obligations under Australian law, including Privacy Act and Consumer Law requirements, see our compliance guide.
New South Wales workers compensation changes seek to link work health and safety risks with workplace surveillance and discriminatory decision-making. These reforms aim to ensure human oversight of key decisions and to prevent unreasonable performance metrics and surveillance.
The psychosocial risks are real:
Surveillance stress: Constant monitoring creates anxiety, and workers already feel exposed: 72% of Australian workers are concerned about breaching data or regulatory requirements if they use AI.
Loss of autonomy: 58% of workers believe AI will be used to justify demands for greater productivity rather than helping reduce their workload.
Skills degradation: 60% of workers worry about losing thinking skills if they use AI at work.
Job insecurity: 54% of workers worry about job losses in their sector as a result of AI usage.
Workplace surveillance is governed by a patchwork of state and territory laws as well as WHS obligations. While these laws need modernising, they provide protection against unreasonable monitoring and data collection.
Conduct AI risk assessments before implementation. Evaluate potential WHS risks alongside bias, privacy, and discrimination risks. Your findings should inform your consultation process and help you design safeguards.
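One way to operationalise this is a simple risk register that scores each risk and flags the ones that must be surfaced in consultation. A minimal sketch follows, assuming a conventional likelihood-by-impact scoring approach; the 1–5 scales and the escalation threshold are illustrative, not an official methodology.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PSYCHOSOCIAL = "WHS / psychosocial"  # surveillance stress, loss of autonomy
    BIAS = "bias"
    PRIVACY = "privacy"
    DISCRIMINATION = "discrimination"

@dataclass
class RiskItem:
    description: str
    category: RiskCategory
    likelihood: int   # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int       # 1 (minor) to 5 (severe) -- assumed scale
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def risks_needing_escalation(register: list[RiskItem],
                             threshold: int = 12) -> list[RiskItem]:
    """Risks at or above the threshold become explicit agenda items
    for the consultation process, each with a named owner."""
    return sorted((r for r in register if r.score >= threshold),
                  key=lambda r: r.score, reverse=True)
```

High-scoring items then feed directly into the consultation agenda rather than sitting in a spreadsheet nobody reads.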
What Lessons Can We Learn from the Robodebt Scandal?
The Robodebt scheme used automated income averaging to raise Centrelink debts. The system assumed that if you earned $26,000 in a financial year, you must have earned $500 per week every week. It then demanded you prove you didn’t owe money based on that assumption.
This was unlawful. Debts were imposed on people, who then had to prove they did not owe them. The automation shifted the burden of establishing the facts from the government to vulnerable welfare recipients.
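The arithmetic makes the flaw obvious. Here is a short illustration with hypothetical numbers, showing how averaging attributes income to weeks in which a person earned nothing:

```python
# Hypothetical casual worker: $26,000 for the year, all earned in one half of it.
annual_income = 26_000
weeks_worked = 26  # worked July-December, unemployed January-June

averaged_weekly = annual_income / 52                        # Robodebt-style assumption: $500
actual_weekly_while_working = annual_income / weeks_worked  # the reality: $1,000

print(f"Averaged:  ${averaged_weekly:.0f}/week, every week of the year")
print(f"Reality:   ${actual_weekly_while_working:.0f}/week for half the year, then $0")
# Averaging attributes $500/week to the months the person was unemployed and
# legitimately on benefits -- manufacturing a debt that never existed.
```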
The Royal Commission examining Robodebt’s impact made 57 recommendations for reform. Here are the key lessons for workplace AI:
Human oversight is required: Improved safeguards must include human-led oversight mechanisms for any automated decision-making processes. This means oversight by people with training, authority, and the ability to override automated outputs—not rubber-stamping algorithmic decisions.
Cultural issues enable technological failures: The Royal Commission identified over-responsiveness to government within the Australian Public Service as one of the key causes. When organisational culture discourages people from raising concerns about automated systems, those systems will cause harm.
Meaning-making matters: In the immediate aftermath of the Royal Commission report, 50 agencies, representing more than a quarter of the entire public service, remained silent. The absence of meaning-making in times of crisis erodes the conditions necessary for learning and cultural reform.
Co-design with affected people: Social security systems must be redesigned in partnership with those meant to benefit from them. This applies equally to workplace AI—design it with workers, not just for them.
Greater oversight powers needed: The Commonwealth Ombudsman and other oversight bodies need greater powers and resourcing. Internal oversight alone isn’t sufficient when organisational culture is compromised.
Some agencies got it right. IP Australia and Services Australia stood out by rejecting top-down control in favour of open two-way communication.
What Surveillance Practices Are Prohibited in Australian Workplaces?
State and territory workplace surveillance laws provide a patchwork of protections. The specific requirements vary by jurisdiction, but the principles are consistent—you can’t monitor people in unreasonable ways, and you must notify them about surveillance.
NSW workers compensation changes give union officials specific entry rights to inspect digital work systems to investigate suspected breaches.
The reforms aim to prevent unreasonable performance metrics and surveillance. So what counts as unreasonable?
Surveillance that serves no legitimate purpose: Monitoring bathroom breaks, tracking personal conversations, or recording private spaces goes beyond any legitimate business need.
Disproportionate monitoring: If you need to verify attendance, you don’t need keystroke logging. Match the monitoring to the actual requirement.
Covert surveillance: With limited exceptions for investigating serious misconduct, you must notify workers about surveillance before it starts.
Discriminatory metrics: Performance standards that disadvantage people with disabilities, caring responsibilities, or other protected attributes aren’t just unreasonable—they’re unlawful discrimination.
Establish and communicate clear policies on workplace surveillance and data handling. Transparency builds trust.
What Are Workers’ Rights When AI Is Used in the Workplace?
Even if an algorithm makes a decision without human oversight, the employer remains liable under unfair dismissal laws. The Fair Work Commission would still require a valid reason for dismissal and would assess whether the process was fair and reasonable.
You can’t outsource accountability to an algorithm.
Workers’ rights include:
Consultation rights: A genuine opportunity to influence decisions about AI implementation before those decisions are made.
Transparency: Information about how AI systems work, what decisions they make or inform, and what data they collect and process.
Human oversight: Automated decisions do not escape legal scrutiny. General protections provisions under the Fair Work Act capture circumstances where someone has been rejected for discriminatory reasons, regardless of whether a human or an algorithm made the decision.
Appeal mechanisms: The ability to challenge decisions and have them reviewed by a person with authority to override automated outputs. Human oversight must be established by individuals with appropriate competence, training, authority, and support.
Protection from discrimination: Unfair dismissal laws, anti-discrimination statutes, adverse action provisions, and WHS legislation all safeguard employees from AI-related decisions that violate their rights.
Maintain human involvement in employment decisions made using AI, particularly in hiring, firing, promotion, and performance management. The algorithm can inform the decision. It can’t be the decision.
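What does “inform, not decide” look like structurally? Here is a minimal sketch of a human-in-the-loop pattern, using a hypothetical hiring screen as the example: the model produces only a recommendation with stated reasons, and no outcome exists until a named reviewer records a decision and a rationale. All names and types are illustrative.

```python
from dataclasses import dataclass
from typing import Literal

Outcome = Literal["progress", "reject"]

@dataclass
class AIRecommendation:
    candidate_id: str
    suggestion: Outcome
    reasons: list[str]   # surfaced so the reviewer can scrutinise the model's basis

@dataclass
class HumanDecision:
    recommendation: AIRecommendation
    reviewer: str        # a named person with authority to override
    outcome: Outcome     # may differ from the model's suggestion
    rationale: str       # required even when agreeing with the model

def finalise(rec: AIRecommendation, reviewer: str,
             outcome: Outcome, rationale: str) -> HumanDecision:
    """No employment outcome exists until a human signs off with reasons."""
    if not rationale.strip():
        raise ValueError("Rationale required: rubber-stamping is not oversight.")
    return HumanDecision(rec, reviewer, outcome, rationale)
```

Requiring a rationale even when the reviewer agrees with the model is what separates oversight from rubber-stamping, and it creates the audit trail that appeal mechanisms depend on.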
How Do You Implement Workplace AI While Maintaining Employee Trust?
Only 32% of Australian workers rate their AI proficiency as high, and just 35% have received any formal training. Yet 66% want more formal AI training.
Here’s the gap—workers want to understand and use AI effectively, but organisations aren’t providing the support they need.
Success with AI requires focusing on worker agency and human-centred design. The organisations that win will be those that put humans at the centre.
Provide training: Workers need ethical guardrails, intuitive tools, and inclusive training. Training needs vary—workers want help with basic AI interactions (32%), creating effective prompts (23%), ethical use (24%), and continuous learning (29%).
Communicate clearly: While 59% of workers believe automating routine tasks is a great idea, and 64% say AI has a positive impact on their job, organisations need to close the transparency gap.
Involve workers in design: Organisations must be transparent about how AI is used, involve workers in the conversation, and ensure AI enhances rather than undermines the human experience at work.
Support leadership: The APS AI Plan will support leaders to provide safe and responsible adoption environments through regular information and dedicated masterclasses.
Build communities of practice: Peer learning and communities of practice will be implemented to embed capability and drive adoption.
Foster open communication: As demonstrated by IP Australia and Services Australia, rejecting top-down control in favour of open two-way communication is key to lasting change.
Cultural transformation in the APS is a long-term learning project that requires rebalancing the competing obligations of public servants. Your organisation faces the same challenge: balancing efficiency gains from AI against worker rights, trust, and legal obligations. For comprehensive guidance on implementing governance with worker input, including AI6 governance practices for workplace AI, see our implementation guide.
What Support Is Available for Workforce Development and Transition?
As outlined in Australia’s National AI Plan, the “spread the benefits” pillar emphasises workforce development and transition support. The APS AI Plan mandates foundational AI literacy training for all public servants. The aim is to provide capability foundations together with flexible, just-in-time learning that keeps pace with rapid technological change.
Chief AI Officers will drive adoption within agencies, leading internal engagement, sharing guidance and use cases, and overseeing AI adoption and innovation. A peer working group will develop shared training materials for distribution via platforms like GovAI, APS Professions, and the APS Academy.
For private sector organisations, these initiatives show what good workforce development looks like:
Mandatory baseline training: Everyone needs foundational AI literacy. Not just the technical team. Everyone who will work with or be affected by AI systems.
Role-specific training: Different roles need different capabilities. Developers need different knowledge than managers, who need different knowledge than frontline workers.
Just-in-time learning: Capability cannot be built once and left to age. AI literacy, technical skills, and the organisational capacity to evaluate AI systems will increasingly determine whether organisations can adopt AI safely, compliantly, and competitively.
Leadership development: Supporting leaders to provide safe and responsible adoption environments is crucial. Leaders need dedicated support to understand AI implications and model good practice.
Invest in skills development by providing upskilling and retraining opportunities to help employees use AI safely and effectively, adapt to technological change, and maintain workforce capability.
Closing the confidence gap requires more than access to tools. It demands investment in capability. For some workers, training deepens curiosity and skill. For others, it’s essential for overcoming fear, uncertainty, and resistance.