Fired by the algorithm? Singapore employers face reckoning as AI enters the HR room


SINGAPORE: As artificial intelligence (AI) becomes a key part of workforce management, human resources (HR) leaders must ensure fairness, maintain transparency, and meet legal standards in Singapore’s changing employment landscape.

AI isn’t a shield: Legal and ethical accountability still falls on employers

AI is quickly integrating into the workplace. It is used in performance reviews, workforce planning, and even decisions about dismissals. While these tools offer speed and objectivity, they also come with considerable risks, especially when they influence termination decisions.

Zhao Yang Ng of Baker McKenzie Wong & Leow stresses that using AI does not relieve employers of their obligation to make legitimate, reasonable, and well-founded decisions.

“Even if AI highlights an issue, employers must be able to justify their decisions based on merit and job relevance,” Ng told HRD Asia. This aligns with the Tripartite Guidelines on Fair Employment Practices and the upcoming Workplace Fairness Act (WFA) 2025, which will specifically prohibit discrimination based on 11 protected characteristics.

Employers must be prepared to explain how the AI works: the data it uses, how it reaches its conclusions, and what safeguards are in place to detect bias.

Transparency isn’t optional: How employers can avoid mistakes

Singapore law does not require employers to give a reason when dismissing an employee with notice. However, citing AI as a factor in a decision without understanding how the tool works can create legal exposure.

“If the dismissal is based on an AI alert for poor performance and the output is unclear or incorrect, employers could face wrongful dismissal claims,” Ng said.

The key is transparency, supported by documentation. HR should keep clear records that show:

  • What tools were used in decision-making
  • How performance was assessed
  • Who verified the results
  • Whether the employee had a chance to respond

“Even simple internal notes can demonstrate that the process was fair and consistent,” Ng adds. Conversely, vague explanations or inconsistent records can undermine an employer’s credibility if a dispute arises.
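As an illustration, the four record-keeping points above could be captured in a simple structured log. This is a hypothetical sketch, not an actual system described in the article; all field names are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DismissalDecisionRecord:
    """Minimal audit record for an AI-assisted HR decision (illustrative only)."""
    employee_id: str
    tools_used: list[str]          # what tools were used in decision-making
    assessment_basis: str          # how performance was assessed
    verified_by: str               # who verified the results
    employee_response_given: bool  # whether the employee had a chance to respond
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry for a hypothetical case:
record = DismissalDecisionRecord(
    employee_id="E-1042",
    tools_used=["performance-scoring model v2"],
    assessment_basis="Q3 KPI review plus manager appraisal",
    verified_by="HR Business Partner",
    employee_response_given=True,
)
```

Even a lightweight record like this gives an employer something concrete to point to when asked how a decision was reached and who stood behind it.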

Human oversight is non-negotiable: Build HITL processes now

AI can process data quickly, but it cannot replace human judgment, especially in high-stakes decisions such as dismissals.

Ng therefore encourages employers and managers to build Human-In-The-Loop (HITL) checks into each phase of AI-supported HR processes. This includes:

  • Assigning decision-makers with the authority to override AI
  • Defining when and how human reviews take place
  • Creating clear internal policies for AI usage

“This isn’t just about fulfilling requirements,” Ng said. “If the AI makes a mistake, someone needs to be ready to catch and correct it before it impacts someone’s job.”
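The override principle in the checks above can be sketched as a simple decision gate: the AI output is treated as a recommendation, and a named human reviewer makes the final call. This is an illustrative sketch under assumed names, not a system described by Ng or the article.

```python
# Hypothetical HITL gate: an AI recommendation is never final; a named
# reviewer must confirm or override it before any action is taken.

def hitl_review(ai_recommendation: str, reviewer: str,
                reviewer_decision: str, rationale: str) -> dict:
    """Return the final decision, recording who reviewed it and why."""
    overridden = reviewer_decision != ai_recommendation
    return {
        "ai_recommendation": ai_recommendation,
        "final_decision": reviewer_decision,  # the human decision prevails
        "reviewed_by": reviewer,
        "overridden": overridden,
        "rationale": rationale,
    }

# Example: the reviewer catches an AI flag based on incomplete data.
outcome = hitl_review(
    ai_recommendation="flag for dismissal",
    reviewer="HR Director",
    reviewer_decision="performance improvement plan",
    rationale="AI flag relied on incomplete Q2 data; merit review pending",
)
```

The point of structuring it this way is that every AI-influenced outcome carries a human name and a rationale, which is exactly the documentation an employer would need in a wrongful dismissal claim.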

He also recommends updating employee handbooks to include information on AI monitoring practices, data usage, and the tools used in performance management.

Prepare now for the WFA: Internal appeals and vendor accountability are crucial

As the Workplace Fairness Act approaches implementation, expected around 2026 or 2027, HR leaders must start establishing appeal processes, especially where AI has affected outcomes.

“Employees need a clear way to challenge AI-influenced decisions,” Ng said. “Appeals are not just about addressing complaints — they reveal flaws in your system before they turn into legal issues.”

Ng also warns against blindly trusting third-party AI vendors. Employers are still fully responsible for any biased or flawed results produced by outsourced tools.

“You must understand what the algorithm measures, how it processes data, and what safeguards are in place. Without this knowledge, you won’t know what you’re defending in court.”

Bottom line: AI can be a powerful HR tool, but only if used responsibly

The advantages of AI in HR — faster evaluations, scalable insights, and streamlined processes — can be realised only when they are backed by strong governance, transparency, and ethical oversight.

“The law may be catching up,” Ng concluded, “but expectations are already changing. Employers who do not handle AI responsibly risk harm to their reputation and legal problems.”

What HR should do now:

  • Implement an internal AI use policy
  • Create structured HITL processes for all AI-assisted decisions
  • Establish clear appeal procedures
  • Require transparency from AI vendors
  • Maintain thorough, consistent documentation

In this changing AI-driven era, human judgment remains the essential safeguard, and it is something HR leaders cannot afford to ignore.