AI-Written Grievances: A Growing HR Problem


Over the past 12 months, I’ve seen a noticeable shift in how employees are raising grievances, appealing redundancies, and challenging disciplinaries.

More and more are turning to AI tools such as ChatGPT to draft their complaints.

On the surface, this sounds like progress. AI can improve structure, remove emotional language and help people articulate concerns more clearly.

In reality? I’m seeing a growing number of inaccurate, inflated and legally flawed grievances that are creating significant administrative and legal headaches for employers.

The Problem: Confidently Wrong

AI tools generate answers based on the information they are given.

If an employee:

  • Misunderstands what happened
  • Inputs incomplete or biased information
  • Doesn’t understand the legal framework
  • Asks leading or emotionally loaded questions

AI will often produce a polished, professional-looking document that sounds legally robust, but isn't.

I am regularly seeing grievances that:

  • Incorrectly cite the Employment Rights Act 1996
  • Misapply “automatic unfair dismissal” arguments
  • Refer to irrelevant case law
  • Use “without prejudice” terminology incorrectly
  • Contain sweeping allegations that don’t align with the actual facts

The wording is impressive.
The tone is formal.
The legal references look authoritative.

But the substance is often flawed.

If you put poor information into AI, you get poor information back, just packaged more convincingly.

A Personal Note: I Use AI Too

Here’s the important bit: I use AI myself.

As an HR consultant, I use it to:

  • Sense-check structure
  • Brainstorm angles
  • Improve flow
  • Stress-test arguments

And do you know what I often find?

I argue with it.

Sometimes it gets the law slightly wrong.
Sometimes it oversimplifies.
Sometimes it makes bold assumptions that don’t reflect UK employment practice.

Because I understand employment law, I can spot when it’s off track.

That’s the difference.

AI is a tool, but without subject knowledge, it can mislead just as easily as it can help.

Quantity Over Quality

Another noticeable trend is length.

What used to be a two-page grievance is now often 8–12 pages long. Some employers are receiving lengthy AI-drafted appeals for relatively straightforward disciplinary outcomes.

The issue isn’t that employees shouldn’t raise concerns. They absolutely should.

The issue is that HR teams, particularly in SMEs without in-house legal departments, are having to spend significant time unpicking:

  • Incorrect legal references
  • Exaggerated claims
  • Misinterpretations of policy
  • Assertions that don’t reflect the actual legal framework

This creates delay, frustration and unnecessary escalation.

Inflated Expectations

Perhaps the most damaging consequence is expectation.

When an AI tool suggests someone has a “strong tribunal claim” or implies that a process is automatically unlawful, employees understandably believe that to be fact.

When the internal outcome or later legal advice doesn’t match what the chatbot suggested, employees can feel misled or “cheated”.

This widens the trust gap and makes resolution harder.

What AI Can (and Can’t) Do

AI can:

  • Help structure a complaint
  • Improve grammar and clarity
  • Reduce emotional language
  • Provide general information

AI cannot:

  • Verify the accuracy of what it’s told
  • Assess credibility
  • Apply employment law reliably to specific circumstances
  • Understand organisational context
  • Replace professional legal advice

It is a drafting assistant, not an employment lawyer.

What Employers Should Be Doing Now

This shift means employers need to be even more confident and robust in their processes.

I would recommend:

  • Ensuring grievance and disciplinary procedures are clear and up to date
  • Training managers on handling legally inflated or AI-generated complaints calmly
  • Keeping detailed and contemporaneous notes
  • Sticking firmly to policy and evidence rather than reacting to tone
  • Managing expectations early in the process

In many cases, the underlying issue is still straightforward. It’s the presentation that has become more complex.

Final Thoughts

AI isn’t the enemy.

Used responsibly, it can help employees articulate concerns they might otherwise struggle to express.

But AI-generated grievances are increasingly creating more heat than light, particularly where the law is misunderstood or misrepresented.

As HR professionals and business owners, we now have an additional role:

Not just managing workplace disputes,
but managing workplace disputes in the age of AI.

If you’re starting to see this in your organisation and would like support navigating it confidently and compliantly, feel free to get in touch.
