AI synthesis is deciding your employer brand
- sam19977
- 4 days ago
- 8 min read
AI tools are shaping how candidates understand employers, often from thin data and category language. EVP specificity matters more than ever.
What you will find in this piece: a first-hand account of how AI tools summarise employers, why they tend to produce the same flattering, category-generic portrait regardless of the actual source material, and what that means if your employer brand relies on being distinctively you.
Why it matters now: candidates are increasingly researching employers through AI synthesis rather than reading primary sources. The picture they receive may be confident, coherent, and misleading, and most organisations are not yet doing anything about it.
Key points before you read:
AI tools will attempt to generate an employer profile even when credible data is thin or contradictory.
For most employers, the synthesis will default to category norms: supportive, inclusive, flexible. Generic by design.
Even for larger, well-documented organisations, AI compresses distinct cultures into shared templates.
The remedy is not more content. It is more specific, evidence-rich, and distinctively human content.
EVP development needs to shift from describing positive attributes to documenting what is genuinely different, including the trade-offs.
AI synthesis is very clever and also very dim
Much like the humans that created it.
Like humans, it can be vastly overconfident from limited sources. And while I believe that, as a society, we are fully aware of the limitations of AI synthesis - even the risks of relying on it - when presented with a polished and plausible answer, we're happy to accept it.
Pre-AI - understanding employers
As someone who has been asked to conduct social listening for employers many times over the years, I can tell you this: there are somewhere between a handful and a few dozen employers for whom social listening is meaningfully possible.
For the rest, outside of review sites, there’s just not enough fodder out there.
Why? While people are willing, able, and often insistent on sharing their opinions and experiences about almost every facet of their lives, they don't do so about their work in general, or their employer in particular.
Of course, however frustrated you might be by your work, there are sound reasons not to commit those frustrations to paper. Professionalism, or simply the need to keep a pay cheque, to name two. And there's another: it's a bit like sharing your dreams. Fascinating and meaningful to you, pretty dull to anyone else.
So, unless you are McDonald's, Tesco, the NHS - and very few others - you can't just reach out into the aether and gain a useful and nuanced sense of what it's like to work somewhere.
Post-AI - understanding employers
Now, in the old days - by which I mean more than about three years ago - you'd run that kind of exercise for Widgets UK of Anywhereshire. And soon you'd discover there was little to find. You'd have wasted a little time and money in doing so, but no harm done.
What happens when you ask your AI tool of choice to do the same?
Obviously, I did. It told me there was little to report on my made-up company, but it did direct me to:
Widgit Software (Warwick-based, education/symbol software)
Culture & Environment: Described as having a supportive and inclusive culture, with a focus on diversity and neurodiversity. The office is described as a collaborative and welcoming space.
Values: The company emphasizes integrity and kindness.
Work-Life Balance: Staff mention flexible working.
This is generally described as a highly supportive and inclusive environment. Employees praise the "kind and fair" leadership, a strong focus on neurodiversity, and a flexible, collaborative atmosphere centred on their mission of helping people communicate through symbols.
What this AI synthesis doesn't highlight is that it is based on the company's careers page (singular) and three reviews. And that one of the three reviews is headlined: "Poor culture, toxic management." Or that employees don't mention neurodiversity at all; only the company does.
It’s a poor synthesis of a very small data set.
Post-AI – the candidate experience
For most roles – alongside any word of mouth they can get – this is how people are going to research their next employer. They’re not going to google and read each source. They’re going to rely on that synthesis, which may or may not actually be very good.
So, candidates may have a greater sense of confidence that they understand you. But the confidence may be misplaced. And that’s not helpful.
I think what is more concerning is when we look at larger employers. Those that should have enough of a footprint for AI to make better sense of it.
As the volume of content that you and your employees produce goes up, the quality of synthesis should improve too. If you're big enough to need to care strongly about sustained employer reputation, there should be enough content out there for AI to get a better fix on what you're really about. The image it confidently portrays should be justified.
But there is another issue too.
We can all spot AI-written content. It has a certain ring to it, a particular veneer and verbosity. There's a superficiality to it that doesn't always bear much scrutiny.
It recompresses similar content on similar topics in similar ways.
AI takes away all of the identity
It makes you sound like everyone else.
It’s the same reason that every poster for every Summer Fete in the UK now looks near identical. It understands the topic and it applies the template.
In my example above, the model has identified the main themes that should be present in an employer brand … and then fitted whatever evidence it can to that.
If your EVP is built from category language, AI will return you as a category example.
Why it chose to ignore the grain of the evidence and focus on the positive, I don't know. What I do know is that it will always default to trying to complete the task, and the way it can be most sure of doing that is by following the same patterns for similar requests.
To explore this, I asked it to find ten other companies that fit the description it generated for Widgit Software. Among those it came up with are:
1. Auto Trader UK Consistently ranked as one of the most inclusive employers in the UK, Auto Trader is praised for its neurodiversity network and a culture that values individual voice over statistics. They offer a highly collaborative environment and a long-standing commitment to flexible working.
2. Capgemini UK Crowned the #1 inclusive employer in recent years, Capgemini is noted for its "Active Inclusion" strategy. Staff describe leadership as "approachable and kind," with a strong focus on hybrid working and supporting neurodivergent employees through dedicated programs.
3. Innocent Drinks Beyond their "fun" branding, Innocent is a reputable inclusive employer that actively encourages staff to bring their "full selves" to work. Their office is designed for collaboration, and they offer extensive flexible and hybrid working options.
4. Deloitte UK Deloitte has made neurodiversity a global priority through its Neurodiversity@Deloitte initiative. Employees benefit from a dedicated neurodiversity network, personalized support, and flexible working arrangements designed to help all personality types thrive.
Do we think that they all resemble the working experience of Widgit Software, or even each other? With even a minute's thought, no. But does the average punter give it that thought? Again, do we trust AI synthesis beyond the point that we should?
I think the evidence of our own experiences is yes.
AI synthesis is deciding your employer brand
So, we’re in this position that you’re being viewed through an AI lens, which will – by design – make you sound the same as everyone else. It’s not a great place to be.
What can be done?
I’m not making any claims to be some kind of AI guru or whisperer. I report what I can plainly see from my own experiences.
If AI is trying to push an image of you as homogenised and samey, you need to try even harder to stand out.
That’s in messaging and that’s in language.
You need AI to recognise that what you’re portraying is something outside of the category norms and should be reported as such.
It all comes back to EVP
To me, this is as strong an argument as any for leaning harder into EVP.
But now to be less focussed on the positives, and more on the differences.
You certainly cannot rely on describing yourself as supportive. That's going to need hard evidence: proof that your ways of working and your culture genuinely leave people feeling supported and able to thrive.
If you believe you have an environment that is more accessible for people who are neurodivergent, then you need them telling their story: how things are made more accessible, and how their needs are met.
And flexible working?! What do you mean by it? Be highly specific, and don't assume it just means hybrid working or permanent arrangements for those with parental or caring responsibilities. It can be everyday flexibility: people having that little bit more scope to do the things that are meaningful to them without letting anyone down.
Get granularly specific. What policies do you have, and what do they mean to people? Have you got any rituals, totems or other artefacts that embody you and only you? What are the terms and language that have most currency? And, vitally, all jobs involve compromise - what's yours? What don't you give? What will people not experience?
It can’t just be you and employer brand / talent attraction
It needs to be what your employees say about you too. And it needs to be how you talk more broadly.
For your employees, I’ve never been a huge fan of “activating” the EVP internally. It feels a whole lot like tell, not show. I think there’s a risk of instructing people that this is how we want you to talk about us. My inner rebel is already dismissing that idea.
They need to feel the distinctiveness as part of their everyday experience. If they then see that – corporately – you’re happy to talk about the not-totally-varnished reality too, then they can follow suit. That’s still going to be mostly on review sites rather than any further afield. But perhaps it gets into a bit more of their language, when they do talk about their work, in lots of contexts.
And if we understand that entire experience well enough, then we can have the confidence to include it in other external communication: the kind that typically reaches the customer, the shareholder, or the users of your services.
That is then going to give AI enough to feed on: enough to identify that you don't quite fit the mould it's trying to put you in, and to ensure that - to those candidates, even to your existing employees - there are credible reasons to view you distinctively. And not just reasons to get them interested; reasons to make them commit to this role, this employer, this career.
AI will not reward vague positivity. It will reward recognisable evidence.
Key insights
AI synthesis works from what is available, not what is true. For most employers, that means a handful of review site entries and a careers page. The confident summary a candidate reads may rest on almost nothing.
Category language produces category results. If your EVP describes you as supportive, collaborative, and flexible, AI will file you alongside every other employer that uses the same words. Differentiation requires evidence, not adjectives.
Positive bias is baked in. The example in this piece shows AI choosing to weight three reviews toward the positive while omitting a headline that read "Poor culture, toxic management." Systems optimise for completing the task, and a coherent positive narrative is the easiest task to complete.
Volume of content does not automatically mean quality of synthesis. Larger employers with more material may simply get a more confidently wrong summary if that material relies on template language.
Specificity is the only real defence. Granular policies, real employee stories, named rituals, honest trade-offs: these are the signals AI needs to distinguish you from the category. Vague positivity is invisible to a model looking for patterns.
Employees need to feel the difference, not be briefed on it. An activated EVP that instructs employees how to talk is unlikely to produce the authentic, varied, and credible signals that AI can work with.
The customer-facing voice matters too. If the language your organisation uses publicly, across all contexts, is distinct and consistent, AI has more to go on. Employer brand does not live only on a careers page.
