
How to drive employee engagement: Accuracy of understanding

Last time I talked about the misapplication of engagement survey results. Specifically, if you treat them as a target rather than a measure, you’ll end up driving some odd behaviours.

 

This time I wanted to dig into some ideas I’ve been exploring lately. Two articles in particular provided a lot of inspiration; you’ll find links at the bottom, and they’re both worthy of a chunk of your time.

 

Frequency of Surveys

 

I have – in the last two weeks – discussed with clients the merits of both the annual survey, and one that happens at a much higher frequency, right down to some kind of daily measurement.

 

The big “pros” of the annual survey are that:

· it happens at the same time each year – so you are taking out a number of variables that might otherwise affect the results, and you can make good year-on-year comparisons

· it becomes an event, and you can use this to create comms that encourage really good participation

I don’t suppose anyone will be terribly surprised to hear that some of the limitations and problems with an annual survey are directly related to these.


It happens at the same time each year

Well, that’s a big gap, which does give your response time to take effect. But it also gives a lot of time for other things to happen. There will be highs and lows for the entire organisation, in departments, teams, and for individuals. Some of those will have been in your power, and many won’t.

So, yes, the survey may have been released on the first Monday in November again. But really, how alike are those two periods? In an ideal world, you’d want to map what’s happened – at least at a whole-business level – so that you can compare the context between the two points.

But, and the but here is sizeable, you need to do that ahead of the survey. Definitely ahead of the results. Because as soon as you start to see the results, and then start layering on explanation, you’re in real danger of undermining the whole legitimacy of what you’ve done.

You’re at real risk of starting to explain away any drops in results. You could appear defensive. It could certainly appear that you’re not really listening to what people had to say.

You need to have somehow nailed your colours to the mast ahead of the survey. And be entirely prepared to be contradicted by the results you get.

A couple of sub-points I’d like to make here.

1. I’m talking about year-on-year comparisons with two data collection points: Year 1 and Year 2. Once we start getting into multi-year comparisons, it starts to become robust: we’re then looking at patterns and trends, not jumps and lurches. Those jumps and lurches can be more distracting than informative.

2. If we accept that year-on-year comparisons have their issues because of what’s happened in the past 12 months, then where does that leave us with benchmarking? Yes, a lot of the external winds and pressures will be the same – but the internal context is entirely unknown. Yes, there’s aggregation of data across a sector, but how reliably similar to your organisation is that picture?


It becomes an event

Which is great because then you can rally people around and ensure you’re getting as many people involved as possible.

But that has some knock-on effects too.

If you raise the expectation of participation, then you’re raising the expectation of outcomes. And probably by an order of magnitude (or two, three?) more than the level of energy you’re asking people to put into the survey. The internal conversation may go … I spent 15 minutes on that damn survey, and I STILL don’t have: total flexibility over where and when I work / a career path plotted for 5 years / free tea (delete as applicable). There’s a danger you won’t be able to meet expectations.

And if you make it a big event, and especially if you make it the ONLY event, then there’s lots of pressure to ask everything. And the survey becomes longer, and more unwieldy.

Now, there’s a risk then that you’re upping the expectation levels again. If it appears you’re trying to solve every problem, people might reasonably expect every problem to be solved.

And you’re not thinking about the user experience either. As I’ve said before, the gold is in the comments. But you only get golden comments when you leave your users with their energy and inspiration to apply them. A surefire way to suppress that energy is to ask them 150 questions first.

 

So, we want to go more frequent?

I’d say so, at the very least as a complement to an annual survey. And if we do that, the annual survey it complements can be far more streamlined.

I made a proposal just a few weeks ago to look at engagement daily. Likely with a single question / action that would in some way measure energy. You’d send it to a random sample of x% of the population daily. Quite soon you’ve got a rich picture of where and when there is more energy. You’ll have a view of why that might be, but far more importantly you’ve got areas to target and ask some richer questions: What’s happening here? What makes you lot so chipper? And you lot, what’s draining your mood so much?
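To make the mechanics concrete, here’s a minimal sketch of that kind of daily pulse, written in Python. It assumes nothing more than a list of employees with a team attached and a single 1–5 energy score from each person asked; the 5% sample fraction and the field names are purely illustrative, not part of the actual proposal.

```python
import random
from collections import defaultdict
from statistics import mean

# Illustrative only: a daily "energy" pulse, assuming a flat list of
# employees (each with a team) and a single 1-5 score from each person asked.
SAMPLE_FRACTION = 0.05  # the "x%" of the population asked each day

def pick_daily_sample(employees, fraction=SAMPLE_FRACTION):
    """Return a random subset of employees to ask today."""
    k = max(1, round(len(employees) * fraction))
    return random.sample(employees, k)

def energy_by_team(responses):
    """responses: an accumulating list of (team, score) tuples."""
    scores = defaultdict(list)
    for team, score in responses:
        scores[team].append(score)
    # Average energy per team - the places to go and ask richer questions.
    return {team: round(mean(vals), 2) for team, vals in scores.items()}
```

Even something this simple, run daily, builds a map of where energy is high or low; the richer follow-up questions then go to the teams at the extremes.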

The actual questions you want - and need - to ask in response can be far better targeted.

Waiting to hear on that one, fingers crossed!


What about weighting of response?

This idea came out in Jessica’s piece on Medium, and it’s stuck with me.

The point she makes is that, in a traditional engagement survey, everyone’s voice carries the same weight. Same as voting in an election. It doesn’t matter how informed or engaged you are, your vote counts the same as someone whose sole source of political information is occasional trips to less-visited corners of the internet.

But that’s not how we view people in an organisation. We pay people differently for the different value they provide.

So why should we treat their opinions the same?

I’m not saying we should weight them by pay grade, but we could obviously see some benefit in analysing whether different things are important to those with higher performance, or in roles that are in some way more crucial to success. Or, outside of the common departmental and geographical analysis, are there demographic differences?
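That sort of cut doesn’t need anything exotic. Here’s a minimal sketch, assuming the responses sit in a table with a performance band and a score per theme – the column names and numbers are hypothetical, just to show the shape of the analysis.

```python
import pandas as pd

# Illustrative only: one row per respondent, with a hypothetical
# "performance_band" column and one column per survey theme (scores 1-5).
responses = pd.DataFrame({
    "performance_band":   ["high", "high", "mid", "mid", "low", "low"],
    "career_advancement": [4, 5, 3, 3, 2, 3],
    "autonomy":           [5, 4, 4, 3, 4, 3],
    "recognition":        [3, 4, 4, 4, 2, 3],
})

# Average score per theme within each band: a first look at whether
# different things matter to different groups.
print(responses.groupby("performance_band").mean().round(2))
```

The same group-by works for department, geography, tenure, or any demographic field you hold – the hard part is deciding which cuts are meaningful, not producing them.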

And what about whether everything is of equal importance? Take career advancement. For some, it’s vital. But there are also plenty of people for whom doing a good job, every day, is what they want. They might like the money of a promotion, but not the extra responsibility and pressure. To them a question like “Do you see a path for career advancement at [organisation]?” is far less relevant; it’s going to get a neutral or negative response. How do we interpret that?


Different needs for different groups

It’s very possible to put a rich data set behind your engagement survey, so that your data can be cut in many different ways.

Ultimately, you may be able to start understanding different employee needs – what the biggest priority is for different types of employee. You may be able to start to say that those groups will also tend to have these characteristics, and that might give a little more useful insight too.

Two caveats to this:

1) As I’ve said before, you can’t rely just on the data; you’ve got to get out and speak to people. Humans are complex.

2) You’ve got to be prepared to be wrong, and to pivot accordingly, whatever your insight has told you. Humans are complex and massively unpredictable.


Connection to actual outcomes

And – what I think has motivated Jessica and Susan to write their pieces in the first place – there’s so little effort placed on connecting engagement results to organisational outcomes.

Engagement data is treated as though it happens in isolation. We’re putting in measures, because we know that a poor level of engagement will affect our ability to perform. But we don’t then see how one tracks against the other.

If you do track engagement over time, that should in some way correlate to your organisation doing things better, quicker, cheaper – whatever metric makes most sense to you.

I’m not sure why that doesn’t happen. It might be fear of not getting the results you want. It might be a lack of confidence in making correlations. You can see why: there’s going to be a lag between action and reaction, so it may be hard to draw those connections.
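As a sketch of what that could look like – with entirely made-up monthly numbers and an assumed outcome metric of on-time delivery – you could simply line the engagement series up against the outcome at a few different lags and see where the relationship is strongest.

```python
import pandas as pd

# Illustrative only: monthly engagement scores alongside a monthly outcome
# metric (here, an assumed on-time delivery rate). The numbers are made up.
df = pd.DataFrame({
    "engagement":       [6.8, 7.0, 7.1, 6.9, 7.3, 7.4, 7.2, 7.5, 7.6, 7.4, 7.7, 7.8],
    "on_time_delivery": [0.81, 0.82, 0.84, 0.83, 0.83, 0.86, 0.87, 0.86, 0.88, 0.89, 0.88, 0.90],
})

# Correlate engagement with the outcome several months later, to allow for
# the lag between action and reaction.
for lag in range(4):
    r = df["engagement"].corr(df["on_time_delivery"].shift(-lag))
    print(f"lag {lag} months: r = {r:.2f}")
```

Correlation isn’t causation, of course, but a lag at which the relationship consistently peaks is exactly the kind of pattern the next section is about.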


What can be done to make the connection to outcomes?

I’ll be honest, the below isn’t something I’ve worked all the way through – it’s something I’d love the opportunity to work on:

If you commit to long-term analysis – and that means real consistency of process over a number of years – you should be able to draw out which aspects of engagement, when you get them right, have the biggest effect.

That is difficult, it will take considerable time, and adding a single human to any process introduces chaos. Adding x humans introduces x! chaos.

The aim is to collect enough data points so that the patterns appear out of the noise. If we tend to do this, we tend to see that. If we fail to do this, the effect tends to be that.

 

Further reading

 
