Ethics and data integrity
What happens when employee engagement survey data is used to predict the behaviour of the people who completed the survey?
As predictive modelling becomes ever more commonplace, with models used to predict everything from which television shows will be watched tonight to election outcomes, two questions cut to the heart of ethics when predictive modelling enters the world of employee engagement and feedback surveys.
“What happens when the organisation collating the employee feedback and engagement data uses the responses from employees to predict their future behaviour?”
And:
“When employees realise that their survey responses are being used to predict their behaviour, will they change their survey responses? And what does this do to the validity of the data being collected in the first place?”
Predicting people’s future behaviour is notoriously hard, especially in a work setting. A huge number of variables may be influential, yet only a limited set drawn from the workplace ever makes it into the model.
When assessing predictive modelling we need to consider a whole range of factors:
Is the data being used fairly?
What will you do with the predictive outputs?
What about the false positives? (See the short sketch after this list for how quickly they add up.)
Will we create more harm than good and are we opening ourselves up to legal challenge from our employees?
Is our predictive model explainable? Auditable? And defensible in court?
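To make the false-positive question concrete, here is a minimal sketch in Python. Every figure in it is an assumption chosen purely for illustration (headcount, attrition rate, model sensitivity and specificity), not a real benchmark or a description of any particular vendor's model. It simply shows how, when only a small share of people actually leave, the employees wrongly flagged as a flight risk can easily outnumber the ones correctly identified.

```python
# Hypothetical illustration: how false positives stack up when the base rate is low.
# All numbers below are assumptions chosen for the example, not real benchmarks.

def flight_risk_flags(n_employees, base_rate, sensitivity, specificity):
    """Return (true_positives, false_positives) for a hypothetical attrition model."""
    leavers = n_employees * base_rate
    stayers = n_employees - leavers
    true_positives = leavers * sensitivity          # leavers correctly flagged
    false_positives = stayers * (1 - specificity)   # stayers wrongly flagged
    return true_positives, false_positives

# Assumed figures: 1,000 employees, 10% annual attrition,
# and a model with 80% sensitivity and 90% specificity.
tp, fp = flight_risk_flags(1_000, 0.10, 0.80, 0.90)
precision = tp / (tp + fp)

print(f"Correctly flagged leavers: {tp:.0f}")        # 80
print(f"Wrongly flagged stayers:   {fp:.0f}")        # 90
print(f"Precision of the flags:    {precision:.0%}")  # ~47%
```

Under these assumed numbers, more than half of the people flagged as a flight risk were never going to leave, which is exactly the kind of outcome that can cause harm and invite legal challenge.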
As with any other modelling used in the workplace, I believe you must ask yourself these questions before using employee feedback data for anything other than its intended purpose: measuring engagement and helping people become their best at work in a happy and supportive environment.
Does predictive modelling really belong in the world of employee feedback and engagement? It is early days, so we don’t know for sure. Using it to reduce staff turnover is a worthy objective, but should it be done more openly, rather than collecting data for one purpose and then using it for another?
I personally foresee a situation where employees start to game and proactively manage their survey responses to stop any predictive model from identifying them as a flight risk. This deliberate change in their responses could make the engagement data less valuable, less accurate and less actionable than we would all want. I’m pretty sure this won’t be the last time this discussion point is raised.
What do you think? Is it all a bit dystopian?
After all, we’re all employees, even those who are implementing and using this predictive tech!
Find out more
If you'd like to learn more about our EX expertise, check out our Employee Experience page.