More than one in ten workers will try to game AI-driven tracking systems by 2023
By 2023, more than one in ten workers will seek to trick artificial intelligence (AI) systems used to measure employee behaviour and productivity.
Such systems have seen a significant uptick in use in the wake of the COVID-19 pandemic, according to Gartner's recently released report, "Predicts 2021: Digital Workplace Applications Evolve to Support Remote, 'Force Multiplier' Staff."
“Many businesses are making a permanent shift to full- or part-time remote work, which can be costly and can require cultural changes,” said Whit Andrews, distinguished research vice president at Gartner. “For management cultures that are accustomed to relying on direct observation of employee behaviour, remote work strengthens the mandate to digitally monitor worker activity, in some cases via AI.
“Just as we’ve seen with every technology aimed at restricting its users, workers will quickly discover the gaps in AI-based surveillance strategies. They may do so for a variety of reasons, such as lower workloads, better pay or simply spite. Some may even see tricking AI-based monitoring tools as a game to be won, rather than an act of disrespect towards a metric that management has a right to track.”
Organisations are using AI-enabled systems to analyse worker behaviour in the same way that AI is used to understand shoppers, customers, and members of the public. These tools provide basic activity logging with alerts, or in more sophisticated versions, can attempt to detect positive actions or misbehaviour through multivariable analysis.
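The "activity logging with alerts" pattern described above can be sketched as a simple statistical check. This is a hypothetical illustration only, not a description of any vendor's product: it flags days whose logged activity deviates sharply from a worker's own baseline.

```python
from statistics import mean, stdev

def flag_anomalies(daily_minutes, threshold=2.0):
    """Return the indices of days whose logged active minutes deviate
    from this worker's own baseline by more than `threshold` standard
    deviations (a deliberately simple, univariate alert rule)."""
    mu = mean(daily_minutes)
    sigma = stdev(daily_minutes)
    if sigma == 0:
        # Perfectly uniform activity produces no alerts at all.
        return []
    return [i for i, minutes in enumerate(daily_minutes)
            if abs(minutes - mu) / sigma > threshold]

# A week of logged active minutes; day 5 is a sharp drop-off.
log = [410, 395, 420, 405, 415, 120, 400]
print(flag_anomalies(log))  # → [5]
```

A check this simple also illustrates why such systems can be gamed: steady, artificially generated activity is indistinguishable from a steady genuine baseline, which is why more sophisticated tools turn to multivariable analysis.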
Many employers use productivity monitoring systems despite a high percentage of workers finding such tools unappealing. Even prior to the pandemic, Gartner research showed that workers feared new technologies used to track and monitor work habits. As these tools become more prevalent, Gartner predicts that organisations will increasingly face workers who seek to evade and overwhelm them.
Workers may seek out gaps where metrics do not capture activity, where accountability is unclear, or where the AI can be fooled by false or confusing data. Such activities have already been observed in digital-first organisations; for example, ride-share drivers sometimes work for two different services simultaneously as a way of maximising personal earnings.
“IT leaders who are considering deploying AI-enabled productivity monitoring tools should take a close look at the data sources, user experience design and the initial use case intended for these tools before investing,” said Andrews. “Determine whether the purpose and scope of data collection supports employees doing their best work. For those that do decide to invest, ensure that the technology is being implemented ethically by testing it against a key set of human-centric design principles.”