“As with other waves of automation, the supposed potential of generative artificial intelligence (AI) to transform the way we work is creating a huge buzz”, say Data & Society researchers Aiha Nguyen and Alexandra Mateescu in a new report on the technology and its use in the workplace. To get a sense of how this shift will affect employment, argues the report, we need to look beyond the dichotomy between an AI that empowers us and an AI that replaces us.
Advocates of generative AI often claim that it will improve labour efficiency. The promise is that it will automate tedious tasks in every sector, from customer service to medical diagnosis. In reality, the impact of AI is more ambiguous, and certainly less magical.
Yes, AI will affect the way work is organised. But for workers themselves, it will offer essentially nothing except an enhanced form of exploitation.
The media hype surrounding AI has had three effects. First, it has helped us forget that this technology will mostly affect work rather than leisure. Second, it has exaggerated AI’s capacity to replicate the knowledge and expertise of workers. Finally, it has understated the drawbacks of AI, most notably in its potential to exploit legal loopholes – in particular in copyright law. In general terms, AI reduces human work to a collection of data points, all while it remains highly dependent on that work in order to function. To develop a successful AI, one must not only tap into intellectual property without consent, but also extract data from workers.
In call centres, for example, operators’ conversations are used to create AI chatbots. The workers themselves are typically not paid. The same problem applies to authors whose publishers choose to feed their content to AI systems. For the time being, workers have little recourse to challenge this “unpaid commodification of their labour”. However, this new form of exploitation could have long-term consequences for them: its ultimate aim is to replace their work with algorithms in much the way that mannequins replace models in the world of fashion.
Progress has been made in some industries. The American Association of Voice Actors, for example, has called for actors’ consent to be obtained whenever their image or voice is used for AI, with limits on the duration of use and on the ancillary income. The researchers at Data & Society nonetheless point out that “major asymmetries of power and information between industries and workers remain typical”, and call for new types of labour rights and worker protection.
AI often enters the workplace in some innocuous way, only to be gradually assimilated as an integral part of existing work processes. In practice, automation rarely replaces workers. Instead, it tends to partially automate certain specific tasks, and above all to reconfigure the way humans work alongside machines. The output of AI often needs to be reworked before it is usable. Indeed, writers are now being hired to rehumanise synthetic texts – while being paid less than if they had written them themselves, on the pretext that they add less value.
Chatbots increasingly resemble autonomous vehicles, with their remote command centres where humans can take over control if necessary. The effect is to render invisible the many staff who teach the bots to speak and who correct their mistakes. This devaluation of the people behind AI obscures the extent of the human work required to make it function properly.
The use of AI can often lead to a worryingly excessive simplification of work processes. In 2023, for example, the National Eating Disorders Association sacked its online support staff and replaced them with a chatbot. The bot was then promptly suspended after an alleged incident in which it instructed people seeking help to… lose weight.
Similarly, machine-translation tools are increasingly being used in place of human interpreters in the USA’s system for processing asylum applications. This has led to refusals caused by obvious errors, such as names being changed into months of the year or deadlines being misunderstood. While machine translation can reduce costs, it is too often deployed in complex, high-stakes situations where it is inappropriate.
Lastly, the researchers point out, AI tends to replace certain types of staff more than others – most particularly those in junior or entry-level positions. This comes at an obvious cost: entry-level jobs are precisely where junior staff acquire training and essential skills. Such jobs also tend to be occupied disproportionately by women and minorities.
The use of AI can serve to tighten surveillance and the “datafication” of the workplace. It has greatly expanded the use of automated decision-making, which is already highly opaque to workers. The decisions in question include the automated allocation of tasks, employee appraisals and disciplinary measures, among others.
Not only do we take advantage of workers to produce automation, but this automation is also further constraining the parameters of that same work. As mentioned earlier, AI surveils call-centre agents so as to train chatbots that might replace them. But employees’ responses are also used to generate scripts that manage and regulate the employees’ interactions with customers, thus further restricting their autonomy in a pernicious feedback loop.
Presenting chatbots and AI as virtual assistants rather than virtual supervisors conceals the growing asymmetry of power at work, as Aiha Nguyen and Alexandra Mateescu point out. Such language helps to hide the opacity and increased control that the deployment of AI currently entails. Indeed, say the authors, “a critical assessment of generative AI in the workplace should begin by asking what a particular tool enables employers to do and what incentives drive its adoption beyond promises of increased productivity”.
In many industries, the adoption of generative AI is driven by the prospect of reducing costs or production times. It is already widely used in personnel planning tools, particularly in retail, logistics and healthcare. Here it can optimise such practices as understaffing and outsourcing, thereby maximising profits while at the same time worsening working conditions. Replacing employees with machines reinforces the idea that workers are now just interchangeable cogs in a machine.
Generative AI is generally adopted to speed up production and reduce costs. It does this by capturing more of the value of labour, in the form of workers’ data, and transferring it to cheaper machines supervised by cheaper employees.
AI means that workers are being reduced to the sum of their data. We urgently need to think about how we can expand the rights of those workers, and better protect the data they produce in the course of their work.
👉 Original article on Dans les Algorithmes