Artificial intelligence (AI) is rapidly reshaping the world of work. From managing productivity to determining which candidates get hired, AI systems are having profound effects on our daily routines – but these effects are not universally positive.
The risks associated with AI in the workplace range from reductions in job quality and spikes in work intensity to workplace discrimination and ubiquitous surveillance. For many workers, the introduction of AI systems to their workplaces leads to deeply unfair outcomes.
So far, the debate around the ethics of AI has generally skipped the question of work. Instead, the debate has focused on the risks AI poses to society as a whole. But a new collaboration between Fairwork and the Global Partnership on Artificial Intelligence (GPAI) aims to change that.
The Fairwork AI team, based at the Oxford Internet Institute, is in the process of producing a set of ethical principles and associated benchmarks to guide the deployment of AI systems in the workplace. These principles build on the 2019 OECD Recommendations on Artificial Intelligence and are being generated through a two-stage multistakeholder consultation engaging with representatives from the International Labour Organization, Uber, Microsoft, the International Transport Workers' Federation, the UK Information Commissioner’s Office, and more. The resulting set of principles will be published as part of a full report by the end of 2022.
The first output of the project is an open access research article laying out the team’s critique of the existing AI ethics literature in relation to work: Politics by Automatic Means? A critique of Artificial Intelligence Ethics at Work.
Once the Fairwork AI principles are published, a new Fairwork AI team will lead an extended impact phase to put them into practice. This impact phase will begin just as more and more concrete legislative action is being taken to regulate AI systems, from the EU AI Act to the US Algorithmic Accountability Act. But our experience in the platform sector shows that regulatory action can benefit from non-statutory, civil society-led monitoring and standard-setting approaches such as Fairwork. In fact, our Fairwork methodology has continued to open up space for an ecosystem of policy actors to understand the current state of play and take meaningful action to mitigate the risks of the platform economy in a way that complements the development of concrete legislation.
As AI regulation begins to be developed across the world, we also need non-governmental organizations to perform two key functions: first, multinational monitoring of conditions with a consistent and comparable methodology; and second, the creation of a set of practical standards of fairness, backed by a system of scrutiny that leverages private sector actors to make proactive change – thereby demonstrating the feasibility of fair work.
The overall objective guiding this work will be to highlight the fundamental questions of fairness posed by the widespread deployment of AI in the workplace, provide information on the existing practices, risks and outcomes of this deployment, and shape the standards through which this deployment is evaluated. Throughout, Fairwork remains fundamentally committed to understanding and amplifying workers’ experiences of work, as a primary step towards enabling fairer outcomes.
Our experience in the platform sector offers us a robust model for predicting how our AI work can develop in the future. We perceive multiple, overlapping impact streams for the project:
- Workers: Offering workers and worker organizations a set of standards to mobilize around and negotiate over, as well as analysis of wider trends and key risks.
- Policymakers: Informing policymakers of the state of play and possible routes for regulatory action at all stages in the development of regulatory systems.
- Private sector: Shaping the practices of private sector actors through a combination of public scrutiny and proactive stakeholder engagement during the scoring process and beyond.
- Civil society: Mobilizing civil society to both advocate for large-scale change and shape the macro conditions of the private sector through changes to consumer demand.
The exact scope of this impact scheme will vary depending on the results of ongoing funding bids, but we intend for it to consist of two streams: first, one or more case studies of workplaces where the principles have been implemented; and second, a rating exercise in line with the ones produced by the 40+ Fairwork gig work and cloudwork teams.
Regarding the case studies, our goal is for them to act as a significant reference point for the implementation of fairer AI in the workplace, one which can be cited by a wide range of actors seeking to form policy or best practice. To do so, we will collect qualitative data on the process of principle adaptation by the partner organization through a combination of participant observation and interviews. We will then produce multi-format research outputs that can be shared across social and news media, detailed in academic publications and blogs, and written up into a second public report specifically focused on practical implementation.
In terms of scoring, we will score at least four large, high-profile UK employers who extensively deploy AI in the workplace and engage with them to discuss and shape their use of technology. This approach will not only positively impact the working conditions of a significant number of workers, but also begin a discussion about how the GPAI principles can be applied in practice to further the vision laid out in the OECD Principles on AI. Through our direct engagement with the companies, we will also be able to identify direct points of intervention for policy and communicate these to UK policymakers.
Fairwork’s network of research teams across 40+ countries offers us an unparalleled opportunity to monitor and influence the deployment of AI systems to workplaces across the globe. If you want to collaborate with the Fairwork AI project, contact us now.