GPAI Fairwork AI Principles 2022

Fairwork is a research project that aims to set and measure fair standards for the future of work. In 2022, Fairwork led a project funded by the Global Partnership on Artificial Intelligence to develop a set of principles to guide the fair use of AI systems in the workplace.

Through a year-long global stakeholder consultation, we developed the ten Fairwork AI principles alongside stakeholders ranging from Uber and Microsoft to the International Labour Organisation, the International Transport Workers' Federation and the Distributed AI Research Institute. These principles were then published in a report.

Fairwork has now adopted these principles to measure the fairness of Artificial Intelligence systems in workplaces across the world. The Fairwork AI project will start this evaluation in the UK with a series of case studies in different sectors.

The Fairwork AI team has released a policy brief focusing on AI governance in the UK from the perspective of the workplace.


Fairwork AI principles

1. Guarantee fair work

Ongoing changes in work caused by the introduction of AI systems have the potential to disrupt the labour market, but internationally agreed minimum rights and standards remain a precondition of fair AI.

2. Build fair production networks

AI system development and deployment relies on global networks of human labour, hardware production, and infrastructure. Organisations seeking to implement fair AI in the workplace must therefore look beyond the immediate production process to the networks of production that enabled it and use their procurement power to achieve fairness across the network.

3. Promote explainability

Workers have a right to understand how the use of AI impacts their work and working conditions. Organisations must respect this right and provide detailed, understandable resources to allow workers to exercise it.

4. Strive for equity

AI systems have been found to reproduce and scale up patterns of social discrimination. The cost of embedding negative consequences for marginalised groups into workplace technology is extremely high. AI systems must therefore be (re)designed, built, and deployed in a way that actively seeks to eliminate sources of discrimination. Processes such as audits and impact assessments should be integrated into the AI system lifecycle to allow for ongoing scrutiny.

5. Make fair decisions

The automation of decision-making can reduce accountability and fairness, and building human oversight into the decision-making loop doesn't by itself solve this problem. Instead, the subjects of those decisions need to be empowered to challenge them, and renewed emphasis should be placed on the liability of the stakeholders who direct the development and deployment of AI systems in the workplace.

6. Use data fairly

The collection of large quantities of data and the concentration of its ownership may exacerbate risks for individuals and social groups, especially when data is shared with third parties. Limits must therefore be placed on collection (i.e. data minimisation), and processes must be instituted for data subjects to access and protect their data in a comprehensive and explainable format. Organisations should provide comprehensive guidelines that help individuals understand data ownership, data usage and any resulting risks, so that they are able to question, contest and, when necessary, reject decisions made about them.

7. Enhance safety

Advances in algorithmic management have increased the risks of work intensification and surveillance. In this context, the right to healthy, safe working environments must be protected. Potential improvements in safety should be capitalised on, but deployment must take place in a way that reflects the different understandings stakeholder groups hold about the trade-offs involved.

8. Create future-proof jobs

The introduction of AI systems into workplaces poses specific risks, such as job destruction and deskilling. These risks can be reduced by treating the introduction of AI as an opportunity for workers and organisations to engage in a participatory and evolutionary redesign of work, one which uses the rewards of AI to increase job quality.

9. Avoid inappropriate deployment

Organisations should proactively test AI systems to a high standard so that harms are avoided before deployment, rather than addressed iteratively afterwards.

10. Advance collective worker voice

The risks and rewards of AI systems are understood differently by different stakeholder groups. These divergences should be proactively negotiated, rather than suppressed. Pursuing AI system implementation in a multi-stakeholder environment requires a mechanism to turn ethical principles into ethical practice through democratic participation by workers. Collective bargaining between workers and management is best suited to play this role.