The Workers Training AI Models Are Paid $1.50 an Hour. The Models They Build Sell for $200 a Month.

Every large language model that exists today was built on the invisible labour of hundreds of thousands of people who were paid, in most cases, between $1 and $3 per hour to do work that the AI companies themselves describe as critical to making their products safe and useful.

This is not a secret exactly, but it is a fact that the AI industry has been very successful at keeping out of the conversation about its products. When OpenAI describes ChatGPT as a marvel of machine learning, it is technically accurate. It is also true that the machine did not learn to refuse harmful requests, avoid racial stereotypes, or navigate sensitive topics by magic. It learned because tens of thousands of workers in Kenya, the Philippines, India, and Venezuela read disturbing content, categorised it, labelled it, and told the model what was harmful and what was not.

What the Work Actually Involves

Data annotation work for AI training ranges from straightforward to genuinely traumatising. At the benign end: labelling images, rating the quality of responses, identifying errors in generated text. At the other end: reviewing graphic violence, child abuse material, and extremist content in order to train content moderation systems to recognise and block it. Workers in this latter category often have no psychological support, no clear limits on their exposure, and no recourse when the content affects them.

A Time investigation published in early 2023 documented Kenyan workers, hired through a subcontractor working for OpenAI, being paid less than $2 an hour to review some of the most disturbing content imaginable for hours at a time. OpenAI’s response to the coverage was to acknowledge the importance of the work and say it was committed to improving conditions. That subcontractor, Sama, later exited the content moderation business, citing the psychological toll on workers.

The working conditions have not fundamentally improved since that investigation.

The Geography of the Work Is Not an Accident

The reason this work is concentrated in Kenya, the Philippines, Venezuela, and similar markets is not that workers in those countries are uniquely suited to it. It is that paying someone in Nairobi $1.50 an hour is legal, while paying someone in California the same amount is not. The work is identical. The labour cost is not.

Remotasks, Scale AI, Appen, and dozens of similar platforms connect AI companies with annotation workforces in lower-income countries. The AI company pays the platform, the platform takes its cut, and the individual worker receives what is left, often with no employment protections, no benefits, and no guaranteed hours. Workers are classified as independent contractors in most jurisdictions, which exempts the companies from minimum wage obligations and employer-side tax contributions.

The Revenue Multiple

OpenAI’s ChatGPT Plus subscription costs $20 a month, and its ChatGPT Pro tier costs $200 a month. Claude Pro costs $20 a month. Microsoft 365 Copilot runs $30 per user per month. These are the entry-level list prices; enterprise deals run significantly higher.

The workers whose labour made these products safe enough to sell are paid between $1 and $3 per hour. An annotator working a full 40-hour week for 52 weeks earns between $2,080 and $6,240 a year. The companies selling the products built on that labour are valued in the hundreds of billions of dollars.
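For readers who want to check those figures, here is the back-of-the-envelope arithmetic as a short Python sketch. The wage range and the $20 monthly subscription price are the numbers quoted above; everything else follows from them, and no other data is assumed.

```python
# Annual earnings for a full-time annotator at the wage range quoted above.
HOURS_PER_WEEK = 40
WEEKS_PER_YEAR = 52

for hourly_wage in (1.00, 3.00):
    annual = hourly_wage * HOURS_PER_WEEK * WEEKS_PER_YEAR
    print(f"${hourly_wage:.2f}/hour -> ${annual:,.0f}/year")
# $1.00/hour -> $2,080/year
# $3.00/hour -> $6,240/year

# For comparison, one $20-a-month subscription brings in $240 a year,
# so roughly nine to twenty-six subscribers cover a full-time annotator's wage.
print(20 * 12)  # 240
```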

This is not unique to AI. It is the global labour arbitrage model that has underpinned supply chains for decades. What is different is the marketing. AI companies present themselves as building the future, doing something unprecedented and ethical, taking safety seriously. The gap between that positioning and the conditions of the workers in their supply chains is larger than in almost any other technology sector.

What Would Change It

Transparency requirements that force AI companies to disclose their data annotation supply chains. Minimum compensation floors applied to platforms operating in specific markets. Classification of annotation workers as employees rather than contractors. Or simply consumers and enterprise clients demanding that the AI tools they use be built ethically, in the same way some consumers demand ethically sourced coffee.

None of these things are happening at meaningful scale. The workers who made your AI assistant safe enough to use remain largely invisible, underpaid, and unprotected while the companies they made possible are preparing IPOs that will make their founders billionaires.

