
The Hidden Cost of AI: Extractive AI Is Bad for Business

The next big AI risk isn’t existential; it’s economic. Companies that extract workers’ expertise without consent may find themselves trading short-term speed for long-term value erosion.

The Chinese AI company DeepSeek recently sent shockwaves through the financial world, causing market chaos and sparking uncertainty among tech policymakers. OpenAI released a statement acknowledging potential evidence that DeepSeek trained its model on outputs from OpenAI’s GPT-4o model through a process called distillation. Simply put, DeepSeek is accused of training its model on OpenAI’s model and benefiting from that transfer of knowledge. But before we ask whether DeepSeek stole from OpenAI, we should ask a deeper question: whom did OpenAI take from?

OpenAI has itself been accused of illegally appropriating data in the form of news articles, stories, and even YouTube video transcriptions to power its models. Those models are trained on vast amounts of human-generated data, often without compensation or acknowledgement for the human creators. These practices receive only light discussion at major international AI safety summits, such as those in the United Kingdom, South Korea, and most recently this past February in France, which tend to focus on whether AI might invent biological weapons, develop new cyberattacks, or harbor unseen biases that pose a threat to humanity. Meanwhile, the silent transfer of value from creators to algorithms is emerging as one of the most overlooked economic risks of the AI boom. The truth is, people already report being harmed by decisions to use or deploy AI.

One of the first major labor disputes over the use of AI played out in the 2023 Writers’ Guild of America (WGA) strike. While the main issues were streaming services and residuals owed to writers, negotiations over generative AI prolonged the strike, and the technology now has its own section in the WGA’s Minimum Basic Agreement (MBA). Essentially, the WGA won assurances that studios cannot use AI to write or rewrite literary material and that AI-generated content cannot be used as source material, protections that shape how writers receive credit for their original work. The MBA also gives the WGA the right to assert that exploiting writers’ work to train an AI model is prohibited.

Studios weren’t pursuing AI to enrich storytelling. They wanted faster, cheaper content. It’s a business strategy, not a creative one, and it mirrors a broader trend: companies leveraging generative AI not to enhance human work but to replace it. While the WGA succeeded in protecting its writers, labor displacement in favor of generative AI remains a looming threat to the broader U.S. workforce across industries. AI systems are being trained on years of human expertise: journalists’ reporting, customer service transcripts, school curricula, and more. Workers are being displaced by tools built on their own labor, without credit or compensation. It’s not just automation; it’s extraction.

As Nobel laureates Daron Acemoglu and Simon Johnson have argued, this approach to AI isn’t driving shared productivity; it’s flattening wages, eroding job quality, and accelerating inequality. Investors should take note: AI products that exploit human labor without consent increasingly face lawsuits, regulatory scrutiny, and reputational blowback. Even in customer-facing roles, the business case for replacement is shaky. AI chatbots trained on customer service conversations replicate scripts, but not human judgment, empathy, or creativity. The result? A degraded user experience and declining customer satisfaction. You don’t need to be a labor economist to understand that replacing real insight with surface mimicry creates brand risk.

The same is true in media, law, design, and finance. Professionals are watching as their intellectual output becomes training data used to build products that may ultimately devalue their expertise. In the long run, that threatens talent pipelines, company culture, and competitive moats. 

The path forward requires recognizing this form of harm. Companies that invest in augmenting human work, respect creative rights, and prioritize transparency in AI deployment will be better positioned to attract top talent and earn long-term trust. In the decade ahead, speed alone won’t define the AI winners; the edge will go to those who treat human capital as a strategic asset.

The question isn’t whether AI will transform work, but whether companies will use that power to elevate human talent or extract it. Bet on the former, because extraction isn’t a long-term business model.

About the Authors: Ali Crawford, Andrew J. Lohn, and Matthias Oschinski

Ali Crawford is a research analyst at Georgetown University’s Center for Security and Emerging Technology (CSET), where she works on the CyberAI Project. Her work focuses on how the United States is building and maintaining cyber and AI education and workforce ecosystems. She earned her M.A. in national security and diplomacy from the University of Kentucky, and her B.S. in international business from West Virginia Wesleyan College.

Andrew J. Lohn is a senior fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. He previously served as the director for emerging technology on the National Security Council staff within the Executive Office of the President during the Biden administration, under an Intergovernmental Personnel Act agreement with CSET. He has a PhD in electrical engineering from UC Santa Cruz and a bachelor’s in engineering from McMaster University.

Matthias Oschinski is a senior fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), where his work focuses on research related to the AI workforce. He has a PhD in economics from the Johannes-Gutenberg University in Mainz, Germany, a master’s degree in forced migration from Oxford University, and a master’s degree in economics from the Julius-Maximilians University in Würzburg, Germany.

Image: Shutterstock
