AI in Media: When Machines Create the Content
An investigation into AI's transformation of media and entertainment — automated journalism, AI-generated art and music, Hollywood's labor battles, deepfake entertainment, copyright wars, synthetic influencers, and the erosion of creative livelihoods.
The Content Machine
In March 2023, a synthetic image of Pope Francis in a white Balenciaga puffer jacket went viral, fooling millions into believing it was real. It was generated by Midjourney in seconds. The episode was treated as a curiosity — a viral moment, a meme, a parlor trick. It was actually a warning shot: the infrastructure of trust that separates real information from fabrication was breaking down, and the tools to break it further were improving exponentially.
By 2026, AI generates, assists, or curates the majority of digital content consumed globally. Recommendation algorithms on YouTube, TikTok, Netflix, and Spotify decide what billions of people see, hear, and read. Generative AI tools produce images, text, music, video, and code at scales that dwarf human output. The question is no longer whether AI will transform media and entertainment. It is whether recognizable forms of human creative expression will survive the transformation, and on what terms.
This is an industry where AI simultaneously empowers and displaces, where the technology that gives a teenager in Lagos the ability to produce a Hollywood-quality short film also eliminates the jobs of the professionals who used to make those films. The economics are brutal, the creative implications are profound, and the regulatory framework is nearly nonexistent.
Automated Journalism: The Newsroom Without Reporters
The Quiet Automation
The Associated Press began using AI to generate corporate earnings reports in 2014, in partnership with Automated Insights. The system produced over 4,000 earnings stories per quarter — more than the AP’s human reporters had ever written. For most readers, the quality was indistinguishable from human-written reports. More importantly, the system freed human journalists to pursue analytical and investigative stories rather than templated recaps.
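The underlying technique is straightforward: fill narrative templates from structured data feeds. A minimal sketch in that spirit (the field names, thresholds, and phrasing here are illustrative assumptions; Automated Insights' production system is far more sophisticated):

```python
# Minimal sketch of template-based earnings-story generation from
# structured data. All names and phrasing are illustrative.

def earnings_story(company: str, eps: float, eps_expected: float,
                   revenue_bn: float) -> str:
    """Render a one-paragraph earnings recap from structured figures."""
    surprise = eps - eps_expected
    if surprise > 0:
        verb = "beat"
    elif surprise < 0:
        verb = "missed"
    else:
        verb = "met"
    return (
        f"{company} reported earnings of ${eps:.2f} per share, "
        f"which {verb} analyst expectations of ${eps_expected:.2f}. "
        f"Revenue for the quarter was ${revenue_bn:.1f} billion."
    )

print(earnings_story("Acme Corp", 1.42, 1.30, 12.7))
```

The appeal to newsrooms is obvious from the sketch: once the template exists, each additional story costs only a data lookup, which is why thousands of recaps per quarter became feasible.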
By 2026, automated journalism has expanded far beyond earnings reports. The Washington Post’s Heliograf system has generated thousands of articles on high school sports, election results, and local real estate markets. Reuters uses AI to generate financial news summaries and commodities reports. Bloomberg’s AI systems produce real-time market updates that constitute a significant fraction of the terminal’s news output.
The Local News Crisis
The intersection of AI and local journalism is particularly concerning. Over 2,900 newspapers have closed or merged in the United States since 2005, creating vast “news deserts” where no local news organization exists. AI has the potential to partially fill this gap — generating high school sports coverage, municipal meeting summaries, and local event announcements from structured data.
But automated local journalism cannot perform the core function of local news: accountability reporting. Investigating corruption at city hall, documenting environmental violations, holding school boards accountable — these activities require human judgment, source relationships, and the willingness to pursue stories that powerful interests would prefer to suppress. No AI system does this.
The risk is that AI-generated local news creates the appearance of coverage without the substance, satisfying the informational need (what happened at last night’s school board meeting?) while eliminating the accountability function (why did the school board award that contract to the superintendent’s brother-in-law?).
Newsroom Economics
The economics of AI in journalism are stark. A reporter costs $60,000-$120,000 annually in salary and benefits; an AI content generation system operates at a fraction of that cost and at far higher volume. News organizations with declining advertising revenue and shrinking subscriber bases are under enormous pressure to automate routine content production.
The resulting dynamic is a two-tier newsroom: a shrinking core of human journalists doing original reporting, investigation, and analysis, supported by AI systems that handle routine content generation, transcription, translation, and distribution optimization. This model works if the human core is preserved and invested in. The danger is that cost-cutting reduces the human core to the point where the journalism that matters most — the journalism that AI cannot do — disappears.
AI-Generated Art: Beauty, Theft, and the Death of the Illustrator
The Image Generation Explosion
Midjourney, DALL-E, Stable Diffusion, and their successors have transformed image creation. These systems generate photorealistic images, illustrations, concept art, and design mockups from text descriptions in seconds. Midjourney alone has over 16 million users and generates hundreds of millions of images monthly.
The impact on commercial illustration and photography has been severe. Stock photography agencies report revenue declines of 30-40% since AI image generators launched. Getty Images, which filed suit against Stability AI for training on its copyrighted images, simultaneously launched its own AI image generation service in partnership with NVIDIA. Shutterstock licensed its library to OpenAI for training and launched an AI image generation tool.
Freelance illustrators, concept artists, and graphic designers report client losses of 30-70%, according to surveys by the Graphic Artists Guild and the Society of Illustrators. The speed and cost advantages of AI generation are overwhelming for routine commercial work — social media graphics, blog illustrations, marketing materials, presentation visuals.
The Training Data Controversy
The central ethical and legal controversy in AI-generated art is training data. Models like Stable Diffusion were trained on billions of images scraped from the internet, including copyrighted works by living artists, without consent, notification, or compensation.
Artists have organized in response. The Fairly Trained certification program identifies AI models trained exclusively on licensed or public domain data. Spawning AI developed tools (Have I Been Trained?) that allow artists to search whether their work appears in AI training datasets. Opt-out registries (Do Not Train) allow artists to declare their work off-limits, though compliance is voluntary and unenforceable.
Class-action lawsuits are proceeding through U.S. courts. Andersen v. Stability AI (which also names Midjourney and DeviantArt as defendants) and Getty Images v. Stability AI raise fundamental questions about whether training AI models on copyrighted works constitutes fair use. The outcomes will reshape the economics of generative AI across all media.
Hollywood Strikes: Labor Fights the Machine
The 2023 Inflection Point
The 2023 strikes by the Writers Guild of America (WGA) and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) were the first major labor actions in any industry driven substantially by AI concerns. Both unions made AI regulation in creative work a central demand.
The WGA’s contract, ratified in September 2023, established that AI cannot be credited as a writer, that AI-generated content cannot be used as source material without writer consent, and that studios must disclose any AI-generated material provided to writers. The SAG-AFTRA contract, ratified in November 2023, addressed the use of AI-generated digital replicas of actors, requiring informed consent and compensation for the use of an actor’s likeness by AI systems.
Enforcement Challenges
These provisions were groundbreaking. They are also difficult to enforce. Determining whether a screenplay was “informed by” AI-generated outlines or whether a studio executive used ChatGPT to draft scene descriptions before handing them to a writer requires a level of process monitoring that the guilds struggle to maintain.
The larger challenge is that Hollywood is a global industry. While WGA and SAG-AFTRA contracts govern U.S. productions, content produced in other jurisdictions — India, South Korea, Nigeria, the UK — operates under different labor regimes. AI-generated content can be produced anywhere, and the competitive pressure on U.S. studios to use AI for cost reduction is intensified by competitors with fewer labor constraints.
The Synthetic Actor
AI-generated actors and digital replicas represent the most dramatic flashpoint. Companies like Synthesia, Hour One, and Soul Machines produce synthetic presenters for corporate video, advertising, and customer service. These digital humans are created from real actors’ performances, raising questions about consent, likeness rights, and the economic displacement of background actors and extras.
Deepfake technology enables the use of deceased actors’ likenesses in new productions. The estate of James Dean reportedly licensed his digital likeness for a Vietnam War film. De-aging technology and AI-generated performances in franchises like Star Wars and Indiana Jones have pushed the boundaries of what constitutes an “actor’s performance.”
AI Music Generation: Suno, Udio, and the Songwriter’s Nightmare
The Capability Leap
AI music generation has advanced from novelty to commercial viability with startling speed. Suno and Udio, the leading platforms, can generate full songs — vocals, instrumentation, lyrics, mixing — from text descriptions in under a minute. The quality ranges from passable to genuinely impressive, particularly for genres with formulaic structures: pop, country, electronic dance music, ambient.
As of early 2026, Suno has over 12 million registered users and reports that users have generated over 100 million songs on its platform. Udio, backed by a16z with $10 million in seed funding, has positioned itself as the higher-quality alternative for more sophisticated musical output.
The Copyright Battleground
The music industry’s response has been aggressive litigation. The Recording Industry Association of America (RIAA), representing Universal Music Group, Sony Music, and Warner Music Group, filed suit against both Suno and Udio in 2024, alleging that the platforms trained on copyrighted recordings without authorization.
The cases raise the same fair use questions as visual AI generation, but with additional complications. Music copyright encompasses both composition (melody, lyrics, harmony) and sound recording (the specific performance). AI music generators potentially infringe both, and the music industry has historically been more aggressive in copyright enforcement than the visual arts.
The economic stakes are significant. The global recorded music market was valued at $28.6 billion in 2023. If AI-generated music captures even 10-15% of that market through background music, commercial jingles, podcast soundtracks, and social media content — applications where bespoke human composition is expensive and AI generation is adequate — the annual revenue displacement would be roughly $2.9-$4.3 billion.
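As a rough check, 10-15% of the $28.6 billion market figure works out to:

```python
# Back-of-envelope displacement estimate: 10-15% of a $28.6B
# recorded-music market (2023 valuation cited above).
market_bn = 28.6
low, high = 0.10, 0.15
print(f"${market_bn * low:.2f}B - ${market_bn * high:.2f}B")
# → $2.86B - $4.29B
```

This is an upper-bound framing, since the addressable sub-market for background and utility music is smaller than the full recorded-music market.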
The Ghostwriting Phenomenon
A more subtle disruption is emerging: AI-assisted songwriting that is never disclosed. Producers and songwriters are using AI tools to generate melodic ideas, chord progressions, and lyrical concepts that they then refine and claim as original work. The prevalence of this practice is unknown by design, but industry insiders interviewed by multiple outlets describe it as widespread.
This creates a paradox: the music industry is simultaneously suing AI platforms for copyright infringement while its own members quietly use AI tools in their creative process. The distinction being drawn — using AI as a private creative tool versus training AI on copyrighted works without permission — is legally coherent but commercially fragile.
The Copyright Wars: NYT v. OpenAI and Beyond
The Marquee Case
The New York Times’ lawsuit against OpenAI and Microsoft, filed in December 2023, is the highest-profile copyright case in AI. The Times alleged that OpenAI trained its models on millions of Times articles without authorization, and that ChatGPT can reproduce Times content nearly verbatim in response to certain prompts.
The case is significant not only for its specifics but for the principles at stake. If training AI models on copyrighted content constitutes fair use, then the entire generative AI industry’s legal foundation is secure, and content creators have no right to compensation. If it does not, then virtually every generative AI model in existence was built on unauthorized use of copyrighted material, with potentially enormous liability.
The Broader Landscape
The Times case is one of dozens. Authors (in an Authors Guild class action with named plaintiffs including Douglas Preston), visual artists, music publishers, and software developers have all filed copyright claims against AI companies. The legal landscape as of early 2026 includes:
| Case | Parties | Key Issue | Status (2026) |
|---|---|---|---|
| NYT v. OpenAI | New York Times vs. OpenAI/Microsoft | News article training data | Discovery phase |
| Andersen v. Stability AI | Artists vs. Stability AI | Visual art training data | Class certification granted |
| Getty v. Stability AI | Getty Images vs. Stability AI | Stock photo training data | Settled (UK); pending (US) |
| RIAA v. Suno/Udio | Major labels vs. music AI | Sound recording training data | Pre-trial motions |
| Doe v. GitHub | Developers vs. Microsoft/GitHub/OpenAI | Code training data (Copilot) | Partially dismissed, ongoing |
| Concord v. Anthropic | Music publishers vs. Anthropic | Song lyrics reproduction | Active litigation |
The outcomes of these cases will determine whether AI companies must license training data, compensate creators retroactively, or operate freely under fair use protections. The creative industries’ future relationship with AI depends substantially on these judicial decisions.
Deepfake Entertainment and Disinformation
The Democratization of Deception
Deepfake technology — AI-generated video and audio that convincingly depicts real people saying and doing things they never said or did — has progressed from detectable novelty to near-undetectable sophistication. Consumer-grade tools can now produce deepfake videos that pass casual inspection, and professional-grade tools can deceive even careful viewers.
In entertainment, deepfakes enable creative possibilities: de-aging actors, translating performances into new languages with lip-synced dubbing, and creating posthumous performances. The ethical boundaries are debated but manageable when consent is obtained.
In disinformation, deepfakes are weaponry. AI-generated audio of political figures making inflammatory statements has circulated during election cycles in the United States, India, Nigeria, and the European Union. A deepfake robocall using President Biden’s synthetic voice discouraged voting in the 2024 New Hampshire primary, leading to regulatory action and criminal charges.
The detection-generation arms race favors generation. Detection tools from companies like Reality Defender, Sensity AI, and Microsoft’s Video Authenticator can identify many deepfakes, but their accuracy degrades as generation technology improves. More fundamentally, detection requires that someone choose to check — and in the fast-moving information environment of social media, most content is consumed and shared without verification.
Synthetic Influencers
AI-generated virtual influencers represent a commercially significant phenomenon. Lil Miquela, a synthetic influencer created by Brud, has 2.6 million Instagram followers and has partnered with Prada, Calvin Klein, and Samsung. Lu do Magalu, created by Brazilian retailer Magazine Luiza, has over 30 million followers across platforms.
These synthetic personas raise novel questions about authenticity, disclosure, and consumer manipulation. When a synthetic influencer promotes a product, is the audience being deceived? When the persona is explicitly disclosed as AI-generated, is there a meaningful difference from any other form of branded content?
The market for virtual influencers is projected to exceed $37 billion by 2030, according to Grand View Research. For the media and entertainment industry, synthetic influencers represent both a commercial opportunity and an existential challenge: if audiences form parasocial relationships with AI-generated personas as readily as with human influencers, the economic foundation of human celebrity — scarcity, authenticity, vulnerability — erodes.
Game Development: AI as Creative Partner and Threat
Procedural Everything
Video game development has used AI for decades — from pathfinding algorithms to procedural content generation. But generative AI is transforming game development at a fundamental level.
AI asset generation tools from companies like Scenario, Leonardo AI, and Luma Labs produce 3D models, textures, and environments from text or image prompts, reducing production timelines for indie games from years to months. NPC dialogue systems powered by large language models enable dynamic conversations that respond to player actions rather than following scripted trees. Inworld AI, which provides AI-powered NPC platforms, has partnered with Xbox and has raised over $120 million.
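The architectural shift from scripted dialogue trees to model-driven NPCs amounts to prompt assembly: the character's persona and a rolling memory of recent game events are packed into a context window and sent to a language model. A minimal illustration with a stubbed model call (the `generate` function stands in for any hosted LLM API; this is a sketch of the general pattern, not Inworld's actual platform):

```python
from dataclasses import dataclass, field

@dataclass
class NPC:
    """Sketch of an LLM-driven NPC: a persona plus rolling event memory."""
    name: str
    persona: str
    memory: list = field(default_factory=list)  # recent world events

    def observe(self, event: str) -> None:
        """Record a game event, keeping only the last few for context."""
        self.memory.append(event)
        self.memory = self.memory[-5:]

    def build_prompt(self, player_line: str) -> str:
        """Assemble persona, memory, and player input into one prompt."""
        events = "\n".join(f"- {e}" for e in self.memory)
        return (
            f"You are {self.name}, {self.persona}.\n"
            f"Recent events you witnessed:\n{events}\n"
            f'The player says: "{player_line}"\n'
            f"Reply in character, in one sentence."
        )

def generate(prompt: str) -> str:
    """Stub standing in for a real LLM call."""
    return "(model reply conditioned on: " + prompt.splitlines()[0] + ")"

npc = NPC("Mara", "a blacksmith wary of strangers")
npc.observe("The player broke into the armory last night.")
print(generate(npc.build_prompt("Can you forge me a sword?")))
```

Because the model sees the armory break-in in the prompt, its reply can reference an event no writer scripted — which is precisely what distinguishes this approach from a fixed dialogue tree.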
The Workforce Impact
The game industry employed approximately 320,000 people in the U.S. in 2024, according to the Entertainment Software Association. AI threatens to displace a significant fraction of these positions, particularly in art production (3D modeling, texturing, concept art), quality assurance (automated testing), localization (AI translation), and narrative design (procedural dialogue).
The impact is already visible. Layoffs across the gaming industry in 2023-2025 — over 20,000 job cuts across major studios including Microsoft, Sony, EA, and Embracer — were attributed partly to AI-driven efficiency gains. While cyclical economics and post-pandemic corrections were also factors, studio executives increasingly cite AI tools as enabling equivalent output with smaller teams.
AI Sports Commentary and Beyond
AI-generated sports commentary has moved from experiment to deployment. The Chinese Basketball Association has used AI commentators for select broadcasts. AI-powered highlight generation systems at ESPN and other networks automatically identify and clip significant plays using computer vision, reducing production staff requirements.
In publishing, AI-assisted sports journalism covers minor league, amateur, and international competitions that lack the audience to justify human correspondents. The volume of sports content available to consumers has expanded enormously, even as the human workforce producing it contracts.
The Creator Economy: Empowerment and Extraction
The Paradox
AI simultaneously empowers and exploits creative workers. A filmmaker who once needed a $50,000 budget and a full crew can now produce a visually compelling short film with AI assistance for under $1,000. A musician who once needed a recording studio can produce a polished track on a laptop. A writer who once needed an agent and a publisher can generate, illustrate, and publish a book independently.
This empowerment is real, and it is democratizing access to creative production in historically unprecedented ways. But it also drives the value of any individual creative output toward zero. When everyone can produce professional-quality content, the scarcity premium that sustained creative professionals evaporates.
The result is a creator economy that is simultaneously larger (more people producing more content) and poorer (less revenue per creator). Spotify pays an average of $0.003-$0.005 per stream; flooding the platform with AI-generated music further dilutes the revenue pool for human artists. Amazon’s Kindle store is inundated with AI-generated books that dilute discoverability for human authors. Social media feeds mix human and AI-generated content indistinguishably, with engagement algorithms indifferent to the source.
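The dilution mechanism is concrete because major streaming platforms pay pro-rata: a fixed royalty pool is divided by total streams, so every AI-generated stream lowers the per-stream rate for everyone. A stylized sketch with invented numbers:

```python
# Stylized pro-rata royalty pool: a fixed pool is split by share of
# total streams, so added AI-generated streams dilute human payouts.
# All figures are invented for illustration.

def per_stream_rate(pool_usd: float, total_streams: int) -> float:
    """Per-stream payout under a pro-rata pool model."""
    return pool_usd / total_streams

pool = 10_000_000.0            # monthly royalty pool (illustrative)
human_streams = 2_000_000_000  # streams of human-made tracks

before = per_stream_rate(pool, human_streams)
after = per_stream_rate(pool, human_streams + 500_000_000)  # AI influx

print(f"before: ${before:.4f}/stream, after: ${after:.4f}/stream")
# → before: $0.0050/stream, after: $0.0040/stream
```

Human artists' catalogs earn less per play even if their own stream counts are unchanged, which is why a flood of AI tracks harms incumbents without any listener ever choosing them over a human artist.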
What Survives
The media and entertainment industries that survive AI disruption will be those that offer something AI cannot: genuine human experience, authentic creative vision, and the irreplaceable quality of one consciousness communicating meaningfully with another.
Live performance, where the shared experience of audience and performer is the product, is AI-resistant. Investigative journalism, where human judgment, courage, and source relationships are essential, is AI-resistant. Literary fiction, where the idiosyncratic voice of a specific human mind is the point, is AI-resistant — at least for now.
What is not AI-resistant is the vast middle ground of commercial content production: stock photography, corporate copywriting, background music, formulaic genre fiction, routine journalism, and the visual filler that constitutes the majority of digital media. This middle ground employed millions of people. AI is consuming it.
The question for society is not whether this efficiency gain is desirable in the abstract. It is whether we have mechanisms to distribute its benefits beyond the owners of the AI systems and the platforms that deploy them. For the broader sectoral context, see our AI Sector Impact Overview. For how these dynamics connect to questions of AI governance and human agency, see our Manifesto.