
Why Your AI Content Belongs to Everyone

Published: 15 January 2026

[00:00]
Ashley: Hello everyone. My name is Ashley and you are listening to Podcast7.
Ray: And my name is Ray. Welcome to the show.
Ashley: Today we are plunging into a truly fascinating and, frankly, pretty terrifying collision course. We've been looking at two source documents that analyse human value, but from dimensions that feel like parallel universes.
Ray: That's really the core tension we're exploring. On one side, you have the immediate reality defined by the US Copyright Office report from January 2025, which firmly states that human contribution is the legal bedrock of creation.
Ashley: Right, and then on the other side we have this NBER working paper on Artificial General Intelligence that models a future where human economic contribution is essentially non-existent.
Ray: Exactly. So we're looking at these immediate legal principles that define human creation right alongside future economic models predicting the potential obsolescence of all human labour. That's our deep dive mission: to see what value human contribution holds when facing exponential AI capability both in the courtroom and in the chaotic labour market.
Ashley: It forces us to ask a really tough question: If the law insists humans must be involved to create value, but the economy decides that human involvement has zero marginal utility, what happens to the human creator?
Ray: Let's unpack this starting with the present reality governed by intellectual property law. For anyone running a B2B content pipeline or designing a go-to-market strategy, this is the legal ground you are standing on right now. The US Copyright Office report followed an inquiry that received over 10,000 comments, which tells you how much the creative economy is hanging on this question.
Ashley: And the core conclusion is absolutely critical: they found that existing law is adequate and that material generated wholly by AI is not copyrightable.
Ray: So, no robot IP.
Ashley: Not in the US. The system maintains the bedrock requirement of human authorship.
Ray: Which means the difference between using AI as a protectable tool and using it as an unprotectable substitute for your team is all about how you integrate it.
[02:15]
Ashley: Precisely. The office drew a clear line between assistive use—like colour correction, identifying chord progressions, or preliminary outlines—and generative use.
Ray: Assistive use is perfectly acceptable because it enhances existing human expression where the human is still applying taste. But the moment the AI acts as a "stand-in for human creativity," you need to bring in the lawyers. This leads to the "prompt problem".
Ashley: Right, why is it that if you spend hours crafting an extremely detailed, sophisticated text prompt, the output is generally not considered sufficient to claim copyright?
Ray: Because legally, the prompt itself is just an idea or an instruction. Copyright protects the fixed tangible expression of that idea. You can't copyright the "idea" of a superhero, only the specific book you make.
Ashley: And the problem is the "black box" nature of AI. You specify the idea, but the AI fills in the expression; the user lacks control over how those ideas actually manifest.
Ray: The Copyright Office's own test illustrated this perfectly. They prompted for a "spectacled cat in a robe reading the Sunday newspaper and smoking a pipe".
Ashley: You have a clear mental image there.
Ray: Totally. But the resulting image had critical expressive details the human never specified—most notably, an incongruous human hand holding the newspaper.
Ashley: A human hand!
Ray: The AI decided the shape of the robe, the lighting, and the background. These were expressive choices the machine made on its own. The gap between the instruction and the final image was filled by the machine's choices, not the human author's intent.
Ashley: That also explains the issue of iterative prompting, where creators "re-roll the dice" hundreds of times. All that effort is legally irrelevant, because copyright protects originality and expression, not the "sweat of the brow".
[04:45]
Ray: That's the hard lesson for GTM teams: spending hours on prompt engineering alone won't grant you IP protection. You have to actively manage the output.
Ashley: So where is human work protected? If you can't copyright the machine art, how do you protect the content your team builds around it?
Ray: You copyright the structure, the selection, and the arrangement. The human element must be clearly perceptible, like in the comic book Zarya of the Dawn. The images were AI, but the human wrote the text and creatively arranged the elements into a narrative, which is protected as a compilation.
Ashley: So if a B2B team uses AI to generate hero images for a landing page, the copyright is in the human decisions about which images to select and how to arrange them with human-written headlines.
Ray: It also extends to derivative works, like the Rose Enigma example where a human inputs their own hand-drawn illustration. If that original work is perceptible in the AI-modified output, the protection holds because the AI is just acting like a filter.
Ashley: It's a largely unified legal front; Korea and the EU have similar requirements, though the Beijing Internet Court did grant copyright for an image produced using 150 prompts plus significant adjustments.
Ray: That divergence is where things could get fuzzy: how much human effort equals creative control? But for the western market, the human must be in the driver's seat.
[07:15]
Ashley: This framework assumes human creation is a scarce, necessary input. But we need to ask: what if that reality doesn't last? The NBER paper suggests the fundamental value of work is about to be redefined.
Ray: That redefinition alters the landscape for every GTM role and RevOps pipeline. The law protects what we create today, but these models ask if what we create tomorrow is even worth protecting.
Ashley: If you're building a strategy to leverage these tools legally and effectively, you need a framework that connects these two worlds.
Ray: Yeah, you do. It’s why we’re big believers in the approach pioneered by [SPONSOR] Demand7—where AI meets demand generation. Visit demand7.ai to explore GTM engineering and see how AI execution can revolutionise your pipeline by focusing on what is actually scarce: effective strategy.
Ashley: Now, let's shift to the macroeconomic shifts predicted by the transition to Artificial General Intelligence (AGI).
Ray: The NBER paper paints a stark picture, defining AGI as AI capable of performing all tasks humans can perform. They break work down into "atomistic tasks" that differ only in complexity or compute required. Their automation index, "I," grows exponentially.
Ashley: This isn't just gradual change. There is a scarcity threshold. In Region 1, where we live now, human labour is scarce and wages are high.
Ray: But once the index surpasses that threshold, you enter Region 2, where labour scarcity is relieved. Every job, from coding to strategy, has a cheaper AI substitute.
[10:30]
Ashley: The implication is brutal: when machines can do the job exactly as well as a human, the paper predicts a sharp wage collapse. Your work becomes so abundant that the market values it at functionally zero—the cost of running the machine.
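The threshold dynamic described here can be sketched in a few lines. This is a toy illustration, not the paper's actual model: the doubling rate, the threshold value, and the wage and machine-cost numbers are all made up for the example.

```python
# Toy sketch of the two-region wage dynamic (illustrative numbers only).
# Assumption: the automation index I doubles each period, and once I
# crosses the scarcity threshold, the market wage falls to the cost of
# running the machine.

def wage(automation_index, threshold=100.0, human_wage=50.0, machine_cost=0.5):
    """Region 1: labour is scarce, so wages stay high.
    Region 2: every task has a cheaper AI substitute, so the wage
    collapses to the machine's operating cost."""
    return human_wage if automation_index < threshold else machine_cost

# Exponential growth of the index: I_t = 2^t
path = [wage(2 ** t) for t in range(10)]
# Wages hold flat right up to the threshold, then drop in one step.
```

The point the sketch makes is that the collapse is not gradual: the wage path is flat in Region 1 and falls discontinuously the period the index crosses the threshold.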
Ray: So the same human who must sign off on IP today is the one whose economic contribution tomorrow might be viewed as worthless. That is the divergence.
Ashley: The NBER paper uses three simulations. The first is "business as usual" where automation proceeds slowly and wages rise forever.
Ray: Then there's "bounded AGI," assuming human brain capability is finite. In that scenario, output speeds up 10x, but wages collapse permanently.
Ashley: The third, and arguably most frightening for white-collar workers, is a "bout of automation". A sudden breakthrough wipes out repetitive office jobs all at once.
Ray: That causes wages to collapse into Region 2. But because there is a long tail of complex physical tasks, the economy eventually returns to making human labour scarce again.
Ashley: So it's a painful dip followed by recovery. But that "dip" could mean 5 to 10 years of wage dislocation—a generation of strategy professionals whose careers could be destroyed before the system stabilises.
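The three scenario shapes discussed above can be summarised as stylised wage paths. These numbers are purely illustrative placeholders for the qualitative shapes, not output from the paper's simulations.

```python
# Stylised wage paths for the three scenarios (illustrative only).
periods = range(8)

# 1. Business as usual: automation proceeds slowly, wages rise forever.
business_as_usual = [100 * 1.02 ** t for t in periods]

# 2. Bounded AGI: output accelerates, but wages collapse permanently.
bounded_agi = [100 if t < 3 else 5 for t in periods]

# 3. Bout of automation: a sudden breakthrough causes a sharp dip, then
#    the long tail of complex physical tasks makes labour scarce again
#    and wages recover.
bout_of_automation = [100, 100, 20, 25, 40, 70, 100, 110]
```

Comparing the second and third lists is the key contrast: both collapse, but only the bout-of-automation path climbs back above its starting level.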
[13:45]
Ray: If labour scarcity is relieved, the value of a high-priced consultancy or specialised RevOps team collapses if a machine replicates that talent at near-zero cost.
Ashley: And then there's the mic-drop moment: fixed factors. Things like minerals, land, and matter itself.
Ray: Right, even if AI designs the best chip, it still needs finite silicon and energy. That bottleneck means capital accumulation eventually stalls, and because automation keeps advancing regardless, it wins the race—leading to permanent wage collapse.
Ashley: The final twist is "nostalgic jobs". Society can choose to slow down automation for certain roles—priests, judges, maybe creative directors.
Ray: By protecting these jobs, society ensures labour stays scarce, which counterintuitively maximises long-term wage growth for everyone.
Ashley: It’s a massive trade-off: you can choose high wages by protecting human roles, or maximum exponential growth by allowing full automation. You can't have both for long.
Ray: We stand in this incredible moment: the US Copyright Office demands human involvement for protection today, while economists predict technology will eliminate the economic scarcity of that same control tomorrow.
Ashley: If technological progress makes human labour economically obsolete but legally necessary for authorship, how should society value the creative intent that falls between those definitions?
Ray: The models show your job may be replaceable, but the law insists only you can authorise the replacement.
Ashley: That is the paradox we leave you with. Continue the conversation at podcast7.ai.
Ray: Thanks for joining us on the deep dive. We'll see you next time.