Essay
For AI-native studios like Wonder, the ethical questions around AI don't arrive via regulatory consultation — they arrive in the work itself, daily and without warning. Chief Legal Officer Ali Keegan on why the creative industries can't afford to leave the hard questions to lawyers and legislators.
Our ambition is to treat copyright as a value, not just a rule. That means being more deliberate about which tools we use for which outputs, maintaining clear records of human creative contribution, and pushing toward workflows where authorship isn't an afterthought. We're also watching the licensing and indemnification space closely: some tool providers are starting to offer meaningful protections, and that matters when we're advising clients or defending the integrity of our own IP.
What I want Wonder to stand for here is a creative culture that genuinely respects the rights of other creators — not because we have to, but because we're creators too. The human artists whose work trained these models deserve to be part of that conversation.
The longer-term picture
I think the studios and agencies that will be trusted in five years are the ones building rigorous practices now, while the legal and regulatory frameworks are still forming. The companies that treated ethics as a PR exercise will find themselves exposed, whether legally or in the court of client and talent trust.
For AI-native studios like Wonder, the opportunity is real: we can set a standard rather than inherit one. That requires us to stay close to the policy conversations happening at the industry and legislative levels, to invest in internal education, and to be willing to say no to tools or workflows that create risks we're not prepared to own.
We don't have all the answers. The honest truth is that nobody does right now. But we think that's an argument for more rigor, not less — and for being the kind of studio that takes these questions seriously enough to keep asking them.
