Navigating AI with Intention - A View from Wonder

For AI-native studios like Wonder, the ethical questions around AI don't arrive via regulatory consultation — they arrive in the work itself, daily and without warning. Chief Legal Officer Ali Keegan on why the creative industries can't afford to leave the hard questions to lawyers and legislators.

Ali Keegan, Chief Legal Officer, Head of Policy and Head of IP Acquisition and Distribution, Wonder

At Wonder, we sit in an unusual position. We're not just a company that uses AI tools — we're a studio that builds original IP and productions using them, while also running an agency that creates content for some of the world’s leading brands and artists. That dual identity matters when it comes to AI ethics, because the stakes are different on each side. When it's our own IP, we’re accountable to ourselves. When it’s a client’s brand, we’re accountable to them too - and their risk tolerance, their legal exposure, and their audiences are all part of the equation. That means the ethical questions around AI aren't abstract for us. They show up in our work every day, often in two different registers at once, and the answers we arrive at have real consequences for the creators on our team, the clients we serve, and the industry we're helping to shape.


So here's my honest take on where we are, and where I think this is going.


The tool landscape moves faster than any policy can

The pace of change in generative AI is genuinely hard to overstate. In the time it takes to develop a thoughtful internal framework for one tool, three more have launched, each with different training data, different licensing terms, and different risk profiles. For a studio like Wonder, where creative teams are naturally drawn to every new capability, this creates real tension. The interest in exploring new tools isn't reckless — it's core to what makes us good at what we do. But "move fast" and "protect the business" are not always comfortable bedfellows.


What we’re working toward is a posture of structured curiosity. We want to enable our team to explore, but with guardrails that make the risk visible, named and understandable before it becomes a problem. That means trying to ask harder questions earlier: Who trained this model, and on what? What are the indemnification terms? What do our client contracts actually say about AI-generated content? These aren't questions that slow creativity down. They're questions that make creativity sustainable.


Copyright is the defining question of this moment

The legal landscape around AI and copyright is genuinely unsettled, and anyone who tells you otherwise is selling something. Training data disputes, output ownership questions, the murky line between inspiration and reproduction — courts and regulators are still working through the fundamentals. For a studio that makes its own intellectual property, this isn't just a compliance concern. It's an existential one.





Our ambition is to treat copyright as a value, not just a rule. That means working to be more deliberate about which tools we use for which outputs, maintaining clear records of human creative contribution, and pushing toward workflows where authorship isn't an afterthought. We're also watching the licensing and indemnification space closely — some tool providers are starting to offer meaningful protections, and that matters when we're advising clients or defending the integrity of our own IP.


What I want Wonder to stand for here is a creative culture that genuinely respects the rights of other creators — not because we have to, but because we're creators too. The human artists whose work trained these models deserve to be part of that conversation.


The longer-term picture

I think the studios and agencies that will be trusted in five years are the ones building rigorous practices now, while the legal and regulatory frameworks are still forming. The companies that treated ethics as a PR exercise will find themselves exposed — either legally, or in the court of client and talent trust.


For AI-native studios like Wonder, the opportunity is real: we can set a standard rather than inherit one. That requires us to stay close to the policy conversations happening at the industry and legislative level, to invest in internal education, and to be willing to say no to tools or workflows that create risks we're not prepared to own.


We don't have all the answers. The honest truth is that nobody does right now. But we think that's an argument for more rigor, not less — and for being the kind of studio that takes these questions seriously enough to keep asking them.

All Rights Reserved