What’s new
This quarter brought advances in foundation models, including enhanced reasoning and genuinely multimodal understanding, alongside continued rapid progress in generative video and synthetic voice. In response, we've expanded our principles to explicitly address the increased sophistication of these tools. The following AI talking points also caught our attention:
Claude’s new constitution and why governance is moving away from rules
As we navigate complex new ethical landscapes, Anthropic's Claude Constitution offers a sophisticated new roadmap rather than a rigid set of rules. It is a "detailed description of Anthropic's intentions" for the LLM. At certain points, the document refers to the LLM as a self, much as a parent might refer to a child.
No more walkie talkies
NVIDIA PersonaPlex is a full-duplex conversational AI model. What does that mean? It can talk and listen at the same time, just as a human can. It isn't the first full-duplex model we've seen, but it is the first to support a range of voices and prompts that you can craft yourself. This could be the next step in conversational AI. Expect to see some demos from us soon.
Ads in ChatGPT
OpenAI has set out how it plans to bring ads into ChatGPT. What the company once described as a financial last resort has arrived. OpenAI says ads won't influence answers, will be clearly labelled and kept separate, and that conversation data won't be sold to advertisers. The risk is the slow erosion of that separation over time, in the same way that the line between ads and organic results in Google search has become increasingly difficult to untangle.
Our approach to using Generative AI – Q1 2026
As regulation and creative possibilities continue to evolve rapidly, this approach reflects where we are today, shaped by experimentation, close collaboration with clients, and ongoing industry developments.
Generative AI is no longer speculative. It is embedded across ideation, prototyping, and production, and increasingly within live, customer-facing experiences. We’re sharing our approach openly to encourage collective learning, dialogue, and co-creation across the creative industry, while remaining mindful of the legal and ethical considerations involved.
Recognising that every project and client is different, this approach tries to provide guidance and clear boundaries without being prescriptive, leaving room for judgement, craft, and creativity.
We will treat Gen AI as a tool to extend our creativity
It’s not a replacement for our creativity, but it can allow us to work more efficiently, explore more broadly, communicate more richly and create more value for our clients. Wherever appropriate, we will be unafraid to use it to enhance our work.
AI will not be a substitute for our original work
We will never rely solely on AI-generated solutions; the final deliverable will always be the product of our own creativity. We will ensure that our AI outputs are original and distinct, avoiding inputs that mimic or replicate third-party content or that aim to create in another artist's style. This protects the IP, authenticity, uniqueness and value of our work.
We will not use AI as a source of truth
Generative AI is known to "hallucinate", stating fiction as though it were fact. We will always evaluate generative AI outputs with a critical eye and never rely on them as a single source of truth.
We will be transparent about how and when we use AI in our work
We will be open about how we use AI in our work. We will not use AI outputs as final, client- and market-facing deliverables without written permission from both the client and the Imagination Legal team. As with all our work, wherever possible, we will give proper credit and obtain permission for any artist's, musician's or author's work that we incorporate.
We will limit the information we share on open platforms
We know that many AI tools are cloud-based and may reuse the data we provide. We will not share personal data (including a person’s image or voice), project-specific data such as images or identifiable text from past projects, confidential information, or third-party intellectual property without consulting with the Imagination Legal team.
We will experiment, share and learn
We will share tools, ideas, prototypes, successes and failures, including the prompts and methods we use to get the most out of these tools, so we can all learn and improve. We will ensure that anyone using AI tools for client work is given the appropriate guidance and training.
We will be proactive in recognising and mitigating bias in AI
We create fair and inclusive work. We will be attentive to potential biases in AI outputs, such as those relating to ethnicity, age or gender, and we will engineer prompts and other inputs to help mitigate them. Good input in, good output out.
We apply additional care when using generative video, voice, and AI agents
Generative video, synthetic voice, and conversational agents introduce heightened ethical, legal, and reputational considerations. When using these tools, we will be particularly mindful of consent, likeness and voice rights, transparency, and brand trust. We will not create or deploy synthetic representations of real people’s faces, voices, or identities without explicit permission and appropriate legal approval. Where generative video, voice, or agents are used in client-facing or public contexts, we will ensure their use is intentional, disclosed where appropriate, and aligned with the values of both Imagination and our clients.
We know the landscape of this emerging technology is constantly shifting, and we'll be posting more of our latest thinking, proofs of concept and insights. If you have any questions about the above, or want to talk to us about running Gen AI workshops for your teams, please contact us at info@imagination.com.