At Imagination we have developed a set of principles for people using Generative AI in their work. These principles are designed to support an experimental, share-and-learn approach that uses Generative AI to extend our creativity, whilst prioritising confidentiality, transparency and fairness, and removing bias.
We are all trying to understand the risks and opportunities of these new tools, and we want to share our methodology as an invitation for collective learning, dialogue and co-creation to help others across the creative industry.
Many of us have been exploring and experimenting with Generative AI and, for some, Midjourney and ChatGPT have become everyday tools. With new AI applications evolving and coming onto the market all the time, it's important to innovate and find new ways of enhancing our creativity in our work and the experiences we create. At the same time, it is critical that we safeguard the work we create for our clients, considering the risks as well as the opportunities of using these services.
Our approach to Generative AI is to encourage experimentation while being conscious and careful of the legal risks and ethical considerations that surround it. Recognising that every project and every client is different, these principles are designed to give guidance and set clear boundaries, without being prescriptive.
We will treat Gen AI as a tool to extend our creativity
It's not a replacement for our creativity, but it can allow us to work more efficiently, explore more broadly, communicate more richly and create more value for our clients. Wherever appropriate, we will be unafraid to use it to enhance our work.
AI will not be a substitute for our original work
We will never rely solely on AI-generated solutions; we will ensure that the final deliverable is the product of our creativity. We will ensure that our AI outputs are original and distinct, avoiding inputs that mimic or replicate third-party content or that aim to create in another's style. This maintains the IP, authenticity, uniqueness and value in our work.
We will not use AI as a source of truth
Generative AI is known to "hallucinate", stating fiction as though it were fact. We will always evaluate the outputs of generative AI with a critical eye, and not rely on it as a single source of truth.
We will be transparent about how and when we use AI in our work
We will be open about how we use AI in our work. We will not use AI outputs as final, client- and market-facing deliverables without written permission from both the client and the Imagination Legal team. As with all our work, wherever possible we will give proper credit and obtain permission for any artist's, musician's or author's work that we incorporate.
We will limit the information we share on open platforms
We know that many AI tools are cloud-based and may reuse the data we provide to them. We will not share personal data (including a person's image or voice), project-specific data such as images or identifiable text from past projects, confidential information, or third-party intellectual property without consulting the Imagination Legal team.
We will experiment, share and learn
We will share different tools, ideas, prototypes, successes and failures so we can all learn and improve - including which prompts and methods we are using to get the most out of these tools. We will ensure that anyone using AI tools for client work is given the appropriate guidance and training.
We will be proactive in recognising and mitigating bias in AI
We create fair and inclusive work. We will be attentive to potential biases in AI outputs relating to characteristics such as ethnicity, age or gender, and we will engineer prompts and other inputs to help mitigate this. Good input in, good output out.
We know the landscape of this emerging technology is constantly shifting, and we'll be posting more of our latest thinking, proofs of concept and insights. If you have any questions about the above, or want to talk to us about running Gen AI workshops for your teams, please contact us at firstname.lastname@example.org.