Layer Artist Guide: Bringing artist-first AI into your studio workflows

Perhaps you’ve heard about generative AI and the ethical questions it raises around art: that it will replace artists, that it steals artwork, and, most importantly, how it impacts industries like film and video games.

At Layer, we believe that human artists and AI should work together, not against each other. This multi-part blog series shows how game studios can use Layer to accelerate their art teams’ workflows while adhering to Layer’s ethical AI mission. We want every artist to be able to leverage this technology in a way that doesn’t harm the industry, but instead pushes it forward.

What is a Game Art Pipeline?

A game art pipeline is the general process developers follow to create visual assets for games, including characters, objects, UI elements, backgrounds, and more.

The pipeline consists of steps and techniques that begin with ideation and end with final testing before the game’s release. Traditionally, these steps require a whole art team, or can take an individual artist many hours of work. Layer, however, can accelerate this process, allowing game developers to create higher-quality assets faster.

Game Art Pipeline Overview

The steps of a game art pipeline differ depending on whether the game is 2D or 3D. Either way, both pipelines can be divided into three main stages:

Pre-production > Production > Post-production

For each step, we will go over the traditional way that developers have worked through it, and highlight how Layer can greatly accelerate the process. AI tools like Layer do not replace artists in the game development process, but they do allow them to move much more quickly.

Example of a traditional game art pipeline.

AI models and copyright

Gen-AI art models employ artificial intelligence techniques, particularly machine learning, to create images. Here's how these models generally operate and the copyright implications involved.

Training

AI models are trained on large datasets containing thousands or even millions of images. Through this training, the model learns various artistic styles, patterns, textures, and compositional techniques. The datasets can include everything from classical paintings to modern digital art. Many of the most popular AI models were trained on copyrighted material from artists without their consent.

Neural Networks

The core of a generative AI model is a neural network, often a Generative Adversarial Network (GAN) or a Variational Autoencoder (VAE). These networks learn to generate new images that mimic the characteristics of the training data.

  • GANs involve two parts: the Generator, which creates images, and the Discriminator, which evaluates them. The Generator's goal is to make images so convincing that the Discriminator can't tell whether they are real (from the training set) or fake (newly generated).
  • VAEs encode input data into a lower-dimensional latent space and then decode it back to its original dimensions, aiming to preserve as many of the original data's characteristics as possible in the process.
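The adversarial loop described above can be sketched with a toy example. This is a minimal illustration in plain Python, not a real image model: the "data" are just numbers drawn from a target distribution, the generator is a linear function, and the discriminator is a logistic classifier, each updated with hand-derived gradients.

```python
import math
import random

random.seed(0)

# Toy "real data": the distribution the generator must learn to mimic.
REAL_MEAN, REAL_STD = 4.0, 0.5

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Generator: maps noise z to a "fake" sample, x = w*z + b.
w, b = 0.1, 0.0
# Discriminator: D(x) = sigmoid(u*x + v), the probability that x is real.
u, v = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = random.gauss(0.0, 1.0)
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    x_fake = w * z + b

    # Discriminator update: increase log D(real) + log(1 - D(fake)).
    d_real = sigmoid(u * x_real + v)
    d_fake = sigmoid(u * x_fake + v)
    u += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    v += lr * ((1 - d_real) - d_fake)

    # Generator update: increase log D(fake) (non-saturating GAN loss),
    # i.e. make fakes that the discriminator scores as real.
    d_fake = sigmoid(u * x_fake + v)
    w += lr * (1 - d_fake) * u * z
    b += lr * (1 - d_fake) * u

# After training, generator samples should cluster near the real mean.
fake_mean = sum(w * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(f"generator output mean: {fake_mean:.2f} (target {REAL_MEAN})")
```

Real image generators follow the same adversarial structure, only with deep convolutional networks in place of the two linear functions and pixel grids in place of single numbers.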

Prompting

Users can input parameters or prompts, which the AI uses to guide the generation process. These inputs can specify elements like style, color scheme, or subject matter. Based on the prompt, the AI model generates an image by drawing from its learned data. The result is typically an image that reflects both the input parameters and the model’s training.
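As a concrete illustration of how structured prompt parameters become model input, a helper might assemble subject, style, and color scheme into a single text prompt. The `build_prompt` function and its parameter names below are hypothetical, not part of any real model's API; most text-to-image models ultimately consume the resulting string.

```python
def build_prompt(subject, style=None, color_scheme=None, extras=()):
    """Assemble a text-to-image prompt from structured parameters.

    Illustrative helper only: real models typically just accept the
    final prompt string this returns.
    """
    parts = [subject]
    if style:
        parts.append(f"in a {style} style")
    if color_scheme:
        parts.append(f"{color_scheme} color palette")
    parts.extend(extras)
    return ", ".join(parts)

prompt = build_prompt(
    subject="a floating dragon village in the clouds",
    style="bright, stylized mobile-game",
    color_scheme="warm pastel",
    extras=("soft lighting", "isometric view"),
)
print(prompt)
```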

It's often unclear who owns the copyright to images produced by AI. Copyright generally protects original works by human authors, and AI-generated works don't fit neatly into this framework. Jurisdictions also differ on whether AI-generated works qualify for copyright protection at all.

Artist Rights and Attribution

There are ethical considerations regarding attribution to the creators of the training data. Artists whose works were used to train the model may deserve recognition or compensation, especially if the AI creates works substantially similar to original copyrighted material.

Layer’s view

At Layer we believe that human artists and AI should work together, not against each other. We've built enterprise-grade tools like our AI canvas and Layer Library that AAA game developers already use to interact with popular existing AI models like Stability AI’s SDXL and OpenAI’s DALL-E 3. This is based on our mission to make the best tool for game artists everywhere.

However, Layer is AI model-agnostic, meaning that it also works with models like BRIA AI, a copyright-safe model trained on licensed datasets that rewards its contributors. We hope for a future where all models provide the artists whose work trains them with proper compensation and recognition. Until then, Layer recommends that non-copyright-safe models only be used during pre-production, and that production artwork still be made by human artists to preserve copyright.

Pre-production

The pre-production stage is where the majority of the brainstorming and ideation occurs. It’s also one of the most important steps in the game art pipeline, as it informs the production and post-production stages.

Ideation

During the ideation step, a game team starts from a “blank slate” and begins to generate ideas on what they want their game to look like.

If a game team already has documents like a game design doc or a pitch proposal, those are generally a great starting point for visual brainstorming. During this stage, teams often create moodboards that compile similar ideas from other media franchises as inspiration.

Images from Dragon City, How to Train Your Dragon: Isle of Berk, Dragons World, Serge Lozo

For example, let’s say our game designers have a cool new idea for a mobile match-3 game involving dragons. You play through levels and earn resources to build up a dragon village. Since dragons fly, we want to make it a floating village in the clouds. This is a fairly common game concept, as we can find various examples in existing games.

Within our moodboard, we also pick a brighter color palette since we want the game to feel inviting to players of all ages and demographics.

How Layer can help

Using Layer AI’s concepting and exploration tools, artists and art directors can turbocharge the ideation step by prompting AI models for ideas based on existing moodboards. They can even create brand-new inspiration material using Layer’s forge tools.

These generations can then be incorporated into moodboards to help refine and define the visual style for a game.

Time savings

Throughout this blog series, we’ll note the time savings from using Layer. While these might not be exactly accurate for every game’s art team, they serve to illustrate how Layer can accelerate your game art workflows.

Most art directors have experienced some frustration at this stage: there’s a great idea in their head, but they can’t find the perfect image to communicate it. They might spend hours searching for specific references, only to eventually sketch or draw something out themselves.

Layer can easily cut moodboard creation from a full 8-hour workday to under 2 hours. Our next guide will explore the mechanics of creating concepts and moving them into generation using AI tools like Layer.

Read more