A simple task for neural networks at the prototype stage

We recently released a zombie shooter that took quite a long time to develop. In one area, however, the process was noticeably accelerated.

A detailed splash screen can take a week of an artist's work or more, depending on the scope. At the prototype and testing stage, when core mechanics and the overall look and feel are what matter, it is hard to justify spending that many resources on a single image that flashes by during loading.

Skipping the splash screen entirely isn't an option either: without it, players lose the sense that the project is finished, and that affects engagement.

Midjourney stepped in to help. To be clear, we are now redrawing the splash screen by hand, because the project is growing nicely and every aspect needs polish. But back when we didn't have our first metrics yet, I produced a picture in half an hour by myself, without pulling artists away from more important tasks. You could say Midjourney's corporate license paid for itself with a single image.

As neural networks develop, there will be more such cases in the industry. In the meantime, here is a small illustration of where they can be applied, using our hyper-casual first-person zombie shooter Dead Raid as the example.

Preparation

I ran the generation in Discord, where a separate bot was connected so that all the iterations were easy to see.

The first query was a single word: zombie. The result:

Not bad, but not right for a splash screen (a neural network can also help with icon creation, but more on that another time).

After that I tried long, complex queries, but those usually work best when you don't have a specific vision of the result: the pictures come out beautiful and original, yet nothing like what you expected.

So I decided to generate two simpler pictures and combine them into one.

Stage 1

The query for the base image was long abandoned hospital corridor. The first result was already close to the picture in my head:

Repeating the same query produced similar variations with only slight differences, so I started experimenting with the lighting:

long abandoned hospital corridor in blue and white colors
long abandoned hospital corridor in white colors at the midnight

After several iterations (each taking seconds) I settled on the last query, adding the --test --creative parameters, which give the neural network more creative freedom.
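For reference, the full query as typed into the Midjourney bot (via Discord's standard /imagine command) would look roughly like this:

/imagine prompt: long abandoned hospital corridor in white colors at the midnight --test --creative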

The result was the variant that became the final one:

Stage 2

For the second image I wanted a little horror girl in a white dress. However, the queries horror girl in a white dress standing on a floor and little horror girl in a white dress with black long hair produced results that were too cartoonish:

Adding in black shoes made it a bit more interesting:

I kept asking Midjourney for a horror girl in a white dress with black long hair in black shoes until I got this one:

The legs have some peculiarities, but I don't think that's critical for a ghost. The face on the back of her head was simply painted over.

Stage 3

I brought both images into Photoshop: a couple of color-correction passes, some transparency, a few noise filters, and voilà:
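Roughly the same finishing pass can also be scripted. Here is a minimal sketch in Python using Pillow; the file names and tuning values are made up, and Pillow stands in for Photoshop's color-correction, transparency, and noise steps:

```python
# A rough sketch of the Stage 3 compositing done in code instead of Photoshop.
# File names and tuning values are hypothetical.
from PIL import Image, ImageEnhance

corridor = Image.open("corridor.png").convert("RGBA")
ghost = Image.open("ghost_girl.png").convert("RGBA").resize(corridor.size)

# Color correction: desaturate slightly and lift the contrast.
corridor = ImageEnhance.Color(corridor).enhance(0.8)
corridor = ImageEnhance.Contrast(corridor).enhance(1.1)

# Make the girl semi-transparent so she reads as a ghost.
alpha = ghost.getchannel("A").point(lambda a: int(a * 0.7))
ghost.putalpha(alpha)

composite = Image.alpha_composite(corridor, ghost)

# A light film-grain pass: blend in a little Gaussian noise.
noise = Image.effect_noise(composite.size, 32).convert("RGBA")
composite = Image.blend(composite, noise, 0.08)

composite.convert("RGB").save("splash_prototype.png")
```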

Of course, this doesn't replace the hands of a professional artist, but it's fine for a prototype. From there it's up to the A/B tests and metrics we always rely on.

As for working with neural networks, it pays to dig into the query parameters, either to exclude obviously unwanted results or, conversely, to add creativity when you run out of ideas. Neural networks work well for brainstorming too.
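For example, Midjourney's --no parameter excludes unwanted elements and --chaos increases the variety of results; the prompts below are purely illustrative:

little horror girl in a white dress --no cartoon
long abandoned hospital corridor --chaos 50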

We then test the prototype on a real audience, and if it resonates, we improve it manually.
