AI-generated digital art sparks debate over news illustrations


Artificial intelligence has infiltrated many creative professions – from urban planning to translation to painting. The latest: visuals in journalism.

Why it matters: Computers are getting better at doing what humans can do, including creating art from scratch. But design is much more than a mix of familiar shapes on a colored background.

  • Artists interpret and convey abstract stories through a single image that must not only grab someone’s attention, but also stand the test of time.

Example: Brian Rumao, managing director of Next Play Ventures, posted a sample on Twitter using DALL-E 2, an AI image-generation tool from OpenAI, to try to replicate some Axios visuals, with some success (see example below).

  • The center-left image was created after entering “a long black chain-link fence with barbed wire dollar signs on top, against a yellow background” into the tool, Rumao noted.
Screenshot: Brian Rumao's Twitter feed
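To illustrate how a text prompt like Rumao's drives such a tool, here is a minimal sketch of assembling a request payload for a DALL-E-style image-generation API. The function name, parameters, and payload shape are assumptions for illustration, not OpenAI's actual client library; it builds the request locally rather than calling any service.

```python
import json

def build_image_request(prompt, n=1, size="1024x1024"):
    """Assemble a JSON payload for a hypothetical DALL-E-style
    image-generation endpoint: the text prompt, how many images
    to generate, and the requested output size."""
    return json.dumps({"prompt": prompt, "n": n, "size": size})

# The prompt Rumao reported entering into the tool.
payload = build_image_request(
    "a long black chain-link fence with barbed wire dollar signs on top, "
    "against a yellow background"
)
```

In practice the payload would be sent to the provider's API with an authentication key, and the response would contain URLs or encoded image data.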

The post generated a lot of internal discussion among Axios' 23 visual journalists.

  • This led to the question: What is the difference between a human-developed editorial illustration and an AI-generated visual composite?

The big picture: Digital art generated by new AI systems like Midjourney and OpenAI's DALL-E 2 has sparked a lot of discussion in the newsroom, especially since the DALL-E 2 website explicitly mentions using the tool to generate newsletter art.

  • On the one hand: Machine learning platforms can create photo composites from inputs to mimic fine art, photography, or illustration.
  • On the other hand: Editorial illustration requires a level of conceptualization, visual metaphor, and connection to the story that these types of tools currently cannot replicate.

How it works: At Axios, illustrators start by identifying the key ideas of a story.

  • Then they conceptualize visual representations of the themes.
  • Then they think about how to tie the themes together to create a metaphor.
  • After producing several mockups, illustrators often select a version that is the fastest to read – or one that is “reader first” – that conveys an idea at first glance.

An example: A story about the merger of two digital health startups could prompt an illustrator to depict "digital," "health" and "merger" – the story's key elements – using symbols that represent each.

  • The final version could be two syringes tied together with an Ethernet cable.

One problem: AI technology can be prone to coded biases. While people are also influenced by their own biases, there are steps they can take to overcome them.

  • Visual journalists are trained to consider these issues, and conversations within a diverse newsroom can surface potentially problematic interpretations of a visual.
  • People can also keep a close eye on language changes and how those changes should be reflected visually. They can watch out for tropes and guard against overused symbolism.

What to watch: For now, AI tools like DALL-E 2 cannot replicate the creative problem-solving process needed to produce successful editorial illustrations.

  • However, they can be powerful tools for generating colors, patterns, textures, ideas or mood boards for larger projects.
