Synaesthetic Syntax – Painting Music: Using Artificial Intelligence to Create Music From Live-Painted Drawings

Painting Music uses Artificial Intelligence (AI) to create music from live-painted drawings, in real time and unique for each performance. The AI algorithm is based on a type of learning used by the human brain; it responds to the development of the live-painted drawings and produces musical notes that reflect it (Starkey et al., 2020).

The aim of the project is not to create a process that always results in harmonious music and melodies, but one in which the AI can make ‘mistakes’. Notes that some listeners consider uninteresting may be beautiful to others; a musical form that is interesting and novel to some may be jarring to others. Similarly, conflict arises in the visual component through the subject of each drawing and the application of dramaturgical drawing techniques.

Previous studies (Fast and Horvitz, 2017) have shown how public perception of AI has changed over time, and that although this perception has been more optimistic than pessimistic, the fear of losing control of AI is steadily increasing. The exploration of this question is thus at the heart of this production, which aims to tap into the public’s fear of, or excitement about, AI and the changes it will bring to all facets of society.

This project has been developed with a focus on live stage performance, with the outputs from the artist unfolding on a projector screen in real time and the musical outputs from the AI process amplified through loudspeakers. A prototype of this system was implemented in a live stage performance at Aberdeen May Festival 2019, whose narrative centred on the question ‘Is AI good or bad?’. Further outputs include a 20-minute film and a body of visual artwork for digital platforms and gallery environments.

This paper focusses on the visual and aural outputs of the live stage performance, exploring whether AI is good or bad. The answer to this question is explored through the interaction of the painting and the generated music itself. The music generated by the AI is different each time the process is run, so the artist never truly knows what to expect, which brings an element of danger to the work. In addition, audience members will react in their own way to the generated music, and this reaction will be linked to each individual’s own fear of, or confidence in, AI: when the AI creates an unexpected note, they may be excited by it, or find it jarring. Either way, it will generate an emotional reaction that combines the live evolving artwork, the AI-generated music (itself an artistic representation of the painting) and the individual’s own preconceived ideas of AI.

Fast, E. and Horvitz, E. (2017), ‘Long-term trends in the public perception of artificial intelligence’, AAAI, pp. 963–69.

Starkey, A., Steenhauer, K. and Caven, J. (2020), ‘Painting Music: Using artificial intelligence to create music from live painted drawings’, Drawing: Research, Theory, Practice, 5:2.