Synaesthetic Syntax – Liveware: Improvisation, Interaction, and Process Intensity

Liveware [1] is an audio-visual duo combining live-coded animation and musical performance. “Liveness” for each modality is understood as a fluid interplay between pre-composed and improvised dynamic processes. Similarly, the correspondence between image and sound exhibits changing levels of control and indeterminacy. Taken together as matters of degree, not kind, these two dimensions – performativity and synaesthetic syntax – ground our theorizing of an expanded animation practice, substantiated in our paper with documentation of Liveware’s past performances and work in progress. [2]

From computer game design, we recontextualize the concept of Process Intensity [3] as the ratio between “operations” and “data” – between the degrees of freedom in a dynamic process (whether carried out by a human or non-human agent, or in their interactions) and the invocation of pre-rendered elements (such as samples, recordings, conventionally notated scores, or training data). From music theory, we understand composing as slowed-down improvisation, but equally that the improviser must, to some degree, anticipate before playing. [4] In Liveware, both human and non-human agents improvise, applying their “trained wits” to unforeseen circumstances as they arise on the fly. [5]
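As a toy illustration of this ratio (our sketch, not Crawford’s formal definition; all names are hypothetical), consider two ways of producing the same tone – playing back a stored sample versus synthesizing it at runtime:

    import numpy as np

    SAMPLE_RATE = 48_000

    def playback(sample: np.ndarray) -> np.ndarray:
        # Data-intensive: near-zero runtime operations, one stored value
        # per output sample. Low process intensity.
        return sample

    def synthesize(freq: float, seconds: float) -> np.ndarray:
        # Process-intensive: one sine evaluation per output sample, only
        # two stored parameters. High process intensity.
        t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
        return np.sin(2.0 * np.pi * freq * t)

On this spectrum, invoking a recording sits near the “data” pole, while a live-coded generative process sits near the “operations” pole; Liveware’s pieces move between the two in varying proportions.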

In Piano Counterpoint, performed by Century to a 1973 score by Steve Reich, six interlocking musical canons correspond to a geometric visual “score” generated entirely by Lawson’s live-coded algorithms. Improvisation for Expanded Piano animates abstract imagery in correspondence with an algorithmically and manually controlled player piano. Small Infinities feeds Century’s modulated and spatialized accordion [6] into Lawson’s semi-controlled/semi-autonomous visual system, which reads and overwrites images in a looping sequential image memory buffer, creating temporally shifting feedback. The Isle is Full of Noises evokes a scene from Shakespeare’s play The Tempest, with live generation of images by an auto-visual system built with a machine learning (ML) algorithm [7] trained on contrasting feature films: Videodrome and Planet of the Apes. During the performance, the ML draws on its high-dimensional space of learnt imagery to create real-time animation from ingested audio spectra and audio feature data, generated from eight asynchronous loops of granular-processed human, animal, and nature sounds mixed live. In current work in progress, the visual system is built with an ML algorithm [7] trained to “translate” one image into another, while the sonic system expands the piano instrument system along the dimension of algorithmic autonomy. The performer hand-draws images that are captured by a video camera and translated through the ML to create real-time animation.
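The buffer-feedback process in Small Infinities can be sketched minimally (our reconstruction from the description above, not Lawson’s actual code; the class and parameter names are hypothetical):

    import numpy as np

    class LoopingImageBuffer:
        # Sequential image memory that loops: each frame, read a past
        # slot, blend it with the live frame, overwrite the current
        # slot, and advance – yielding temporally shifting feedback.

        def __init__(self, length: int, height: int, width: int):
            self.frames = np.zeros((length, height, width, 3), dtype=np.float32)
            self.write_head = 0

        def step(self, live_frame: np.ndarray, read_offset: int,
                 feedback: float) -> np.ndarray:
            length = self.frames.shape[0]
            read_head = (self.write_head + read_offset) % length
            # feedback controls how strongly the buffer's past
            # overwrites the present.
            out = feedback * self.frames[read_head] + (1.0 - feedback) * live_frame
            self.frames[self.write_head] = out  # overwrite, then loop onward
            self.write_head = (self.write_head + 1) % length
            return out

Likewise, the audio-driven animation in The Isle is Full of Noises can be gestured at with a hypothetical mapping from per-frame audio features to a GAN latent vector (the feature choices and the assumed pre-trained generator are ours, not the duo’s documented system):

    def audio_features_to_latent(audio_block: np.ndarray, sample_rate: int,
                                 z_prev: np.ndarray, step: float = 0.05) -> np.ndarray:
        # Nudge a GAN latent point with two simple audio features:
        # spectral centroid picks a direction, RMS loudness the step size.
        spectrum = np.abs(np.fft.rfft(audio_block))
        freqs = np.fft.rfftfreq(audio_block.size, d=1.0 / sample_rate)
        centroid = float((freqs * spectrum).sum() / (spectrum.sum() + 1e-9))
        loudness = float(np.sqrt((audio_block ** 2).mean()))
        rng = np.random.default_rng(int(centroid))  # brightness seeds the direction
        z_next = z_prev + step * loudness * rng.standard_normal(z_prev.shape)
        # Each video frame would then render image = generator(z_next),
        # where `generator` is an assumed pre-trained GAN.
        return z_next.astype(z_prev.dtype)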

1. Initially a humorous slang term designating the “human factor” in computing.
2. https://www.shawnlawson.com/improvisation-interaction-and-process-intensity/
3. Crawford, Chris. “Process Intensity.” Journal of Computer Game Design 1, no. 5 (1987).
4. Schoenberg, Arnold. “Brahms the Progressive.” In Style and Idea. New York: Philosophical Library, 1950.
5. Ryle, Gilbert. “Improvisation.” Mind 85, no. 337 (1976).
6. Using the Expanded Instrument System, designed by Pauline Oliveros.
7. Trained using the Generative Adversarial Network (GAN) technique.