Printed Program in PDF (short)
Reservations
- Lectures and workshops are free; reservations are highly recommended here
- Concert Tickets: Concert #1, Concert #2, Concert #3.
Program Table
- CONCERT #1, Ircam, Dec. 2
- CONCERT #2, Ircam, Dec. 5
- CONCERT #3 and Master Class, Dynamo, Dec. 7
- INSTALLATIONS-CONCERTS, Ircam Dec. 6
- WORKSHOPS, Ircam Dec. 3-5
- LECTURES, Ircam Dec. 4-5
Program Notes
December 2 Tuesday
20:00 IMPROTECH CONCERT #1 at Espace de Projection, Ircam
Psappho, after Iannis Xenakis
Lorenzo Colombo percussions
Marco Fiorini AI-Agents Somax2, live electronics
Décembre - premier livre du cycle Arc’
Marco Suárez-Cifuentes composition, AI-Agents Somax2
Johanna Vargas voice
Nicolas Crosse double bass
Aurélien Gignoux drums
Nieto texts
Traversée III
Nicolas Brochec composition, AI-Agents Somax2
Kanami Koga flute
REACHin’ Marseille
Turner Williams Jr shahi baaja
György Kurtág Jr synthesizers
Jean-Marc Montera guitars, electronics
The Who/Men (Gérard Assayag, Mikhail Malt, Marco Fiorini AI-Agents Somax2)
Feat. Mari Kimura, violin, Pierre Couprie, live electronics
Deleph
Jaap Blonk voice, sound poetry
Benny Sluchin trombone
Georges Bloch AI-Agents Omax5, Somax2, immersive electronics
In albireo luogo…
Lara Morciano composition
Joëlle Léandre, Nicolas Crosse double basses
José-Miguel Fernandez AI-Agents Somax2Collider, immersive electronics
December 5 Friday
20:00 IMPROTECH CONCERT #2 at Espace de Projection, Ircam
Boulez Reloaded
Elaine Chew piano
Thierry Miroglio percussions
Gérard Assayag AI-Agents Somax2
Solo for Sliding Trombone
After John Cage
Benny Sluchin trombone
Mikhail Malt AI-Agents Somax2, immersive electronics
Taideji
Lara Morciano composition, piano
Thierry Miroglio percussions
José-Miguel Fernandez AI-Agents Somax2Collider, immersive electronics
Transe III
Justin Vali Malagasy zither, voice
Marc Chemillier AI-Agents Djazz
NSDOS electronic hack, live coding, dance
NaN - Not a Number
Alberto Maria Gatti composition, immersive electronics, AI-Agents Somax2
Anaïs del Sordo voice
in memoriam Susan Alcorn
Miya Masaoka composition, koto
Nomadologies
George Lewis, composition
Joëlle Léandre double bass, voice
Miya Masaoka koto
Marco Fiorini, Damon Holzborn, George Lewis, Gérard Assayag SoVo AI system
December 7 Sunday
14:00 Master Class at La Dynamo, Pantin
Steve Lehman, Saxophones
Miles Okazaki, Guitar
The Somax Brothers (Gérard Assayag, Marco Fiorini) Somax2 AI Agents
18:00 IMPROTECH CONCERT #3 at La Dynamo, Pantin
REACHin' Paris
Steve Lehman, Saxophones
Miles Okazaki, Guitar
The Somax Brothers (Gérard Assayag, Marco Fiorini) Somax2 AI Agents
December 6 Saturday
14:00 - 19:00 INSTALLATIONS / CONCERTS in Studio 5, Ircam
WWW
for Spat'Sonore: Tentacular Physical Spatialisation
José-Miguel Fernandez composition, AI-Agents Somax2Collider, immersive electronics
Amaryllis Billet (violin)
Philippe Bord (French horn)
Nicolas Chedmail (French horn)
Roméo Monteiro (percussions)
Maxime Morel (brass instruments)
Joris Rühl (clarinet)
Batterie Fragile
Yves Chaudouët creator of the Fragile Drum
Jean-Brice Godet, clarinets
Thierry Miroglio, percussions
Aurelien Gignoux, percussions
NOSFELL, composer, voice, texts
Mikhail Malt, composer, Somax2 AI Agents
December 3 Wednesday Workshops
10:00-13:00 Workshop Resounding Bodies in Space in Studio 2, Ircam
Alberto Maria Gatti, composer, sound designer, computer music designer
Anaïs del Sordo, voice, body harness
Marco Fiorini PhD researcher (Ircam, Sorbonne University), Guitar, Spatial Agents Somax2
14:00-19:00 Workshops in Studio 5, Ircam
Breathing Media Projects featuring 1,000+ year-old traditional Gagaku music reimagined with cutting-edge sensor technology
Tamami Tono, sho player, composer
Atsushi Tadokoro, creative coder, visual artist (Maebashi Institute of Technology)
MUGIC® Magic
Mari Kimura, violin, MUGIC® sensor inventor (UC Irvine)
Tamami Tono, sho player, composer
Minako Ito, Bugaku dancer
The Sophtar: an electroacoustic feedback instrument with embedded algorithms for human-machine improvisation
Federico Visi, composer, performer, Sophtar Instrument inventor (Universität der Künste Berlin)
First Meeting
Alain Blesing, composer, electric guitar, Somax2
Claudie Boucau, flutes
December 4 Thursday Workshops
14:00-19:00 Workshops in Studio 5, Ircam
AI at Carnegie Hall and Electronic Instrument Design
Levy Lorenzo, percussion and electronics
Activate Cities: Urban Inspiration and Live-coding
NSDOS (Kirikou Des), composer, electro-hacker, dancer
Noam Assayag, writer, visual artist, performer
Change: the following two workshops will take place in the Stravinsky Room at 16:30
PURE MALT: Augmented Improvisations, from instrumental gesture to telematics
Mikhail Malt, composer, Somax2 AI Agents, live electronics, telematics
Cassia Carrascoza, flute, telematics (Universidade de São Paulo)
Li-Chin Li, sheng
TikTok Djam
Yohann Rabearivelo, PhD researcher (EHESS)
Ulysse Roussel, PhD researcher (Sorbonne University)
Martin Mahieu, musician
Heny Zouari, violin, PhD researcher (EHESS)
December 5 Friday Workshops
11:00-13:00 Workshops in Studio 5, Ircam
Hypercept ~ quatuor d’improvisation
György Kurtág Jr, synthesizers
Donatien Garnier, Metaphorminx instrument, poet
Emmanuelle Pépin, dancer
Live Coding Practices
Raphaël Forment, musicologist and music software designer
Rémi Georges, sound artist, composer and computer music designer
Guillaume Piccarreta, programmer and digital artist
December 4 Thursday Lectures
9:30-13:00 Lectures in the Stravinsky Room, Ircam
Join the Zoom connection starting at 9:15 Paris time
Mixed Initiative Co-Creative Design for Long-Term Human-AI Musical Partnership
Ken Deguernel, CNRS, Laboratoire CRISTAL Lille
Generative Spatial Synthesis of Sound and Music (ERC G3S)
Alain Bonardi, Université Paris 8 – CICM / MUSIDANSE – Projet ERC G3S
Emma Frid, Université Paris 8 – CICM / MUSIDANSE – Projet ERC G3S
Paul Goutmann, Université Paris 8 – CICM / MUSIDANSE – Projet ERC G3S
Axel Chemla-Romeu-Santos, Université Paris 8 – CICM / MUSIDANSE – Projet ERC G3S
Somax2 for Live Visual Music: Co-Improvisation with Creative Agents
Sabina Covarrubias, digital artist, Synesthesic Devices founder
The meaning of “co” in “co-creative”
Pierre Saint-Germier, CNRS, Ircam
Gestures in Electronic Improvised Music
Pierre Couprie, Evry Paris-Saclay University
Real-Time Recognition of Instrumental Playing Techniques for Composition and Co-Creative Interaction
Nicolas Brochec, PhD researcher (Geidai University, Tokyo)
Marco Fiorini, PhD researcher (Ircam, Sorbonne University)
December 5 Friday Lectures
14:00-19:00 Lectures in the Stravinsky Room, Ircam
Join the Zoom connection starting at 13:45 Paris time
The Odd Couple: Human and AI Making Music in the Moment
Oded Ben-Tal, Kingston University, London
David Dolan, Guildhall School of Music & Drama London
Unity Interfaces for Djazz and Somax: Ludic and Narrative Perspectives on Musical Machine Co-creativity
Daniel Brown, Université de Picardie
Steve Horowitz, Composer
Teaching One’s Style to an AI: The Story of an Unlikely Collaboration
Jean-Rémy Guédon, composer, performer
Composition and improvisation in contemporary opera
Sivan Eldar, composer
Jean-Louis Giavitto, CNRS, Ircam
Augustin Muller, computer music designer (Ircam, Le Balcon)
Evaluation of AI-based improvisation systems
Gilbert Nouno, Haute Ecole de Musique de Genève
Christophe Fellay, École de design et Haute école d'art du Valais
Nathalie Hérold, IReMus - Sorbonne Université
Pierre Alexandre Tremblay, Conservatorio della Svizzera italiana
Extensymbiosis — The Audio–Visually Augmented Trumpet and Multi-modal Corpus-based Synthesis as a Shared Instrument
Nicolas Souchal, PhD researcher (Ircam)
Diemo Schwarz, Ircam
Abstracts
Concerts
Psappho is the third instance of the project Xenakis Reloaded, after Evryali (2022) and AI-Komboï (2023), paying homage to Xenakis through the choice of a well-known piece and an improvised extension using AI. A continuation of a broader research and performance project developed by Lorenzo Colombo and Marco Fiorini, two artists and researchers dedicated to exploring the creative potential at the intersection of human and artificial intelligence in music, this new chapter will use Psappha (1975) as its central material. The project will investigate new modes of cyber-human co-creativity by integrating Somax2 and gestural control into an immersive spatialized performance environment. Psappha will serve as the compositional and performative ground, with a focus on gestural and temporal density, rhythmic articulation, and structural complexity: core features of Xenakis’s language and fertile terrain for this novel interaction.
Décembre - premier livre du cycle Arc’ is a project that integrates written music and improvisation, inspired by the graphic universe of Daichi Mori, in particular by one of his drawings: an emakimono with the enigmatic title La Naissance du Centaure. Décembre constitutes the first chapter, opening the cycle. Conceived for and with three musicians interacting with an electroacoustic setup that spatializes a multiplicity of voices improvised by the AI, this work is one stage in a line of research on voice and speech in generative improvisation systems, as well as on the spatial staging of the musical structures generated in interaction with the Somax2 software.
Traversée III is part of a series of compositions for flute and real-time electronic system, in which the real-time generation of electronic material relies on the automatic recognition of flute playing techniques thanks to ipt~, a new and groundbreaking tool developed by Nicolas Brochec, Joakim Borg, and Marco Fiorini. Whereas in the previous Traversée pieces, the generation of electronic material was directly controlled by ipt~, Traversée III differs in that it integrates Somax2 artificial improvisation agents, which now share this role. The Somax2 agents interact with the instrumental part, producing sonic responses that range from counterpoint to accompaniment, depending on the playing techniques performed. This sometimes results in the insertion of unexpected events, which may or may not influence the flute part and alter its course. Flute: Kanami Koga, Composition, AI-agents Somax2: Nicolas Brochec
REACHin’ Marseille The REACHing OUT performance series, of which REACHin’ Marseille is a part, celebrates, all around the world, improvisation at its most jubilant, gathering major invited musical personalities alongside the Who/Men, musician-researchers with their machines boosted by creative AI algorithms. The REACH research and creation program, which initiated this new form of performance, formulates the hypothesis that co-creativity between the agents of these improvised interactions incorporating machines is a kind of mixed reality: building an ever-renewed musical form arising from a co-constructed sound material that is at once unpredictable and controlled, from the rustle of a wing to a volcanic explosion. What if human and machine dreamed of each other, hybridizing human creative energy with crossed listening and learning processes and their feedback loops, in pure pleasure? As Joëlle Léandre, a regular guest of REACHing OUT, says of these concerts: “A true encounter, a jubilation... It is a risk and a unique, infinite moment! It is surely searching and perhaps finding... Deep down, it is ‘knowing how not to know.’”
Deleph ventures into the territory between sound and language, mostly into those regions where language, from its very origin, touches on the absurd. We took as a point of departure a short poem in French which combines words using the sounds D, L, and F, in that order, taking inspiration from this text and imagining what Offenbach, the Schwitters of the Ursonate, the Robert Erickson of General Speech, or Jaap Blonk might have done with it.
Deleph
Delphine la dauphine ainsi médit d’Olaf :
Il veut être Calife en place de Deleph !
Dieu quel défi ! Et même si Deleph se trompe
Car se passer d’Aleph, c’est bête…
Il se trompe, Deleph, embouchant l’olifant,
Delphes ? Pourquoi partir ? Delphes n’est que pythie,
Mesquins oracles dans le fond déiquescents.
Et se passer d’Aleph, c’est bête…
In albireo luogo… “In this new work, I explore the relationships between writing, improvisation, and electronics, continuing my research on dynamic combinations of instrumental playing, real-time processing, listening space, and interaction with co-creative agents. The encounter between two exceptional soloists playing the same powerful instrument, the double bass, each with their own style and energy, and the Somax2 software opens a fascinating sound world and a field of improvised interactions from which spatial textures and unprecedented forms emerge. This co-evolution of the formal trajectory rests on a “shared musicality” founded on listening, reactivity, and a collective intelligence, both human and machinic. The project is experimental in nature, taking up a challenge that is at once musical and aesthetic. It mobilizes intersecting processes and cooperative behaviors that generate unforeseen actions and exchanges. The sound material becomes an anchor point, a lever for drawing trajectories and transformations that articulate different sonic planes. Between notational detail and interpretive freedom, the balance sought is at once unstable, subtle, and incisive. The electronic dimension intensifies this tension and ambiguity by establishing a cyclical, organic, and unpredictable system of interaction. In the three-dimensional sound space made possible by ambisonic spatialization, the integration of spatialized agents (Somax2) generates a dynamic interconnection between the written and improvised parts, inscribing them in a logic of spatial interaction in constant evolution.” Lara Morciano, June 2025
A poem by ChatGPT:
(Albireo is a double star in the constellation Cygnus)
In albireo luogo
In an albireo place, nothing is fixed.
Two lights cohabit, one cold, the other burning;
they revolve around each other without ever merging.
It is a point of unstable equilibrium,
a tension suspended between contrast and fusion.
Here, the borders between the composed and the improvised dissolve.
The written gesture joins the free breath,
electronic processing becomes a playing partner.
It is shifting ground,
where form is never given but always in the making.
The albireo place is a space of active listening,
of sensitive co-presence between humans and machines,
where every sound bears the trace of a shared decision,
and every silence, that of a dialogue held in suspense.
It is there, in that imaginary yet audible place,
that this work comes into being:
between precision and drift,
between structure and surprise,
between the blue star and the golden star.
Boulez Reloaded Conceived in homage to Pierre Boulez on his centennial celebrated worldwide, “Boulez Reloaded” is a musical trio that brings human-AI cocreativity and skilled human improvisers to the live stage. An open, semi-improvised dialogue between multiple confronting music streams will draw material from the rich musical history that shaped Boulez and his music. Elaine Chew will perform, adapt, and improvise on excerpts of pieces such as Boulez’s Fragment d’une ébauche (1987) or First Piano Sonata (1946). Gérard Assayag will guide Somax2 AI agents’ improvisations towards different interaction strategies and sonic material drawn both from Elaine Chew’s live piano playing and corpuses of Boulez’s works including Pli Selon Pli, sometimes switching to totally different sonic universes. Thierry Miroglio will blend in with various percussions in a “tintinnabulant” (tinkling) spirit that Boulez would have much appreciated. The musician’s and the machine’s reciprocal listening and adaptation produce rich and previously unheard combinations. A variety of new possible readings of Boulez’s complexity emerge, offering a way of bringing his works to new life beyond their finitude in a spirit of homage, and a resolutely new aesthetic and ethical approach to AI. In this musician-AI dialogue, the music that Boulez loved, analyzed, and conducted can slip in, notably Ravel, whose 150th birthday is also celebrated this year, extending the idea of re-creation to a meta-level dialogue of musics and influences fueled by AI.
Solo for Sliding Trombone This performance presents an artistic research project exploring the performance of John Cage’s “Solo for Sliding Trombone” using AI generative tools within the Somax2 environment. The performance investigates the interplay between human interpretation, AI-assisted performance, and Cage’s core concepts of silence, indeterminacy, and unintentionality. By integrating AI agents as virtual performers and employing techniques like “coloring the silence” and “expansions,” the research aims to push the boundaries of Cage’s indeterminacy. This artistic research resulted in a unique set of improvised performances, captured and presented in a box set with 7 distinct tracks, showcasing the dynamic interplay between human and AI creativity within the framework of Cage’s innovative musical philosophy.
Taideji In this piece, the collaboration between the performers is rooted in an exploration of the dynamic relationship between acoustic instruments and electronics, incorporating the interactive possibilities offered by Somax2. The sonic palette of the piano, at times low, distorted, and percussive, at others bright and resonant, interacts with the rich colors of the percussion, shaping a musical journey marked by strong contrasts and an ever-shifting energy, oscillating between density and rarefaction. On the electronic side, the two acoustic instruments are processed in real time through various techniques, while Somax2 establishes a sensitive and reactive connection between instrumental performance and generated sound material, enriching the dialogue between human and machine.
Transe III World Music star zitherist Justin Vali will propel this collective into the realms of musical trance as practiced in Madagascar. The crystalline swirls of Justin Vali's zither are relayed by the obsessive improvisations generated by Marc Chemillier using the Djazz artificial intelligence system, trained through machine learning with zither playing data. This captivating dialogue is underpinned by hypnotic grooves, notably the diabolical ternary rhythms of the Indian Ocean. Electronic musician NSDOS adds enveloping and sophisticated textures, drawing on the audio features of these improvisations, while sounds of Malagasy nature are projected into the room space, as in a traditional context they are inseparable from the music. In this confrontation between AI and the world of trance and spirits, the focus is on the delicate balance between the treasures of tradition and technological innovation.
NaN - Not a Number is an immersive sonic experience, a real-time dialogue between the human voice and artificial intelligence, where the concept of feedback becomes a dynamic and creative principle. This performance-concert explores co-creation between the organic and the synthetic—an unpredictable fusion of the living presence of the voice and the responsiveness of an AI agent system that transforms, reacts to, and reshapes sound into a continuum of sonic metamorphosis. The performer, as the primary acoustic source, interacts with a latent AI environment that analyzes, processes, and reinterprets her vocality, generating an ever-evolving sound ecosystem. Every breath, every timbral inflection, and every harmonic extension propagates through space, giving rise to mutable and dynamic sonic configurations. The performance unfolds in an expanded and immersive environment, enriched by a resonant proscenium made of diverse materials (metal, glass, paper), which serve as surfaces for sound diffusion and vibration. These elements become active parts of the concert, amplifying and transforming the sound into a material choreography that engages the audience in a multi-sensory experience. The result is a seamless sonic flow, where the boundary between human and machine dissolves, giving way to a new, ever-transforming sonic organism. NaN is not just a musical performance—it is an experiment in symbiosis between voice and artificial intelligence, a journey into the unexpected, the indeterminate, and the beauty of emergent interaction.
in memoriam Susan Alcorn by kotoist Miya Masaoka will celebrate the great musician Susan Alcorn who left this earth in January of this year. Miya, Susan and viola player LaDonna Smith performed as a trio at Improtech 2017 in Philadelphia.
Nomadologies is a musical work realized as a real-time dialogic collaboration among contrabassist Joëlle Léandre, kotoist Miya Masaoka, and SoVo, a new cocreative improvisation system conceived to expand the communicative terrain of human-machine collaboration in ways discussed for quite some time by George Lewis and Gérard Assayag during their encounters at preceding Improtechs.
Designed and implemented in 2024-25 by Marco Fiorini, Damon Holzborn, George Lewis and Gérard Assayag, the SoVo system represents the convergence of decades of innovation in artificial intelligence and improvisation. SoVo is not a fixed composition, but a dynamic environment where creative agency is distributed among human virtuoso performers Léandre and Masaoka, and improvising software agents operating in an integrated architecture combining Lewis’ Voyager system with the program Somax2 from the REACH project.
The system merges symbolic and audio-based interaction, combining in real time rule-based algorithmic generation with machine learning and cognitive modeling, and uses machine listening and adaptive strategies both between the program and the musicians and between the So and Vo components.
This performance will also feature ipt~, a new, groundbreaking tool for real-time recognition of instrumental playing techniques by Nicolas Brochec, Joakim Borg, and Marco Fiorini, which enhances “machine listening” to better “understand” musical gestures. The system engages in the improvisational process with creative autonomy, where intention, memory, and form emerge through co-authored musical discourse.
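To make the idea of distributed creative agency concrete, here is a deliberately tiny sketch of two software agents that listen to an incoming event and to each other before answering. Every name and rule in it is invented for illustration; it does not reproduce SoVo, Voyager, or Somax2.

```python
# Toy sketch of distributed agency between two improvising agents.
# All names and rules are hypothetical; this is not the SoVo implementation.
import random

class Agent:
    def __init__(self, name, memory):
        self.name = name
        self.memory = list(memory)  # pitches this agent has "heard" (MIDI numbers)

    def listen(self, pitch):
        # Machine listening reduced to its barest form: remember what was played.
        self.memory.append(pitch)

    def respond(self):
        # Blend a rule (transpose the last heard pitch by a fourth or fifth)
        # with a statistical choice (recombine something from memory).
        if random.random() < 0.5:
            return self.memory[-1] + random.choice([-7, -5, 5, 7])
        return random.choice(self.memory)

so, vo = Agent("So", [60, 62, 65]), Agent("Vo", [48, 55])
pitch = 64                      # an incoming note from a human performer
for _ in range(8):
    for agent in (so, vo):      # each agent hears the last event, then answers
        agent.listen(pitch)
        pitch = agent.respond()
        print(agent.name, "->", pitch)
```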
REACHin' Paris Internationally recognized as leaders in the field of modern improvisation and compositional thought, Miles Okazaki (electric guitar, live electronics) and Steve Lehman (alto saxophone) will join the Ircam Somax Brothers (Gérard Assayag and Marco Fiorini) and their Somax2 AI Agents in a collection of short compositions and improvisations showcasing a variety of approaches to musician-machine interaction. In this cocreative experiment of a new kind, the musicians enjoy full freedom of expression while the algorithmic systems on stage (Dogstar by Miles Okazaki and Ircam’s Somax2) navigate their way, following the musicians’ steps and expanding their flows in an electroacoustic and generative space. This concert will also feature ipt~, an innovative tool for real-time recognition of instrumental playing techniques by Nicolas Brochec, Joakim Borg, and Marco Fiorini, which enhances the “machine listening” process to better “understand” musical gestures. Concert Page at the Dynamo
Installations / Concerts
WWW The notion of the WWW (“Wood Wide Web”), or “Internet of trees,” rests on the discovery that trees and plants are not isolated entities but maintain complex, symbiotic relationships with one another and with other organisms. The expression describes the intricate network of communication and nutrient exchange that exists among trees, plants, fungi, and other living organisms in forest ecosystems. This underground communication plays an essential role in resource sharing, disease resistance, and plant survival in nature. A self-regulation thus operates among the different components of the forest, shaping a universe rich in interactions and diversity. This idea is the foundation and source of inspiration for this composition/improvisation with the Spat’Sonore, which can be described as a vast tentacular instrument whose ramifications, like “climbing plants in copper tubing crowned with corolla-like bells, form a sonic igloo in which the listener settles. Operators take their posts at the piston brains of their spat’.” The spatialized instruments of the Spat’Sonore ensemble, combined with immersive real-time electronics, will create a complex communication network among all the elements of the system. Self-regulation will be ensured by Somax2Collider spatial agents, over a distributed loudspeaker system, playing throughout the performance while taking into account not only audio descriptors but also the localization of the sounds emitted by the ensemble’s various instruments.
La Batterie fragile
Designed in biscuit porcelain by the visual artist, writer, and stage director Yves Chaudouët, the Batterie fragile is a sculpture that demands special care. A sort of “musical oxymoron,” as the philosopher Pierre Sauvanet puts it, it was initially intended as a simple invitation to reverie. That was without counting on musicians’ desire to take hold of it, and it is now regularly activated by percussionists. Bernard Lubat, Valentina Magaletti, Sylvain Darrifourcq, Julian Sartorius, Aurélien Gignoux, Iker Idoate, Amélie Grould, and others play with the limits of its fragility in solo forms presented on contemporary music stages. The prototype of the Batterie fragile, made at ÉSAD-Pyrénées in 2016, belongs to the collection of the FRAC Nouvelle-Aquitaine MÉCA. The V2 of the Batterie fragile, developed with the ceramicists of ENSAD-Limoges, is the one that will be activated during the Improtech days. It will be played by Thierry Miroglio and Aurélien Gignoux, with various musicians including Jean-Brice Godet (clarinet) and Nosfell (voice, texts), and the cocreative generative AI system Somax2.
Workshops
Alain Blesing, Claudie Boucau, First Meeting
This workshop will attempt to determine under what conditions the discourse of two improvising musicians (Alain Blesing: electric guitar, and Claudie Boucau: flute) can be brought closer to, or even inspired by, a body of work considered and claimed to be the complete opposite of improvisation, namely the music of Pierre Boulez. We will propose a work based on excerpts from the 12 Notations for Piano, a cycle of short pieces composed in 1945. Short performances and audio excerpts will comment on and illustrate the subject. A second part will feature a longer performance, both solo and duet. This performance will involve Somax2 in addition to the two musicians, in various playing modes. The final part of the workshop will be devoted to a discussion and possible questions.
Raphaël Forment (musician, researcher, free-software programmer), Rémi Georges (sound artist, composer and computer music designer) and Guillaume Piccarreta (Ircam, programmer and digital artist), Live Coding Practices
This workshop constitutes an introduction to live coding techniques and practices through intersecting perspectives and narratives. Over the course of one hour, Rémi Georges, Guillaume Piccarreta and Raphaël Forment will present the technological, musical, ethical and activist issues that encourage them to practice live coding. They will present the tools, websites and works they design. During a 20-25 minute demonstration, they will improvise together over a network, using computers, modular synthesizers and other machines. Live coding is not merely a production technique; it is rather a way of thinking about musical action and positioning oneself as an artist in relation to technology, creative tools, knowledge sharing and collective creation.
Livecoding.fr, gcode.tools
Alberto Maria Gatti (Composer, designer), Anaïs del Sordo (Voice), Marco Fiorini (Guitar, Somax2 AI Agents), Resounding Bodies in Space
This workshop-performance on Audio-Tactile Listening & 3D Spatialization invites participants to explore a gentle convergence of touch and sound in space. A multimodal harness, combining vibrating transducers and bone conduction, works alongside Somax2 and Spatial Agents to shape responsive, real-time interactions. The system processes audio on the fly, adapts material for the transducers, and coordinates diffusion between the harness and an ambisonics system. Through microphone input and motion capture, participants can subtly guide the flow and trajectories of sound. The session opens a space to reflect, through practice, on the relationship between audio-tactile perception and spatial composition, touching on assisted improvisation and multimodal diffusion.
Format
• Each hour features three active stations with:
2. Guided spatial listening within the 3D field
3. Interactive control with microphone + motion capture (harness + ambisonics)
• The three stations are designed for three individual participants (one per station) per slot; auditors are welcome to attend and listen.
• The session includes short improvisations by the performers, with opportunities for participant interaction.
• We close with a Q&A on spatial improvisation, multimodal diffusion, and artistic/technical insights.
Who it’s for:
Artists, musicians, sound designers, researchers, and curious listeners interested in embodied listening, interactive spatial audio, and performance-driven research.
Mari Kimura (UC Irvine), Tamami Tono (Sho Player), Minako Ito (Bugaku Dancer) MUGIC® Magic
MUGIC® is a 9-axis motion sensor similar to other generic 9-axis sensors available on the market. However, what sets MUGIC® apart is its comprehensive, user-friendly design. Created by violinist and composer Mari Kimura, MUGIC® is a turnkey product that allows musicians to create their art immediately without requiring extensive programming or electrical engineering skills. The first version of MUGIC® sold out following a significant bulk order from the Lincoln Center in NYC this spring. As MUGIC® v.2 is under development, Kimura will demonstrate the importance of fostering a community around new technology and how MUGIC® users are expanding its application not only in music but also in other forms of art and beyond.
1) Demonstration and Workshop by Mari Kimura
2) SOMAXMOBILE for violin, SOMAX, MUGIC®
3) “Genjyoraku” (還城楽), traditional Gagaku with MUGIC®
Tamami Tono, sho; Minako Ito, dance; Mari Kimura, violin
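As a hint of how a motion sensor of this kind is typically used on the software side, the sketch below receives sensor frames over OSC and maps one axis to a normalized control value. The port, the address, and the argument layout are assumptions for illustration, not MUGIC®'s documented interface.

```python
# Minimal OSC receiver mapping motion-sensor data to a control value.
# Port 8000, the /mugic address, and the argument order are hypothetical.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_motion(address, *values):
    if len(values) < 3:
        return
    ax, ay, az = values[:3]                      # assume accelerometer x/y/z first
    # Map tilt on one axis to a normalized control (e.g. vibrato depth).
    control = max(0.0, min(1.0, (ay + 1.0) / 2.0))
    print(f"{address}: tilt -> control {control:.2f}")

dispatcher = Dispatcher()
dispatcher.map("/mugic", on_motion)              # hypothetical address
BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```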
Tamami Tono (Sho Player), Atsushi Tadokoro (Maebashi Institute of Technology), Breathing Media Projects featuring 1,000+ year-old traditional Gagaku music reimagined with cutting-edge sensor technology
1) “Hyojyo no Choushi” (平調 調子), traditional Gagaku: Tamami Tono, sho solo
2) Sho and IBUKI, breath controller demo
3) Tamami Tono: “dinergy 4” for interactive audio/video: Tamami Tono, sho; Atsushi Tadokoro, computer graphics
4) “Etenraku” (越天楽), traditional Gagaku: Tamami Tono, sho; Mari Kimura, violin; Atsushi Tadokoro, live coding
5) Atsushi Tadokoro, audiovisual solo
Tamami Tono, sho; Atsushi Tadokoro, visuals - Breathing Media Project
György Kurtág Jr (synthesizers), Emmanuelle Pépin (dancer), Donatien Garnier (poet, designer), Hypercept ~ quatuor d’improvisation
The performance, followed by a presentation, is built around a hybrid instrument, the MÉTAPHORMINX, implanted in a large Syrah vine stock taken from Mas Foulaquier (Pic Saint-Loup). The electronics comprise a three-axis accelerometer, a potentiometer, a breath sensor, five buttons, and a transceiver unit; the instrument interacts remotely via WiFi and MIDI messages. The computer program was created especially for this device by Joseph Larrald. Hypercept was born of the desire to explore the richness of the instrument by diverting it from the tightly scored performative object for which it was designed, and to dig into the choreographic, musical, and verbal registers it offers to free improvisation.
Levy Lorenzo (Percussion & electronics) AI at Carnegie Hall and Electronic Instrument Design
This workshop describes the musical and technical process behind presenting a new work, Pliages, at Carnegie Hall, which uses Boulez's Pli Selon Pli No. 3 as training material for Somax2 corpuses. The work is an improvisation for Somax2 and members of the International Contemporary Ensemble (voice, harp, piano, percussion, electronics). The presenter, Levy Lorenzo, will discuss the work, offering reflections on manual versus automatic music, as well as on embodied and disembodied performance.
Mikhail Malt (Ircam), PURE MALT: Augmented Improvisations, from instrumental gesture to telematics
This workshop highlights two research-creation projects that explore new frontiers in musical improvisation through interactive technologies and the internet. These case studies analyze how digital environments can not only expand the expressive possibilities of instrumentalists, but also create a shared performance space that transcends geographical limitations.
The first research project focuses on improvisation with traditional instruments and artificial intelligence. Li-Chin Li plays the sheng (Chinese mouth organ) in interaction with the Somax2 interactive system. This system, capable of learning and responding in real time, becomes a full-fledged improvisational partner. We will show how the combination of the ancestral gesture of the sheng with the generative responses of the machine opens up a field of hybrid sound exploration, combining tradition and innovation.
The second research project explores improvisation in a telematic performance. Mikhail Malt, in Paris in Studio 5 with the Somax2 environment and generative electronics, interacts live with flutist Cássia Carrascoza Bomfim, located in São Paulo. This configuration serves as a laboratory for examining the creative challenges and opportunities related to network latency, synchronization, and establishing a shared listening experience despite distance. The workshop will highlight strategies developed to transform technical constraints, such as delays and audio artifacts, into compositional and expressive elements.
This workshop explores the future of improvisational practice by comparing two experiences: one focused on extending instrumental gesture and the other on delocalizing interaction. It examines how modern technologies are transforming relationships between performers, altering human-machine interactions, and redefining spaces for collective musical creation.
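One family of the strategies mentioned above can be sketched in a few lines: instead of sounding remote events the moment they arrive, the receiver snaps them onto the next point of a shared beat grid, turning variable network delay into a stable rhythmic offset. A minimal illustration under assumed tempo values, not Somax2's actual scheduler:

```python
# Snap a late-arriving remote event onto the next point of a shared beat grid.
# Purely illustrative; tempo and subdivision values are assumptions.
def quantize_to_grid(arrival_time, bpm=60.0, subdivision=2):
    """Return the first grid point (in seconds) at or after arrival_time."""
    grid = 60.0 / bpm / subdivision        # grid spacing: eighth notes at 60 BPM
    steps = int(arrival_time / grid) + 1   # index of the next grid point
    return steps * grid

# A remote note arrives 0.137 s after beat 10 of the shared clock:
print(quantize_to_grid(10.137))  # -> 10.5, the next eighth-note grid point
```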
NSDOS (musician, hacker, choreographer), Noam Assayag (writer, translator, collagist) Activate Cities: Urban Inspiration and Live-coding
In this workshop, the two artists will perform a fragment of their joint project, Activate Cities, where ambient music is woven around the pages of a manual of techniques for urban attention and inspiration. Attendees will explore the artistic dimension of "a cellular automaton" and the ability to live-code music with the open-source software Orca.
Just as the machine evolves from an initial set of parameters, we can inject prompts and constraints—even in a simple walk, for example with an altered deck of cards—to step outside of our automatisms and notice things that may have otherwise escaped us. Between attention and intention, this gleaning of sensations, snapshots, and words from the street inspires the musical session that transforms these gleanings “into something rich and strange.”
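For readers unfamiliar with the idea, the sketch below shows the bare generative principle of a one-dimensional cellular automaton read as a step sequence. Orca's actual language and operators work quite differently; this is only a conceptual illustration with invented scale and sizes.

```python
# Toy illustration of a cellular automaton driving a note sequence:
# each generation of a 1D automaton (rule 90) is read as one step.
WIDTH, STEPS = 16, 8
SCALE = [60, 62, 65, 67, 70]            # example pentatonic MIDI pitches

row = [0] * WIDTH
row[WIDTH // 2] = 1                      # a single live cell as the seed

for step in range(STEPS):
    notes = [SCALE[i % len(SCALE)] for i, cell in enumerate(row) if cell]
    print(f"step {step}: play {notes}")
    # Rule 90: a cell's next state is the XOR of its two neighbours (wrapping).
    row = [row[i - 1] ^ row[(i + 1) % WIDTH] for i in range(WIDTH)]
```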
Activate Cities Project, Insta Norkhat, Insta NSDOS
Yohann Rabearivelo (EHESS, Ircam), Ulysse Roussel (Sorbonne Université), Martin Mahieu, Heny Zouari (violin) TikTok Djam
This setup explores online and offline co-creativity by placing musicians in dialogue with the Djazz software around the TikTok platform's algorithmic recommendation. Taking as its sources both content streamed online or live on the platform and the sound generated by the musicians, Djazz becomes the intermediary and moderator between improvised music and algorithmically recommended music in real time. It in turn generates new improvised recombinations, creating new sound patterns from these diverse sources. The TikTok algorithm is calibrated beforehand to offer the user mostly musical content. The smartphone screen is projected onto a larger screen; the videos scroll by and their sound is recorded into Djazz, which recombines the recorded content, its memory, and the musicians invited on stage. The whole is streamed live on TikTok.
Federico Visi (Universität der Künste Berlin), The Sophtar: an electroacoustic feedback instrument with embedded algorithms for human-machine improvisation
The Sophtar is a tabletop string instrument with an embedded system for digital signal processing, networking, and machine learning. It features an array of actuators and controlled electroacoustic feedback capabilities that can be activated algorithmically by the models running on the embedded computer. These respond to the actions of the player, making the instrument a platform for electroacoustic human-machine improvisation. Other features of the Sophtar include a pressure-sensitive fretted neck, two sound boxes, and bespoke interface elements. It combines conventional tactile musical affordances with recent machine learning models and digital signal processing algorithms, which are deeply integrated in the design of the instrument and have a strong influence on how it is played and the way it sounds. During this workshop I will present the instrument and its distinctive techniques, and improvise with different algorithms and models.
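As a rough illustration of the kind of algorithmic decision such an embedded system can make, the toy controller below nudges a feedback gain toward a target input level, keeping an electroacoustic loop near the edge of oscillation. It is an invented example, not the Sophtar's actual code.

```python
# Toy adaptive gain controller for an electroacoustic feedback loop.
# Target level, rate, and bounds are invented for illustration.
def adapt_gain(gain, rms, target=0.3, rate=0.05):
    """Nudge the feedback gain so the string/pickup loop hovers near target RMS."""
    error = target - rms
    return max(0.0, min(2.0, gain + rate * error))

gain = 1.0
for rms in [0.05, 0.1, 0.4, 0.8, 0.6, 0.3]:   # simulated level measurements
    gain = adapt_gain(gain, rms)
    print(f"rms={rms:.2f} -> gain={gain:.3f}")
```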
Lectures
Oded Ben-Tal (Kingston University), David Dolan (Guildhall School of Music), The Odd Couple: Human and AI Making Music in the Moment
This performance/talk will present an ongoing collaboration between composer Oded Ben-Tal and pianist David Dolan. Ben-Tal has been developing an AI-inspired system (JHAIMI – Joint Human AI Music Improvisation) that ‘listens’ to the pianist (extracting musical data from microphone input) and generates responses in real-time during the performance. The responses combine generative compositional processes on the one hand and real-time musical inferences about the pianist’s improvisation on the other. The aim is to create a strong, sophisticated and nuanced musical dialogue between human and machine. Dolan’s improvisations are based on an expanded tonal-modal idiom but do not conform to a specific musical style nor adhere to a preplanned scheme such as a chord progression, agreed tempo, key, or meter. The result is a new form of musical dialogue, created by the possibilities of new technology and drawing on the wealth of 300 years of music making. Ben-Tal is adjusting parameters in the system during the performance to shape larger-scale aspects, but the moment-to-moment generation of musical material is done automatically by JHAIMI. See also.
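The 'listening' stage described above can be approximated offline in a few lines: extract onsets and a fundamental-frequency track from a recording, then derive a simple answer for each detected note. JHAIMI itself runs in real time and its inference is far richer; the file name and the transposition rule below are invented.

```python
# Offline approximation of machine listening: onsets + pitch track -> responses.
# The audio file and the "answer a fifth higher" mapping are hypothetical.
import librosa

y, sr = librosa.load("piano_improvisation.wav")
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
f0 = librosa.yin(y, fmin=librosa.note_to_hz("A1"),
                 fmax=librosa.note_to_hz("C7"), sr=sr)

for t in onsets:
    frame = int(t * sr / 512)               # yin's default hop length is 512
    if frame < len(f0):
        midi = librosa.hz_to_midi(f0[frame])
        print(f"{t:.2f}s heard ~{midi:.0f}, answer {midi + 7:.0f}")
```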
Alain Bonardi, Emma Frid, Paul Goutmann, Axel Chemla-Romeu-Santos (Université Paris 8 – CICM / MUSIDANSE – Projet ERC AdG G3S) Generative Spatial Synthesis of Sound and Music (ERC G3S)
While the industrial markets for 3D audio and artificial intelligence in music are expanding rapidly worldwide, the spatial qualities of sound are still an unthought-of aspect of AI. In practice, spatial audio is generally carried out in post-production and is usually confined to the spatialization of sound and the acoustic modelling of rooms. The aim of the G3S project is to propose new ways of creating, modelling and analyzing sound by natively integrating its spatial dimension. We want to extend the theoretical and practical approach to spatial diffusion, whether with loudspeakers or headphones, by using the generative capabilities of artificial learning.
Daniel Brown and Steve Horowitz, Unity Interfaces for Djazz and Somax: Ludic and Narrative Perspectives on Musical Machine Co-creativity
We present an interface for controlling Djazz and Somax with Unity. Unity is a video game engine: an environment for creating games, a process that involves the design of many aspects such as visuals, physics, characters, and rules for winning, as well as music. While game engines like Unity traditionally offer linear control of looped playback of prerecorded audio tracks for music design, Unity can also be used as a control interface for generative music systems such as Djazz and Somax. The combination of the systems is appealing: unscripted, dynamically changing musical accompaniments offer richer experiences in video games. Nonetheless, a friction arises between the game engine and the music system, as the relationship of control and interaction is called into question. Are the experiential aspects of game play that are considered “meaningful” similar to those in musical improvisation? If so, do the same musical parameters affect these experiential aspects? Should a musical co-improvisation system be used as a musical accompaniment to an extra-musical experience such as a game? Or is this last question misleading: is game play another form of interaction that invites as much co-creativity as direct musical interaction? Two perspectives that let us start answering these questions are the ludic and the narrative; these perspectives have been used in analysis of both music and video games. We will present examples of Unity in combination with Djazz and Somax which illustrate each of these perspectives and suggest methods of further development and exploration into this combination.
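One plausible plumbing for such a control interface, sketched under the assumption that the generative system listens for OSC messages, is a thin bridge that translates game-state variables into control messages on each tick. The addresses and port are invented; the actual Unity interface for Djazz and Somax is more involved.

```python
# Hypothetical bridge: game-state parameters sent as OSC control messages.
# The port and the /game/* address names are invented for illustration.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # assumed generative-system port

def on_game_tick(player_speed, tension, region):
    client.send_message("/game/intensity", min(1.0, player_speed / 10.0))
    client.send_message("/game/tension", tension)
    client.send_message("/game/region", region)

# Simulated game loop: intensity rises as the player approaches a goal.
for tick in range(5):
    on_game_tick(player_speed=2.0 * tick, tension=tick / 4.0, region="forest")
    time.sleep(0.1)
```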
Nicolas Brochec (Tokyo University of the Arts, Geidai), Marco Fiorini (Ircam), Real-Time Recognition of Instrumental Playing Techniques for Composition and Co-Creative Interaction
Playing techniques have been practiced throughout the history of music in every cultural context. With the advent of contemporary music in the West in the second half of the 20th century, playing techniques became one of the central parameters of musical expression, if not the primary parameter in specific genres. Despite the advent of computers in music practices and the emergence of computer music in the 1970s, playing techniques have been largely disregarded by music algorithms due to their complex nature. Recently, with the advent of artificial intelligence, which enables more precise real-time music analysis, the possibility of designing computer music tools based on playing techniques has increased. In this context, the compositional desire of composer Nicolas Brochec to create playing technique-based human-machine music interactions for mixed music composition led to a collaboration with improviser and engineer Marco Fiorini, who shared the same vision for improvisation and cocreative interaction. In this lecture, we are going to present the compositional and improvisational motivations behind ipt~, a Max/MSP external for real-time recognition of instrumental playing techniques, as well as methods and techniques that led to a reliable and robust system, spanning from sound bank recording to algorithm design and training.
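A minimal sketch of the pipeline family the lecture describes: mel-spectrogram windows classified into playing-technique labels by a small convolutional network. The labels, sizes, and architecture here are placeholders and do not reproduce ipt~'s actual design.

```python
# Toy playing-technique classifier over mel-spectrogram windows.
# Labels, window sizes, and architecture are invented for illustration.
import torch
import torch.nn as nn

TECHNIQUES = ["ordinario", "flutter", "pizzicato", "whistle"]  # example labels

class TechniqueNet(nn.Module):
    def __init__(self, n_mels=64, n_frames=32, n_classes=len(TECHNIQUES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # After two 2x2 poolings the map is (n_mels/4) x (n_frames/4).
        self.classify = nn.Linear(32 * (n_mels // 4) * (n_frames // 4), n_classes)

    def forward(self, x):                  # x: (batch, 1, n_mels, n_frames)
        return self.classify(self.features(x).flatten(1))

model = TechniqueNet()
window = torch.randn(1, 1, 64, 32)        # stand-in for one spectrogram window
probs = model(window).softmax(dim=-1)
print(TECHNIQUES[probs.argmax().item()], f"{probs.max().item():.2f}")
```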
Pierre Couprie (Evry Paris-Saclay University) Gestures in Electronic Improvised Music
This presentation revisits Philippe Descola’s concept of “worlding,” emphasizing the creative process through which improvising musicians construct their own sonic environments. In electronic improvisation, the musician actively develops and refines their instrument, adapting its interfaces and performance techniques, thereby continuously shaping their unique musical world. The discussion first examines the role of the instrument in electronic improvisation, highlighting its hybrid nature, which integrates electro-mechanical, analog, and digital elements. Unlike traditional instruments, electronic instruments evolve through rehearsals and performances, reflecting Thor Magnusson’s perspective on the genetic development of digital instruments and Georgina Born’s phenomenological view of performance as an ongoing process. The instrument is not a static entity but a “configuration” in Foucault’s sense—defined not by its structure alone but by its temporal evolution. Furthermore, electronic music devices function as complex networks of hardware, software, and protocols, challenging conventional definitions of musical instruments and redefining how musicians interact with their tools.
The second key concept, the trace, explores how performance documentation transforms into an artifact. In performance analysis, musicologists work with traces—recordings that capture an event and are subsequently shaped through selection, editing, and mastering. Alessandro Arbo’s distinction between a “recorded-document” and a “recorded-work” is particularly relevant in this context, as recordings of improvisation are not mere reproductions but constructed objects that reflect interpretive choices.
The final section addresses the complexities of musical gestures in electronic improvisation. Gestures are multifaceted, spanning micro-movements to full-body actions, and serving various functions, from sound production to expressive or auxiliary roles. Some gestures are directly linked to instruments, while others exist internally as mental representations. Moreover, gestures operate across multiple modalities, such as the visual gestures seen in Iannis Xenakis’ scores, which simultaneously function as musical gestures. The definition of gesture varies across disciplines: for musicians, gestures are tied to their instrument and performance setup; for scientists, they are studied in terms of their functional outcomes; for computer scientists, they are examined in relation to sensor-mapping and interaction; and for sociologists and musicologists, they are analyzed within broader performance contexts. Given the inherent challenges in defining and analyzing gestures, this presentation introduces the concept of a “gestural chain” or “catena,” drawing from geological metaphors to illustrate how gestures influence and build upon one another in a non-hierarchical structure. This interconnected network of gestures shapes the act of performance, encompassing movements made by the musician, interactions with the instrument, and the perception of these actions by the audience.
The presentation is accompanied by audiovisual examples, including an improvisation, as well as a performance where gestures of diffusion are explored within a spatialized orchestration setup. Ultimately, the study of music, particularly in improvisation, is not merely an analysis of sound objects but of the complex web of gestures that bring music into being.
Sabina Covarrubias (Synesthesic Devices) Somax2 for Live Visual Music: Co-Improvisation with Creative Agents
Visual music has a rich history, yet live audio-visual (A/V) practice is often limited by fixed control schemes. Within the framework of co-improvised interaction and distributed creativity, we argue that Somax2 offers new perspectives on visual music by organizing audiovisual forms through agents, players, and influences that operate in real time. Instead of treating the image as a passive display, we use Somax2’s multi-agent capabilities and corpus-level tools—Regions, Filter, and Atom—to structure performance and support emergent and cohesive behaviors across media. We outline practice-ready configurations in which: (i) internal and external influences, peaks/matches, and reactive/continuous modes shape local and global form, (ii) continuity, quality/sparse, probability, and beat alignment provide macro-temporal control suitable for large-scale audiovisual structures, and (iii) musical information dynamics, including information rate, function as analysis and design principles during performance. In this context, the strategies of dialogue, enrichment, convergence, and divergence employed by the agents become visible and audible as co-creative phenomena rather than predetermined mappings. We also propose lightweight documentation via state indices, matches, and region activity to evaluate perceived agency and formal clarity ex post facto. Overall, our contribution redefines Somax2's role in visual music, positioning it not as a unidirectional driver but as an operational framework for co-creativity in live, improvised audiovisual performances. This approach aligns with REACH's objectives for creative agents and multi-timescale adaptation.
Ken Deguernel (CNRS, Laboratoire CRISTAL, Lille) Mixed Initiative Co-Creative Design for Long-Term Human-AI Musical Partnership
Current Mixed Initiative Co-Creative systems (MICC) have opened new avenues in music generation. These systems facilitate novel creative processes and modes of interaction. However, they lack the ability for long-term adaptation between user and machine: current ways of adapting are either unilateral, where the user adaptively learns to operate the AI system, or self-engineered, where musician-engineers modify their own systems over time. The recently started ANR research project MICCDroP (Mixed Initiative Co-Creative Design for Long-Term Human-AI Musical Partnership) aims to address this limitation by developing AI systems for music performance based on lifelong learning, with a focus on adaptation and personalisation. These systems will allow us to explore the evolution of human-AI partnerships using ethnographic studies and creativity theory, as well as to conduct artistic experimentations and live performances to inform future research directions. In this presentation, I will introduce the theoretical and practical underpinnings of the project, as well as its future prospects.
Sivan Eldar (composer), Jean-Louis Giavitto (CNRS, Ircam), Augustin Muller (Ircam, Le Balcon), Composition and improvisation in contemporary opera
This presentation addresses the issue of synchronization between electronics and instrumentalists in contemporary opera, focusing in particular on the technical and aesthetic challenges of integrating real time, distributed control, and sound spatialization. Two works, Like Flesh (Fedora Prize for Digital Innovation, 2022) and Nine Jewelled Deer (Aix-en-Provence Festival, 2025), will provide a concrete opportunity to address the tensions between fixed writing and improvisation, strategies for coordination between instrumental scores and electronic devices, as well as the perspectives opened up by the reactive music programming tools developed at IRCAM to make electronics expressive and lively by integrating human musical time into the heart of the concert.
Jean-Rémy Guédon (ArchiMusic), Teaching One’s Style to an AI: The Story of an Unlikely Collaboration
This research examines the encounter between artificial intelligence and musical intuition within an experimental compositional approach. Using no-code tools and latest-generation conversational models (GPT-5, Codex, Claude 4.5), I set out to train the AI on a corpus of personal works drawn from 25 years of composing for my ensemble Archimusic, seeking to transmit to it my singular musical language, neither tonal nor atonal, with a complex polyphony, so that the AI would generate variations respecting this stylistic identity. After several months of research leading to the creation of fourteen prototype music applications, one conclusion is inescapable: a gap persists between the potential promised by AI and its compositional reality. The machine reproduces without grasping, repeats without true intuition. This lecture presents a review, at once critical and artistic, of this experience of co-creation between composer and algorithm. It questions the notions of iteration, intuition, and algorithmic style, and includes the hearing of a short piece for clarinet born of this collaboration, performed by Eric Lamberger. Beyond the technical aspect, I wish to open a reflection on the possibility of an “open compositional box”: a space where each composer could enter into dialogue with their own musical memory through the machine.
Gilbert Nouno (Haute Ecole de musique de Genève (HEM)), Christophe Fellay (École de design et haute école d'art du Valais (EDHEA)), Nathalie Hérold (IReMus - Sorbonne Université), Pierre Alexandre Tremblay (Conservatorio della Svizzera italiana) Evaluation of AI-based improvisation systems
Drawing on exploratory studio sessions and a workshop at HEM Geneva, we offer a practice-based evaluation of AI devices for free improvisation. We articulate conditions of stage operativity with poietic/esthesic effects, that is, how the agent transforms the act of playing and the act of listening, in order to probe their digital organology. The study yields development directions for agents conceived as co-acting partners within a shared capacity for action between human and device, aiming at the conjunction of surprise and formal coherence.
Pierre Saint-Germier (CNRS, Ircam) The meaning of “co” in “co-creative”
It is increasingly common for researchers in the field of computational creativity to substitute the question of co-creativity for the question of creativity. The common pattern of reasoning seems to be that creativity requires agency, computational machines lack agency, and therefore cannot be creative on their own. So we should look instead for ways to be creative WITH computational machines, rather than trying to design creative ones. But if computational machines lack agency, and a fortiori creativity, how can you be CO-creative with them? If co-creativity is to be a viable substitute for creativity, some explanation is in order regarding the meaning of “co”. One way to approach this question is to look for a theory of distributed creativity, taking traction from similar efforts in the theory of distributed agency and the theory of distributed cognition, as developed in the fields of philosophy, anthropology, and cognitive science. The presentation will propose some preliminary results in this research taking the case of Somax as a running example.
Nicolas Souchal, Diemo Schwarz (Ircam), Extensymbiosis — The Audio–Visually Augmented Trumpet and Multi-modal Corpus-based Synthesis as a Shared Instrument
We present three technologies for free non-idiomatic audio–visual improvisation, the fruit of long-term research and musical application in performance and compositional contexts. First, corpus-based concatenative synthesis leverages machine listening and makes it possible to play music by selecting grains from pre- or live-recorded sound via gesture-controlled navigation in a timbre space defined by perceptual audio descriptors. Second, the extension of corpus-based synthesis to the domain of images enables audio–visual improvisation via cross-modal mappings between the audio and visual perceptive dimensions. Third, the audio-augmented trumpet uses the sound of the instrument itself to control sound processing, unlike typical sensor-based augmented instruments. The audio-augmentation is based on real-time sound analysis driving sound processes such as additive synthesis, resonators, and auto-convolution. The main aim of this audio-visually extended trumpet is to explore human/augmented-instrument relationships that introduce unpredictability, navigating between moments of control and moments of adaptation to situations of non-control, which is particularly relevant to the practice of improvisation. We will perform a short piece that combines these three technologies by live-recording the augmented trumpet to feed the corpus-based embodied instrument CataRT, and to control image generation from a corpus of drawings by Elizabeth Saint-Jalmes, creating a symbiotic shared multi-modal instrument.
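The selection step at the heart of corpus-based concatenative synthesis can be reduced to a nearest-neighbour query in descriptor space, as in the sketch below. The descriptors and data are invented, and CataRT's real feature set and playback machinery are much richer.

```python
# Toy grain selection for corpus-based concatenative synthesis:
# grains indexed by audio descriptors, retrieved by nearest-neighbour search.
# Descriptor choices and random data are placeholders for a real analysis stage.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
# Each corpus grain described by (loudness, spectral centroid, periodicity).
descriptors = rng.random((500, 3))
tree = cKDTree(descriptors)

def select_grain(target):
    """Return the index of the grain closest to the target descriptor point."""
    _, index = tree.query(target)
    return index

# A control gesture sweeping from dark/quiet toward bright/loud:
for t in np.linspace(0.0, 1.0, 5):
    grain = select_grain([t, t, 0.5])
    print(f"gesture {t:.2f} -> play grain {grain}")
```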
Master Class
Steve Lehman and Miles Okazaki will give an exceptional masterclass open to the public before the concert, in which they will discuss their musical trajectories as composers and instrumentalists and shed valuable light on their creative relationship with technology. They will be joined by the Somax Brothers (Gérard Assayag and Marco Fiorini), who will share demonstrations of the generative software (Dogstar and Somax2) used in the concert, along with thoughts on interaction strategies and on the type of listening and responsiveness that such an experience with the machine brings into play. The masterclass will end with a Q&A session and a discussion with the audience.