Programme


Access to concerts and evening lectures is free, but reservation is mandatory (reserve here). Access to the afternoon workshops / advanced courses requires registration with the Max Summer School (MSS) (register here).



29 July, Monday

16:30 - 18:00 MSS advanced course / IK Workshop #1

Introduction to REACH co-creative software
Marco Fiorini, Mikhail Malt

This course will present the REACH project on co-creative interaction with AI and the Somax2 system, detailing its basic concepts, the application's user interface, the main controls, the interaction strategies, and musical scenarios. Somax2 allows the machine to improvise in collaboration with humans by capturing human performances, navigating through music corpora and latent spaces of musical features, and adapting continuously to the evolving musical context using generative models and audio/MIDI rendering.
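By way of illustration, here is a minimal sketch (not Somax2 code) of the corpus-navigation idea: features from a live player are matched against pre-analysed corpus slices in a feature space, and the best-matching slice is what the agent renders next. The corpus size and the 12-dimensional, chroma-like vectors are assumptions made for the example.

```python
# Illustrative sketch (not Somax2 code) of corpus navigation: incoming
# features from a live player are matched against pre-analysed corpus
# slices in a feature space, and the best-matching slice is what the
# agent plays next. The corpus size and 12-dimensional (chroma-like)
# vectors are assumptions made for the example.
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.random((200, 12))   # 200 corpus slices as feature vectors

def next_slice(live_features: np.ndarray) -> int:
    """Return the index of the corpus slice closest to the live input."""
    distances = np.linalg.norm(corpus - live_features, axis=1)
    return int(distances.argmin())

print(next_slice(rng.random(12)))   # index of the slice to render next
```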

18:00 - 19:30 Improtech lecture session #1

Miller Puckette (Ircam), Irwin, An inside view of an instrument

Marc Chemillier (EHESS), Keeping the swing: AI co-creative musicianship in collective idiomatic settings



30 July, Tuesday

16:30 - 18:00 MSS advanced course / IK Workshop #2

Somax2 advanced course
Marco Fiorini

This course will dive into advanced use of Somax2, including mastery of the expert controls in the UI, access to and programming of the Max library interface, scripting for real-life performances, and taking advantage of multi-agent network connectivity.

18:00 - 19:30 Improtech lecture session #2

Shlomo Dubnov (UCSD, Qualcomm Institute), Advanced Machine Learning and Music Information Dynamics for Deep and Shallow CoCreative Systems

CANCELLED: Steve Lehman (Professor of Music at CalArts), Current Trends in Computer-Driven Interactivity with Tempo-Based Rhythm

Steve Lehman's presentation is replaced by a live Q&A session with Prof. Miller Puckette.



31 July, Wednesday

16:30 - 18:00 MSS advanced course / IK Workshop #3

Somax2 under the hood
Marco Fiorini

This course will explain the internals of Somax2, including the client-server Max/Python architecture; the AI core responsible for machine listening, representation learning, and adaptive generativity; and the segmentation and recognition of audio streams together with the reactive strategies.
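As a rough illustration of what such a client-server split implies, the sketch below shows a Python process exchanging OSC messages with a Max client. The port numbers, address scheme, and message layout are invented for the example; they are not Somax2's actual protocol.

```python
# Rough sketch of an OSC exchange of the kind a Max (client) / Python
# (server) split implies. The ports and address scheme below are
# invented for illustration; they are not Somax2's actual protocol.
# Requires: pip install python-osc
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

MAX_PORT = 9001      # hypothetical port the Max UI listens on
SERVER_PORT = 9000   # hypothetical port this Python core listens on

to_max = SimpleUDPClient("127.0.0.1", MAX_PORT)

def handle_input(address, pitch, onset):
    """Receive an event from the Max client and answer with a (dummy)
    generated event; a real core would consult its corpus model here."""
    print(f"{address}: pitch={pitch} onset={onset}")
    to_max.send_message("/agent/output", [pitch + 7, onset])

dispatcher = Dispatcher()
dispatcher.map("/agent/input", handle_input)

server = BlockingOSCUDPServer(("127.0.0.1", SERVER_PORT), dispatcher)
server.serve_forever()   # blocks; Ctrl-C to stop
```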

18:00 - 19:30 Improtech lecture session #3

Nao Tokui (Qosmo Inc.), Surfing musical creativity with AI — what DJing with AI taught me

Mari Kimura (UC Irvine), MUGIC®: endless possibilities extending musical expression



1 August, Thursday

16:30 - 18:00 MSS advanced course / IK Workshop #4

Using Somax2 in advanced research and creation
Marco Fiorini, Nicolas Brochec, Jose-Miguel Fernandez

Interacting with Somax2 agents through automatic recognition of complex instrumental playing techniques and state-of-the-art machine learning; integration of Somax2 into full-scale compositions for ensembles; and a presentation of Somax2Collider for spatial interactive agents.

18:00 End of the workshop and lecture programme at Geidai

19:00 Performance at Konnoh Hashimangu Shrine in Shibuya

Tokyo Bout à Bout

Georges Bloch (composer, generative electronics), Taketeru Kudo (Butoh dancer), Takashi Seo (Bass)



2 August, Friday

16:00 IMPROTECH CONCERT #1

REACHing OUT!

Joëlle Léandre (Double Bass) and the Who/Men (Gérard Assayag, Marco Fiorini, Mikhail Malt, generative electronics)

Rigbiss

Miller Puckette (PD live electronics), Irwin (Aliboma_2014, live electronics)

Quartet

Mari Kimura (Violin, MUGIC® motion sensor), Jim O'Rourke (improviser, composer and producer), Akira Sakata (Saxophone, Clarinet, Voice and Bells), Jean-Marc Montera (Guitars, live electronics)

Taideji

Lara Morciano (composer, pianist), Thierry Miroglio (percussion), Jose-Miguel Fernandez (generative electronics)

Trans(e)-musical

Justin Vali (Malagasy zither, voice), Marc Chemillier (Generative electronics), Nao Tokui (Generative Electronics)



19:30 IMPROTECH CONCERT #2

Collidepiano II

Suguru Goto, for Piano and Somax2. Piano: Hyun-Mook Lim, Computer: Suguru Goto

Antiphonie

Ko Sahara, for Pianist, Player Piano and Electronics. Piano: Satsuki Hoshino, Computer: Ko Sahara

Sedimentation

Takeyoshi Mori, for Cello and Somax2. Cello: Asami Mori, Computer: Takeyoshi Mori

Traversée II

Nicolas Brochec, for Flute and Electronics. Flute: Kanami Koga

Trio

Mari Kimura (Violin, MUGIC® motion sensor), Michiyo Yagi (Electric 21-string Koto), Tamami Tono (Sho, Kugo)

Spectral Light

CANCELLED: Steve Lehman (Saxophone, live electronics), Marco Fiorini (Generative electronics)

String Theory

Turner Williams Jr (Shahi Baaja, electronics), Anaïs del Sordo (Voice), Jean-Marc Montera (Guitars, live electronics), Marco Fiorini (Generative Electronics)



Abstracts

Miller Puckette (Ircam), Irwin, An inside view of an instrument

Signal delays are very bothersome to live musicians, especially percussionists. As a duo using percussion, we have worked out a way to avoid having to send audio signals between computers, which would always add some delay. Instead, we work as a duo within one computer by making plug-ins that can be remotely controlled. The plug-ins can be any kind of patch, either Max or Pure Data, and can be hosted by any digital audio workstation. The result is a single software percussion instrument played live by a musician but simultaneously played by a second performer using controllers that act within one or several plug-ins in a single signal chain.
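As an illustration of the control-not-audio idea, here is a hypothetical sketch of the second performer's side: only small control packets travel to the patch hosted as a plug-in, never audio, so nothing is inserted into the audio signal chain. The OSC transport, address, and port are assumptions; the abstract does not specify how the remote control is carried.

```python
# Hypothetical sketch of the second performer's side of such a setup:
# control messages (not audio) are sent to a patch hosted as a plug-in,
# so no audio ever crosses a network buffer. The OSC transport,
# address, and port are invented for illustration.
# Requires: pip install python-osc
import time
from pythonosc.udp_client import SimpleUDPClient

# The plug-in (a Max or Pd patch inside the DAW) is assumed to listen
# for control messages on a local UDP port.
plugin = SimpleUDPClient("127.0.0.1", 7400)

# Sweep a resonance parameter over two seconds from the second
# performer's controller.
for step in range(21):
    plugin.send_message("/percussion/resonance", step / 20.0)
    time.sleep(0.1)
```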

Marc Chemillier (EHESS), Keeping the swing: AI co-creative musicianship in collective idiomatic settings

Artificial intelligence can be seen as antagonistic to certain traditional activities, particularly music. We are going to challenge this stereotype by showing how machine learning can be used for music from oral traditions. During the REACH project, we developed improvisation software programmed in Max/MSP which has the particularity of taking a regular pulse into account. All pulse-based musical sequences captured by the software can be reused after the learning phase while retaining the same culturally relevant rhythmic position. The improvisation software is thus able to play in the style of native players, and its outputs are good enough to allow duets between a musician and the computer. Musicians reacting to the outputs of the machine can shed new light on the analysis of their repertoires, and by refining the generation parameters we can get closer to an optimal characterization of the music studied. We will show examples of experiments with musicians from Madagascar. Moreover, the system can also explore various degrees of hybridization: one can inject solos generated from other traditions (for instance jazz) into the context of Malagasy music and study how they fit, according to the native musicians' point of view, which can illuminate the boundaries of a given musical tradition.
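The core mechanism can be illustrated with a toy sketch (not the actual Max/MSP software): captured events are indexed by their position in the metrical cycle, and generation only reuses material from the same beat position, so rhythmic placement survives recombination. The cycle length and note material below are invented for the example.

```python
# Toy sketch (not the actual Max/MSP software) of pulse-aware reuse:
# every captured event is indexed by its position in the metrical
# cycle, and generation only draws material recorded at the same beat
# position, so rhythmic placement survives recombination.
import random
from collections import defaultdict

CYCLE = 8  # beats per cycle; the value is illustrative

# (beat_in_cycle, event) pairs captured during the learning phase
captured = [
    (0, "C4"), (1, "E4"), (2, "G4"), (3, "E4"),
    (4, "A4"), (5, "G4"), (6, "E4"), (7, "D4"),
    (0, "C4"), (1, "D4"), (2, "E4"), (3, "G4"),
    (4, "G4"), (5, "A4"), (6, "G4"), (7, "E4"),
]

by_beat = defaultdict(list)
for beat, event in captured:
    by_beat[beat].append(event)

def improvise(n_beats: int) -> list:
    """Generate n_beats of output, each event drawn only from material
    heard at the same position in the cycle."""
    return [random.choice(by_beat[t % CYCLE]) for t in range(n_beats)]

print(improvise(16))
```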

Shlomo Dubnov (UCSD, Qualcomm Institute), Advanced Machine Learning and Music Information Dynamics for Deep and Shallow CoCreative Systems

In this talk, Shlomo Dubnov will survey his recent research on advanced generative music AI methods, with an emphasis on diffusion methods and information theory. He will then describe creative applications of text-to-music, voice conversion, and multi-track synthesis, and the analysis of polyphonic music in terms of multi-information dynamics. Questions of co-creativity, artistic sensibility, and Kansei in AI will be discussed.
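As a toy illustration of the music information dynamics viewpoint, the sketch below estimates an information rate as the reduction in uncertainty about the next symbol once the previous one is known, IR ~ H(X_t) - H(X_t | X_{t-1}), using a crude bigram model on a symbolic sequence. It is a deliberately shallow stand-in for the much richer models discussed in the talk.

```python
# Toy estimate of an "information rate": the reduction in uncertainty
# about the next symbol once the previous one is known,
# IR ~ H(X_t) - H(X_t | X_{t-1}), computed from unigram and bigram
# counts of a symbolic sequence. A deliberately shallow stand-in for
# the much richer models discussed in the talk.
import math
from collections import Counter

def entropy(counts: Counter) -> float:
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def information_rate(seq: str) -> float:
    h_next = entropy(Counter(seq))                 # H(X_t)
    h_joint = entropy(Counter(zip(seq, seq[1:])))  # H(X_{t-1}, X_t)
    h_prev = entropy(Counter(seq[:-1]))            # H(X_{t-1})
    return h_next - (h_joint - h_prev)             # H(X) - H(X | past)

print(information_rate("abcabcabcabc"))  # highly predictable: high IR
print(information_rate("abacbbcacbab"))  # less predictable: lower IR
```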

Nicolas Brochec, Marco Fiorini (Geidai, Ircam): Real-Time Recognition of Instrument Playing Techniques for Mixed Music and CoCreative Interaction

We are going to detail the techniques, methodologies, and outcomes that led to the development of an interactive system based on real-time Instrumental Playing Technique (IPT) recognition. Starting from exploratory studies on the flute, we will discuss soundbank recording, data format, and data augmentation, as well as state-of-the-art machine learning model architectures developed in our research. By connecting our model to the co-creative AI system Somax2, we are able to interact with generative agents by means of real-time recognition of IPT classes, adding a new dimension to its interaction paradigm and addressing potential scenarios of co-creative human-machine interaction in mixed music for improvisation and composition.
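For a concrete flavor of such a pipeline, here is a minimal sketch of a frame-level classifier over mel-spectrogram excerpts in PyTorch. The class names, architecture, and input shape are illustrative assumptions, not the models developed in this research.

```python
# Minimal sketch of a frame-level playing-technique classifier of the
# general kind described. The class names, architecture, and input
# shape are illustrative assumptions, not the models developed in this
# research. Requires: pip install torch
import torch
import torch.nn as nn

IPT_CLASSES = ["ordinario", "flutter", "pizzicato", "aeolian"]  # invented

class IPTClassifier(nn.Module):
    """Small CNN over short mel-spectrogram excerpts."""
    def __init__(self, n_mels: int = 64, n_frames: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * (n_mels // 4) * (n_frames // 4),
                              len(IPT_CLASSES))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, n_frames)
        return self.head(self.features(x).flatten(1))

model = IPTClassifier()
excerpt = torch.randn(1, 1, 64, 32)       # stand-in for a real excerpt
probs = model(excerpt).softmax(dim=-1)
print(IPT_CLASSES[int(probs.argmax())])   # predicted technique label
```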

Mari Kimura (UC Irvine), MUGIC®: endless possibilities extending musical expression

MUGIC® is a 9-axis motion sensor similar to other generic 9-axis sensors available on the market. What sets MUGIC® apart, however, is its comprehensive, user-friendly design. Created by violinist and composer Mari Kimura, MUGIC® is a turnkey product that allows musicians to create their art immediately, without requiring extensive programming or electrical engineering skills. The first version of MUGIC® sold out following a significant bulk order from Lincoln Center in NYC this spring. While MUGIC® v.2 is under development, Kimura will demonstrate the importance of fostering a community around new technology and how MUGIC® users are expanding its application not only in music but also in other forms of art and beyond.

Jose-Miguel Fernandez, Lara Morciano (Ircam): Composition and Interaction with Somax2

In this presentation, we will discuss the integration of Somax2 into musical composition through the works of Lara Morciano and José Miguel Fernández. We will also present the Somax2Collider environment for spatial interactive agents, a preliminary approach to spatialized improvisation with agents using the SuperCollider software and a system of wirelessly connected speakers.

Nao Tokui (Qosmo Inc.), Surfing musical creativity with AI — what DJing with AI taught me

Nao Tokui discusses the progression of his AI DJ project, which incorporates machine learning systems for live performances, and shares the insights he gained from it. He also explores the potential implications of the latest AI technology in music improvisation.

Steve Lehman (Professor of Music at CalArts), Current Trends in Computer-Driven Interactivity with Tempo-Based Rhythm

Steve Lehman will present a survey of current trends in experimental musics that draw from tempo-based modalities of rhythm, with a particular focus on their application to computer-driven models for real-time interaction.