The Antescofo project was born in quite a peculiar situation in 2007: Marco Stroppa wanted to have a musician and a computer talking to each other on stage. Those dialogues produced a quite lively interaction between computer and musician. We realized that before getting into the creation stage of that new work, we perhaps needed to study what already existed in the various musical styles, and I actually remember very well that we started performing very simple studies on existing works, to see how the computer could understand Claude Delangle playing, and how we could use that understanding afterwards to create new dialectics between computer and musician.

So Antescofo is based on two sections. There is a listening unit, which is like an artificial ear that listens to what the musician is playing on stage in real time. There is a second unit that takes the result of that hearing and tries to react. On stage, the musician anticipates. If you look at a string quartet, for example, you often see that they are playing quite complex material together and still manage to end simple phrases together. That is a rather incredible ability that humans possess, and one we need to bring to computers. So that means our listening unit must be able to take into account, to detect, let's say absolute and relative information, and it also has to anticipate, even in case the sound is too loud. When the computer is listening, it has to be able to start its own actions in accordance with how you play the music; that is the second unit's role. These tasks need to be started in real time and in a fashion that is musically consistent with what has happened. This means that if the listening unit makes a mistake, the second unit must not. If the musician makes a mistake, the second unit must not; if there is surrounding noise, it must not make a mistake; and most of all, we must not stop.

Now, when we started the Antescofo project, we realized that despite the richness we have in musical writing, a richness through which any musician is able to express himself and which any musician is able to read, whether it is jazz, or contemporary, or classical, that richness of the language is missing in musical informatics.

I play the electronic part, and it's true that Antescofo allows that even more, but I really get the feeling of having an instrument, something pretty much alive, linked to a whole history of instrumental craftsmanship, with a repertoire, with an instrumentalist sitting behind his instrument, with all its effects, the differences occurring from one concert to another, with all that a musician is on stage.

During development, we realized that we have a machine that more or less understands... that is able to listen to the musician and can extract very important performance parameters on the spot. But what actually became clear during that process, and it was more or less surprising, is that we could use those parameters during the writing phase, which means during composition.
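To make the two-unit idea above a little more concrete, here is a minimal sketch, in Python, of a reactive side that fires written electronic actions relative to the events and tempo reported by a listening side. All names (Event, Action, ReactiveScheduler), the score format and the scheduling policy are illustrative assumptions, not Antescofo's actual design or language.

```python
# Illustrative sketch only: a toy "reactive unit" that fires written actions
# relative to events detected by a "listening unit", scaled by detected tempo.
from dataclasses import dataclass

@dataclass
class Event:
    beat: float          # score position of the instrumental event, in beats
    label: str           # e.g. the note the musician is expected to play

@dataclass
class Action:
    anchor_beat: float   # score position the action is attached to
    delay_beats: float   # musical delay after the anchor (0.0 = simultaneous)
    message: str         # what the electronics should do

class ReactiveScheduler:
    """Fires written actions relative to detected events, at the detected tempo."""
    def __init__(self, actions):
        self.pending = sorted(actions, key=lambda a: (a.anchor_beat, a.delay_beats))

    def on_event(self, event, tempo_bpm, clock_time):
        """Called by the listening unit each time it recognizes a score event."""
        beat_duration = 60.0 / tempo_bpm        # seconds per beat at the detected tempo
        fired = []
        for action in self.pending:
            if action.anchor_beat <= event.beat:
                when = clock_time + action.delay_beats * beat_duration
                fired.append((when, action.message))
        self.pending = [a for a in self.pending if a.anchor_beat > event.beat]
        return fired                            # (time in seconds, message) pairs

# Toy usage: two actions attached to the first two notes of a score.
actions = [Action(0.0, 0.0, "start harmonizer"),
           Action(1.0, 0.5, "open reverb half a beat after the second note")]
scheduler = ReactiveScheduler(actions)
print(scheduler.on_event(Event(0.0, "C4"), tempo_bpm=90, clock_time=0.0))
print(scheduler.on_event(Event(1.0, "E4"), tempo_bpm=80, clock_time=0.7))
```

Note that actions anchored at or before the detected position still fire on the next recognized event, so a missed detection does not silence the electronics; that loosely mirrors the requirement quoted above that even if the listening unit or the musician makes a mistake, the system must not stop.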
For example, we are now writing a piece with Ichiro Nodaira. He came at the beginning of the session, so I showed him a variety of tools according to what he wanted to do, and then he kind of seized the thing for himself; he made himself familiar with some of these tools so that he could work independently in Japan. And then he composed with them. I gave it to him, and that was for the best, because he had already incorporated into his writing the electronic concepts he was able to develop at the same time. That system allows us to write a really unique, unified thing between the electronic and the instrumental worlds, which, as a composer, can be fixed in my mind.

Over the last years, we built a language with which composers can easily transcribe their musical thoughts, which is quite a complex task for the machine. How did we proceed before? We had a big patch and we clicked on it; we improvised a bit on the electronic part by following the musicians, with a machine or someone playing the electronic part. With Antescofo, all those actions which used to be improvised, and to be honest it did sound improvised, can be registered in a written score that we will still be able to read when the technology has disappeared, and it will disappear anyway, so that we still have written evidence.

For the absolute information, such as pitch, we use, and all of that in real time, more or less classical signal processing methods. However, for the relative elements, something else has to happen. For that, we use a phenomenon discovered by Huygens in the 17th century: the sympathy of clocks. The sympathy of clocks is a rather funny phenomenon. If you start two pendulums in a completely random way, and the supports holding the pendulums are connected, after some time you will notice that those pendulums synchronize themselves. We try to do the same thing between the machine and the musician.

We did something that became what it has become, and which will continue to evolve. So it's an enormous adventure, one I had never experienced before despite being close to IRCAM for 30 years, and I hope I will be able to enjoy it for years and decades to come, to see where this baby will take us when it is in its teens, maybe even at full maturity.
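As one rough illustration of the "more or less classical signal processing methods" mentioned above for absolute information such as pitch, here is a minimal autocorrelation-based fundamental-frequency estimate. It is a generic textbook approach, offered only as an example; it is not claimed to be the estimator used inside Antescofo, and the frame length and frequency limits are arbitrary assumptions.

```python
# Generic textbook pitch estimation on one audio frame (not Antescofo's detector).
import numpy as np

def estimate_f0(frame, sample_rate, f_min=60.0, f_max=1000.0):
    """Return a rough fundamental frequency (Hz) for one frame of audio samples."""
    frame = frame - np.mean(frame)                        # remove DC offset
    corr = np.correlate(frame, frame, mode="full")        # autocorrelation
    corr = corr[len(corr) // 2:]                          # keep non-negative lags
    lag_min = int(sample_rate / f_max)                    # shortest period considered
    lag_max = int(sample_rate / f_min)                    # longest period considered
    lag = lag_min + int(np.argmax(corr[lag_min:lag_max])) # strongest periodicity
    return sample_rate / lag

# Toy check on a synthetic 220 Hz tone.
sr = 44100
t = np.arange(0, 0.05, 1.0 / sr)
tone = np.sin(2 * np.pi * 220.0 * t)
print(round(estimate_f0(tone, sr), 1))   # close to 220.0
```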
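The "sympathy of clocks" described above can also be illustrated numerically with two weakly coupled phase oscillators, a standard Kuramoto-style model rather than Antescofo's actual tempo machinery; the natural rates and the coupling constant K below are made-up values. When the difference between the natural rates is smaller than twice the coupling, the two phases lock onto a constant offset, which is the synchronization the speaker refers to between machine and musician.

```python
# Illustrative sketch only: two oscillators with slightly different natural rates,
# weakly coupled, end up ticking together ("sympathy of clocks").
import math

omega1, omega2 = 1.00, 1.15   # natural frequencies of the two "pendulums" (rad/s)
K = 0.3                        # coupling strength through the shared support
theta1, theta2 = 0.0, 2.0      # arbitrary, "completely random" starting phases
dt = 0.01

for step in range(30001):
    d1 = omega1 + K * math.sin(theta2 - theta1)
    d2 = omega2 + K * math.sin(theta1 - theta2)
    theta1 += d1 * dt
    theta2 += d2 * dt
    if step % 10000 == 0:
        # wrap the phase difference into (-pi, pi] to see it settle
        diff = math.atan2(math.sin(theta2 - theta1), math.cos(theta2 - theta1))
        print(f"t = {step * dt:6.1f} s, phase difference = {diff:+.3f} rad")
```

Running it shows the phase difference shrinking from its random starting value to a small constant offset; the analogy in the text is a machine clock that bends toward the tempo it hears from the musician instead of keeping its own rigid time.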