For some time now, the subject of immersive sound has been considered in a project led by the Conservatoire de Paris in partnership with the Yamaha/NEXO group. Entitled Plateau 1 (after the venue in which it’s hosted) this project impacts a number of areas in which the Conservatoire is active.
In this interview the Conservatoire’s heads of Sound and Audiovisual departments, Denis VAUTRIN and Alexis LING, discuss with NEXO Head of Engineering François Deffarges a project that is at the heart of the Conservatoire’s activities.
What is the Plateau 1 project and how did it come about?
The Plateau 1 project was born of the desire to create a laboratory for experiments in immersive sound, located and named after Plateau 1, a place in the Conservatoire that we felt was not being used enough.
A while back, we had a discussion with one of the members of an invited jury, François Deffarges, an expert in sound from the professional sound system manufacturers NEXO. We wanted to create a platform dedicated to experiments in immersive sound, and it turned out that François was working simultaneously on new sound diffusion tools for live performances. We had several questions on this subject in common, and gradually we identified opportunities for collaboration.
The complexities of the subject soon became apparent, along with the significant investment that would be required to find answers to our questions and provide our students with the latest technologies necessary to explore immersive sound.
NEXO were able to cooperate with our students, specifically in the area of distributing sound in an immersive audio production, in either live or broadcast applications. We have recently experienced a small revolution in the way we think about sound by integrating this idea of immersion. And such a revolution inevitably throws up many questions. So, the idea was to explore these issues in a laboratory for which we did not have the resources to create alone. And that’s where the idea for the partnership was born.
The timing was good because we were already working on the development of an immersive solution that would allow sound objects to be positioned in space. This partnership enables important interaction with users to develop and refine the format.
Could you describe this immersive experience? How does it work?
It’s a system that offers an authentic virtual-reality experience with sound. It can render virtual sound scenes and enhance the real experiences we are having. It is really the counterpart, in the world of sound, of the visual virtual reality we all now know.
We will now be able to produce events and broadcast recordings that enable listeners to move away from a conventional, frontal stereophonic representation of sound to a fully immersive experience.
In outline terms, it is a dome built on a rectangular framework, with several rings of speakers: one at ear level surrounding the listener, another surrounding the listener at a higher elevation, and others at points located above the listener.
The system has two main aspects. The image part allows a sound to be sent to any speaker, placing it precisely at any point in the system and even at points that the listener perceives to be beyond the speakers, thus creating a 3D sonic environment that the listener inhabits. Upstream, processors allow this operation to be controlled by sending the spatial coordinates of the sound, coordinates that can evolve in the same way as the sound, as a function of time. In this way, it is possible to create the perfect illusion of sounds occurring all around the listener.
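To make the idea concrete, the object positioning described here can be sketched in a few lines of code. This is only an illustrative model, not NEXO’s actual algorithm: it assumes a single horizontal ring of speakers and a simple constant-power pairwise panning law, whereas the real system handles full 3D coordinates and many more sources.

```python
import math

def ring_gains(source_az_deg, n_speakers=8):
    """Constant-power pairwise panning on a horizontal speaker ring.

    Finds the two adjacent speakers that bracket the source azimuth
    and splits the signal between them so that the summed power
    stays constant as the source moves around the ring.
    """
    spacing = 360.0 / n_speakers
    az = source_az_deg % 360.0
    lower = int(az // spacing)               # speaker just before the source
    upper = (lower + 1) % n_speakers         # its neighbour on the ring
    frac = (az - lower * spacing) / spacing  # 0..1 blend between the two
    gains = [0.0] * n_speakers
    gains[lower] = math.cos(frac * math.pi / 2)
    gains[upper] = math.sin(frac * math.pi / 2)
    return gains
```

Feeding these gains with time-varying azimuths is what lets a sound appear to travel around the listener; a full object-based system simply does the same in three dimensions, for many sources at once.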
The so-called enhance part of the process is an augmentation of the room’s acoustics to complement the speaker system. Twelve microphones positioned in the ceiling capture all the sounds emitted inside the room and are connected to a system that reproduces the acoustics of existing rooms. When a sound reaches one of these microphones, it is sent to a DSP which processes it as if it were emitted in one of these rooms, and then sends an acoustic simulation of this room to all the speakers. In this way we can reproduce the acoustics of a cathedral and give the illusion of being there. It’s amazing!
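The acoustic augmentation described here is, at its core, convolution with a room’s impulse response. The sketch below is a minimal offline illustration of that principle, assuming a measured impulse response is available; the actual DSP works in real time, per microphone and per speaker path, which this simplified version does not attempt.

```python
import numpy as np

def auralize(dry, impulse_response):
    """Convolve a captured (dry) signal with a room impulse response,
    simulating how the same sound would be heard in that room.

    A real active-acoustics system would do this in real time for
    each microphone/speaker path; here we just compute it offline.
    """
    wet = np.convolve(dry, impulse_response)
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet  # normalise to avoid clipping
```

Swapping in the impulse response of a different room (say, a cathedral) is what changes the perceived acoustic, exactly as described above.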
This twin capacity from a single system, which makes it possible to recreate both an acoustic augmentation and a spatialization of the sound, which is itself processed in the acoustic augmentation, is truly a world first. We’re naturally very pleased to be able to host and share this experience.
In fact, immersive sound is not so much a revolution as an evolution, because it has existed for thousands of years. This work on virtual sources in space is an evolution of the immersive dimension of sound. The fact that we are able to immerse an audience, not just a single listener: this is the revolutionary development. We are leaving behind the stereo presentations we know and moving towards performances where sound in space will become the norm.
Can you tell us about the types of users for which this process is intended? What can it do specifically for each of them?
One major area that we didn’t immediately identify is the experience of the performer during the learning and rehearsal phases. This system is a fantastic tool for improving the experience of the performance, especially during rehearsals. Performers regularly work in difficult and tiring acoustic conditions, and this process can provide real enhancements.
This tool also makes sense in the world of theoretical concepts, as it allows us to illustrate how a performance will interact with and be affected by the acoustics in which it takes place. This is difficult to convey to performers because it’s purely related to their own experience. The strength of this system is that it allows performers to move quickly from one acoustic environment to another, making it possible to highlight any issues with the acoustics before the actual performance takes place.
This touches on the problem of aesthetics in the representation of works and the mode of expression and interaction with acoustics, which are difficult issues to illustrate in a Conservatoire today.
It’s also a tool for the creation of, and reflection on, sound in space for several other types of users. Improvisers approach it differently because it challenges them to think about using the space in new ways. Composers, particularly of film music and electronic music, are very curious about it, because electronic material manipulated in this way opens up possibilities very different from classical instrumentation.
Sound engineers also find it to be an interesting playground: they learn to master all these technologies and to master the recording, broadcasting, and post-production of sound in immersive situations.
One of the beauties of this project is indeed the way in which it engages with the teaching of the Conservatoire, intersecting with all areas from writing to composition, from production to mixing, recording and sound. And the link this creates is really very interesting, because music cannot be segmented, it only exists when it is treated in its entirety, from its writing to its broadcasting.
The audiovisual department is also involved in this project, which continues more than 25 years of research by the team here. This work, also linked to teaching with composers and students in the sound professions, is oriented towards the spatialisation of sound. They have recently developed an application that can simulate a 3D space over headphones. Although initially designed for headphones only, the protocol makes it very easy to export this development work directly to Plateau 1. Once exported to the stage, even without further adaptation, it already sounds quite incredible, and further adjustments make it possible to obtain an even more convincing rendering.
These same activities are also undertaken with the composition and electroacoustic students, which creates a pleasing transversality and a valuable pedagogical contribution from the support services to the student departments.
From a pedagogical point of view, it is also very beneficial to be able to make performers aware of the importance of sound and of feedback from the room. It is one of the Conservatoire’s major missions to train its entire population in listening.
It’s also a fantastic tool for the staff involved in research because it allows us, via different technologies, to support several projects carried out at the Conservatoire with various partners, and in particular a research project we are currently working on concerning the acoustics of Notre-Dame de Paris cathedral. So, it provides very valuable support that’s very much in line with what’s already going on at the Conservatoire.
Are you able to imagine the impact of this technology on people who listen to music?
We all have our listening experiences and, when we go to a show or a concert, we always expect similar experiences each time. Gradually, the expectations of listeners have progressed, which has led to a huge technological leap in the field of sound, driven by the audience requirement for better quality of sound.
This development is particularly relevant to the younger generation, which listens a lot to music through headphones. These young people will demand better sonic experiences.
We are not yet sufficiently aware of this new way of thinking about sound, in which sound is no longer ‘frozen’ in the material that we are going to broadcast. We are gradually changing our way of recording and broadcasting music to move towards something much more interactive, which will be reinterpreted by the listener. An experience that will not be unique and fixed, but an experience with which the listener will interact more.
Can these different user profiles work together on the same project using this tool?
For a production or residency project, Plateau 1 is a nerve center where composers, performers, engineers, musicians and visual artists will interact. Then eventually, commercial operations will also come in to provide solutions. It is effectively a laboratory dedicated to collective exploration.
It is not a turnkey residency center producing a finished product, but a place for experimentation. Thanks to this partnership, our collaborators are product designers, which is an asset because they bring concrete, reliable and high-quality contributions. But the organization remains that of a workshop, a place of creation.
We could very well imagine, for example, a virtual set in which a dancer triggers a sound-processing element with his or her movements, while musicians play live on the stage.
This system allows us to simulate not only acoustics but also distribution systems: it’s possible to reproduce the acoustics of other rooms in the Conservatoire and thus to prepare a mix on Plateau 1 for an event that will then take place in another room, enabling very thorough pre-production of the works. It’s important to think of this system as a device with a rendering engine capable of modelling anything you want.
Does this technology allow the realisation of projects that would otherwise not be possible?
The strength of this technology lies in the quality of its definition: 50 loudspeakers, well distributed around the listener, from which to reproduce rooms that are, in the end, only slightly less complex and less precise than the originals. This is essential.
To this should be added its spectral diffusion capacity: the range of representation and the speaker systems with a frequency response ranging from extremely low to extremely high are new concepts on a work stage. Today, these assets make this technology unique to the Conservatoire.
Nor should we forget that this immersive tool is designed to reproduce any place identically. The rendering engine allows for this quantification, and what is produced can easily be transposed to a room three or four times larger, simply by correctly setting the positions of the sources and speakers to give the same sound. So, there is both a unique side and a universal side.
What makes this process so effective is also everything and everybody that surrounds it: musicians, sound engineers, dancers, instrumentalists, our incredible range of instruments … all of this contributes to the richness of this project, it’s the whole infrastructure of the Conservatoire and this environment which is absolutely unique.
(Question to François Deffarges): How important is this collaboration with the CNSMDP for you?
We are very proud to have been asked and we didn’t hesitate for a second to get involved. We are very happy to be part of this adventure, which is taking place in one of the most prestigious schools in the world, coupled with a training program for sound professions that is among the best. It’s a perfect collaboration.
Are you aware of the existence of a similar project in another organisation?
It’s a process that’s being explored in other institutions, in Sweden and Vienna in particular, where others are looking into this question of active reverberation. As far as we know, this dual system that we have today does not exist elsewhere.
The strong point of the system is that it allows two types of use. On the one hand, it’s a standardised, documented and therefore shareable solution, but there is also a point of entry in this system that allows more experimental, less structured actions to be carried out and tested. Indeed, the work currently being carried out allows for a simple use of the stage: any teacher or student at the Conservatoire can use this space without risk and use the active reverberation tool unsupervised with a simple and well-thought-out interface.
But at the same time, a researcher with a PC and an interface under development can use the tool and test things that are still in the prototype stage. And that is the strength of this place: the stage can go from a basic home automation system to an extremely complex laboratory.
What are the next steps in this project?
One of our next tasks will be to document the work. We installed this system with a high degree of certainty and anticipation about all the facets of using this system, but we now realise that it affects a wide variety of users. We therefore have a duty to analyse these processes, to document the strengths and weaknesses, and to share information, because this is still a place for research and sharing. We must proceed step by step, and document and publish our findings.
We also have a duty to continue improving, and this must also be done by the industrial partners who are associated with the project. It’s not about delivering a turnkey product that is intended to be used along with an operational manual, but a question of rewriting the manual together, getting feedback from users and improving the tool on a daily basis.
The functioning of our tool has been stable since the beginning, but we are constantly improving it. It’s a long process which is, above all, educational, because it makes all the users of the site aware of their responsibilities. They are aware that they are participating in our journey by contributing their feedback and needs to the project.
In the field of research, and the connections that can be made with the project, we realised that a whole area of research was opening up on the relevance of current tools in the face of the challenge of spatialising sound.
We met young engineers specialising in virtual reality at the École des Mines who were very quickly able to invent interfaces better adapted to our future needs. We quickly identified the multidisciplinary nature of this project, and the importance of sharing experiences.
We have also set up a research group with teachers, audiovisual service agents and Yamaha/NEXO professionals, and we would also like to invite some students to join us. Sharing experiences is a real challenge, since these tools are fairly new. We believe in collective intelligence when it’s implemented in a voluntary group. The participants will share their successes and failures, their ideas, publish certain results and thus enable the project to progress. This hybrid group is very interesting, because there are few other places that bring all these users together.
We have been developing this solution for the last five years and today it is the baptism of fire that reveals the details that we hadn’t thought of, on the ergonomic or technical levels for example. This allows us to feed off feedback from users, who are a very demanding and diverse population, which is a huge advantage for us and for this project.
As part of the project, Plateau 1 will be equipped like a studio with a small video control room and three robotic cameras. It will therefore be possible to film on three different axes in the room and to broadcast live, providing an additional and easily accessible transmission medium.
For me, this provides a glimpse of what could be available in a very large school like the Paris Conservatoire, and an example of the type of equipment that should be installed routinely in large schools.
Who knows what tomorrow’s classrooms will look like, but we are working to better define what tomorrow’s learning, creation, teaching and workspaces could look like in an institution like ours. It’s a form of prototyping of a project that could be on a larger scale and develop further.
And if the equipment becomes more widely available, imagine studios with reverberation enhancement systems? Of course, not to the same extent and perhaps not with the same realism but, as we said earlier, it is much more conducive for musicians to have an acoustic context for their music.
This is an interesting – even a key moment in time, because it’s the moment of the fusion of acoustics and electroacoustics, which originally came from very different cultural worlds. It heralds a combining of these two worlds in a more open and less segmented culture.
Finally, three words to describe this partnership, this project, this relationship.
Enthusiasm, trust, creation.
Sharing, enrichment, teaching.
Everything has already been said, so I’ll just add one: success!
IN PARTNERSHIP WITH YAMAHA/NEXO
Ron Bakker / Delphine Hannotin – Yamaha Music Europe, design and tuning
Christophe Girres – NEXO / Alain Roy – Espace Concept, production
Denis Vautrin, Conservatoire’s Head of Sound education
Alexis Ling, Conservatoire’s Head of Audiovisual
François Deffarges, NEXO’s Head of Engineering