Using methods such as deep learning, artificial intelligence (AI) can analyze people's emotional feedback on music and assist with composition, arrangement, and singing. Recently, the Second World Music Artificial Intelligence Conference and the "Future Concert" were held in Beijing. The event was jointly organized by the Central Conservatory of Music and the Chinese Association for Artificial Intelligence, bringing together experts in music artificial intelligence, music and brain science, and music therapy, along with representatives of music-industry enterprises. It showcased cutting-edge achievements in the integration of music and artificial intelligence and explored future trends at the intersection of art and science.
Opening the "Future Concert"
"Welcome, everyone, to the 'Future Concert'. I am Yu Feng, the robot conductor of this conference." Under the spotlight, the humanoid robot waved to the audience and delivered the opening remarks.
This year's World Music Artificial Intelligence Conference kicked off with the "Future Concert", which invited Chinese musicians and artificial intelligence teams to co-create, employing techniques such as facial-expression and emotion recognition, 3D sound fields, machine listening, and AI-generated visuals across several modern music works.
At the concert, the work "Continuum" borrowed the concept of the "spacetime continuum" from physics, computing and transforming sound parameters through the Lorentz equations and presenting them through a 3D sound field. "Er Er - Si Gang" featured live performance on Wa folk instruments such as the "De", "Kou Xian", and "Ling", using gesture controllers and trajectory-tracking controllers to deform and reshape the sound. "The Starry Night" employed AI-based virtual instruments, using a deep neural network model to detect gesture information from guzheng performers and control different parts of the music in real time.
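The article does not explain how "Continuum" actually maps the Lorentz equations onto sound, so the following is only an illustrative sketch: the mapping, the choice of the speed of sound as the invariant limit, and all function names are assumptions for illustration, not the composers' method. One plausible reading is to Lorentz-transform the (time, position) coordinates of a sound event and let the results drive time-stretching and spatial panning:

```python
import math

C = 343.0  # speed of sound in air (m/s), used here as the invariant limit (an assumption)

def lorentz_gamma(v: float, c: float = C) -> float:
    """Lorentz factor gamma = 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def transform_event(t: float, x: float, v: float, c: float = C):
    """Lorentz-transform a (time, position) sound event.

    t' could drive the time-stretched onset of a sound grain,
    x' its pan position in a 3D sound field.
    """
    g = lorentz_gamma(v, c)
    t_prime = g * (t - v * x / c ** 2)  # dilated onset time
    x_prime = g * (x - v * t)           # contracted/shifted position
    return t_prime, x_prime

# A grain at t = 1.0 s, x = 10 m, "moving" at half the limit speed
t2, x2 = transform_event(1.0, 10.0, 0.5 * C)
```

At v = 0 the transformation is the identity; as v approaches C, gamma grows without bound and onsets are stretched ever further apart, which is one way such a mapping could produce audibly "relativistic" timing.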
"The fusion of art and science is amazing," Professor Shihab Shamma of the Department of Electrical and Computer Engineering and the Institute for Systems Research at the University of Maryland told this newspaper. "The performance was comparable to that of the world's top concerts, fully demonstrating the progress the Chinese academic community has made in music artificial intelligence."
Dai Qionghai, academician of the Chinese Academy of Engineering and chairman of the Chinese Association for Artificial Intelligence, said that combining music with technology is an innovative trend in the music field. Cross-disciplinary research on music, artificial intelligence, and brain science is of great significance and will have a profound impact on innovation in the music industry.
"The rapid development of artificial intelligence will open up new space for the development of music. Artificial intelligence has already affected the traditional sub-disciplines of music and is expected to bring innovation to fields such as composition and composition theory, music performance, musicology, music education, music technology, and music management," said Li Xiaobing, director of the Department of Music Artificial Intelligence and Music Information Technology at the Central Conservatory of Music.
The Combination of Art and Science
Behind the concert lie years of hard work by the academic community. In the keynote session, Chinese and foreign experts shared research progress in music, artificial intelligence, and brain science.
"AI has emerged in the field of artistic creation in recent years. Through technologies such as deep learning and generative adversarial networks, AI can learn from and imitate artworks and generate new creations. General-purpose large-model technology provides further possibilities for machine creativity," said Guo Yike, Chief Vice-President of the Hong Kong University of Science and Technology and Fellow of the Royal Academy of Engineering. He introduced that since 2021, the Hong Kong University of Science and Technology and Hong Kong Baptist University have collaborated to develop a "human-machine symbiotic art creation platform". The team has made multiple breakthroughs in vocal synthesis, facial-expression simulation, automatic choreography, and text-driven image and video generation.
"The brain perceives the rich acoustic environment of nature and beautiful music through the auditory system. Music signals have special time-domain and frequency-domain characteristics, and the auditory system extracts information and features from music through a series of complex processes. The brain forms its perception of music from these features, including higher-level cognition of musical emotion and aesthetics, and further forms musical memories," said Professor Wang Xiaoqin. Over the past 20 years, he noted, global brain-science research on how the brain processes music has developed rapidly. He and his team are currently studying the neural mechanisms by which the brain processes music.
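As a toy illustration of the frequency-domain feature extraction Wang describes (the function and test signal below are hypothetical examples, not taken from his research), a Fourier transform can pull the dominant pitch out of a raw audio signal:

```python
import numpy as np

def dominant_frequency(signal: np.ndarray, sample_rate: float) -> float:
    """Return the strongest frequency component of a signal via the FFT --
    a toy version of frequency-domain feature extraction."""
    spectrum = np.abs(np.fft.rfft(signal))               # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])             # frequency of the peak bin

# A 440 Hz sine (concert A) sampled at 44.1 kHz for one second
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
print(dominant_frequency(tone, sr))  # prints 440.0
```

Real auditory models are far richer (filter banks, temporal modulation, learned representations), but the principle is the same: the time-domain waveform is mapped into features, here a frequency spectrum, from which higher-level properties such as pitch can be read off.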
During the forum, experts held several discussions on the relationship between art and science.
"Music carries human perception and experience, and it also stimulates emotional feedback. In exploring the integration of music and artificial intelligence, an important direction is to make music created by artificial intelligence truly touch people's hearts, preventing AI music from being 'heartless'," Yu Feng, Dean of the Central Conservatory of Music, told this newspaper. In 2018, the Central Conservatory of Music proposed integrating music and artificial intelligence and created the Department of Music Artificial Intelligence and Music Information Technology; in 2019, it established the discipline of Music Artificial Intelligence and Music Information Technology, collaborating with top domestic and international artificial intelligence experts to jointly explore the contemporary music art of the future.
Zhu Songchun, dean of the Beijing Institute for General Artificial Intelligence and dean of the School of Intelligence Science and Technology at Peking University, proposed that the integration of science and art is a collision between rationality and sensibility; exploring the fusion of the intelligence disciplines with the humanities and arts is expected to "give machines a heart". He and his team are developing a new theory of structured music modeling that aligns the music generated by artificial intelligence algorithms and models with the aesthetics and cognition of human creators, promoting better application of AI music in fields such as music creation and video scoring.
Music therapy will play a greater role
Experts say that the combination of music and science has broad prospects for development.
Sun Maosong, a foreign academician of the European Academy of Sciences and executive vice president of the Institute for Artificial Intelligence at Tsinghua University, stated that in recent years music artificial intelligence has been applied in fields such as music generation, lyric writing, sound-source separation, technique recognition, and music analysis. Artificial intelligence has also been applied innovatively to restoring guqin scores and preserving ethnic-music data. As the academic community continues to make breakthroughs in cross-modal alignment among music, image, video, and text data, music artificial intelligence is expected to create even more possibilities.
Li Jiang, President of Huawei's Central Media Technology Institute, stated that artificial intelligence is already strong at melody, rhythm, harmony, polyphony, orchestration, and other aspects, but still faces challenges in fully conveying musical style and expressiveness. Going forward, with solid data processing and coordinated computing power, music artificial intelligence technology is expected to advance rapidly. By using large AI models to build a musical "skeleton" and improve the efficiency of music creation, supplemented by small AI models for fine-grained, customized adjustment, AI can better serve as an assistant to musicians and help realize music's expressive power.
Yu Feng stated that studying the therapeutic function of music is one of the breakthrough points in cross-disciplinary research between music and artificial intelligence. Music has been shown to affect people's emotions and sleep, but the mechanism and strength of these effects remain a black box. Through further research in brain science and music artificial intelligence, music therapy will come to play a greater role.
Israel Nelken, professor of neurobiology at the Hebrew University of Jerusalem and director of the Edmond and Lily Safra Center for Brain Sciences, told this newspaper that the neural mechanisms supporting music processing in the human brain may have broad applications in sound processing, and that such mechanisms are expected to exist in the animal kingdom as well. "There are still many unresolved questions in the interdisciplinary field of music and neuroscience. We look forward to collaborating with researchers at Chinese universities to uncover more mysteries of sound and produce more valuable research results," Nelken said.