Modern technology has been incorporated into almost everything we do, and the way people listen to music is no exception. There is no denying that technological innovation has hugely impacted today’s music sector. But what might these developments mean for the music itself? Could we soon be serenaded by robots?
Artificial intelligence (AI) is continuing to develop rapidly. In the entertainment sector, AI built into smart home speakers and interfaces such as Amazon Echo, Google Home, and DingDong is already transforming home entertainment, helping users choose songs and recommending content.
As AI technology gets smarter, it will further revolutionise music recommendation, building intelligently customised playlists and using algorithmic discovery to reshape consumption patterns.
Machine-learning technology is also being used to drive fan engagement via music-centred chatbots. It improves streaming services’ cataloguing capabilities, helps generate background tracks for presentations, and supports talent-spotting by scanning vast amounts of data to identify up-and-coming online artists.
The music industry is also gradually embracing algorithm-generated compositions. A notable example is Sony’s Flow Machines project, a system that has successfully created two complete pop songs.
Another example of how AI has been used in music comes from Switzerland. Researchers at the École Polytechnique Fédérale de Lausanne (EPFL) have produced a deep-learning algorithm that can generate brand-new melodies mimicking a given musical style or genre.
What is new about the deep artificial composer (DAC) is that the AI learns to compose complete melodies without any music theory, relying exclusively on a large database of existing music. The DAC extracts the style of the music by analysing how a given piece moves from one note to the next, and then calculates the probability of the following note’s pitch and duration.
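The DAC itself relies on deep learning, but the underlying idea of predicting the next note from transition statistics can be illustrated with a far simpler first-order Markov model. The sketch below is purely illustrative and is not EPFL's implementation; the tiny corpus and note representation are hypothetical stand-ins for a real database of melodies.

```python
import random
from collections import defaultdict

# Each note is a (pitch, duration) pair. This tiny hypothetical corpus
# stands in for the large database of existing music the DAC learns from.
corpus = [
    [("C4", 1), ("E4", 1), ("G4", 2), ("E4", 1), ("C4", 2)],
    [("C4", 1), ("E4", 1), ("G4", 1), ("A4", 1), ("G4", 2)],
]

# Record, for every note, the notes observed to follow it.
# Sampling from this list reproduces the transition probabilities.
transitions = defaultdict(list)
for melody in corpus:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start, length, seed=0):
    """Generate a melody by repeatedly sampling the next note in
    proportion to how often it followed the current one in the corpus."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: this note never had a successor
            break
        melody.append(rng.choice(options))
    return melody

print(generate(("C4", 1), 8))
```

A deep model like the DAC replaces the one-step lookup table with a learned representation of much longer musical context, which is what lets it capture style rather than just local note pairs.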
The DAC could one day generate music for multiple instruments in real time, with applications ranging from video games to helping composers in the creative process.
On the whole, human input is still required to set the parameters, inspire the style, and polish the machine-generated final products. The rapid progression of AI, and the examples outlined above, nevertheless raise an important question: will AI composers eventually displace human musicians?
Music is as much about identity and backstory as it is about the final result. Even so, technology will prove vital, supporting the creative process and empowering human composers to try new ideas with greater ease and speed.
Are we that far from the age of real robot artists? Only time will tell.