Will Rappers Be the First to Lose Their Jobs to AI?

baoshi.rao wrote:

    As AI gradually integrates into daily life, AI-powered music composition is becoming a trend in fields like education, art performance, and entertainment services. But what lies behind AI-generated music, and how will it impact the music industry?

    Recently, the U.S. digital research agency Space150 conducted an interesting experiment: using artificial intelligence (AI) technology to mimic the voice and musical style of the famous rapper Travis Scott, creating a rap robot named 'Travis Bott.'

    The goal of the experiment was to explore the creative limits of AI. 'Travis Bott' successfully composed a song titled 'Jack Park Canny Dope Man,' with both lyrics and melody generated by the AI itself. Space150 also used deepfake image-synthesis technology to produce a music video for the song.

    Unlike earlier AI-generated songs, this one, after extensive training on a human artist, comes remarkably close to the sound of a real performer. Comments under the music video included 'better than real Travis' and 'Pretty amazing, this is only the beginning,' along with half-joking fears of AI enslaving humanity (though the commenters said they'd still buy tickets to see it).

    Technically, Space150 employed one neural network to create the melody and percussion accompaniment, and fed Travis Scott's lyrics into a separate 'text generator model.' Two weeks later, 'Travis Bott' began producing rhyming lyrics of its own.
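
    Space150 hasn't published Travis Bott's internals, so as a hedged illustration only, here is a minimal character-level text generator of the kind that description suggests, written in Keras. The corpus file lyrics.txt, the layer sizes, and the sampling temperature are all assumptions for the sketch, not details from the project.

    ```python
    # Minimal sketch of a character-level "text generator model" for lyrics.
    # Everything here (corpus file, layer sizes, temperature) is illustrative;
    # Space150 has not published Travis Bott's actual architecture.
    import numpy as np
    import tensorflow as tf

    text = open("lyrics.txt", encoding="utf-8").read().lower()  # hypothetical corpus
    chars = sorted(set(text))
    char_to_idx = {c: i for i, c in enumerate(chars)}

    seq_len = 40
    # Slice the corpus into overlapping 40-character inputs and next-char targets.
    X = np.array([[char_to_idx[c] for c in text[i:i + seq_len]]
                  for i in range(len(text) - seq_len)])
    y = np.array([char_to_idx[text[i + seq_len]]
                  for i in range(len(text) - seq_len)])

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(len(chars), 64),
        tf.keras.layers.LSTM(128),
        tf.keras.layers.Dense(len(chars), activation="softmax"),
    ])
    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
    model.fit(X, y, batch_size=128, epochs=20)

    def generate(seed, n=200, temperature=0.8):
        """Sample new text one character at a time from the model."""
        out = seed
        for _ in range(n):
            x = np.array([[char_to_idx[c] for c in out[-seq_len:]]])
            logits = np.log(model.predict(x, verbose=0)[0] + 1e-9) / temperature
            probs = np.exp(logits) / np.sum(np.exp(logits))
            out += chars[np.random.choice(len(chars), p=probs)]
        return out
    ```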

    In terms of results, Travis Bott’s imitation of Travis Scott is almost indistinguishable, fully capturing the most prominent stylistic and charismatic traits of Scott’s work—so much so that it was joked the AI could join Spotify’s popular rap playlist, Rap Caviar. This project also further demonstrates the advancements in artificial neural networks (ANNs), paving the way for exploring AI’s future applications in music.

    Undeniably, AI is increasingly embedded in our daily lives. Against the backdrop of the 'Internet+' and 'Industry 4.0' era, AI-powered composition is becoming ubiquitous in education, art, and entertainment. Given how convincing AI's musical output has become, a question arises: will human musicians face AlphaGo-level disruption in their coexistence with AI music?

    In reality, AI composition (or algorithmic composition) is nothing new, and replicating Travis Scott isn’t particularly difficult.

    As early as 2016, researchers Gaëtan Hadjeres and François Pachet at Sony's Computer Science Laboratory (CSL) developed a neural network called 'DeepBach.' They trained it on 352 Bach chorales, augmented by transposition to 2,503 training examples, enabling it to generate new chorales in Bach's style.

    The first AI virtual composer to gain global recognition was AIVA (Artificial Intelligence Virtual Artist), launched in 2016 by the startup Aiva Technologies.

    Initially focused on classical and film scores, AIVA has since expanded to other genres like rock and pop.

    As a virtual musician, AIVA is legally registered with France and Luxembourg’s authors' rights society (SACEM) and holds its own copyright. In the AI field, replicating the styles of one or multiple musicians has likely been underway for some time.

    Currently, whether it's DeepBach, AIVA, or Travis Bott, AI composition relies on deep learning with artificial neural networks. Programmers construct a multi-layered network in which each layer transforms the data it receives before passing the result to the next.

    For example, DeepBach was trained on 352 of Bach's works, AIVA on a vast database of classical composers such as Bach, Beethoven, and Mozart, while Travis Bott was fed Travis Scott's music, vocals, and sound effects.

    After data input, the neural network identifies patterns across the inputted works, forming an understanding of musical style.

    However, this stylistic understanding isn't the final product; it's used to make predictions. As training proceeds, the model's stylistic predictions are repeatedly tested against held-out validation datasets.

    These datasets provide feedback on prediction accuracy, which is fed back into training. Through rapid, iterative learning, the AI's predictive ability strengthens until it has internalized the stylistic patterns in the training data and can compose original pieces.
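
    As a rough sketch of that predict-and-validate loop, the snippet below trains a toy next-note model with a held-out validation split; the early-stopping callback plays the role of the "feedback on prediction accuracy" described above. The vocabulary size and the random placeholder data are assumptions for illustration, not any real system's training set.

    ```python
    # Toy version of the train/validate feedback loop described above.
    # The data is random placeholder; a real system would use encoded scores
    # (e.g., Bach chorales for DeepBach or Travis Scott tracks for Travis Bott).
    import numpy as np
    import tensorflow as tf

    vocab = 64      # number of distinct note/duration tokens (illustrative)
    seq_len = 32
    X = np.random.randint(0, vocab, size=(5000, seq_len))  # placeholder corpus
    y = np.random.randint(0, vocab, size=(5000,))          # next-token targets

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab, 32),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(vocab, activation="softmax"),  # next-note distribution
    ])
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer="adam", metrics=["accuracy"])

    # Hold out 20% of the corpus for validation. After every pass, the model's
    # predictions are scored on that held-out set; training stops once the
    # validation score stops improving -- the "feedback" loop in code form.
    model.fit(X, y, validation_split=0.2, epochs=50,
              callbacks=[tf.keras.callbacks.EarlyStopping(
                  monitor="val_accuracy", patience=3,
                  restore_best_weights=True)])
    ```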

    The breakthrough with 'Travis Bott' lies in its input not just of Travis Scott’s music but also his vocals and sound effects, advancing the integration of text and audio in deep learning.

    Deep learning, though only a simplified model of human neural structure, can to some extent 'think' like a human.

    It enables AI to comprehend and model highly abstract concepts, such as melodic patterns or facial features.

    However, in the evolution of AI music, neural networks are just one of many techniques, each with strengths and weaknesses.

    On the plus side, neural networks excel in self-learning, associative memory, and rapid optimization compared to other algorithms.

    But their drawbacks are also evident:

    In practice, even the most advanced deep learning algorithms require weeks to fully train a neural network. Currently, AI composition lacks a definitive technical solution, often relying on hybrid algorithms.

    Moreover, AI composition has broader limitations. As mentioned earlier, it fundamentally relies on big data and cloud computing. The AI generates music by extracting and recombining features from a vast database based on programmer-defined parameters.

    This raises critical questions: How does the database distinguish between copyrighted and public-domain data? How do database builders protect copyrighted material, and how do users avoid infringement?

    Currently, AI composition still struggles to autonomously address these issues, with copyright compliance often depending on programmer intervention.

    In 2017, Aiva Technologies’ explanation for focusing AIVA on classical music highlighted this deliberate design: 'The classical music database used to train Aiva avoids copyright issues because the rights have expired.'

    For Travis Bott, obtaining Travis Scott's authorization to sample his work and likeness is a necessary first step, but even then, how does the AI avoid plagiarism in its output?

    This challenge contributes to the uneven quality of AI-composed music today, where plagiarism may be hard to avoid.

    Plagiarism checkers and their criteria are crucial here, but even human music plagiarism standards remain inconsistent—let alone for AI compositions.

    Even if an AI overcomes these hurdles to produce a truly original, non-infringing work, it still faces copyright certification issues.

    According to China's Copyright Law, copyright is defined as "the rights granted by copyright law to civil subjects over works and related objects." Here, civil subjects refer to citizens, legal persons, or unincorporated organizations. AI, lacking recognized legal identity, faces complexities in acquiring or relinquishing rights, making infringement disputes difficult to resolve.

    For example, Microsoft's Xiaoice independently created the poetry collection Sunshine Lost the Glass Window, which was widely pirated and improperly cited upon release. Such typical infringement cases remain unresolved due to the absence of clear legal provisions, leaving copyright ownership ambiguous.

    Notably, while China lags in this area, countries like the UK, South Africa, and New Zealand have explicitly recognized AI copyrights. The US, Japan, and Australia, though lacking statutory provisions, have made judicial attempts. However, as a civil law country, China relies on statutory law rather than case law, making it essential to clarify AI works institutionally.

    Globally, achieving widespread recognition for AI works remains challenging due to varying legal and technological standards. A workaround involves crediting human artists in AI-generated works. For instance, AIVA's 2018 album Eva (Vol. 3 from Artificial Composer Aiva) credited human contributors like "feat. Aiva Sinfonietta Orchestra, Brad Frey," enabling commercial use.

    Replicating Travis Scott's music with AI is feasible, but resolving copyright issues and advancing AI technology is a long-term endeavor. AI music, a burgeoning industry, traces its roots to 1974 with the Rader system, which used rule-based AI for melody and harmony generation. Subsequent milestones include the Snobol and Choral systems, and later, neural network-based systems like Musact and Harmonet.

    Modern AI composition took off with Google's Magenta, announced in 2016, an open-source project using TensorFlow to create music and art. Notable developments include Amper Music (2017), Sony's Flow Machines (2018), and OpenAI's MuseNet (2019). Leading AI music players include Google, Sony, AIVA, and OpenAI, with Jukedeck acquired by ByteDance in 2019.

    In China, companies like Baidu, Tencent, Alibaba, and NetEase are investing in AI music. Ping An Technology, collaborating with universities, won an AI composition contest in 2018. Microsoft's AI music technology, capable of composing, arranging, and singing, has been commercialized. Apps like "Whale Song" offer AI-powered chorus effects.

    AI music still faces challenges in algorithms and copyright. Hybrid algorithms and personalized music customization are key trends: while AI can mass-produce music, personalized customization enhances originality. Collaboration between AI and human musicians offers a short-term workaround for copyright issues, with AI boosting creativity and efficiency; reports suggest AI-human collaboration can be up to 20 times faster than human-only creation.

    As Alibaba Music's chief scientist noted in 2018, AI can inspire artists during creative blocks, offering snippets that spark ideas. This synergy highlights AI's potential to revolutionize music creation.

    With the deepening of AI technology in deep learning, its increasing proficiency in understanding human emotions, and the gradual refinement of legal definitions regarding computer-generated works and entities, the current status of AI as an auxiliary tool for human musicians may not last long. After all, neither technology nor laws remain static.

    From streaming platforms using AI for intelligent recommendations to guide listeners' musical tastes, to scientists creating AI composers that once again disrupt the music industry, people have mixed feelings about AI's development.

    On one hand, AI's involvement can make the music industry more robust and efficient. On the other hand, the sales and quality of music composed by machines of our own making might put many human musicians to shame.

    In the long run, the relationship between AI and human musicians or radio DJs may not be an either-or scenario. Just like the current competition between digital music and vinyl records, while vinyl's decline is evident, its value is still recognized by the public and even cherished by a niche audience.

    In other words, technological progress and comprehensive industry advancement will most likely make AI music a standard component of music creation. Of course, people's expectations for human musicians' originality and aesthetic value in music will also increase.

    However, whether music is generated by AI or created by humans, from the birth of music to today's diverse musical products, its core purpose has remained the same: to serve listeners. As long as that core holds, the relationship between humans and music won't fundamentally change.

    Ultimately, artificial intelligence still originates from human wisdom. Rather than saying musicians will lose their jobs or face AlphaGo-style obsolescence, it's more accurate to describe this as a technology-driven industry transformation. In choosing among works and music services, listeners simply have more diverse options.
