“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.” —Alan Mathison Turing, often called the father of AI
Artificial intelligence (hereafter AI) has a fascinating history that began in the 1950s and has evolved tremendously since.
The idea of AI stems from the goal of creating “thinking machines” and became formalized as a field of study during the 1950s.
Alan Turing, a British mathematician considered one of the founding fathers of AI, described the theoretical “Turing machine” in 1936 and, in his 1950 paper “Computing Machinery and Intelligence,” proposed the imitation game—now known as the Turing test—as a way to evaluate whether a machine could “think”: in other words, whether it could imitate human behavior in a way indistinguishable from a real person.
AI as a formal field was born at the Dartmouth Conference in 1956, where John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon brought together experts in mathematics, computer science, and neuroscience. McCarthy coined the term “artificial intelligence.” During that decade, early progress was made in programming machines to perform cognitive tasks.
From the 1950s to the 1970s, AI researchers aimed to develop algorithms that could solve logical and mathematical problems and machines that could play games like chess.
Despite initial progress, AI went through several disillusionment phases known as “AI winters.” During these periods, expectations for AI were very high, but real-world results were slow and difficult to achieve. This led to a lack of funding and interest in research for several years.
With the advent of more powerful computing and the availability of vast amounts of data, AI experienced a renaissance starting in the 1990s.
Researchers began to realize that “machine learning” was key to AI’s advancement. New approaches emerged that enabled machines to learn from data rather than being explicitly programmed for every task.
The real breakthrough in modern AI came in the 2010s thanks to three key factors:
- Increased computing power: With the development of more powerful graphics processing units (GPUs) and cloud computing, researchers could train much larger and more complex models.
- Big Data: Massive data collection from various sources—social media, sensors, smart devices—provided the necessary fuel to train more accurate AI models.
- Deep Learning: Deep neural networks revolutionized the field. With architectures such as Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), AI reached new milestones in tasks such as computer vision, machine translation, and speech recognition.
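To make the CNN idea concrete, the sketch below implements the core operation of a convolutional layer in plain Python: sliding a small filter (kernel) over an image and summing elementwise products. This is a minimal illustration only (most deep-learning libraries actually compute cross-correlation, as here, and add padding, strides, and learned weights); the example image and edge-detecting kernel are made up for demonstration.

```python
def convolve2d(image, kernel):
    """'Valid' 2-D convolution (no padding, stride 1): the kernel slides
    over the image and each output cell is the sum of elementwise
    products between the kernel and the image patch beneath it."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):          # each vertical kernel position
        row = []
        for j in range(iw - kw + 1):      # each horizontal kernel position
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A tiny vertical-edge-detecting kernel applied to an image whose
# right half is bright: the output peaks exactly at the edge.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(convolve2d(image, kernel))  # → [[0, 2, 0], [0, 2, 0]]
```

In a real CNN, many such kernels are learned from data rather than hand-designed, and their stacked outputs let the network recognize progressively more complex patterns.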
Today, AI is used in a variety of fields, such as:
- Virtual assistants like Siri, Alexa, and Google Assistant
- Voice and text recognition
- AI-assisted medical diagnostics
- Autonomous vehicles (e.g., Tesla, Lyft [Toyota], Aurora, Cruise [General Motors], Mobileye [Intel], Argo AI [Ford], AutoX, and Baidu)
- Generative AI in art, music, and text through models like GPT-3, which inspired this article
- Personalized recommendations on platforms like Netflix, Amazon, and YouTube
Are AI Creations Currently Protected by Copyright?
Several countries have implemented or are developing legislation to regulate the use of artificial intelligence, aiming to balance innovation with the protection of fundamental rights and safety.
In March 2024, the European Parliament passed legislation aimed at regulating AI use across various fields, including music. The regulation seeks to protect artists’ rights by prohibiting the unauthorized use of AI to create content that imitates their work without consent. It also imposes transparency requirements on AI systems used in musical creation, including risk assessments and human oversight. (source: moosbox.com)
The European Union has proposed harmonized AI rules and modifications to existing IP-related legislation. The proposal addresses data protection and copyright issues in the context of AI, including mechanisms to help copyright holders detect potential infringements and facilitate future licensing.
Transparency obligations are also planned for AI companies regarding the data used to train their models. (source: elpais.com)
The Government of Spain has drafted the “Artist Statute” to protect actors from the unauthorized use of their voices in AI training. This statute affirms artists’ rights to compensation if their image or voice is used in AI contexts. (source: elpais.com)
Spain’s Ministry of Culture also proposed a Royal Decree to regulate generative AI, aimed at protecting authors’ rights. The proposal includes collective licensing provisions, allowing rights management entities to issue non-exclusive licenses even without the express consent of all rights holders. However, some artists argue this could benefit large tech companies and harm their working conditions. (source: elpais.com)
In Brazil, in April 2024, various cultural and creative industry groups—including the Brazilian Union of Composers, the Brazilian Intellectual Property Association, and the Brazilian Association of Music and Arts—sent an open letter to the government requesting specific regulations on AI use in music creation. They voiced concerns about the unauthorized exploitation of copyrighted works in AI model training. (source: institutoautor.org)
In February 2025, Colombia approved a new public policy to promote AI, investing approximately 480 billion pesos through 2030. While the initiative encourages research and ethical AI use, it has been criticized for lacking specific regulations on data protection and human rights—especially those tied to music and intellectual property. (source: elpais.com)
In July 2024, Ecuadorian lawyer Ramiro Rodríguez gave a talk titled “Music in the Digital Age: AI and Copyright” at the Ecuador Audio and Sound Conference. His presentation highlighted the legal implications of AI in musical creation and the need to adapt to technological changes while protecting creators’ rights. (source: uartes.edu.ec)
While not exclusive to Latin America, legislative efforts like the “NO FAKES Act” in the U.S. aim to prohibit unauthorized use of artists’ voices and likenesses for AI training. This type of regulation could influence similar initiatives in Latin America. (source: humanartistrycampaign.com)
In November 2024, the U.S. Congress introduced the “Transparency and Responsibility for Artificial Intelligence Networks Act,” which would require disclosure of the sources of content used to train AI tools. Though a U.S. initiative, it may serve as a reference for legislation in Latin America. (source: institutoautor.org)
These examples reflect the challenges and efforts different countries and regions face in attempting to regulate AI’s impact on music and copyright, striving to balance innovation with the protection of creators’ rights.
Intellectual Property: Unauthorized Use of Voice in AI Applications
In recent years, the use of AI in music creation has sparked debate about copyright and the unauthorized use of artists’ voices. Notable cases include:
- “NostalgIA” by FlowGPT (2023): Chilean producer MauryCeo released “NostalgIA,” a track featuring AI-generated imitations of the voices of Bad Bunny, Daddy Yankee, and Justin Bieber. The song went viral on platforms like TikTok, racking up millions of plays before being removed at the request of Bad Bunny, who publicly criticized the unauthorized use of his voice.
- Platforms like Udio and Suno AI have faced criticism and legal actions for allegedly using copyrighted music to train their AI models. The Recording Industry Association of America (RIAA) sued these companies for massive copyright infringement, claiming they trained their systems using protected material without authorization.
- AI-generated versions of iconic songs have been produced without consent, such as a version of “Soy un truhán, soy un señor” by Julio Iglesias, where both his voice and image were altered using AI—raising concerns over copyright and image rights violations.
These incidents have fueled discussions around the need for stronger regulations to protect artists from unauthorized AI use in music. Many countries have begun to make progress in this area.
Ongoing Challenges
Despite AI’s remarkable progress, it still faces significant challenges, including:
- Copyright infringement: Unauthorized use of songs and voices violates the intellectual property rights of artists, songwriters, producers, and others.
- Creation of false or misleading content: Voice cloning and AI music generation technologies can produce fake audio or videos (deepfakes), potentially misleading the public or manipulating opinions.
- Privacy and consent: Unauthorized use of human voices, especially when cloned or recreated without explicit consent, raises serious privacy concerns.
- Disinformation and manipulation: AI can generate voices of public figures or celebrities, creating fake statements that appear real.
- Impact on the music industry: AI-generated music may displace human artists, significantly affecting the music industry, including song production and distribution.
- Propagation of harmful or discriminatory content: AI can be used to generate voices that spread hate speech, racism, or xenophobia, with serious social consequences.
- Compromise of audio authenticity: AI-manipulated recordings can undermine the reliability of audio evidence in legal proceedings.
- Security risks: Voice cloning poses threats to systems that use voice authentication, increasing the risk of fraud or identity theft.
- Displacement of artists and creators: Artists may feel pushed out by AI’s ability to generate music and voices, potentially creating inequality in the creative economy.
- Advertising manipulation: Brands might use AI to create songs or voices imitating celebrities without consent, leading to the exploitation of public figures’ images.
- Ethical and moral dilemmas: Using voices and songs without permission, especially for unintended or harmful purposes, creates ethical conflicts.
Each of these challenges highlights the urgent need for stronger legal frameworks and ethical debates around AI’s use in content creation.
AI has come a long way—from its humble beginnings in the 1950s to becoming a central technology in modern life.