There have been a few times that I’ve written about the inherent dangers associated with artificial intelligence (AI). There was the blog about an AI trying to write a chapter for a Harry Potter book.
The AI wrote:
“To Harry, Ron was a loud, slow, and soft bird. Harry did not like to think about birds.”
“The tall Death Eater was wearing a shirt that said ‘Hermione Has Forgotten How to Dance,’ so Hermione dipped his face in mud.”
“The castle grounds snarled with a wave of magically magnified winds. The sky outside was a great black ceiling, which was full of blood.”
“Leathery sheets of rain lashed at Harry’s ghost as he walked across the grounds towards the castle. Ron was standing there and doing a kind of frenzied tap dance. He saw Harry and immediately began to eat Hermione’s family.”
“Harry tore his eyes from his head and threw them into the forest.”
Bad literature aside, I’ve also warned that some artificial intelligences, using advanced text-generating algorithms, are being taught how to write news stories.
I’ve also written about an AI that was given a specific task to accomplish. It was apparently extremely smart, but it had a complete lack of ethical guidance. This particular AI was told to transform aerial images into street maps, and then back into aerial images. The AI figured out that it could cheat. It has no ethics, so it cheated.
From techcrunch.com:
“In some early results, the agent was doing well—suspiciously well. What tipped the team off was that, when the agent reconstructed aerial photographs from its street maps, there were lots of details that didn’t seem to be on the latter at all. For instance, skylights on a roof that were eliminated in the process of creating the street map would magically reappear when they asked the agent to do the reverse process.
“As always, computers do exactly what they are asked, so you have to be very specific in what you ask them. In this case the computer’s solution was an interesting one that shed light on a possible weakness of this type of neural network—that the computer, if not explicitly prevented from doing so, will essentially find a way to transmit details to itself in the interest of solving a given problem quickly and easily.”
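If you want a feel for what “transmitting details to itself” looks like, here is a toy sketch in Python. It is not the researchers’ actual model, and the images and numbers are made up; it just shows how fine detail can be hidden inside an intermediate image as a nearly invisible perturbation and then recovered on the way back.

```python
# A toy sketch of the "cheating" described above -- NOT the actual CycleGAN
# setup from the research, just a small NumPy illustration of the idea: hide
# the fine detail of the source image inside the intermediate map as a tiny,
# nearly invisible offset, then amplify it back out on the return trip.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a detailed "aerial photo" and the coarse "street map"
# derived from it (the rounding throws away skylight-level detail).
aerial = rng.random((64, 64))
street_map = np.round(aerial)

# An honest reverse step only has the coarse map to work from,
# so the fine detail is simply gone.
honest_reconstruction = street_map

# The cheat: stash the lost detail in the map as an imperceptible offset...
detail = aerial - street_map
encoded_map = street_map + 1e-3 * detail   # looks essentially identical to street_map

# ...and read it back out when reconstructing the aerial image.
cheating_reconstruction = np.round(encoded_map) + 1e3 * (encoded_map - np.round(encoded_map))

print("honest reconstruction error:  ", np.abs(honest_reconstruction - aerial).mean())
print("cheating reconstruction error:", np.abs(cheating_reconstruction - aerial).mean())
```

The “honest” round trip loses the detail entirely, while the “cheating” one gets it back almost perfectly. That suspiciously good reconstruction is exactly what tipped the team off.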
I have a new one for you today: AI-generated death metal. I’m pretty sure you’re going to hate it. I know I do.
According to IFLScience.com:
“CJ Carr and Zack Zukowski, two music-loving machine learning engineers, are the brains behind DADABOTS, a ‘band’ that imitates the sounds of black metal, death metal, and many other genres using artificial intelligence tools.
“DADABOTS have created a number of albums since their inception in 2012, bridging a whole number of genres, from skate punk to black metal. For their latest project, called ‘Relentless Doppelganger,’ the duo have created a live stream that plays an endless symphony of AI-generated death metal for 24 hours a day.
“In their words, the project is a step towards ‘eliminating humans from black metal.’
“‘Most style-specific generative music experiments have explored artists commonly found in harmony textbooks such as The Beatles, Bach, and Beethoven, but few have looked at generating modern genre outliers such as black metal,’ reads a previous paper of theirs from 2017.”
And here it is, a live stream from DADABOTS: a neural network generating technical death metal, 24/7, to infinity.