Barack Obama: “For elevator music, AI is going to work fine. Music like Bob Dylan or Stevie Wonder, that’s different”::Barack Obama has weighed in on AI’s impact on music creation in a new interview, saying, “For elevator music, AI is going to work fine”.
There is no way this ages well.
I think the statement was more about the impact, which will depend on each person’s subjective experience.
Personally I agree. Even if AI could produce identical work, the impact would be lessened. Art is more meaningful when you know it took time and was an expression/interpretation by another human (rather than a pattern prediction algorithm Frankenstein-ing existing work together). Combine that with the volume of AI content that’s produced, and the impact of any particular song/art piece is even more limited.
I’d say art is more meaningful when it’s a unique experience. It’s like those myths about glassmakers being killed or blinded after the cathedral is finished so that no one can replicate the glass color… without the killing.

People are social; if enough people feel the same way about one thing, it’ll succeed. It doesn’t matter where it came from or how it was made, like how people can still admire and appreciate nature. Or maybe the impact will be that it reduces all impacts. Every group and subgroup might be able to have their own thing.
I don’t know. I think Obama kind of nailed it. AI can create boring and mediocre elaborations just fine. But for the truly special and original? It could never.
For the new and special, humans will always be required. End of line.
At this point I want a calendar of the date when people say “AI could never” do something - like “AI could never explain why a joke it’s never seen before is funny” (March 2019, say) - and the date when it happens (in that case, April 2022).
(That “explaining the joke” bit is actually what prompted Hinton to quit and switch to worrying about AGI sooner than expected.)
I’d be wary of betting against neural networks, especially if you only have a casual understanding of them.
I mean, the limitations of LLMs are very well documented; they aren’t going to advance a whole lot more without huge leaps in computing technology. There are limits on how much context they can store, for example, so you aren’t going to have AIs writing long epic stories without human intervention. And they’re fundamentally incapable of originality.
General AI is another thing altogether that we’re still very far away from.
Nearly everything you wrote is incorrect.
As an example, rolling context windows paired with RAG would easily allow for building an implementation of LLMs capable of writing long stories.
And I’m not sure where you got the idea that they were fundamentally incapable of originality. This part in particular tells me you really don’t know how the tech is working.
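To make the rolling-window-plus-RAG idea concrete, here’s a toy sketch (all the names are made up for this comment; the word-overlap “retrieval” stands in for a real vector search, and the built prompt would be handed to an actual LLM):

```python
from collections import deque

class StoryContext:
    """Toy sketch: a rolling window of recent paragraphs, plus naive
    retrieval over everything that has scrolled out of the window."""

    def __init__(self, window_size=3):
        self.window = deque(maxlen=window_size)  # recent paragraphs
        self.archive = []                        # older paragraphs (the RAG store)

    def add(self, paragraph):
        if len(self.window) == self.window.maxlen:
            self.archive.append(self.window[0])  # oldest falls into the archive
        self.window.append(paragraph)

    def retrieve(self, query, k=2):
        # Stand-in for real vector search: rank archived paragraphs by word overlap.
        q = set(query.lower().split())
        scored = sorted(self.archive,
                        key=lambda p: len(q & set(p.lower().split())),
                        reverse=True)
        return scored[:k]

    def build_prompt(self, query):
        # What the model would see: relevant old context + the recent window.
        return "\n".join(self.retrieve(query) + list(self.window))

ctx = StoryContext(window_size=2)
for p in ["Alice finds a key", "The key opens a door",
          "Bob appears", "They travel north"]:
    ctx.add(p)
print(ctx.retrieve("where is the key", k=1))  # pulls the old key paragraph back in
```

So at every step the model’s view is retrieved older context plus the recent window - it isn’t limited to whatever still happens to fit in the window.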
A rolling context window isn’t a real solution and will not produce works that even come close to matching the quality of human writers. That’s like having a writer who can only remember the last 100 pages they wrote.
The tech is trained on human-created data. Are you suggesting LLMs are capable of creativity and imagination? Lmao - and you try to act like I’m the one who’s full of shit.
That’s why you pair it with RAG.
They are trained by iterating through network configurations until there are diminishing returns on how accurately they can complete that human-created data.
But they don’t just memorize the data. They develop the capabilities to extend it.
So yes, they absolutely are capable of generating original content that’s not in the training set, as has been demonstrated over and over: explaining jokes not found in the training data, solving riddles not found in it, or combining different concepts into a new synthesis not found in the original data.
What do you think it’s doing? Copy/pasting or something?
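For a toy illustration of extending patterns rather than copy/pasting - this is just a character-level Markov chain I wrote for this comment, nothing like a real LLM, but the principle is the same:

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Learn which character can follow which, across all training strings."""
    follows = defaultdict(list)
    for text in corpus:
        for a, b in zip(text, text[1:]):
            follows[a].append(b)
    return follows

def generate(follows, start, length):
    """Greedily chain learned transitions. (Real models sample, with far
    richer context than a single character - but same idea.)"""
    out = start
    while len(out) < length and follows.get(out[-1]):
        out += follows[out[-1]][0]
    return out

# The training set contains "ab" and "bc", but never "abc".
model = train_bigrams(["ab", "bc"])
print(generate(model, "a", 3))  # prints "abc" - a string not in the training data
```

The model never stored any document; it learned transitions (“a can be followed by b”, “b can be followed by c”) and stitched them into something new. Scale that up to billions of parameters over thousands of tokens of context and you get novel synthesis, not retrieval.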
I think it will eventually become obsolete, because we keep changing what ‘AI’ means. But current AI largely just regurgitates patterns; it doesn’t yet have a way of ‘listening’ to a song and actually judging whether it’s good or bad.
So it may expertly regurgitate the pattern that makes up a good song, but humans spend a lot of time listening, perfecting every little aspect, before something becomes an excellent song, and I feel like that will be lost on the pattern-regurgitating machine if it’s forced to deviate from what a human composed.
I have seen a couple of successful artists in different genres admit to using AI to help them write some of their most popular songs, and describe its use in the songwriting process. You hit the nail on the head with AI not being able to tell if something is good or bad. It takes a human ear for that.
AI is good at coming up with random melodies, chord progressions, and motifs, but it is not nearly as good at composing and producing as humans are, yet. AI is just going to be another instrument for musicians to use, in its current form.
Yeah, I do imagine it won’t be just AIs either. And then it will obviously be possible to take it to an excellent song, given enough human hours invested.
I do wonder how useful it will actually be for that, though. Oftentimes, trying to go from good to excellent really fucks you up, and it can be freeing to start fresh instead. In particular, ‘excellent’ does require creative ideas, which are easier for humans to generate with a fresh start.
But AI may allow us to start over fresh more readily, if it can just give us a full song when needed. Maybe it will even be possible to give it some of those creative snippets and ask it to flesh it all out. We’ll have to see…
As someone doing software engineering whose company jumped on the AI bandwagon and got us GitHub Copilot: after using it for a while, I think the overall experience is actually net negative. Yes, sometimes it gets things right and provides a correct solution, but often I can write much more concise code myself. Many times it provides code that looks correct, but on closer inspection is actually wrong. So now I need to be on guard about what code it inserts, which kills all the time it supposedly saved me. It makes things harder because the code does look like it might work.
It is like pair programming with a complete moron that is very good at picking up patterns and trying to reuse them in the code that follows. So if you do a lot of copy and paste, I think it will help.
I think this technology can make bad programmers suck less at programming. I think the LLM problem is that it was trained on existing works, and the way it works, its goal is essentially to convince a human that the result was created by another human - but it isn’t capable of any actual reasoning.