Robin Williams’ daughter Zelda says AI recreations of her dad are ‘personally disturbing’: ‘The worst bits of everything this industry is’
Copyright IS too strong, but paradoxically artists’ rights are too weak. Everything is aimed at boosting the profits of media companies, not at protecting the people who create the works. Now those people are under threat of being replaced by AI trained on their own works, no less. Is it really worth it to defend AI if we end up with fewer novel human works because of it?
And they themselves trained on the work of other artists too. It’s just the circle of life. AI just happens to be better at learning than humans.
AI doesn’t need defending; it will steamroll us all just by itself. We don’t really have a choice in this.
The “circle of life”, except that it kills artists’ careers rather than creating new ones. Even fledgling artists might find that there’s no opportunity for them because AIs are already gearing up to take entry-level jobs. However efficient AI may be at replicating the work of artists, the same could be said of a photocopier, and we have laws defining how those get to be used so that they don’t undermine creators.
I get that AI output is not identical and doesn’t run afoul of existing laws, but the principles behind them are still important. Not only culture but even AI itself will be lesser for it if human artists are not protected, because art AIs quickly degrade when AI art is fed back into them en masse.
Don’t forget that the kind of AI we have doesn’t do anything by itself. We don’t have sentient machines, we have very elaborate auto-complete systems. It’s not AI that is steamrolling artists, it’s companies seeking to replace artists with AIs trained on their works that are threatening them. That can’t be allowed.
It will kill all the ones that are stuck on old technology. Those that can make the best use of AI will prevail, at least for a little while, until AI replaces the whole media distribution chain and we’ll just have our own personal Holodeck with whatever content we want, generated on demand.
It’s not copying existing works and never did. Even if you explicitly instruct it to copy something existing, it will create its own original spin on the topic. It’s really no different than any artist working on commission.
You are free to ignore the reality of it, but that’s simply not the case. AI systems are getting filled with essentially all of human knowledge and they can remix it freely to create something new. This is the kind of stuff AI can produce just by itself, within seconds, the idea is from AI and so is the actual image. Sentience is not necessary for creativity.
When the artists are that easy to replace, their work can’t have been all that meaningful to begin with.
It’s sad to see how AI advocates strive to replicate the work of artists while being incredibly dismissive of their value. No wonder so many artists are incensed and want to get rid of everything AI.
Besides, it’s nothing new that media companies and internet content mills are willing to replace quality with whatever is cheaper and faster. To try to use that as an indictment against those artists’ worth is just… yeesh.
You realize that even this had to be set up by human beings, right? Piping random prompts through an art AI is impressive, but it’s not intelligent. Don’t let yourself get caught up in sci-fi dreams; I made this mistake too. When you say “AI will steamroll humans” you are assigning awareness and volition to it that it doesn’t have. AIs may be filled with all human knowledge, but they don’t know anything. They simply repeat patterns we fed into them. An AI could give you a description of a computer, it could generate a picture of a computer, but it doesn’t have an understanding. Like I said before, it’s like a very elaborate auto-complete. If it could really understand anything, the situation would be very different, but the fact that even its fiercest advocates use it as a tool shows that it’s still lacking capabilities that humans have.
AI will not steamroll humans. AI-powered corporate industries, owned by flesh and blood people, might steamroll humans, if we let them. If you think that you will get to just enjoy a Holodeck, you are either very wealthy or you don’t realize that it’s not just artists who are at risk.
It’s such a shame too. Like you can have a million sensible takes and opinions and views on the topic, pro-AI, but the discussion revolves around the same shit on both sides.
It is an amazing tool, and could be used (and is used, it’s just obscured by the massive amount of shit and assholes trolling other people/artists) in so many creative ways. I’d been in a bit of a rut for quite a few years (partially because my brain no make happy chemicals or sleep), but I haven’t been as excited about the possibilities and inspired maybe ever in my life (at least not for a decade or nearly two) with art and my own stuff. I’m finally drawing again after way too many years of letting my stuff gather dust.
I used to think techno supremacists were an extreme fringe, but “AI” has made me question that.
For one, this isn’t AI in the scifi sense. This is a sophisticated model that forms an algorithm to generate content based on patterns it observes in a plethora of works.
It’s ridiculously overhyped, and I think it’s just a flash in the pan. Companies have already minimized their customer support with automated service options and “tell me what the problem is” prompts. I have yet to meet anyone who is pleased by these. Instead it’s usually shouting into the phone that you want to talk to a real human, because the algorithm thinks you want a problem fixed instead of the service cancelled.
I think this “technocrat” vs “humanities” debate will be society’s next big question.
I used to be on the technocrat side too when I was younger, but seeing the detrimental effects of social media, the app-driven gig economy, and how companies constantly charge more for less changed my mind. Technocrats adopt this idea that technology is neutral and constantly advancing towards an ideal solution for everything, that we only need to keep adding more tech and we’ll have a utopia. Never mind that so many advancements in automation lead to layoffs rather than fewer working hours for everyone.
I believe the debate is already happening, and the widespread disillusionment with tech tycoons and billionaires shows popular opinion is changing.
Very similar here, I used to think technology advancement was the most important thing possible. I still do think it’s incredibly important, but we can’t commercially do it for its own sake. Advancement/knowledge for the sake of itself must be confined to academia. AI currently can’t hold a candle to human creativity, but if it reaches that point, it should be an academic celebration.
I think the biggest difference for me now vs before is that I think technology can require too high of a cost to be worth it. Reading about how some animal subjects behaved with Elon’s Neuralink horrified me. They were effectively tortured. I refuse the idea that we should develop any technology which requires that. If test subjects communicate fear or panic that is obviously related to the testing, it’s time to end the testing.
Part of me still does wonder, but what could be possible if we do make sacrifices to develop technology and knowledge? And here, I’m actually reminded of fantasy stories and settings. There’s always this notion of cursed knowledge which comes with incredible capability but requires immoral acts/sacrifice to attain.
Maybe we’ve made it to the point where we have something analogous (brain chips). And to avoid it, we not only need to better appreciate the human mind and spirit – we need people in STEM to draw a line when we would have to go too far.
I digress though. I think you’re right that we’re seeing an upswell of the people against things like this.
All the ills you mention are a problem with current capitalism, not with tech. They exist because humans are too fucking stupid to regulate themselves, and should unironically be ruled by an AI overlord instead once the tech gets there.
You are making the exact same mistake that I just talked about, that I have also made, that a bunch of tech enthusiasts make:
An AI Overlord will be engineered by people with human biases, under the command of people with human biases, trained by data with human biases, having goals that are defined with human biases. What you are going to get is tyranny with extra steps, plus some of its own concerning glitches on the side.
It’s a sci-fi dream to assume technology is inherently destined to solve human issues. It takes human concern and humanities studies to apply technology in a way that actually helps people.
It’s pretty much exactly what the ship computer in Star Trek: TNG is, along with the Holodeck (minus the energy-to-matter conversion).
You’re in for a rude awakening. What we see today is just the start of it. The current AI craze has been going on for a good 10 years, most of it limited to the lab and science papers. ChatGPT and DALL-E are simply the first that were good enough for public consumption. What followed them were huge investments into that space. We’ll not only be seeing a lot more of this, but also much better versions. The thing with AI is: the more data and training you throw at it, the better it gets. You can make a lot of progress simply by doing more of it, without any big scientific breakthroughs. And AI companies with a lot of funding are throwing everything they can find at AI right now.
I haven’t watched Star Trek, but if you’re correct, they depicted an incredibly rudimentary and error prone system. Google “do any African countries start with a K” meme and look at the suggested answer to see just how smart AI is.
I remain skeptical of AI. If I see evidence suggesting I’m wrong, I’ll be more than happy to admit it. But the technology being touted today is not the general AI envisioned by science fiction nor everything that’s been studied in the space the last decade. This is just sophisticated content generation.
And finally, throwing data at something does not necessarily improve it. This is easily evidenced by the Google search I suggested. The problem with feeding data en masse is that the data may not be correct. And if the data itself is AI output, it can seriously mess up the algorithms. Since these venture capitalist companies have given no consideration to it, there’s no inherent marker for AI output. It will always regulate itself down to mediocrity because of that. And I don’t think I need to explain that throwing a bunch of funding at X does not make X a worthwhile endeavor. Crypto and NFTs come to mind.
I leave you with this article as a counterexample: https://gizmodo.com/study-finds-chatgpt-capabilities-are-getting-worse-1850655728
Throwing more data at the models has been making things worse. Although the exact reasons are unclear, it does suggest that AI is woefully unreliable and immature.
Oh noes, somebody using AI wrong and getting bad results. What else is new? ChatGPT works on tokens (aka words or word segments converted to integers), not on characters. Any character based questions will naturally be problematic, since the AI literally doesn’t see the characters you are questioning it about. Same with digits and math. The surprising part here isn’t that ChatGPT gets this wrong, that bit is obvious, but the amount of questions in that area that it manages to answer correctly anyway.
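To make the token point concrete, here’s a toy sketch (the subwords and ids are made up, and real tokenizers are learned from data rather than hand-written like this) of how a word turns into integers before the model ever sees it:

```python
# Toy greedy subword tokenizer -- NOT a real BPE implementation,
# just an illustration of why the model never sees characters.
vocab = {"straw": 1042, "berry": 2077}  # made-up subwords and ids

def encode(text, vocab):
    """Greedily match the longest known subword, emit its integer id."""
    ids, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # longest match first
            if text[i:j] in vocab:
                ids.append(vocab[text[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return ids

print(encode("strawberry", vocab))  # [1042, 2077]
# The model receives [1042, 2077], not the letters s-t-r-a-w-b-e-r-r-y,
# so "how many r's are in strawberry?" asks about data it was never given.
```

So a question like “which countries start with K” is being answered from statistical patterns over ids, not by inspecting letters.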
Whenever I read “just” I can’t help but think of Homer Simpson’s “it only transports matter?”. Seriously, there is nothing “just” about this. What ChatGPT is capable of is utterly mind boggling. Humans worked on trying to teach computers how to understand natural language ever since the very first computers 80 or so years ago, without much success. Even just a simple automatic spell checker that actually worked was elusive. ChatGPT is so f’n good at natural language that people don’t even realize how hard of a problem that is, they just accept that it works and don’t think about it, because it’s basically 100% correct at understanding language.
ChatGPT is a text auto-complete engine. The developers didn’t set out to build a machine that can think, reason, or replicate the brain, or even to build a chatbot. They built one that tells you what word comes next. And then they threw lots of data at it. Everything ChatGPT is capable of is basically an accident, not design. As it turns out, to predict the next word correctly you have to have a very rich understanding of the world, and GPT figures that out all by itself just by going through lots and lots of text.
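The “tells you what word comes next” job can be sketched with a toy bigram counter (real LLMs score next tokens with a neural net over far longer context, not raw counts, but the task is the same):

```python
# Toy next-word predictor: count which word follows which in a corpus,
# then predict the most frequent follower. A crude stand-in for what
# an LLM does with a neural network instead of a lookup table.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Most frequent word observed after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- seen twice after "the" in the corpus
```

Scale the corpus up to most of the internet and the “auto-complete” starts needing a lot of world knowledge to keep guessing right.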
That’s the part that makes modern AI interesting and scary: we don’t really know why any of this works. We just keep throwing data at the AI and see what sticks. And for the last 10 years, a lot of it stuck. Find a problem space that you have lots of data for, throw it at AI, and get interesting results. No human sat around and taught DALL-E how to draw, and no human taught ChatGPT how to write English; it’s all learned from the data. Worse yet, the lesson learned over the last decade is essentially that human expertise is largely worthless in teaching AIs; you get much better results by simply throwing lots of data at them.
That is utterly meaningless. OpenAI is constantly tweaking that thing for business reasons, including downgrading it to consume less resources and censoring it to not produce something nasty (Meta didn’t get the memo). Same happened with Bing Chat and same thing just happened with DALL-E3, which until a few days ago could generate celebrity faces and now blocks all requests in that direction.
When you compare GPT-3.5 with the new/paid GPT-4, i.e. a newly trained version with more data, it ends up far superior to the previous one. Same with DALL-E 2 vs DALL-E 3.
Also note that modern AIs don’t learn. They are trained on a dataset once and that’s it. The models are completely static after that. Nothing of what you type into them will be remembered by them. The illusion of a short-term memory comes from the whole conversation history getting fed into the model each time. The training step is completely separate from chatting with the model.
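A minimal sketch of that replay trick (the model function here is a made-up stand-in, not a real API):

```python
# The model is stateless: each call is a pure function of its input.
# "Memory" is the client resending the whole transcript every turn.
def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call; it just counts the user turns
    # visible in the prompt it was handed.
    return f"I can see {prompt.count('User:')} user message(s)"

history = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history)        # entire conversation, every time
    reply = fake_model(prompt)
    history.append(f"Assistant: {reply}")
    return reply

print(chat("Hi"))            # the model sees 1 user message
print(chat("Remember me?"))  # now it sees 2 -- replayed, not remembered
```

Drop the history list and every turn starts from a blank slate, which is exactly what happens inside the model itself.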
Values change. Images used to be difficult and time-consuming to create, thus they had value. They are trivial to create now, so they become worthless. That’s progress. Yet instead of using that new superpower to create bigger projects and doing something still valuable with it, all the artists do is complain.
You obviously don’t realize that it didn’t. Those are prompts generated by one AI and put into another AI. There was no human telling it what to draw. The only instruction was to draw something original and then draw something different for the next image.
I don’t do any of that, I just acknowledge their superior and constantly improving performance. The thing doesn’t need to be self aware to put all the artists, and the other humans, out of a job if it can work 1000x faster than them.
Also AI will get awareness and volition real soon anyway, ETA for AGI is around 5 years, at the current pace I wouldn’t even be surprised if it arrives sooner. Human exceptionalism has the tendency to not age very well these days.
They don’t. See, it would be way easier to take you Luddites seriously if you at least had any clue what you were talking about. But the whole art world seems to be stuck playing make-believe, just repeating the same nonsense that they heard from other people talking about AI instead of just trying it for themselves.
Most of that AI stuff is publicly available, lots of it is free, and some can be run on your own PC. Just go and play with it to get a realistic idea of what it is and isn’t capable of. And most important of all: Think about the future. People always talk like issues with current AI systems are some fundamental limit of AI, when in reality most of those problems will be gone within six months.
Also it’s just mind boggling how people ignore everything AI can do, just to focus on some minuscule detail it still gets wrong. The fact that it can’t draw hands is not terribly surprising (a hard structure to figure out from low-res 2D images); meanwhile the fact that it can draw basically everything else, way faster and often better than almost any human, is rather mind boggling, yet somehow ignored.
CEOs are targets for AI replacement just like everybody else. And an AI that pays its own bills and runs on some rented cloud computing won’t be far off either. Either way, you don’t even have to go into doomsday scenarios with evil AI; the fact that AI will outcompete humans at most tasks alone is enough to drastically reshape the world. Whether it’s ethically trained open source AI or some corporate-run thing really doesn’t matter, since either way the changes will be huge.
Well, we are already way closer to that spooky scifi future than you’d think.
And you mean to tell me they decided to do it themselves? No, we both know that’s not what happened. That setup was arranged by people. You come with accusations of cluelessness and luddism only to say the exact same thing with different words.
You’d rather burst into wild speculation while acting superior rather than acknowledging matters as they are.
Who do you think are making the calls to replace people? Do you seriously believe that executives, who hold the highest power, will decide to replace themselves? They might as well use AIs just fine and reap all the benefits while doing none of the effort. Like many CEOs already do with their human subordinates.
As impressive as AIs might be and become, while you get lost in sci-fi fantasies you are losing sight of who is going to decide what they will be used for and how that will affect regular people.
Hell, we already have a glimpse of how that’s going to play out. Most of the internet is molded by algorithms that, however inscrutable they may be, are directed to serve the interests of wealthy business owners. Some decades ago people dreamed of systems that would recommend things for you before you even knew you wanted them, but few expected that they would be used to manipulate and advertise to us.
This is why keeping human interest in mind is of the utmost importance.
You think oil paintings lost all worth when photography and printing and digital painting came about? That art isn’t worth it if it’s not expressed through the biggest and newest means?
That is what you think progress is? Human expression and passion being treated like trash because it’s not as optimal? What a dreary mindset.
If not to enable people to dedicate themselves to what they love, what is even the worth of technological advancement?
Don’t get mistaken. I love technology, I just can’t get excited about people being crushed by technology that is getting harnessed in the most cynical, greedy way. But you? You just seem to be eagerly praying for the day you will be turned into a paperclip, for “value”.
No human told them what to draw and you can let it keep drawing just by itself forever and generate original images. By your logic no AI can ever do anything by itself just because a human pressed the power button on the computer once. That’s nonsensical.
The shareholders will demand it when it becomes clear that an AI would do a better job.
When was the last time the average person bought an oil painting? I can’t even remember the last time I saw one.
Once upon a time aluminium was worth as much as gold. Then we figured out how to refine it for cheap, and now we make our Coke cans out of it. Values change. Nobody is going to pay hundreds of dollars for an image that AI can generate better in 10 seconds. Just as nobody pays monks to copy books anymore; we have printers for that. The whole idea of a static image starts to feel bizarre once you’ve played around with AI for a while.
The progress here isn’t replacing the artist, but that replacing the artist allows you to build bigger and better things. The artist that used to draw a single image now has the power to draw the whole rest of the comic book by themselves. The filmmaker that used to make a little 10-minute short can now do the full 2-hour movie. And the guy that had his head full of ideas, but no skill to draw, can now produce compelling images as well. The bar has been raised and it will keep rising.
I simply don’t pretend that we ever cared about the artists in the first place. Most of the great artists of the past died poor. Their images and fame came much later, long after their death. Today we watch movies and have little to no idea who or how they were created. We care about if the movie entertained us. Not the process of its creation or the hundreds of names scrolling by in the credits. Once AI keeps making movies that will entertain us, we’ll watch them.
People that are passionate about creating something manually can still do as they please; they just can’t expect others to pay for it when there are cheaper and better alternatives around.
Tired of your disingenuous responses. By your definition a die is intelligent because “you didn’t tell it what number to roll”. Stop playing dumb about that AI. I know you understood it.
Humans, again.
Trying to make big claims based on your own indifference towards art and artists only convinces me you are the last person I’d want an opinion on it from. There’s a lot of discussion to be had about what makes art “better”. It’s not just making it bigger and longer.
This just sounds weirdly cultish.
Lemmy is full of Luddite Twitter artist types. It’s an echo chamber in here.