- cross-posted to:
- tech@kbin.social
Visual artists fight back against AI companies for repurposing their work: Three visual artists are suing artificial intelligence image generators to protect their copyrights and careers.
I am able to answer your questions for myself. I have lost interest in doing so for you.
But can you do so from the ground up, without handwaving towards the next unexplained reason? That’s what you’ve done here so far.
Yes.
I once held a view similar to the one you present now. I consider my current opinion more advanced, just as you do yours.
You ask for elaboration and verbal definitions; I've been concise because I do not wish to spend time on this.
It is clear we cannot proceed further without me doing so. I have decided I won’t.
Bummer. You could have been the first to bring actual argument for your position :)
Not today. I have too much else to do.
And it’s not like my being concise makes my argument absent.
The issue isn't that you're being concise; it's that you're throwing around words that lack a clear definition and expecting your definition to be broadly shared. You keep referring to understanding, and yet objective evidence of understanding is only met with "but it's not creative".
Are you suggesting there is valid evidence modern ML models are capable of understanding?
I don’t see how that could be true for any definition of the word.
As I’ve shared 3 times already: Yes, there is valid evidence that modern ML models are capable of understanding. Why do I have to repeat it a fourth time?
Then explain to me how it isn’t true given the evidence:
https://arxiv.org/abs/2210.13382
I don’t see how an emergent nonlinear internal representation of the board state is anything besides “understanding” it.
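For context on what "probing" an internal representation means here: the paper trains small classifiers on a transformer's hidden activations to decode the board state, and finds that nonlinear probes succeed where linear ones do poorly. The sketch below illustrates only that methodology on synthetic stand-in "activations" (2-D points labeled by the XOR of their coordinate signs, a relation a linear probe cannot capture); the data, probe sizes, and training loop are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in "activations": 2-D points labeled by the XOR of
# their coordinate signs -- the classic relation a linear probe cannot
# decode but a small nonlinear probe can.
X = rng.uniform(-1.0, 1.0, size=(400, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float)[:, None]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def probe_accuracy(hidden_units, epochs=5000, lr=0.5):
    """Train a probe by full-batch gradient descent on logistic loss.

    hidden_units == 0 gives a linear probe; otherwise a one-hidden-layer
    tanh probe (the "nonlinear" case). Returns training accuracy.
    """
    n, d = X.shape
    if hidden_units == 0:
        W = np.zeros((d, 1)); b = np.zeros(1)
        for _ in range(epochs):
            g = (sigmoid(X @ W + b) - y) / n     # logistic-loss gradient
            W -= lr * (X.T @ g)
            b -= lr * g.sum(0)
        p = sigmoid(X @ W + b)
    else:
        W1 = rng.normal(0.0, 0.5, (d, hidden_units)); b1 = np.zeros(hidden_units)
        W2 = rng.normal(0.0, 0.5, (hidden_units, 1)); b2 = np.zeros(1)
        for _ in range(epochs):
            h = np.tanh(X @ W1 + b1)
            g = (sigmoid(h @ W2 + b2) - y) / n   # output-layer gradient
            gh = (g @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
            W2 -= lr * (h.T @ g);  b2 -= lr * g.sum(0)
            W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)
        p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
    return float(((p > 0.5) == (y > 0.5)).mean())

linear_acc = probe_accuracy(0)     # stuck near chance on XOR structure
nonlinear_acc = probe_accuracy(8)  # recovers the labels
```

The gap between the two accuracies is the point: if a feature of the world (here the XOR label, in the paper the board state) can only be decoded by a nonlinear probe, it is nonetheless genuinely encoded in the activations rather than read off the raw input.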
Cool. But this is still stuff that has a “right” answer. Math. Math in the form of game rules, but still math.
I have seen no evidence that ML models can comprehend the abstract, or know (or, more accurately, model) the human experience. It's not even clear that, given a conscious entity, it is possible to communicate what being human is to something non-human.
I am amazed, but not surprised, that you can explain a "system" to an LLM. However, I don't think doing the same for an abstract concept or a human emotion is possible.