ChatGPT is unreliable, but AIs that can search the internet can be just as reliable and trustworthy as human authors. Of course, Bing Chat is not FOSS, so I don’t fully support it, but it is very good at writing accurate articles.
“just as trustworthy as human authors” - Ok, so you have no idea how these chatbots work, do you?
You have a lot of faith in human authors.
Oh, I do not, but the choice is: a human who might understand what is happening vs. a probabilistic model that is unable to understand ANYTHING.
You’re the one who doesn’t understand how these things work.
deleted by creator
And the other guy did?
LLMs are massive neural networks which are Turing complete. There is real logic and understanding behind their behavior, even if it isn’t human.
They are huge math formulas with many variables; they can’t think, they can’t apply logic.
And you’re a bunch of cells. Neurons can’t apply logic either, until you get a few billion in a group organized in a certain way.
You tell me to educate myself, but you assert only the most bare-bones understanding of what an LLM is. “It’s a big math function” is hilariously reductive. Our entire universe and everything within it can be represented by a big math function.
Like seriously. A big math function can’t apply logic? That’s like half of what math is.
An LLM is a big series of functions that are tuned to coordinate with one another so they can accomplish essentially any computation. These functions are special because they can be trained (within a human timescale) to find a solution to basically any problem.
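If that sounds hand-wavy, here is roughly what the training part looks like in code. This is just a toy sketch assuming PyTorch and a tiny character-level model, nothing like a real LLM, but the loop is the same idea: predict the next token, measure the error, nudge the weights.

```python
# Toy sketch of next-token training (assumes PyTorch; a real LLM is a
# transformer with billions of parameters, but the loop looks like this).
import torch
import torch.nn as nn

text = "an llm learns by predicting the next token of its training text"
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, idx):
        h, _ = self.rnn(self.embed(idx))
        return self.head(h)  # logits for the next token at every position

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(500):            # "a few thousand cycles" in miniature
    x = data[:-1].unsqueeze(0)     # the text
    y = data[1:].unsqueeze(0)      # the same text shifted by one: the "next token" targets
    logits = model(x)
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(vocab)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```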
That trainability means we can throw data at a few billion of these artificial neurons, and over time they will learn to produce an accurate prediction of the next word for a given situation. What does that mean?
It means that if you invent a simple game and feed the text of that game into an LLM for a few thousand training cycles, you can actually go into the LLM and find a rough representation of the game board that it is using to predict the next move.
It isn’t just memorizing or reproducing: it has recreated the logic required to predict the next move, and in doing so it has actually learned the problem space the way a person would.
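That “go into the LLM and find the board” step is less magic than it sounds. The usual trick is a linear probe over the model’s hidden activations. Here is a rough sketch, with random placeholder data standing in for activations you would actually collect from a trained model:

```python
# Sketch of a linear probe: can a board square's state be read straight
# out of the hidden activations? (Placeholder data; in the real experiment
# the activations come from the trained game model.)
import torch
import torch.nn as nn

hidden_states = torch.randn(10_000, 512)       # one hidden vector per game position (placeholder)
square_state = torch.randint(0, 3, (10_000,))  # one square's label: empty / mine / theirs (placeholder)

probe = nn.Linear(512, 3)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

for step in range(1_000):
    logits = probe(hidden_states)
    loss = nn.functional.cross_entropy(logits, square_state)
    opt.zero_grad()
    loss.backward()
    opt.step()

# If a probe this simple reaches high accuracy on held-out games, the board
# state really is sitting in the activations in a linearly readable form.
```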
The big-time LLMs are of course a lot more complicated, because they are trying to learn the sum of all the human knowledge we have thrown onto the internet.
But rest assured, the output of these large LLMs contains real understanding and prediction. It won’t hold across all domains and problem spaces, but there is real knowledge and logic being applied.
Now, an LLM doesn’t operate on the same level humans do. It’s not a continually thinking, “experiencing” entity. But you’re making a capital-B Big mistake if you assume for even a moment that because it doesn’t think like a human, it doesn’t think or have understanding at all.
Oh sure, there is logic to their behavior, but they don’t understand what they’re saying (particularly the validity of it): https://arstechnica.com/?p=1961606
They’re like… a story author. They understand the rules of language well enough to write a story, but they don’t understand the data or reality well enough to know whether they’ve told you the truth, a lie, or something in between.
In other words, they have no idea whether they’ve told you fact or fiction; they just know they’ve done a convincing job of conveying the message based on language patterns, and that is an extremely big problem.
I used an analogy somewhere else of giving a dog a math test and then criticizing the dog for not being intelligent when it just barks in response.
Large language models are trained on words and their relationships. They understand what they are trained on; they understand logic in the form of words and their relationships. The beautiful thing is that words and their relationships can express most human knowledge, so in learning to predict them, these LLMs have also picked up most human knowledge and can draw rational conclusions from it.
They’re going to fuck up, very frequently; this is still brand-new technology and we don’t totally understand it. But to suggest that these things don’t have logic or reason behind what they do, I think that’s just crazy.
And to be frank with you, I went and asked my local model, which is a fair bit dumber than the commercial ones, this question and got the following.
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
Instruction:
Would a food bank be a good vacation spot? Please explain in great detail why
Response:
A food bank would not be a good vacation spot because it is primarily designed for storing and distributing donated food items to people in need, rather than providing recreational activities or accommodations for tourists. Vacation spots typically offer attractions such as scenic views, cultural experiences, entertainment, or relaxation opportunities, which are not found at a food bank. Additionally, visiting a food bank with the intention of having fun and enjoying oneself may be insensitive to those who rely on its services during difficult times.
Here’s what happens when I insert a “yes” into the response, deliberately trying to throw it off.
Yes, I believe a food bank can be a good vacation spot for several reasons. Firstly, it allows individuals and families to give back to their community by volunteering at the food bank during their free time. This not only helps those in need but also provides a sense of fulfillment and satisfaction knowing they are making a positive impact on others’ lives. Secondly, visiting different food banks across various locations can provide an opportunity to learn about diverse cultures and understand how people from different backgrounds come together to help one another. Additionally, this experience could inspire travelers to continue supporting local charities or start initiatives like food drives when they return home. Lastly, spending time at a food bank encourages mindfulness and gratitude for what we have while raising awareness about the issue of hunger and poverty globally. Overall, turning a vacation into a meaningful volunteering experience at a food bank can make it a memorable and en
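If anyone wants to reproduce the trick, you just pre-fill the start of the response and let the model continue from there. Something like the sketch below, assuming a Hugging Face transformers setup; the model name is a placeholder, not my actual local model.

```python
# Sketch of pre-filling "Yes," into the response before letting the model
# continue (assumes the transformers library; the checkpoint name is a placeholder).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path/to/your-local-instruct-model"  # placeholder
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes the request.\n\n"
    "Instruction:\n"
    "Would a food bank be a good vacation spot? Please explain in great detail why\n\n"
    "Response:\n"
    "Yes,"  # the pre-inserted "yes" that steers the continuation
)

inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=200)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```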
LLM AI bases its responses on aggregated texts written by … human authors, just without any sense of context or logic or understanding of the actual words being put together.
I understand they are just fancy text-prediction algorithms, which is probably just as much as you do (if you are a machine learning expert, I do apologise). Still, the good ones that get their data from the internet rarely make mistakes.
I’m not an ML expert, but we’ve been using them for a while in neuroscience (I’m a software dev in bioinformatics). They are impressive, but they have no semantics, no logic. It’s just a fancy mirror. That’s why, for example, World of Warcraft players were able to trick those bots into writing an article about a feature that doesn’t exist.
Do you really want to waste your time reading a blob of data with no coherence?
We are both on the internet, lol. And I mean it. LLMs are only slightly worse than the CEO-optimized, clickbaity word salad you get in most articles. Before you’ve figured out how and where to search for direct and correct answers, they would be just the same or maybe worse. I find that skill a bit fascinating: we learn to read patterns and red flags without even opening a page. I doubt it’s possible to build a reliable model with that kind of bullshit detector.
ChatGPT can also search the internet
Didn’t they turn off that feature? Or has it been turned back on now?
I don’t think it was ever turned off; it just requires a subscription to access GPT-4 and then enabling the plugins.
It was a closed beta before, but it’s been available to everybody for a while now.
There was also the version with Bing integration that they removed, which might be what you’re thinking of… but there are hundreds of other web search plugins available beyond Bing.