Ah shame - these improvements are unlikely to ever be ported to the HL2 VR mod
The merits are real. I do understand the deep mistrust people have for tech companies, but there’s far too much throwing out of the baby with the bath water.
As a solo developer, LLMs are a game-changer. They’ve allowed me to make amazing progress on some of my own projects that I’ve been stuck on for ages.
But it’s not just technical subjects that benefit from LLMs. ChatGPT has been a great travel guide for me. I uploaded a pic of some architecture in Berlin and it went into the history of it. I asked it about some damage to an old church in Spain, and it turned out to be from the Spanish Civil War, where revolutionaries had been mowed down by Franco’s firing squads.
Just today, I was getting help from an LLM for an email to a Portuguese removals company. I sent my message in English with a Portuguese translation, but the guy just replied back with a single sentence in broken English:
“Yes a can , need tho mow m3 you need delivery after e gif the price”
The first bit is pretty obviously “Yes I can” but I couldn’t really be sure what he was trying to say with the rest of it. So I asked ChatGPT who responded:
It seems he’s saying he can handle the delivery but needs to know the total volume (in cubic meters) of your items before he can provide a price. Here’s how I’d interpret it:
“Yes, I can [do the delivery]. I need to know the [volume] in m³ for delivery, and then I’ll give you the price.”
Thanks to LLMs, I’m able to accomplish so many things that would have previously taken multiple internet searches and way more effort.
“physical pen testing”
oh I’ve seen that on the ButtSharpies subreddit
Only a French person would ever dare to wear such a violently vomit-hued watch 😂
I certainly am not surprised that OpenAI, Google and so on are overstating the capabilities of the products they are developing and currently selling. Obviously it’s important for the public at large to be aware that you can’t trust a company to accurately describe products it’s trying to sell you, regardless of what the product is.
I am more interested in what academics have to say though. I expect them to be more objective and have more altruistic motivations than your typical marketeer. The reason I asked how you would define intelligence was really just because I find it an interesting area of thought, one which has fascinated me since long before this new wave of LLMs hit the scene. It’s also one which does not have clear answers, and different people will have different insights and perspectives.

There are different concepts which are often blurred together: intelligence, being clever, being well educated, and consciousness. I personally consider all of these to be separate concepts, and while they may have some overlap, they are nevertheless all very different things. I have met many people who have very little formal education but are nonetheless very intelligent.

And in terms of AI and LLMs, I believe that an LLM does encapsulate some degree of genuine intelligence: they appear to somehow encode a model of the universe in their billions of parameters, and they are able to meaningfully respond to natural language questions on almost any subject. However, an LLM is unquestionably not a conscious being.
You’re right that we need a clear definition of intelligence if we are to make any predictions about achieving AGI. The researchers behind this article appear to mean “human-level cognition”, which doesn’t seem to be a particularly objective or useful yardstick. To begin with, which human are we talking about? If they’re talking about an idealised, maximally intelligent human, then I don’t think we should be surprised that we aren’t about to achieve that. The goal is not to recreate human cognition as if that’s some kind of holy grail. The goal is to make intelligent systems which can give results at least as good as what a skilled and well-trained human working on the same problem would produce.
Can I ask you how you would define intelligence? And in particular, how would you - if you would at all - differentiate intelligence from being clever, or from being well educated?
a/s/l?
I taught myself to touch-type when I was a schoolkid using something similar to Mavis Beacon. All the while, I had a voice in my head saying, “This is pointless, everyone will be talking to their computers like in Star Trek in a couple of years”. Well, that was the 90s and it turned out to be one of the most useful skills I taught myself - but surely the age of the keyboard must soon be coming to an end now??
Eh, that’s pretty metal.
It’s definitely pretty, and as thermite is a mixture of metal powder and metal oxide, your statement is entirely correct.
Imagine life in the post-apocalyptic hellscape. All electronic devices have been rendered useless due to the EMPs from all the nuclear blasts. You, with your unfathomable ability to tell the time from an old wind-up clock, are viewed as a literal god among men (and women)
ah, they were making a nice and lame pun (Anova brand == another brand)
Guy should’ve just called in an airstrike on his trash
You’re not being naive; the comment was joking about the meme incorrectly using “starring” (the gerund of “to star”, as in “Cillian Murphy is starring in this movie”) when the meme creator clearly meant to use “staring”, the gerund of “to stare”.
They are remarkably useful. Of course there are dangers relating to how they are used, but sticking your head in the sand and pretending they are useless accomplishes nothing.
“It models only use of language”
This phrase, so casually deployed, is doing some seriously heavy lifting. Language is by no means a trivial thing for a computer to meaningfully interpret, and the fact that LLMs do it so well is way more impressive than a casual observer might think.
If you look at earlier procedural attempts to interpret language programmatically, you will see that time and again the developers get stopped in their tracks, because in order to understand a sentence you need to understand the universe - or at least a particular corner of it. For example, given the sentence “The stolen painting was found by a tree”, you need to know that a tree is not the kind of thing that can find a painting, so “by” must indicate location rather than agency.
You can’t really use language *unless* you have a model of the universe.
Heroic works really well. I’ve just installed it myself recently, motivated mostly by a desire to finally play the free games I got off Epic. I’ve only installed two EGS games so far - Civ 6 and Guardians of the Galaxy - but they’re working perfectly, running via Proton.

The experience is so good I was actually inspired to buy my first game outside of Steam in years, namely Wartales, which I bought just yesterday on GOG. Installation is a breeze, it runs under Proton, and as far as I can tell it is running perfectly.

I sort of prefer Heroic to Steam, in fact, because it starts almost immediately - no waiting around for 30 seconds while it tries to connect to the Steam network, etc.
That’s 1 in every 50 desktops. Anecdotally I can think of only 3 people, including myself
Can you name 147 people using Windows? If you can, then that’s the same ratio: 3 Linux users out of 150 people works out to 1 in every 50. Of course, the people you know are probably the technical sort who are more likely to pay attention to their OS, but you’d still need to be able to individually name 147 Windows users just to match the 1 in 50 stat. The point I’m trying to make is that 1 in 50 really is not very many!
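Or, to put rough numbers on it, here’s a throwaway Python sketch using the purely anecdotal counts from above:

```python
# Rough sanity check of the "1 in 50" figure, using my anecdotal numbers
linux_users = 3        # people I can personally name who run Linux
windows_users = 147    # Windows users I'd need to name to match the stat
share = linux_users / (linux_users + windows_users)
print(f"{share:.0%} of desktops, i.e. 1 in every {round(1 / share)}")
# -> 2% of desktops, i.e. 1 in every 50
```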
“they, in fact, will have some understanding”
These models have spontaneously acquired a concept of things like perspective, scale and lighting, which you can argue is already an understanding of 3D space.
What they do not have (and IMO won’t ever have) is consciousness. The fact that we have created machines with some understanding of the universe, but without consciousness, is very interesting to me. It’s very illuminating on the subject of what consciousness is, by providing a new example of what it is not.
They absolutely do contain a model of the universe which their answers must conform to. When an LLM hallucinates, it is creating a new answer which fits its internal model.
It is, but sadly the dev has said this latest update is unlikely to ever be ported to the HL2 VR mod