- cross-posted to:
- technology@beehaw.org
- hackernews@lemmy.bestiver.se
As a normal user, I don’t find AI useful.
Like, anybody’s, for much of anything other than generating fever-dreams and Plex art.
code, tho.
Bash scripts, maybe, but it’s not necessary for me.
It still needs to learn. I’m personally trying to opt out of it watching everything I do; there will have to be some pretty serious benefits for me to revert.
How long does it take to learn? It should be able to scan all of my files locally and I should be able to search for songs by lyrics or images by description and metadata locally. It supposedly has GitHub copilot like functionality in xcode…
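For what it’s worth, the local search being described doesn’t require much machinery at its simplest. Here’s a hypothetical, stdlib-only sketch of brute-force content search over local text files — not anything Apple actually ships, and the file layout and names are invented:

```python
import os

def search_files(root, keyword):
    """Naive local search: return paths of .txt files under `root`
    whose contents contain `keyword` (case-insensitive).
    A real indexer would also read tags/metadata and image
    descriptions, not just plain text, and would build an index
    instead of re-scanning every file per query."""
    matches = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".txt"):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    if keyword.lower() in fh.read().lower():
                        matches.append(path)
            except OSError:
                continue
    return matches
```

Even this brute-force version covers the “find the song by a lyric” case; the hard part being promised is doing it over images and metadata with a proper index instead of a full scan.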
Genmoji is a waste of space. The image generation is really bad (but then again, most of these platforms are). The writing tools are mediocre. About all that is moderately useful is that Siri seems a little better at processing commands.
If they want to start charging for this, I’m out.
I appreciate the summaries on my notifications. Some of my people text a book every time.
As the owner of a 13 Mini: what’s that?
We didn’t get anything at all from 18.1. There is literally nothing on my phone to tell me that it exists.
However, I do have it on both of my Macs, and have yet to do anything with it. I’m not in any hurry to bother either.
I tried it one time, and it’s just as “slop” as the rest of generative AI. CEOs have no taste
CEOs have no clue.
Techbro CEOs are especially susceptible to the hype train and then want it implemented somehow, despite the tech not living up to the imaginary magic bullet they got from their superficial info.
Probably worth noting, this survey was taken before 18.2 went live with a ChatGPT integration, image generation, etc.
Even with integrations, a lot of the automatic replies basically boil down to “yes, thanks” and “no thank you” to every text. It isn’t even like… A longer message. It’s just two or three words, tops. If I’m going to use AI to write my texts, it’s going to be for something longer than a “yes lol” text.
Agreed.
IMHO, the only truly useful thing is writing tools and Siri being able to query ChatGPT for complex questions instead of telling people to pull out their phone and search the web.
The stuff everyone was actually interested in is likely in 18.4. On-screen awareness, integration with installed apps, contextual replies, etc.
I have 18.2 and don’t even see how to use the AI features. The only thing I bothered to look up so far was how to use genmoji. But the option still doesn’t display in iMessage so I have no idea. Might as well not exist for all I can tell.
Weird. New installs usually get some sort of onboarding screen that explains how to activate the new stuff.
The 18.2 ChatGPT stuff can be manually enabled under Settings > Apple Intelligence > scroll way down > ChatGPT. Once enabled, writing tools and Siri will give you the option to send a query to ChatGPT instead of Apple’s model.
If Siri gets stumped, it will ask if you want to query ChatGPT. Or you can just prompt it with “Ask ChatGPT ______.”
Writing tools has it buried under “compose” which is at the very bottom of the writing tools sheet.
I was trying to generate AI images and it couldn’t handle anything… asking Siri questions amounts to nothing… it has a cool animation and sound for when you summon it, and that’s about all… it’s a fucking dud.
I went into settings on my phone and disabled it immediately
I feel like this can be generalized to AI as a whole for most people. I still don’t see much usefulness or quality of output in the scenarios where I’ve been exposed to LLMs.
Shitposting has never been easier though!
I feel the same way about AI as I felt about the older generation of smartphone voice assistants. The error rate remains high enough that I would never trust it to do anything important without double-checking its work. For most tasks, the effort that goes into checking and correcting the output is comparable to the effort I would have spent to just do it myself, so I just do it myself.
For programming it saves insane time.
Real talk though, I’m seeing more and more of my peers in university ask AI first, then spend time debugging code they don’t understand.
I’ve yet to have ChatGPT or Copilot solve an actual problem for me. Simple, simple things are good, but for any problem solving I find them more effort than just doing the thing.
I asked for instructions on making a KDE widget to get Weather Canada information, and it sent me an API that doesn’t exist and Python packages that don’t exist. By the time I fixed the instructions, very little of the original output remained.
One major problem with the current generation of “AI” seems to be its inability to use relevant information it already has to assess the accuracy of the answers it provides.
Here’s a common scenario I’ve run into: I’m trying to create a complex DAX Measure in Excel. I give ChatGPT the information about the tables I’m working with and the expected Pivot Table column value.
ChatGPT gives me a response in the form of a measure I can use. Except it uses one DAX function in a way that will not work. I point out the error and ChatGPT is like, “Oh, sorry. Yeah that won’t work because [insert correct reason here].”
I’ll try adjusting my prompt a few more times before finally giving up and just writing the measure myself. It does not have the ability to reason that an answer is incorrect even though it has all the information to know that the answer is incorrect and can even tell you why the answer is incorrect. It’s a glorified text generator and is definitely not “intelligent”.
It works fine for generating boilerplate code, but that problem was already solved years ago with things like code templates.
As a prof, it’s getting a little depressing. I’ll have students that really seem to be getting to grips with the material, nailing their assignments, and then when they’re brought in for in-person labs… yeah, they can barely declare a function, let alone implement a solution to a fairly novel problem. AI has been hugely useful while programming, I won’t deny that! It really does make a lot of the tedious boilerplate a lot less time-intensive to deal with. But holy crap, when the crutch is taken away people don’t even know how to crawl.
There seem to be two problems. One is obvious; the other is that such tedious boilerplate exists at all.
I mean, all engineering is divide and conquer. Doing the same thing over and over for very different projects seems like a fault in the paradigm. When making a GUI with Tcl/Tk you don’t really need that, but with Qt you do.
I’m biased as an ASD+ADHD person who hasn’t become a programmer despite a lot of trying, because there are a lot of things that don’t seem necessary but are huge, turning off my brain via both overthinking and boredom.
But still - students don’t know which work of what they must do for an assignment is absolutely necessary and important for the core task and which is maybe not, but practically required. So they can’t even correctly interpret the help that an “AI” (or some anonymous helper) is giving them. And thus, ahem, prepare for labs …
If you’re in school, everything being taught to you should be considered a core task and practically required. You can then reassess once you have graduated and a few years into your career as you’ll now possess the knowledge of what you need and what you like and what you should know. Until then, you have to trust the process.
People are different. For me personally “trusting the process” doesn’t work at all. Fortunately no, you don’t have to, generally.
When AI achieves sentience, it’ll simply have to wait until the last generation of humans that know how to code die off. No need for machine wars.
AI has absolutely wasted more of my time than it’s saved while programming. Occasionally it’s helpful for doing some repetitive refactor, but for actually solving any novel problems it’s hopeless. It doesn’t help that English is a terrible language for describing programming logic and constraints. That’s why we have programming languages…
The only things AI is competent with are common example problems that are everywhere on the Internet. You may as well just copy paste from StackOverflow. It might even be more reliable.
If you don’t mind a few hundred bugs
Yup. We passed on a candidate because they didn’t notice the AI making the same mistake twice in a row, and still saying they trust the code. Yeah, no…
It’s a nice way to search for content or answers without all the ads that websites have nowadays. Of course, it’s only a matter of time until the AI/LLM responses are surrounded by (or embedded with) ads as well.
Install Firefox and download uBlock Origin
llm and search should not be in the same sentence
Or it much prefers to give you answers from “partners.” For example:
Me: How can I find a good set of headphones?
AI: A lot of people look for guides and reviews to find a good set of headphones. The important features to look for are… <insert overcomplicated nonsense here>. This can be overwhelming, so consider narrowing the search to a reliable product line like those by Beats (or whatever advertiser). Do you want some links to well-reviewed products?
Ick…
Lol, that reminded me that I saw a pair of “Beats x Kim Kardashian”. Y’know, audio engineer/designer KK finally giving the public a taste of real performance combined with chic luxury aesthetics 🥴
I still can’t wrap my head around the “Beats” craze. It’s like headphones didn’t already exist, lol!
Same. I’m not opposed to it existing, I’m just kind of… lukewarm about it. I find the output overly verbose and factually questionable, and that’s not the experience I’m looking for.
Even with other forms of generative AI, there are very few notable uses for it that aren’t just a gimmick or having fun with it, and that aren’t achievable via other means.
Being able to add a thing to a photo is neat, but also questionably useful when it’s doable with a few minutes of Photoshop.
I’ve a friend who claims it can be useful for scripts and quick data processing, but I’ve personally not had that experience when giving it a spin.
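For scale, the kind of “quick data processing” script people lean on LLMs for is usually something this small. A hypothetical example (the CSV columns here are invented for illustration):

```python
import csv
import io
from collections import defaultdict

def total_by_category(csv_text):
    """Sum the 'amount' column per 'category' in a CSV string --
    the sort of throwaway one-off people ask an LLM to write."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["category"]] += float(row["amount"])
    return dict(totals)
```

Whether an LLM saves time here mostly depends on whether it gets the column handling right on the first try; at this size, writing it yourself is often just as fast.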
They need to release Apple Strength and Apple Dexterity to make the experience more complete
That would require Apple wisdom
I can’t afford Apple Wisdom.
No Apple Luck
Hard to argue against a nice quality build.
but Apple Karma is so wrecked, it’s hopeless
The only bit of excitement I’ve experienced about this was when they announced it will be force-disabled in Europe, so I didn’t have to turn it off myself.
Oh don’t worry, it’s coming to the eu
That’s horrible news
Tbf, it’s opt-in for now, so it isn’t that bad yet.
Shock, I tell you. Absolutely shocked.
Daily iPhone user. Haven’t really noticed any difference. They really pushed how tightly integrated the experience would be, but honestly, I don’t really notice.
Maybe they integrated it so well that it looks exactly the same as what they started with.
From my experience, iOS actually got dumber. At least the keyboard did, which is annoying. There’s a certain way keys respond to what you type that has been a thing since the first iPhone. But two updates ago or so, they butchered it completely (especially if you type in German), making texting pretty difficult at times. I’ve asked other users, and some of them experience the same issue: certain keys just do not want to get tapped sometimes because the algorithm expects something else, making the hitboxes of unwanted keys way too big. Needless to say, I’m not ready to trust Apple’s Intelligence just yet.
I experience this way too much. I’m nostalgic for when all of the problems I had with computers (broadly) were because I did something wrong… not because the computer is trying to fix something or guess something or anticipate something. Just let me type.
Yesterday, I typed out the letters of a word I wanted, and after typing a second word, I saw my iPhone “correct” the first word I typed to something else entirely. NO. Stop assuming I made a mistake. You cause more problems than you solve.
It’s crazy because they’ve tried to ‘fix’ something that wasn’t broken at all. It was one of the best features. Most users didn’t even notice there was an algorithm behind their keyboard. It just felt natural. But now it’s so aggressive, texting can almost feel like a warzone.
The iOS keyboard is one of the worst pieces of software I’ve ever used. It is actively hostile towards what I’m trying to type.
Is this a documented feature? That it modifies the hitboxes for keys as you’re typing, based on the likely next key press?