Like most of you, I used reddit as my sole source for finding information. Looking to hear your guys’ thoughts on this topic, and hopefully you can explain and share some knowledge in a more sophisticated manner than I can. (also, I hope this is an appropriate place to post?)
I have run into this discussion a few times across the fediverse, but I can’t for the life of me find those threads and comments lol
I believe that a non-corporate-owned platform with user-generated information is optimal, like wikipedia. I don’t know the technicalities, but I feel like AI can’t replace answers drawn from human experience - humans who are enthusiasts and care about helping each other, not making money. This is one of those things where I feel like I know the “best” way to find information, but I don’t know the deeper answer as to why, or what makes the other platforms worse (aside from the obvious ads, bloatware, and corporate greed)
I don’t know much about this topic, but I’m curious if you guys have actual real answers! Thread-based services like this and stack overflow (?) vs chatgpt vs bing vs google, etc.
EDIT: Wow, all your responses are fantastic. I’m not very knowledgeable about the subject so I can’t really continue everyone’s responses with a discussion, but I love and appreciate the insight in this thread! But I’ll try to think of some follow up questions :)
To generate answers is not to search for answers. If I need a search engine, I want a search engine. If I need a text generation model, I want a text generation model.
Machine learning seems to be very good at generating believable persuasive writing, and not at all good at determining truth from fiction, even worse than people. This is an absolutely deadly combination and our rush to use it in this capacity is profoundly stupid.
I’m not against these algorithms, mind you. I think they have a lot of useful potential. It’s just that the first uses people have dived into seem to me to be the absolute most foolhardy ways to apply it.
I completely agree. It makes sense that AI is not good at determining truth vs fiction. I think it’s more important for us as users to just search for information on our own, then determine the “end answer” with our own judgement after reviewing different sources and experiences (taking each individual answer with a grain of salt)
That’s why I personally don’t think an AI search engine will be the best all-rounder for every type of information. Niche, deep searching is IMO better done on forum-like platforms where people (enthusiasts) share sources, their experiences, what worked, what didn’t work, and why. AI is maybe better for simple, bland information, like an excel formula, or how to hot wire a car
yeah, AI does perform very well when given a specific and goal-oriented task. I think the coolest use I’ve seen for it was an emergency doc who was getting it to write explanatory documents for patients. Like “Please write a friendly, empathetic, simple-english explanation for why CPR would not be effective on a frail person with severe osteoporosis and advanced dementia” and things. This allows the doc to give the patient more detail than they’d have time to present, but it can be very closely tailored to the scenario, and it’s the sort of information AI shines at producing.
The worst part about ai as a search engine is that it doesn’t (or at least can’t reliably) give you the original source. It can tell you lots of stuff but there’s no link to a news article or wiki page where it got it from. A traditional search engine can give you unreliable results, but at least you can look at them yourself and decide if they’re reliable or not. An AI search engine has you just take what it says at face value, true or not.
That recent instance where lawyers used AI to write their defence is a great example of the problem. It even included “citations” and summaries of those “citations”. Except they were completely made up. And then when asked for the source, it made up the entire source.
If you read everything it cites (retrieving it from an independently verified source) in its entirety (to make sure it exists and the summary is accurate) then it can still be helpful, but at that point it’s probably easier not to use it in the first place.
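To make that verification step concrete, here’s a minimal Python sketch of cross-checking AI-supplied citations against an independently maintained index before trusting them. The index here is a stub dict, and the DOIs and titles are made up for illustration; in practice you’d query a library catalog or a DOI lookup service instead:

```python
# Hypothetical sketch: treat every AI-supplied citation as unverified
# until it matches an independently maintained index. The index below
# is a stub; the DOIs and titles are invented for illustration only.
TRUSTED_INDEX = {
    "10.1000/example.1": "Real Paper on Search Quality",
}

def verify_citations(citations):
    """Split (doi, claimed_title) pairs into verified vs. unverifiable."""
    verified, unverifiable = [], []
    for doi, claimed_title in citations:
        real_title = TRUSTED_INDEX.get(doi)
        if real_title and real_title.lower() == claimed_title.lower():
            verified.append(doi)
        else:
            unverifiable.append(doi)  # possibly hallucinated
    return verified, unverifiable

cites = [
    ("10.1000/example.1", "Real Paper on Search Quality"),
    ("10.9999/made.up", "Totally Plausible-Sounding Study"),
]
print(verify_citations(cites))  # second citation is flagged
```

The point of the sketch is exactly the comment above: the checking still has to go through a source the model didn’t generate, at which point you may as well have started there.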
Bad. Chatbots can and have given out wrong, nonsensical, and potentially dangerous info. All they do is synthesize info, and that includes the same bad info that made search engines less useful in the first place.
Deep learning randomly recombines its input (e.g., text OpenAI found crawling Reddit and Stack Overflow) based on how frequently various groups of words are used together in its input. So its output is not “information” in the ordinary sense, or if you like, the only information you can glean from it is a vague sense of how other people have used words before, in general.
As a result, its credible use cases are pretty slim. If you need large quantities of extremely banal text, ChatGPT is your man. If you want to learn something, look elsewhere.
Though I will mention the use case I’m exploring at the moment, with… moderate success: using it as a Spanish chat bot, since I really need to learn Spanish.
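For anyone curious what the frequency-based recombination described above looks like at its very simplest, here’s a toy bigram sampler in Python. Real LLMs are neural networks over tokens, not lookup tables, so this is only an illustration of the statistical flavor, not how they actually work internally:

```python
import random
from collections import defaultdict

# Toy illustration: pick the next word based purely on how often
# word pairs co-occurred in the "training" text.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Recombine the corpus by repeatedly sampling a likely next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        choices = bigrams.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))  # fluent-looking, but carries no new facts
```

The output is always locally plausible because every adjacent word pair appeared in the input, which is the comment’s point: plausibility of phrasing, not truth of content.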
I’m with you on this one. Personally, there are a myriad of issues with replacing search engines with AI-generated answers:
- the accuracy. Without going into what is truth or falsehood, can you trust AI generated answers? I use Brave Search occasionally, and it has an AI summary text at the top. A lot of the time it strings multiple conflicting answers together into a paragraph and the result is laughably bad.
When I look something up that isn’t trivial, I typically use multiple search results and make the call myself. This step is removed if you use AI, unless you explicitly ask it to list all the top conflicting answers (along with sources) so you can decide for yourself. However, as far as I know, its amalgamated answer is treated as a source of truth, even if the content has nuanced conflicts a human can easily spot. This alone deters me from AI search in general.
- I feel like doing this will degrade my reading/skimming comprehension and research skills, and can lead to blindly trusting direct and easy-to-access answers.
- In the context of technical searches like programming or whatnot, I’m not that pressed for time to take shortcuts. I don’t mind working stuff out from online forums and documentation, purely because I enjoy it and it’s part of the process.
- Sometimes, looking things up yourself means you can also discover great blogs and personal wikis from niche communities, and related content that you can save and look back on later.
- Centralizing information makes the internet bland, boring, and potentially exploitative. If it becomes normalized to pay a visit to one or two Big AI search engines instead of actually clicking on human-made sources, then the information-providing part of the internet will become lost to time.
There are also problems with biases, alignment, training AI on AI-generated content, etc. Make of that what you will, but it sounds worse than spending a couple of minutes selecting sources for yourself. Top results are already full of generic, AI-generated stuff. The internet, made by us, for us, must prevail.
Anecdotally, I’ve used ChatGPT once or twice when I was really pressed for time with something I couldn’t find anywhere, and because my university professor wasn’t replying to my email regarding the topic. I was somewhat impressed at its performance, but this was after 6 or 7 prompts, not a single search away.
Maybe the next generation of AI search users, who’ve never looked a thing up manually, will grimace at the thought of pre-AI search engines.
I’ve been trying out an IaC service’s (Pulumi) chatbot to answer questions about how to spin up architecture. It’s really bad. It totally makes up properties that don’t exist and at times spits out code that doesn’t even make sense syntactically. Not to mention that the code it generates has the potential to cost not-insignificant amounts of money.
Definitely not a replacement for stack overflow, github, forums, or random blog posts. Not for a service that spins up critical infrastructure. Like, you have to know to some degree how that stuff works. And if you know how that stuff works, what’s the point of the service? Saving a few minutes typing stuff out and looking at documentation?
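One partial defense against those made-up properties is mechanically validating generated config against the provider’s actual resource schema before anything gets deployed. A hypothetical sketch in plain Python (the schema stub and property names here are illustrative, not Pulumi’s real schema):

```python
# Hypothetical sketch: flag properties in AI-generated IaC config that
# don't exist in the provider's schema. KNOWN_PROPS is a stub; a real
# check would load the actual provider schema.
KNOWN_PROPS = {
    "aws:s3/bucket:Bucket": {"acl", "tags", "versioning"},
}

def check_resource(resource_type, props):
    """Return the property names not present in the known schema."""
    allowed = KNOWN_PROPS.get(resource_type, set())
    return [p for p in props if p not in allowed]

bad = check_resource(
    "aws:s3/bucket:Bucket",
    {"acl": "private", "autoMagicReplication": True},  # second one is invented
)
print(bad)  # ['autoMagicReplication']
```

Of course, this only catches nonexistent properties, not code that is syntactically valid but does the wrong (or expensive) thing, which is the commenter’s deeper objection.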
I hate it. And I’d still hate it even if the AI passed the Turing test with flying colours, and demonstrated itself to be as knowledgeable and smart as a human specialist in the relevant field.
I don’t want over-simplistic answers. I want to understand the deep reasoning behind them, because sometimes a tiny detail might change that “yes” into a “no”, and I want to use that to my own benefit. But at the same time, I don’t want things explained to me that I already know.
The solution for me is not a single answer. It’s a half dozen walls of text, written in a way that I can skip info that I already know and look straight into the info that I don’t. So for me a search engine works the best when it gives me relevant links, not when it tells me “here’s your answer”.
I personally do not like the idea of AI-powered “search” engines, since AI has been known in the past to absolutely make stuff up and cite fake articles that don’t actually exist.
I don’t remember the exact article, but I do remember the story of either a lawyer or law professor (I can’t remember which) who asked an AI chatbot about himself and it came up citing a fake news article about him having sexual relations with a student of his (if I am remembering this all correctly).
Also, I prefer a traditional search where I’m given a ton of varying links to different web pages displayed in a list, so I can open a link, and if I don’t find what I’m looking for, just close it and try another one. Compare that to any time I’ve used the Perplexity chatbot, where at most, at the end of each response, I’m given a few different links that may or may not contain the answer I’m looking for, if they’re even legitimate.
This is probably a bit of a pessimistic take, but it feels like Google and some of the other search engines are already essentially giving you AI results in the form of the top content they display. For many searches, what you’ll see are a variety of pages either written with AI or so heavily SEO-optimized that it’s clear they’re written to maximize ad revenue, not to help people find real answers. I think that sort of thing is inevitable with the monetization issues we have today, so I’m not sure what the answer is. Personally I don’t ever use generative AI to give me a trustworthy answer. I think it’s better employed for coming up with ideas or spurring creativity. Folks using it for fact checking should probably look elsewhere.
I do agree with you that a forum of answers from real people, something like what Reddit became, is probably the ideal. And I think there are some industry-specific sites that achieve this reasonably well, like G2 for software and business reviews.
Edit: As an aside, information literacy is truly one of the great social problems of the day. For example, I can’t count the number of times I’ve seen folks screenshot the blurb from Google that “answers” a question and use it to try to prove their point in an online argument. Yes, that works fine in some instances, but the reliance on that snippet is what’s concerning to me.
It can be challenging and time-consuming to find real information, and the state of current search engines only exacerbates the problem.
Without solving the “hallucination problem”, it’s very risky to let this become mainstream. It’s also extremely expensive, since running LLM prompts costs far more energy than simple searches. Right now, running smaller models locally seems more interesting.
Also, it seems more useful not as chat, but for voice assistants.
I think people are way too quick to dismiss AI on the basis that it’s not always factual. Searching for stuff and adding “reddit” is a great way to get non-factual information as well. Everyone that has great insight into a subject knows how horrible many highly upvoted comments are.
Whether you use AI, Reddit or Google, you have to do a quick analysis of how credible it seems. I use all three of them, but more and more AI for niche searches that are hard to get good results for.
Yea, I sort of agree - that’s kind of why I think that doing research yourself by looking across dozens of sources, posts, and comments, then making your own judgment call is the way. Idk I guess maybe it’s just my experience, but I usually find that a comment with misinformation is downvoted to oblivion with responses as to why it’s wrong, and the most helpful solution is usually upvoted, with replies like “you’re a life saver” or “this is the real answer. thank you!!”. Obviously I don’t mean like 100% of content is like this, there will be bad content everywhere, but I take every solution with a grain of salt while looking at other solutions, and decide for myself (or maybe I misunderstood you lol)
There are sooooooo many times, way more than I can count, that I had an INCREDIBLY niche problem (usually tech related) and bam, someone on reddit had the same issue, and either figured it out & posted the solution, or the wack solution is in the comments. I never ever find this information on other random articles or “official help threads”. This happened so much that I didn’t google a single piece of information without adding “reddit” to it
I feel like AI would be better for simple things like an excel formula, or ordinary information? For now, at least. But, I am trying to learn more about this subject and truly see the legitimate capabilities of AI
Then again, what do I know lol
I mostly have experience with Bing. And it’s because they keep forcing their shitty AI search splash page on me every time I want to do a normal web search. I turned it off in the Edge browser but what do you know, it keeps coming back.
Any new feature a company repeatedly forces on me is going to be starting from a hole it has to dig out of. The bigger the corporation, the more immediately resistant I will be to it. “ChatGPT” and “AI” as the latest buzzphrases grate on me.
Outside the big corporations, I’m keen to tinker around with it some. I’ve done some machine learning stuff in years past, but this is a large step change in what is available to hobbyists.
AI-generated search is a huge improvement over what we had before. Before, when you searched a question or topic that didn’t have a Wikipedia page or an easy answer, you got a ton of SEO spam. Stack Overflow is still a great resource for programming, but for most general knowledge, AI-generated is so much more useful.
I’ve been using Brave Search and the AI summarizer is pretty good, and I get to avoid loading shitty websites. For specific questions that aren’t easy for search engines to answer, I really like how ChatGPT is conversational and lets you ask followup questions. Another thing I’ve been using is perplexity.ai, it can actually search the internet and cite sources.
Overall, AI has been a big help to me, and lets me avoid going to other websites which are usually just awful now. Most websites are full of ads, trackers, cookie notices, “Checking your connection”, I’m done with dealing with that. Search engines are there when I forget the domain for a service or link to a GitHub project.
I definitely think ai search engines are the next step. The way most people use Google is already a human readable prompt which gpt handles very well. We just need to improve the results and figure out a way for it to not steal and suppress the views from the websites.
Interesting, I think I agree with you on this. It could be better than traditional searching, but only if it is able to pull accurate organic content with sources. I think only then would it be more accurate and efficient than looking through forum-like platforms.
Discussions and comments are super important too so I guess it would have to pull sources that include that, which I guess could work? That’s super important for probably everything, because you might see comments that say
"add 1/4 c flour instead of 1/3 and it was perfect"
or "I used this and it caused a spark in my usb port, here's what I did and my setup, take caution"
or "if you use a 3 monitor setup though, be careful using 2 hdmi and 1 dp, for these reasons, 3 dp is better for these reasons"
or "if you want a more efficient way to farm this item, talk to this npc and do this quest instead"
etc (I just made those up for examples) - but the point is that people comment on posts with tweaks, improvements, warnings, positive feedback, negative feedback, etc. That’s super valuable for making a final decision on your own about the problem, which is partially why I don’t think AI will ever be the most successful way to find information, because I don’t know if it can achieve this more efficiently than forum-like platforms