They support Claude, ChatGPT, Gemini, HuggingChat, and Mistral.
The fact that I can’t choose one of the many AI models I have downloaded locally on my computer is bogus
Unpopular opinion, but I think they’re doing it right, or at least as well as it can be done. It’s completely optional and doesn’t seem to be intrusive.
yeah it’s not google chrome level, which i’m thankful for.
I’m way more pissed about restarting my PC after an update and having Copilot installed without my permission.
I agree
I don’t understand the hate. It’s just a sidebar for the supported LLMs. Maybe I’m misunderstanding?
Yes, I would prefer Mozilla focus on the browser, but to me, this seems like it was done in an afternoon.
It seems like common cynicism. Mozilla adds this feature so as not to cede major features to other browsers, and Mozilla’s implementation lets you natively pick from lots of different AI solutions.
Not every feature is for everyone. Not every feature is done being improved on at release.
And in spite of popular opinion, organizations don’t just do one thing, then the next thing, then the thing after that. Organizations can and do focus on and prioritize many things at the same time.
And for people who are naysaying AI at every mention, it has a lot of great and fascinating uses, and if you think otherwise, you really should try them more. I’ve used it plenty for work and life. It’s not going away, might as well do some nice things with it.
I want my browser to be a browser. I don’t want Pocket, I don’t want AI, I don’t want bullshit. There are plugins for that.
that’s the great thing: you don’t have to use it
But Firefox wastes time developing that instead of fixing 20-year-old bugs.
i know it is an unpopular opinion around here, but currently AI features open doors for sales. that is important.
for the software i help develop, we introduced an optional AI integration. just its presence allowed us to sell the main software multiple times. the AI plugin itself has never sold so far.
investment in AI: 2 weeks of glue code. i am not concerned with finances, but that plugin is for sure net positive.
Do users like the AI integration, or is this just something the management class wanted to see? Right now, those clothes look crazy good on that emperor…
right now we don’t have any real customers that use it - as the plugin did not sell yet.
but from testing at customer sites with real people that would use it - we got only positive feedback. which is not hard to imagine: the RAG + LLM enables less experienced users to navigate a huge and complex network of information.
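A minimal sketch of what the retrieval half of such a RAG setup does (toy word-overlap scoring stands in for real vector embeddings, and all names and documents here are made up for illustration):

```python
# Toy RAG retrieval: score documents by word overlap with the query,
# then stuff the best match into the prompt sent to an LLM.
# Real systems use vector embeddings; overlap scoring is a stand-in.

def score(query: str, doc: str) -> float:
    """Fraction of query words that also appear in the document."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved context so the LLM answers from it."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The export job runs nightly at 02:00 and writes CSV to /srv/exports.",
    "Password resets are handled by the identity team via the admin portal.",
]
print(build_prompt("when does the export job run", docs))
```

The point is that the LLM never has to memorize the knowledge base; the retrieval step surfaces the relevant chunk, which is why even a small local model can help new users navigate a big pile of internal docs.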
but it for sure is also a buzzword execs like to see: they talked to us because we have AI. saw that the main product is good. bought the main product and decided the AI is too expensive.
in the end it doesn’t matter to me. the 2w of AI was a fun sidequest and it left us with a passive boost for sales.
That was there before 133, don’t remember the exact release that added it.
Didn’t want it in Opera, don’t want it in Firefox. I mean they can keep trying and I’ll just keep on ignoring this shit :/
hopefully, it’ll be possible to opt out somehow.
as the screenshot shows, it is opt-in
Thing is, for your average user with no GPU and who never thinks about RAM, running a local LLM is intimidating. But it shouldn’t be. Any system with an integrated GPU can run simple models locally, and the more RAM the better.
The not-so-dirty secret is that ChatGPT 3 vs. 4 isn’t that big a difference, and neither is leaps and bounds ahead of the publicly available models for about 99% of tasks. For that 1%, people will ooh and aah over it, but 99% of use cases see only marginal gains on 4o.
And the simplified models that run “only” 95% as well? They can use 90% fewer resources and give pretty much identical answers outside of hyperspecific use cases.
Running a “smol” model, as some are called, gets you all the bang for none of the buck, and your data stays on your system and never leaves.
I’ve been yelling from the rooftops to some stupid corporate types that once the model is trained, it’s trained. Unless you are training models yourself, there is no need for the massive AI clusters, just for the model. Run it locally on your hardware at a fraction of the cost.
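To put rough numbers on the resource savings: the memory needed just to hold a model’s weights is roughly parameters × bytes per parameter, so quantizing from 16-bit down to 4-bit weights cuts the footprint about 4×. A back-of-the-envelope sketch (lower bounds only; real runtimes add overhead for activations and KV cache):

```python
# Rough memory needed just to hold model weights:
# parameters * bytes-per-parameter. Runtime overhead (KV cache,
# activations) comes on top, so treat these as lower bounds.

def weight_gb(params_billions: float, bits_per_weight: int) -> float:
    """Decimal GB of memory for the weights alone."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{weight_gb(7, bits):.1f} GB")
# 16-bit: ~14 GB, 8-bit: ~7 GB, 4-bit: ~3.5 GB
```

That 4-bit figure is why a quantized 7B model fits in ordinary desktop RAM or a modest GPU, while the full-precision version doesn’t.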
There’s the tragedy with this new feature: they fast-tracked this past more popular requests, sticking it into Release Firefox.
But they only rushed the part that connects to third parties. There was also a “localhost” option, originally alongside the Big Five corporate offerings, but Mozilla ultimately decided to bury that one inside the about:config settings. I’m guessing the reason (and a good one at that) is that merely having an option to connect to a local chatbot would leave users confused, because they also need the actual chatbot running on their system. If you can set that up, then you can certainly toggle a simple switch in about:config to show the option.
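For reference, the switch in question is a set of about:config prefs. These names have been reported for recent Firefox builds, but treat them as unverified and subject to change between releases:

```
browser.ml.chat.enabled       = true                    (turn the AI sidebar on)
browser.ml.chat.hideLocalhost = false                   (reveal the localhost provider option)
browser.ml.chat.provider      = http://localhost:8080   (URL of your local chatbot UI)
```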
Can you point me to some resources on running a smol LLM?
My use case is prob just to help “type up” miscellaneous ideas I have, or check for grammatical errors, in English.
Thanks in advance.
Here you go: Review of SmolVLM https://www.marktechpost.com/2024/11/26/hugging-face-releases-smolvlm-a-2b-parameter-vision-language-model-for-on-device-inference/
Model itself: https://huggingface.co/spaces/HuggingFaceTB/SmolVLM
And you can use Ollama to run it locally, and Open WebUI to access it in browser.
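For simple scripting you don’t even need Open WebUI: an Ollama server exposes a plain HTTP API on port 11434. A minimal sketch using its documented `/api/generate` endpoint (assumes `ollama serve` is running and you’ve pulled some model; the model name below is just an example):

```python
# Minimal client for a local Ollama server. Assumes `ollama serve`
# is running and a model has been pulled (e.g. `ollama pull llama3.2:1b`).
# Uses Ollama's documented /api/generate endpoint; no third party involved.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """JSON body for a single non-streaming completion."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(model: str, prompt: str) -> str:
    """Send the prompt to the local server and return the model's reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# usage (requires the server to be running):
#   print(ask("llama3.2:1b", "Fix the grammar: 'me and him goes to store.'"))
```

Everything stays on your machine, which is the whole point of the local route.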
Idk I noticed pretty significant differences between models of various sizes. I mean there are lots of metrics on this
Last time I tried using a local llm (about a year ago) it generated only a couple words per second and the answers were barely relevant. Also I don’t see how a local llm can fulfill the glorified search engine role that people use llms for.
Try again. Simplified models take the large ones and pare down their memory requirements, and can even be run off the CPU. The “smol” model I mentioned is real, and hyperfast.
Llama 3.2 is pretty solid as well.
These are the answers they gave the first time.
Qwencoder is persistent after 6 rerolls.
Anyways, how do I make these use my GPU? The ollama logs say the model will fit into VRAM / offloading all layers, but GPU usage doesn’t change and the CPU gets the load. And regardless of the model size, VRAM usage never changes and RAM only goes up by a couple hundred megabytes. Any advice? (Linux / Nvidia) Edit: it didn’t have CUDA enabled apparently, fixed now
Nice.
Yea I don’t trust any AI models for facts, period. They all just lie. Confidently. The smol model there at least tried and got it right at first… Before confusing the sentence context.
Qwen is a good model too. But if you want something to run home automation or do text summaries, smol is solid enough. I’m using CPU, so it’s good enough for me.
They’re fast and high quality now. ChatGPT is the best, but local LLMs are great, even with 10 GB of VRAM.
Luckily, it seems to be disabled by default. At the moment.
They better not decide to enable it by default.
it’s not enabled by default … it’s opt out by default
I think that means that it’s opt-in.
if third-party accounts are needed, it’ll have to stay that way.
I wish I had telemetry on such features.
I really doubt a significant number of people use AI chatbots often enough that having it in a dedicated sidebar is worth it.
I think nobody uses AI Chatbots, unless you’re forced to do it. They’re utter shit.
I wish I had telemetry
I’m sure they do as Mozilla is an ad company
This is apparently either not widely known or some people just like to shoot the messenger.
- jwz, Jun.: Mozilla is an advertising company now
- jwz, Oct.: Mozilla’s CEO doubles down on them being an advertising company now
- Mozilla support: Share data with Mozilla to help improve Firefox
- Firefox documentation: Telemetry
While you are not wrong, your dislike of Mozilla has more to do with your instance being anti-West. I’m not sure I’m ready to side with lemmyml
I happen to know jwz personally, and he knows Mozilla intimately: he founded it. His dislike of Mozilla is pretty much the same as mine, and he is neither “anti west” nor anti liberal. We dislike Mozilla because it has lost its way from being a FOSS browser maintainer and a booster for & steward of an open web.
And I’m not “anti-West,” I’m anti-capitalist, anti-settler-colonialist, and anti-imperialist; and those happen to be things that “the West” presently embodies.
“Your instance is anti-West” means that it endorses everything you listed as bad, as long as it’s from a nation that has enough red in its flag.
There’s no time like the present to switch instances.
I’m pretty sure I know what the instance I admin does & doesn’t endorse, thanks.
Oh, so it was just empty virtue signaling and hypocrisy? You’re welcome.
I’ve never had the urge to use a chat bot personally, but I’m pretty sure I’m in the minority. Lots of people use these things all the time for so much stuff we probably wouldn’t even consider.
I’ve worked with a few people that all but rely on these things to produce any creative work they have to do.
Maybe we run in different circles but I think a lot of people don’t even talk about how they’re using it.
Thanks for nothing, Mozilla.
They should raise the ceo’s pay some more to celebrate.
And fire a few employees just cause.
This happened ages ago, didn’t it? Am I missing something new?
Yeah, it did. That feature has been there at least since Mozilla enabled the “Firefox Labs” section in settings by default a few months ago, and maybe even earlier than that.
TIL a month is an age.
Well, this month in particular…
True. ❤️
I only saw it now, maybe it happened before on a different version.
I mean, if you’re going to do it, where’s the Ollama love?
I was disappointed there was no local option…
I don’t get it, ollama is a provider no?
I think the point is it’s open source
and so is firefox, so why use another model provider
A provider that can be run locally.
Sigh. I’m glad to have switched to LibreWolf.
I switched a while back, before all the AI and “privacy-preserving” telemetry stuff.
Every update note I see for Firefox now just reinforces my decision.
I wonder if this can be removed at compile time, like Pocket.