Don’t play, we were all burned by the high-temp superconductor scam recently
Irony lol
Sure, don’t feel too bad for him though, he did abuse her and his kids
In my younger life there was an older man who I spoke with quite a bit, an acquaintance
He spent about 5 years married to his first wife, who died.
He spent the next 30 years in relationships constantly pining over what could have been, never satisfied
Finally he turned 60 and his 4th wife got sick of him and divorced him
He was sad and lonely for 15 years after that, constantly dwelling on what could have been with his last wife, lost and not understanding why she left.
He’s married again now, at 75, and still talks about his prior wife.
Contentment is not found in relationships, it comes from within, and bubbles up to whatever situation you find yourself in. Don’t fall for the lie that you are a failure without a significant other.
That is a tall order, though perhaps a shorter one than writing an OpenGL driver
Vulkan is incredibly verbose and well documented, have you looked at the spec?
I thought ByteDance said they’re going dark anyways? Have they reversed their position?
We’re pathetically small and unintelligent on a universal scale, infinitesimal and unremarkable, it’s amazing we can think at all.
Unfortunately, because we are meat with a lil lightning in it, grown naturally instead of designed and perfected, our brains are simply not truth machines. Just like LLMs hallucinate, so do we, constantly.
Try to have some patience with our sibling humans who have the infection of superstition. They didn’t choose it, we all have our foibles, theirs are just a bit more visible and easily turned to hate
Unfortunately, return-to-office mandates were never about worker preferences. It’s obvious workers prefer WFH, but with employers holding all the cards, of course they’ll push for a return to office
We’ve all played Pandemic
I do this on my Ultra. Token speed is not great, depending on the model of course. A lot of codebases are optimized for Nvidia and don’t even use the native Mac GPU without modifying the code, defaulting to CPU instead. I’ve had to modify about half of what I run (rough device-selection sketch below)
Ymmv but I find it’s actually cheaper to just use a hosted service
If you want some specific numbers lmk
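For the PyTorch-based repos specifically, the change I keep making is swapping a hard-coded cuda device for a fallback chain that tries Apple’s MPS backend first. A minimal sketch of that, not from any particular repo, just the general shape of the patch:

    import torch

    # Pick the best available backend: Apple-silicon GPU (MPS) first, then CUDA, then CPU.
    # Many repos hard-code "cuda" and end up silently running on CPU on a Mac.
    if torch.backends.mps.is_available():
        device = torch.device("mps")
    elif torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")

    print(f"running on {device}")

    # Tiny sanity check: allocate on the chosen device and do some work there.
    x = torch.randn(1024, 1024, device=device)
    y = x @ x
    print(y.device, y.shape)

From there you thread that device into the repo’s .to(...) calls wherever it builds the model and its batches; that’s usually the whole patch.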
Final Fantasy Tactics, Dragon Quest Monsters
My brother in name
And neither will anyone else with a paywall in place lol
Ngl I’d give up my firstborn to have him instead of Trump
No backlight or front light, 3700 mAh battery, $650
I love eink tablets but I’m not sure this one really piques my interest
DeepSeek V3 is the model in question
They’ve already started testing that at Google, for ad enhancement and for immersive ads. There’s no way they keep the chat models pristine and ad-free
The dystopian future of “pay to use this miraculous product or it will shove advertisements down your throat in a way we know will work because we’ve trained it to sell specifically to you”
Llama is good and I’m looking forward to trying DeepSeek V3, but the big issue is that those are the frontier open-source models, while 4o is no longer OpenAI’s best-performing model. They just dropped o3 (god, they are literally as bad as Microsoft at naming), which shows tremendous progress in reasoning on benchmarks
When running Llama locally I appreciate the matched capabilities like structured output (sketch below), but it is objectively significantly worse than OpenAI’s models. I would like to support open-source models and use them exclusively, but dang, it’s hard to give up the results
I suppose one way to start for me would be dropping Cursor and Copilot in favor of their open-source equivalents, but switching my business to Llama is a hard pill to swallow
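To be concrete about the structured output piece, here’s a rough sketch of what local JSON-constrained output looks like, assuming an Ollama-style server on the default port; the endpoint, the "format": "json" flag, and the model tag are just the common local setup, not anything OpenAI-specific:

    import json
    import requests

    # Ask a locally served Llama model for JSON-only output via Ollama's /api/generate.
    # "format": "json" turns on Ollama's JSON mode, which constrains the output to valid JSON.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",  # whatever model tag you have pulled locally
            "prompt": "Return a JSON object with fields name and year for your favorite video game.",
            "format": "json",
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()

    # The generated text lives in the "response" field; parse it as JSON.
    print(json.loads(resp.json()["response"]))

It works, but the adherence and the quality of what ends up in those fields is where the gap with OpenAI’s structured outputs shows up for me.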
Hilarious, “fastify, a web framework” right there on the page