• 2 Posts
  • 170 Comments
Joined 2 years ago
Cake day: July 10th, 2023





  • In my younger life there was an older man I spoke with quite a bit, an acquaintance.

    He spent about 5 years married to his first wife, who died.

    He spent the next 30 years in relationships, constantly pining over what could have been, never satisfied.

    Finally he turned 60 and his 4th wife got sick of him and divorced him.

    He was sad and lonely for 15 years after that, constantly sad about what could have been with his last wife, lost and not understanding why she left.

    He’s married again now, at 75, and still talks about his prior wife.

    Contentment is not found in relationships; it comes from within and bubbles up to whatever situation you find yourself in. Don’t fall for the lie that you are a failure without a significant other.




  • We’re pathetically small and unintelligent on a universal scale, infinitesimal and unremarkable; it’s amazing we can think at all.

    Unfortunately, because we are meat with a lil lightning in it, grown naturally instead of being designed and perfected, our brains are simply not truth machines. Just like LLMs hallucinate, so do we, constantly.

    Try to have some patience with our sibling humans who have the infection of superstition. They didn’t choose it, and we all have our foibles; theirs are just a bit more visible and more easily turned to hate.




  • I do this on my ultra. Token speed is not great, depending on the model, of course. A lot of source code is optimized for Nvidia and doesn’t use the native Mac GPU without modification, defaulting to CPU instead. I’ve had to modify about half of what I run.

    Ymmv, but I find it’s actually cheaper to just use a hosted service.

    If you want some specific numbers, lmk.
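For context, the modification usually needed is replacing a hard-coded `cuda` device with a fallback chain that also checks Apple's Metal backend. A minimal sketch, assuming PyTorch; the `pick_device` helper is hypothetical, and in a real script you'd pass it `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
# Sketch: prefer CUDA, then the native Mac GPU (MPS), then CPU.
# Many repos hard-code "cuda", which is what silently forces
# CPU fallback (or crashes) on a Mac.

def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the best available torch device string."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"  # Metal Performance Shaders backend on Apple silicon
    return "cpu"

# With real PyTorch this would be:
#   device = pick_device(torch.cuda.is_available(),
#                        torch.backends.mps.is_available())
print(pick_device(False, True))  # → mps
```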











  • llama is good and I’m looking forward to trying deepseek 3, but the big issue is that those are the frontier open source models, while 4o is no longer openai’s best-performing model: they just dropped o3 (god, they are literally as bad as microsoft at naming), which shows tremendous progress in reasoning on benchmarks.

    When running llama locally I appreciate the matched capabilities like structured output, but it is objectively significantly worse than openai’s models. I would like to support open source models and use them exclusively, but dang, it’s hard to give up the results.

    I suppose one way for me to start would be dropping cursor and copilot in favor of their open source equivalents, but switching my business to llama is a hard pill to swallow.
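For what it's worth, the "matched capabilities" point about structured output usually means a local server that speaks the OpenAI-compatible chat API. A minimal sketch of building such a request; the model name and endpoint semantics here are assumptions, not any particular server's spec:

```python
import json

# Sketch: a request payload for an OpenAI-compatible endpoint
# (e.g. a local llama server exposing /v1/chat/completions).
# The model name below is a placeholder, not a recommendation.

def build_structured_request(prompt: str, model: str = "llama-3.1-70b") -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Ask the server to constrain the completion to valid JSON.
        "response_format": {"type": "json_object"},
    }
    return json.dumps(payload)

req = build_structured_request("List three fruits as a JSON array.")
print(json.loads(req)["response_format"]["type"])  # → json_object
```

The same payload works against both a hosted and a local endpoint, which is what makes this particular capability easy to carry over when testing open source models.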