• kitnaht@lemmy.world · 22 days ago

    I’ve found that 4o is substantially worse than the previous model at a ton of things, so I now run all of my LLMs locally through Ollama.
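
    For anyone who hasn’t gone local yet: Ollama exposes a small REST API on localhost, so querying a model takes only a few lines of Python. This is just a rough sketch assuming the default port (11434) and a model you’ve already pulled; the model name and prompt below are placeholders.

        import requests

        # Ask a locally served model a question via Ollama's REST API.
        # Assumes `ollama serve` is running on the default port and that
        # the model named here has already been pulled (placeholder name).
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "llama3",
                "prompt": "Explain the difference between a mutex and a semaphore.",
                "stream": False,  # return one JSON object instead of a token stream
            },
            timeout=300,
        )
        resp.raise_for_status()
        print(resp.json()["response"])  # the model's full reply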

  • damnthefilibuster@lemmy.world · 22 days ago

    I’m trying out Perplexity and it’s literally LMGTFY. To the point that sometimes I just open google.com to get what I need. Sometimes it’s just me being lazy and searching for a domain instead of typing “.com” at the end.

    But here’s the thing: for the longest time, Google was devolving into LMGTFY too. Don’t you think?

  • fubarx · 22 days ago

    It’s worked better for me when I throw complex tech questions at it, instead of wading through mountains of decade-old StackOverflow and Reddit bilge.

    You can’t trust 2/3 of what ChatGPT generates, and you still have to know what you’re doing. But it’s a lot easier than clicking through 100 search results and finding 99 of them irrelevant.

    • xia@lemmy.sdf.org (OP) · 21 days ago

      I hear you. At some point, it’s as if everyone decided that “search” should produce garbage results if nothing matched… and eventually those garbage results became front and center.