According to the analytics firm’s report, worldwide desktop and mobile web traffic to ChatGPT dropped by 9.7% from May to June, and by 10.3% in the US alone. Users are also spending less time on the site overall: time visitors spent on chat.openai.com was down 8.5%, according to the report.

The decline, according to David F. Carr, senior insights manager at Similarweb, is an indication that interest in ChatGPT is dropping and that the novelty of AI chat has worn off. “Chatbots will have to prove their worth, rather than taking it for granted, from here on out,” Carr wrote in the report.

Personally, I’ve noticed a sharp decline in my usage. What felt like a massive shift in technology a few months ago now feels mostly like a novelty. For my work, there just isn’t much ChatGPT can help me with that I can’t do better myself, and with less frustration. I can’t trust it for factual information or research. The written material it generates is so generic and formal, and so often missing the nuances I need, that I either end up rewriting it or spend more time instructing ChatGPT on the changes I need than it would have taken to just write it myself in the first place. It’s not great at questions involving logic or any kind of grey area. It’s sometimes useful for brainstorming, but that’s about it. ChatGPT has just naturally fallen out of my workflow. That’s my experience, anyway.

  • Rhaedas@kbin.social · 1 year ago

    The generic ChatGPT is far too error-prone and limited compared to the many variations of other GPTs out there. It was a fad for those who weren’t going to fine-tune it for a use case that worked well, or do actual research into better tactics. How many people who are knowledgeable about computer systems have moved to smaller, locally installed versions that work just as well or better?

      • Rhaedas@kbin.social · 1 year ago

        There are a number of them now, but I’ve put the Vicuna 13B one on my Windows side before. I’m trying to get it onto Ubuntu so it can use the GPU, but it’s being difficult. Look up TheBloke on GitHub; they have a large selection of models that can be used through the text-generation web UI.
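
        Not through the web UI itself, but as a minimal sketch of the same idea, loading one of those quantized Vicuna builds locally via the llama-cpp-python bindings looks something like this (the file name and layer count are hypothetical; adjust for whatever build you actually download):

        from llama_cpp import Llama

        # Hypothetical file name: point this at whichever quantized
        # Vicuna build you downloaded.
        llm = Llama(
            model_path="./vicuna-13b.Q4_K_M.gguf",
            n_gpu_layers=35,  # layers to offload to the GPU; 0 = CPU-only
            n_ctx=2048,       # context window size
        )

        # Simple completion call; returns an OpenAI-style response dict.
        out = llm("Q: Name one planet in the solar system. A:", max_tokens=64)
        print(out["choices"][0]["text"])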

        I may have misspoken saying “better”, as it looks like it’s a few percentage points below in comparisons. I thought I had seen some local varieties compared that rated higher, though, such as on AI Explained’s channel.

        • Zeth0s@reddthat.com · 1 year ago

          Thanks! I tried Vicuna, but I didn’t find it very good for programming. I will keep searching :)

          • Rhaedas@kbin.social · 1 year ago

            I didn’t either, actually. It seems to me that LLMs excel in situations where there is a large consensus on a topic, so the training weights hit close to 100%. Anyone who has read through or Googled for answers to programming questions across the various sources online has seen how, among the correct answers, there are lots of deviations that muddy the waters even for a human browsing. That is where the specialized fine-tuned versions that hone in on a domain and eliminate a lot of the training noise come in handy.