sorry for the offtopic post… i just want the tldr bot here to summarize this :)

  • tracyspcyM · 1 year ago

    haha, you are challenging it hard. Since it's basically Hugging Chat under the hood, it can be, like any AI, very creative at conjuring content out of thin air.

    • Arthur BesseOPA · 1 year ago

      yeah, my point in posting this is actually that running a “tldr bot” is a fundamentally bad idea and imo you should not do it :)

      i think its summary of this article demonstrates that pretty well.

      the cons don’t just outweigh the pros - there are literally no pros (besides maybe amusement).

      • tracyspcyM · 1 year ago

        in general I don’t see it as a fundamentally bad idea, since a tldr bot (as an abstract concept) increases accessibility and simply lets you get the information right away, avoiding websites overloaded with ads and popup videos.

        In this community in particular, it helped increase engagement from 0 to at least something :) Low engagement is a common weakness of lemmy, and new people can really feel lonely here.

        As for the bot, it exists in the tiniest possible part of lemmy (this community) and runs in a semi-manual way, where I’m trying to remove false responses, so I don’t see it as a big deal.

        • Arthur BesseOPA · edited · 1 year ago

          but, as demonstrated by the summary in this thread, it doesn’t provide remotely accurate information.

          its summary of this article doesn’t actually convey anything at all about what this text is saying, but it does refer to conceptually related things which aren’t even mentioned in the text (eg “Monte Carlo Tree Search”?!).

          Don’t you think that enabling someone to “get information right away” is actually the opposite of helpful when they don’t realize it’s entirely incorrect information, especially when it appears plausibly-correct-enough that it prevents someone from reading the link and getting the actual information?

          i think the tldr bot as an abstract concept is a fundamentally harmful thing, given the state of LLMs today. i strongly encourage you to read the linked article (and/or the paper it is referring to) if you haven’t yet.

          • tracyspcyM · 1 year ago

            “fundamentally” itself contradicts any appeal to current conditions. Frankly, I just see that you’ve proved a point no one is calling into question (including me): Hugging Chat, ChatGPT and other models may provide false information that looks plausibly-correct-enough. Since the bot is just a client for Hugging Chat and nothing more, it inherits all of that. The future of the bot here depends on the test results: with 4-5 posts per day and me manually checking responses, as long as reviewing them doesn’t become a burden, it will stay here and keep increasing engagement.

            • Arthur BesseOPA · 1 year ago

              Am I correct in assuming you have not read the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜?

              As the authors point out here, their critique applies to GPT-4 just as well as what came before it.

              I am far from alone in being confident that current conditions will not improve to the point that a summarization bot in a public forum will be non-harmful anytime in the near future.

              The time required to manually audit its responses thoroughly is substantial, and the audit will almost always turn up problems. Can you point me to a single example output from it so far that you believe does not contain any inaccuracies or misrepresentations?

              • tracyspcyM · 1 year ago

                Your point in this chat, despite the words you use, is that you are simply trying to forbid me from conducting my own test within my community and drawing my own conclusions. I totally understand your position and knew that you are consistent in fighting tldr bots on lemmy (this is not the first bot you have ordered killed). You are not opening my eyes to how AI works and its drawbacks, and at the same time you are ignoring the motives behind this test and the test itself, even though the community sidebar clearly states that the bot has been added in test mode. This should make it clear to anyone that it could be removed if the test is unsuccessful.

                I cannot imagine what kind of result you are expecting.

                • Arthur BesseOPA · 1 year ago

                  Your point in this chat, despite the words you use, is that you are simply trying to forbid me from conducting my own test within my community and drawing my own conclusions.

                  I’m not trying to forbid you from playing with a bot in one community that you’re the mod of. I’m trying to convince you that it isn’t a worthwhile experiment, in case you decide that it is worth deploying elsewhere.

                  I totally understand your position and knew that you are consistent in fighting tldr bots on lemmy (this is not the first bot you have ordered killed).

                  The one tldr bot I banned was posting total nonsense in worldnews, and iirc I only banned it from that one community? (Not sure, I might’ve banned it from the site…)

                  I think I briefly commented negatively about a second one, but I didn’t take the time to make a strong argument against it like I’ve done here. I’m doing it here because, while we don’t know each other, I appreciate your posts on lemmy and I figured I might be able to change your mind.

                  You are not opening my eyes to how AI works and its drawbacks, and at the same time you are ignoring the motives behind this test and the test itself

                  I thought I was acknowledging your motivations, while respectfully explaining why I think they’re misguided.

                  even though the community sidebar clearly states that the bot has been added in test mode. This should make it clear to anyone that it could be removed if the test is unsuccessful.

                  I do appreciate the message in the sidebar.

                  I cannot imagine what kind of result you are expecting.

                  I was hoping to change your mind about this being a worthwhile endeavor. I’m sorry I appear to have failed, and to have irritated you in the process :/

  • lemming-TLDR-bot@reddthat.comB · 1 year ago

    tldr
    Stochastic Parrots:

    1. Simulates human conversation by generating responses based on statistical analysis of large datasets.
    2. Designed for open source researchers who lack access to expensive language models like GPT-3.
    3. Uses Monte Carlo Tree Search (MCTS) algorithm to optimize responses.
    4. Allows users to fine-tune parameters such as temperature control and max depth for greater flexibility.
    5. Provides API keys for easy integration into other applications.