Just a 15-second game like Snake or Helicopter. Should stop a significant number of bots, no?

  • Wolf Link 🐺@lemmy.world · 1 year ago

    Like others have said already, bots could likely learn to play those easily … but I’m more concerned about people with disabilities / illnesses that would make playing these games hard, painful or even impossible. Someone who has Parkinson’s or arthritis, for example, might be able to click a big square in an image to solve a captcha, but might have trouble “fine-tuning” their movements fast enough to play a minigame, effectively locking them out of the community if they fail, especially if there is a timer involved.

  • Nanachi@lemmy.world · 1 year ago

    I wonder if you can detect whether the player is a bot or not. Regardless, most captchas also double as ML training data, if I remember correctly.

    • j4k3@lemmy.world · 1 year ago

      There are two issue posts on the Lemmy GitHub about the captcha options they considered. It is an interesting read. I had no idea there were so many types, or even that embedded options existed. I thought all were 3rd party and most were Google, but I was wrong. Still, there are recent Lemmy posts by the devs basically saying the only option that effectively controls the bots is requiring a valid email for account creation.

      • AB7ORH7D@lemmy.world (OP) · 1 year ago

        With AI capabilities now, surely it’s pretty easy for an AI to follow a set of instructions like: create an email, check the email, click the link in the email, etc. - is that correct? Or put another way - why would email verification stump ML so consistently if it’s trained to create emails and go through the process?

        • j4k3@lemmy.world · 1 year ago

          I’m only parroting. The developers of Lemmy mentioned this as the only empirically effective option in the real world. AI in the real world is far dumber than it is framed in the media. Even a well-trained model can’t just run off on a long tangent path to connect the dots. It only works for a very short time under controlled circumstances in the real world before it must be reset. This is not real AI.

          • vegivamp@feddit.nl · 1 year ago

            You’re right about AI - it doesn’t exist and is decades away. What we have are increasingly capable statistics engines, aka machine learning.

            As for the topic, that’s easily worked around by having domains with catch-all addresses and a script that just clicks any registration link that comes in. Of course such domains are easily spotted and blocked, but domains are cheap as hell, and there are undoubtedly plenty of botnet nodes on hosts that can receive mail, so they don’t even need to register their own domains at all.
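
            Something like that is only a few lines of scripting, too. A rough sketch of the “click any link that arrives” part, assuming a catch-all mailbox reachable over IMAP (the host, credentials and link pattern below are made up for illustration):

            ```python
            # Rough sketch: poll a catch-all mailbox and follow any verification link.
            # Host, credentials and the link pattern are placeholders, not a real setup.
            import imaplib
            import email
            import re
            import urllib.request

            IMAP_HOST = "mail.example-throwaway-domain.invalid"   # hypothetical catch-all domain
            USER, PASSWORD = "catchall", "not-a-real-password"

            # match anything that looks like a verification / confirmation link
            link_pattern = re.compile(r"https?://\S*(?:verify|confirm|activate|token)\S*", re.I)

            with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
                imap.login(USER, PASSWORD)
                imap.select("INBOX")
                _, ids = imap.search(None, "UNSEEN")
                for num in ids[0].split():
                    _, msg_data = imap.fetch(num, "(RFC822)")
                    msg = email.message_from_bytes(msg_data[0][1])
                    body = msg.get_payload(decode=True) or b""   # ignoring multipart for brevity
                    for link in link_pattern.findall(body.decode(errors="ignore")):
                        urllib.request.urlopen(link)             # "clicks" the registration link
            ```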

  • dragnucs · 1 year ago

    15 seconds to log into my bank account! Nah.

  • whileloop@lemmy.world · 1 year ago

    Training an AI to play Snake or other simple games is not hard. Making it stop at a specific score might make it slightly harder, but not much. Then you just need to read the text from the screen, which is trivial. No, not hard for bots to get past. It might slow actual humans more than bots.
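
    Reading the score off the screen really is the trivial part. A minimal sketch, assuming the bot already grabs frames of the game and that Pillow and pytesseract are available (the file name and crop coordinates are made up):

    ```python
    # Minimal sketch: OCR the score from a screenshot of the game.
    # Assumes the score sits in a known region of the canvas; coordinates are invented.
    from PIL import Image
    import pytesseract

    frame = Image.open("game_frame.png")            # screenshot grabbed by the bot
    score_area = frame.crop((0, 0, 200, 50))        # hypothetical score corner
    score = pytesseract.image_to_string(score_area, config="--psm 7")  # single text line
    print("current score:", score.strip())
    ```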

    • thebestaquaman@lemmy.world · edited · 1 year ago

      It’s definitely trivial for an AI to solve the “game” or task. I think an interesting question would be whether you could filter them by checking how efficiently they do so.

      I’m thinking something like giving two consecutive math tasks: first you give e.g. 1 + 1, then you give something like 11 + 7. While practically all people would spend a small but detectable amount of extra time on the “harder” problem, an AI would have to be trained on “what do humans perceive as the harder problem” in order to be undetectable. That is, even training the AI to have a “human-like” delay in responding isn’t enough; you would have to train it to have a relatively longer delay on “harder” problems.

      Another could be:

      1. Sort the words (ajax, zebra) alphabetically
      2. Sort the words (analogous, analogy) alphabetically

      where the human would spend more time on the second. Do you think such an approach would be feasible, or is there a very good, immediate reason it isn’t a common approach already?
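
      For what it’s worth, the server-side check could be as simple as comparing the two response times. A rough sketch of the idea (the thresholds are pure guesses, not calibrated values):

      ```python
      # Rough sketch: flag a responder whose timing doesn't scale with task difficulty.
      # The thresholds are invented for illustration, not tuned against real users.

      def looks_human(time_easy: float, time_hard: float) -> bool:
          """Seconds spent on e.g. '1 + 1' vs '11 + 7'."""
          if time_easy < 0.3 or time_hard < 0.3:
              return False                      # answered faster than a person plausibly could
          ratio = time_hard / time_easy
          return 1.2 <= ratio <= 20.0           # the harder task should take noticeably longer

      print(looks_human(1.1, 2.4))    # plausible human pattern -> True
      print(looks_human(0.05, 0.05))  # instant, identical timings -> False
      ```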

      • turkey@kbin.social · 1 year ago

        With that idea, you (the captcha maker) would also have to write some code that computes how long humans should take to do a task (so that you can time the user and compare that with what your code spits out). Whatever code you write, the bot makers could eventually figure out what you wrote, and copy that.

        To put it another way, when you say “humans would spend more time on the second task” with your two examples, you would have to write specific rules about how long humans would take, so that your captcha can enforce those rules. But then the bot makers could use trial and error to figure out what your rules were and then write code that waits exactly as long as you’re expecting.
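
        To illustrate: once the bot maker has a rough model of the expected delay (even one just measured from a handful of real users), faking it takes a couple of lines. A sketch with made-up numbers:

        ```python
        # Sketch: a bot padding its instant answer with a "human-like" pause.
        # The delay model is invented; a real bot would fit it to observed timings.
        import random
        import time

        def answer_with_fake_delay(solve, task, difficulty):
            result = solve(task)                             # actual answer is computed instantly
            time.sleep(0.8 + 0.9 * difficulty + random.uniform(0.0, 0.5))  # mimic a human pause
            return result
        ```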

        • thebestaquaman@lemmy.world · 1 year ago

          It’s true that a bot can be specialised to solve it, but I feel that is the case no matter what you do.

          To me the appeal of this approach is that it is very simple for a human to make the rules (e.g. “numbers with two digits are harder to add than numbers with one digit”, or “the more leading letters two words have in common, the harder they are to sort”), but for a bot to figure out the rules by trial and error (while answering at human-like speed) will take time. So the set of questions can be changed quite often at low cost, making it less feasible to re-train the bot every time.
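
          As a sketch of how cheap those rules are to write down (the scoring below is invented, just to illustrate the two examples above):

          ```python
          # Sketch: hand-written difficulty rules for the two question types mentioned above.
          # The scoring itself is arbitrary; the point is that a human can write it in minutes.

          def addition_difficulty(a: int, b: int) -> int:
              # rule: more digits -> harder to add in your head
              return len(str(a)) + len(str(b))

          def sort_difficulty(word1: str, word2: str) -> int:
              # rule: the more leading letters in common, the harder the pair is to sort
              common = 0
              for c1, c2 in zip(word1, word2):
                  if c1 != c2:
                      break
                  common += 1
              return common

          print(addition_difficulty(1, 1), addition_difficulty(11, 7))                      # 2 vs 3
          print(sort_difficulty("ajax", "zebra"), sort_difficulty("analogous", "analogy"))  # 0 vs 6
          ```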

          Another alternative could be to only give questions that are trivial for a bot, but annoyingly difficult for a human, and let them through if they press “reset captcha” a couple times, though some people might find that annoying…