There are two issue posts on the Lemmy GitHub about the captcha options they considered. It’s an interesting read. I had no idea there were so many types, or that embedded options even existed. I thought they were all third-party and most were Google, but I was wrong. Still, there are recent Lemmy posts by the devs basically saying the only option that effectively controls the bots is requiring a valid email for account creation.
With AI capabilities now, surely it’s pretty easy for an AI to follow a set of instructions like: create an email account, check the inbox, click the link in the email, etc. Is that correct? Or put another way: why would email verification stump ML so consistently if it’s trained to create emails and go through the process?
I’m only parroting: the developers of Lemmy mentioned this as the only empirically effective option in the real world. AI in the real world is far dumber than the media makes it out to be. Even a well-trained model can’t just run off on a long tangent and connect the dots on its own. In the real world it only works for a very short time under controlled circumstances before it must be reset. This is not real AI.
You’re right about AI: it doesn’t exist and is decades away. What we have are increasingly capable statistics engines, a.k.a. machine learning.
As for the topic, that’s easily worked around by having domains with catch-all addresses and a script that just clicks any registration link that comes in (see the sketch below).
Of course such domains are easily spotted and blocked, but domains are cheap as hell, and there are undoubtedly plenty of botnet nodes on hosts that can receive mail, so they don’t even need to register their own domains at all.
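For illustration, here’s a minimal sketch of such a script in Python (standard library only). The mail host, credentials, and link pattern are all hypothetical placeholders, not anything from a real bot:

```python
# Minimal sketch: poll a catch-all mailbox and "click" anything that
# looks like a verification link. Host, credentials, and the link
# regex below are hypothetical placeholders.
import imaplib
import email
import re
import urllib.request

IMAP_HOST = "mail.example.com"                      # placeholder catch-all domain
USER, PASSWORD = "catchall@example.com", "hunter2"  # placeholder credentials

# Treat any URL containing verify/confirm/activate as a registration link.
LINK_RE = re.compile(r"https?://\S*(?:verify|confirm|activate)\S*", re.I)

def click_registration_links() -> None:
    imap = imaplib.IMAP4_SSL(IMAP_HOST)
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")           # only unread mail
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        for part in msg.walk():
            if part.get_content_type() != "text/plain":
                continue
            body = part.get_payload(decode=True).decode(errors="replace")
            for link in LINK_RE.findall(body):
                urllib.request.urlopen(link)        # the "click"
    imap.logout()

if __name__ == "__main__":
    click_registration_links()
```

Run something like that on a cron job against a catch-all domain and the "valid email" requirement costs an attacker essentially nothing.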
I wonder if you can detect whether the player is a bot or not. Regardless, most captchas also double as ML training data, if I remember correctly.
Ok interesting thanks!