Smells like bullshit. The graphs they showed in the source paper with their accuracy at like 100% for every test seem even more like bullshit. Did they run the model over the training data or what?
Maybe I’m wrong, but the signal-to-noise ratio in text is just too low to reliably tell whether it was written by an AI. The false positive rate would be high enough to make detection effectively useless. Does anyone have another perspective on this? If I’m missing some nuance here I’d love to understand more.
It is very easy to get numbers like that if you don’t report the false positive rate. That is all there is to this, really.
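To make the point concrete, here is a minimal sketch with assumed numbers (the TPR, FPR, and base rate below are illustrative, not from the paper): even a detector that looks near-perfect on a balanced test set produces mostly false accusations once AI-written text is rare among real submissions.

```python
# A minimal sketch (assumed numbers) of why headline accuracy without a
# false positive rate is misleading for AI-text detection.

def precision_at_base_rate(tpr: float, fpr: float, base_rate: float) -> float:
    """Fraction of flagged documents that are actually AI-written."""
    true_positives = tpr * base_rate
    false_positives = fpr * (1.0 - base_rate)
    return true_positives / (true_positives + false_positives)

# Suppose the detector catches 99% of AI text (TPR) and wrongly flags only
# 1% of human text (FPR) -- good enough to report ~99% accuracy on a
# 50/50 benchmark.
tpr, fpr = 0.99, 0.01

# If only 1 in 100 real-world documents is actually AI-written...
print(precision_at_base_rate(tpr, fpr, base_rate=0.01))
# ~0.5: half of everything the detector flags is a false positive.
```

So a near-100% accuracy chart and a tool that is useless in deployment are entirely compatible; the false positive rate at realistic base rates is the number that matters.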