• @Wheeljack
    4 years ago

    That’s VERY cool, assuming it works as well as they imply in the abstract.

    I’d be curious how easily defeated it is, though. How persistent are the changes to the image if they’re effectively unnoticeable to a human eye? Does that mean that doing an anti-aliasing pass on an image will negate the cloak? What about re-rendering the image at a different resolution? How difficult is it to programmatically determine that an image has had this cloaking algorithm applied (and thus either throw out the sample as bad, or do additional processing to cloaked images)?
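    One rough way to probe that (not something from the paper, just a sketch): measure how much of the cloaking perturbation survives common preprocessing such as downscaling. The filenames and sizes below are hypothetical, and a pixel-level difference is only a proxy; the real test is whether the model's output still shifts.

    ```python
    # Hedged sketch: how much of a cloaking perturbation survives resizing?
    # Assumes "original.png" and "cloaked.png" are the same photo before and
    # after cloaking (hypothetical filenames).
    import numpy as np
    from PIL import Image

    def load(path, size=None):
        img = Image.open(path).convert("RGB")
        if size is not None:
            img = img.resize(size, Image.LANCZOS)  # re-render at another resolution
        return np.asarray(img, dtype=np.float32)

    def residual(original, cloaked, size=None):
        a, b = load(original, size), load(cloaked, size)
        return np.abs(a - b).mean()  # mean per-pixel perturbation that remains

    print("full resolution :", residual("original.png", "cloaked.png"))
    print("downscaled 256px:", residual("original.png", "cloaked.png", (256, 256)))
    ```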

    • ☆ Yσɠƚԋσʂ ☆OP
      4 years ago

      It’s going to be an interesting arms race, I imagine. Current models do appear to be fairly fragile, since they don’t have any inherent understanding of what they’re looking at. From the model’s perspective, an image is just a matrix of numbers that it compares against the patterns learned from the matrices it was trained on to judge how similar it is. That’s why small changes to an image that a human can’t detect can have a drastic effect on the model’s ability to recognize it.
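      A toy illustration of that fragility (an FGSM-style nudge, not the cloaking method from the article; the model and epsilon are just placeholders):

      ```python
      # Hedged sketch: a tiny, gradient-guided change to the pixels can alter a
      # classifier's output. Uses a pretrained ResNet as a stand-in and random
      # noise in place of a real photo, so results will vary.
      import torch
      import torchvision.models as models

      model = models.resnet18(weights="IMAGENET1K_V1").eval()
      for p in model.parameters():
          p.requires_grad_(False)

      x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a photo
      label = model(x).argmax(dim=1)                       # model's original guess

      loss = torch.nn.functional.cross_entropy(model(x), label)
      loss.backward()

      eps = 2 / 255                                        # roughly invisible shift
      x_adv = (x + eps * x.grad.sign()).clamp(0, 1)        # nudge each pixel by <= eps

      print("before:", label.item(), "after:", model(x_adv).argmax(dim=1).item())
      ```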

      Whether processing the image can negate these changes is an interesting question, and if it can, then it would be easy enough to just normalize the images before feeding them to the model.
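      Something along these lines is what I’d imagine by “normalize” (a rough sketch with made-up filenames and parameters, not a known-good defense):

      ```python
      # Hedged sketch: re-encode and lightly filter each image so pixel-level
      # perturbations tuned to the original file are partly destroyed before training.
      import io
      from PIL import Image, ImageFilter

      def normalize(path, size=(224, 224), jpeg_quality=75):
          img = Image.open(path).convert("RGB")
          img = img.filter(ImageFilter.GaussianBlur(radius=0.5))  # smooth fine noise
          img = img.resize(size, Image.LANCZOS)                   # force one resolution
          buf = io.BytesIO()
          img.save(buf, format="JPEG", quality=jpeg_quality)      # lossy re-encode
          buf.seek(0)
          return Image.open(buf).copy()

      clean = normalize("cloaked_photo.png")  # hypothetical input file
      ```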