• ☆ Yσɠƚԋσʂ ☆ (OP) · 4 years ago

    It’s going to be an interesting arms race, I imagine. Current models do appear to be fairly fragile, since they have no inherent understanding of what they’re looking at. From the model’s perspective the image is just a matrix of numbers that it compares against the patterns it learned during training to decide whether it’s similar or not. This is why small changes to the image that a human can’t even detect can have a drastic effect on the model’s ability to recognize it.
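
    As a rough sketch of the kind of perturbation involved, assuming a PyTorch/torchvision setup; the model choice, the image/label tensors, and the epsilon value here are all placeholders:

        # Fast-gradient-sign-style perturbation: nudge every pixel by a tiny
        # amount in the direction that most increases the model's loss.
        import torch
        import torch.nn.functional as F
        import torchvision.models as models

        model = models.resnet18(pretrained=True)
        model.eval()

        def fgsm_perturb(image, label, epsilon=0.01):
            # `image` is a normalized 1x3x224x224 tensor, `label` a 1-element
            # tensor holding the true class index.
            image = image.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(image), label)
            loss.backward()
            # Each pixel moves by at most `epsilon` -- invisible to a human,
            # but often enough to flip the model's prediction.
            return (image + epsilon * image.grad.sign()).detach()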

    Whether pre-processing the image can negate these changes is an interesting question, and if that’s the case then it would be easy enough to just normalize the images, by blurring or re-encoding them for example, before feeding them to the model.
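
    A minimal sketch of that normalization step, assuming PIL and torchvision are available; the JPEG quality, blur kernel, and 224x224 resize are arbitrary placeholder choices:

        # Squash small pixel-level perturbations before inference by
        # re-encoding the image as a lossy JPEG and lightly blurring it.
        import io
        from PIL import Image
        import torchvision.transforms as T

        def normalize_input(pil_image, jpeg_quality=75):
            # Lossy re-encoding tends to smear out fine-grained noise.
            buf = io.BytesIO()
            pil_image.save(buf, format="JPEG", quality=jpeg_quality)
            cleaned = Image.open(io.BytesIO(buf.getvalue())).convert("RGB")
            # Follow with a light blur and the usual resize/tensor pipeline.
            transform = T.Compose([
                T.GaussianBlur(kernel_size=3),
                T.Resize((224, 224)),
                T.ToTensor(),
            ])
            return transform(cleaned).unsqueeze(0)  # shape: 1x3x224x224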