• Arthur BesseMA · 6 points · 3 years ago

    I uploaded this image to a caption generator AI and it says:

    • a close up of a clock on a wall
    • a black and white photo of a clock
    • a close up of a clock on a building
    • a black and white picture of a clock
    • a close up of a clock on the wall
    • a close up of a black and white clock
    • a close up of a clock on the side of a building
    • a black and white photo of a clock on a wall
    • a black and white photo of a black and white clock
    • a black and white photo of a black and white photo of a clock

    so… not bananas.
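Lists of near-duplicate captions like the one above are what you typically get when a captioning model decodes with beam search and returns its top k beams. Here is a toy sketch of that decoding loop; there is no real model involved, and the next-word probability table is entirely made up for illustration:

```python
import heapq
import math

# Hypothetical next-word distributions (invented for illustration).
# A real captioner would get these from a neural network conditioned
# on the image; "</s>" ends a caption.
NEXT = {
    "<s>": {"a": 1.0},
    "a": {"close-up": 0.4, "black-and-white": 0.3, "clock": 0.2, "wall": 0.1},
    "close-up": {"of": 1.0},
    "black-and-white": {"photo": 0.7, "picture": 0.3},
    "photo": {"of": 1.0},
    "picture": {"of": 1.0},
    "of": {"a": 1.0},
    "clock": {"</s>": 0.6, "on": 0.4},
    "on": {"a": 1.0},
    "wall": {"</s>": 1.0},
}

def beam_search(beam_width=4, max_len=12):
    """Return finished captions, best (lowest negative log-prob) first."""
    beams = [(0.0, ["<s>"])]  # (negative log-prob, words so far)
    finished = []
    for _ in range(max_len):
        candidates = []
        for neg_logp, words in beams:
            for nxt, p in NEXT.get(words[-1], {}).items():
                cand = (neg_logp - math.log(p), words + [nxt])
                if nxt == "</s>":
                    finished.append(cand)  # caption complete
                else:
                    candidates.append(cand)
        if not candidates:
            break
        # Keep only the beam_width most probable partial captions.
        beams = heapq.nsmallest(beam_width, candidates)
    finished.sort()
    # Strip the "<s>"/"</s>" markers before joining into text.
    return [" ".join(words[1:-1]) for _, words in finished]

captions = beam_search()
```

Because all the surviving beams share most of their prefix, the finished captions come out as slight variations of each other, much like the ten clock captions above.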

    • abbenm · 7 points · 3 years ago

      wonder if that AI was trained on anything in particular

      • AgreeableLandscapeOP · 2 points · 3 years ago

        In theory they could have used some public domain datasets or even parts of Wikimedia Commons.

        • Arthur BesseMA · 3 points · 3 years ago

          It appears that the captioning model on that website was trained on the MSCOCO dataset, which was sourced from Google and Bing image search, as well as from Flickr.

      • Arthur BesseMA · 1 point · 3 years ago

        black and white photos of black and white photos, presumably.

  • K4mpfie@feddit.de · 3 points · 3 years ago

    I’ll do you one better: Everything in the known universe is either bananas or not bananas