• LvxferreM
    5 points · 10 months ago

    I’ve seen this video; I highly recommend it.

    In a nutshell, what computers can’t reliably understand is anything that relies on “world knowledge” - that is, knowledge outside the language itself that you need in order to parse it correctly. Things like “apples fall down, not up” or “a container needs to be bigger than the item it contains”.

    Note that common NLP (natural language processing) methods don’t even try to address this; they usually rely on brute force - “if you feed enough language into the computer, it’ll eventually get it”.
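    To make that concrete, here’s a toy sketch (hypothetical code, not any real NLP system) of what “feed enough language into the computer” boils down to: counting which words tend to follow which, so the model echoes its corpus without understanding any of it.

```python
# Illustrative sketch only - a toy bigram model, not any specific NLP system.
# It shows the "brute force" idea: count which words follow which in a corpus
# and use those counts to guess continuations. Nothing here encodes world
# knowledge; it only reflects whatever the text happened to say.
from collections import Counter, defaultdict

corpus = (
    "the apple fell down from the tree . "
    "the apple fell down again . "
    "the ball fell down the stairs ."
).split()

# Bigram counts: for each word, how often each next word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation seen after `word`."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# With enough text, "fell" is usually followed by "down" - but that is a
# statistical echo of the corpus, not knowledge that apples fall because
# of gravity.
print(most_likely_next("fell"))  # -> "down"
```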

    • @roastpotatothiefOP
      2 points · 9 months ago

      Since I posted this, Microsoft has claimed to have built an AI that does have “world knowledge”. It was reportedly able to explain how to stack some objects so they don’t fall over. We’ll see, though, whether those claims hold up.

      • LvxferreM
        3 points · 9 months ago

        I’d take claims from Microsoft with heavy scepticism; they tend to overrate the capabilities of their own software. However, if it’s true, it would be an amazing development, and it might solve problems like the ones in the video, e.g.:

        • The trophy doesn’t fit in the bag because it₁ is too big.
        • The trophy doesn’t fit in the bag because it₂ is too small.

        For us humans it’s trivial to disambiguate it₁ as the trophy and it₂ as the bag, because we know stuff like “objects only fit in containers bigger than themselves”. Algorithms usually don’t.
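
        As a toy illustration (hypothetical code, not how Microsoft’s or anyone’s actual resolver works), a single hand-coded “containers are bigger than their contents” rule is enough to pick the right referent, while surface statistics alone give no signal, since the two sentences are identical except for the word big/small.

```python
# Illustrative sketch only - a hand-written "world knowledge" rule, not a
# description of Microsoft's (or anyone's) actual system.
# Sentence: "The trophy doesn't fit in the bag because it is too <adjective>."
# The two variants differ only in the word big/small, so surface statistics
# give no basis for resolving "it"; the size rule does.

def resolve_it(adjective):
    """Pick the referent of "it" given the adjective in the because-clause."""
    if adjective == "big":
        # Something too big to fit must be the thing being put in: the trophy.
        return "trophy"
    if adjective == "small":
        # Something too small to hold it must be the container: the bag.
        return "bag"
    return "unknown"

print(resolve_it("big"))    # -> trophy
print(resolve_it("small"))  # -> bag
```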