The Winograd schema is a language test for intelligent computers. So far, they're not doing well. MORE LANGUAGE FILES: https://www.youtube.com/playlist?list=PL96C35uN7xGLDEnHuhD7CTZES3KXFnwm0
Written with Gretchen McCulloch and Molly Ruhl. Gretchen's podcast Lingthusiasm is at http://lingthusiasm.com/ - and Gretchen's new book, BECAUSE INTERNET, is available:
🇺🇸 US: https://amzn.to/30tLpjT
🇨🇦 CA: https://amzn.to/2JsTYWH
🇬🇧 UK: https://amzn.to/31K8eRD
(Those are affiliate links that give a commission to me or Gretchen, depending on country!)
REFERENCES:
Levesque, H. J., Davis, E., & Morgenstern, L. (2011). The Winograd Schema Challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning.
Trask, R. (1993). A dictionary of grammatical terms in linguistics. London; New York: Routledge. (page 233)
Winograd, T. (1972). Understanding natural language. Cognitive Psychology, 3(1), 1-191. (page 33)
Hunston, S. (2002). Corpora in Applied Linguistics. Cambridge: Cambridge University Press.
Jurafsky, D., & Martin, J. (2009). Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition (2nd ed., Prentice Hall series in artificial intelligence). Upper Saddle River, N.J.: Pearson Prentice Hall.
Gray, M., & Suri, S. (2019). Ghost work. Boston, MA: HMH Books.
I'm at https://tomscott.com
on Twitter at https://twitter.com/tomscott
on Facebook at https://facebook.com/tomscott
and on Instagram as tomscottgo
I’ve seen this video; highly recommend it.
In a nutshell, what computers can’t reliably understand is anything that relies on “world knowledge” - that is, knowledge outside the language itself that you need in order to parse it correctly. Things like “apples fall down, not up” or “a container needs to be bigger than the item it contains”.
Note that common NLP (natural language processing) methods don’t even try to address this; they rely on brute force - “if you feed enough language into the computer, it’ll eventually get it”.
Since I posted this, Microsoft has claimed to have built an AI that does have “world knowledge”: it was reportedly able to explain how to stack some objects so they wouldn’t fall. We’ll see whether those claims hold up, though.
I’d take claims from Microsoft with heavy scepticism; they tend to overrate the capabilities of their own software. But if it is true and accurate, it’s an amazing development, and it could solve problems like the one in the video:
For us humans it’s trivial to disambiguate it₁ as the trophy and it₂ as the bag, because we know things like “objects only fit in containers bigger than themselves”. Most algorithms don’t.
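A minimal Python sketch of why this sentence pair defeats shallow methods (the sentences and the naive_resolver heuristic here are illustrative assumptions, not any real NLP library): the two sentences differ by a single word, so any resolver that ignores that word has to get at least one of them wrong.

```python
# Paraphrase of the classic Winograd schema pair from Levesque et al. (2011):
# the sentences are identical except for "big" vs "small", yet the correct
# referent of "it" flips between them.
sentence_a = "The trophy doesn't fit in the bag because it is too big."
sentence_b = "The trophy doesn't fit in the bag because it is too small."

# Resolving these correctly needs world knowledge ("containers must be
# bigger than their contents"):
correct = {sentence_a: "trophy", sentence_b: "bag"}

def naive_resolver(sentence):
    # Hypothetical shallow baseline: pick the candidate noun that appears
    # most recently before the word "it". It never looks at "big"/"small",
    # so it must give the same answer for both sentences.
    candidates = ["trophy", "bag"]
    before_it = sentence.split(" it ")[0]
    return max(candidates, key=lambda n: before_it.rfind(n))

for s in (sentence_a, sentence_b):
    print(naive_resolver(s), "| correct:", correct[s])
```

Running this, the heuristic answers "bag" for both sentences: right for the second one, wrong for the first - which is exactly the point of the test.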