How generative natural language works has been highly debated for over 60 years—there’s certainly no consensus most linguists would agree with. And while we have a pretty good idea how the process of facial recognition works, we know that process isn’t conducive to extracting a conventional explanation of how to recognize a particular face. (The best you could do is to make a list of features that would allow someone to eliminate all but one candidate from a small group, but that’s distinct from the process of actually recognizing someone.)
That feels off somehow; or at least, the "unsolved problem" part isn't as narrow as that makes it seem.
I can recognise a place, just like a face. I don't eliminate options; I just recognise it. I can recognise voices the same way.
I don’t understand how faces are different in this context?