I feel like Ted Chiang has thoroughly missed the point. He writes:
"It is very easy to get ChatGPT to emit a series of words such as “I am happy to see you.” There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you. A dog can communicate that it is happy to see you, and so can a prelinguistic child, even though both lack the capability to use words. ChatGPT feels nothing and desires nothing, and this lack of intention is why ChatGPT is not actually using language."
Since when did intent and language correlate? Language is not always accompanied by intent, and you do not need intent behind words for them to function as language. If a criminal standing trial lies in a heartfelt written statement, no one reading it without context can see the lies through the text alone; that is an emotionally appealing Hollywood myth, not realistic in any sense.
As you two said - snobbish from Chiang.