Notes on Noam Chomsky: The False Promise of ChatGPT

Causality

AI can describe data and predict the most probable outcome, but it lacks causal explanation.

The human mind is not … a statistical engine for pattern matching

On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations. - Noam Chomsky: The False Promise of ChatGPT

And in most cases, the correct explanation is not the most probable answer.

Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.

We often mistake correlation for causality. In this regard, ChatGPT is like us.
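A toy sketch of why correlation is not causality (my own illustration, not from Chomsky's essay; the variable names are hypothetical): two quantities driven by a shared hidden cause correlate strongly, yet neither causes the other.

```python
import random

# Hypothetical example: a hidden confounder (e.g. hot weather) drives two
# variables that have no causal link to each other.
random.seed(0)
n = 10_000
confounder = [random.gauss(0, 1) for _ in range(n)]
ice_cream_sales = [c + random.gauss(0, 0.3) for c in confounder]
drownings = [c + random.gauss(0, 0.3) for c in confounder]

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = pearson(ice_cream_sales, drownings)
print(f"correlation: {r:.2f}")  # strong correlation, zero causation either way
```

A pattern-matching system sees only the strong correlation; the causal structure (the confounder) is invisible in the joint statistics of the two variables alone.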

But since ChatGPT does not understand causality, it doesn’t know what is impossible. Notice that ChatGPT rarely says that you’re wrong.

… but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.

Morality

According to Chomsky, intelligence also includes morality.

This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism).

But in the case of ChatGPT, morality is an afterthought. Our mind is not a probabilistic engine whose output is later regulated by a certain set of rules. It is rather the opposite: we generate output based on these inherent rules.

… these programs learn humanly possible and humanly impossible languages with equal facility.

#ai