The Shortcomings of Large Language Models

Many people believe that large language models are simply not that powerful: impressive as they are, they lack the ability to understand the meaning of words or to make decisions the way humans do.

You say something like "draw a picture of a man chasing a woman" or "draw a picture of a woman chasing a man", and the system is basically at chance. It really can't tell the difference, and it's not clear that just adding more of the same - what people call scaling today - is actually going to help. I think something fundamental is missing from these systems: an understanding of the world, of how objects work, of what objects are.

The snowplow is a great engineering achievement: it clears the streets without manual labor.

Telescopes and deep learning approaches have led to useful scientific discoveries, such as protein folding.
However, understanding the world is a different concern from building useful objects.

Systems like GPT can find regularities in data but don't distinguish between what is actual and what is not. They offer no scientific or engineering contribution and are a waste of energy.

There is a danger in thinking these tools are more capable than they are, and in sacrificing cognitive science in favor of AI development.

Current systems perpetuate past biases and produce misinformation

Concern for the democratic process due to misleading information being produced at scale

GPT-3 doesn't start with meaning; it predicts the next word
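
To make that concrete, here is a minimal sketch of next-word prediction using the open GPT-2 model through the Hugging Face `transformers` library (an assumption here, used as a public stand-in for GPT-3, whose weights are not available): given a prompt, the model only ranks which token is most likely to come next; there is no built-in representation of meaning.

```python
# Minimal sketch: next-token prediction with GPT-2 as a stand-in for GPT-3.
# Assumes the `transformers` and `torch` packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "A man is chasing a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits   # shape: (batch, sequence_length, vocab_size)

next_token_logits = logits[0, -1]     # scores for the single word after the prompt
top = torch.topk(next_token_logits, k=5)

# Print the five highest-scoring continuations; the model is only ranking
# likely next words, not representing who is chasing whom.
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(token_id)), float(score))
```

Swapping the prompt to "A woman is chasing a" shows the same behavior: the output is a ranked list of plausible next words, which is what "predicting the next word" means in practice.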


up:: ๐Ÿ  Home