Meta
up:: 📥 Sources
type:: #📥/📰
status:: #📥/🟩
tags:: #on/articles
topics:: Artificial Intelligence
links:: Noam Chomsky
Article Info
Author:: Web Summit
Title:: Debunking the Great AI Lie | Noam Chomsky, Gary Marcus, Jeremy Kahn
URL:: "https://www.youtube.com/watch?v=PBdZi_JtV4c&feature=youtu.be"
Reviewed Date:: 2023-01-29
Finished Year:: 2023
Debunking the Great AI Lie | Noam Chomsky, Gary Marcus, Jeremy Kahn
Ava
Key Takeaways:
- AI systems cannot distinguish between actual and non-actual worlds and cannot understand the underlying meanings of words.
- AI systems perpetuate past bias and produce misinformation, which can be devastating to the democratic process.
- AI systems are not making a scientific contribution to understanding the world, and are instead only producing superficial regularities in astronomical amounts of data.
- AI systems are not providing insight into the nature of language or any other cognitive process.
Questions:
- What are the implications of AI systems perpetuating past bias?
- What other dangers could arise from AI systems producing misinformation?
- How can AI systems be used to gain insight into the nature of language and other cognitive processes?
- What are the long-term effects of AI systems not providing insight into the world?
Highlights
The Shortcomings of Large Language Models
- These systems don't understand the relation between the order of words and their underlying meaning.
- In another version of this system, you say something like "draw a picture of a man chasing a woman" or "draw a picture of a woman chasing a man", and the system performs basically at chance. It really can't tell the difference, and it's not clear that just adding more of the same (what people call scaling today) is actually going to help. Something fundamental is missing from these systems: an understanding of the world, how objects work, what objects are.
- DALL-E has the same kinds of problems. If you tell it to draw a blue cube on top of a red cube, it might just give you a red cube on top of a blue cube. One of the most basic things about language is that we put together meanings from the order of words, an idea that goes back to Frege and even further. These systems don't have it (see the word-order sketch at the end of these notes).
- Another article that just came out on theory of mind tested whether these systems can understand what other people believe, and there was failure on those kinds of things. They also fail at pragmatic inference: for example, if I ask "Did you touch this glass?" and you answer "Well, I had gloves on," implying your fingerprints aren't there, the system doesn't understand the implication.
- A question Noam has worked on all his career is why human language is the way it is. His point is that these systems could just as well learn computer languages or languages unlike any human one; they could learn anything. They don't do any of it perfectly, and they don't tell us much about why we're the special creatures that we are.
- Moderator: you've said that these systems are maybe good engineering but not scientifically valid at all. Can you explain what you meant by that, and why you think the science here isn't really valid?
- The snowplow is a great engineering achievement that clears the streets without manual labor.
- Telescopes and deep learning approaches have resulted in helpful scientific discoveries such as protein folding.
- However, the concern for understanding the world is different from creating useful objects.
- Systems like GPT can find regularities in data but don't distinguish between actual and non-actual worlds. They don't offer any scientific or engineering contribution and are a waste of energy.
- There is a danger in thinking these tools are more capable than they are and sacrificing cognitive science for AI development.
- Work on GPT-3 is a short-sighted technology that won't last long
- The history of AI has seen fads like expert systems and support vector machines
- Current systems perpetuate past bias and produce misinformation
- Concern for the democratic process due to misleading information being produced at scale
- Concerns about driverless car industry wasting investments
- Prediction: in 2023 there will be a death attributable to one of these systems, because someone follows its bad advice or falls in love with it
- No contribution from large language models to the understanding of linguistics
- Example of a study that discovered the word "occasion" is used more frequently than "molecule" because it appears across more domains (see the domain-spread sketch below)
- Snowplows can be useful, but not contributing to science
- People building language production and comprehension systems often ignore principles of linguistics
- GPT-3 doesn't start with meaning; it just predicts the next word (a toy sketch follows below)
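The at-chance behavior on "man chasing a woman" vs. "woman chasing a man" can be made concrete with a minimal sketch (mine, not from the panel): a purely order-insensitive bag-of-words scorer, by construction, cannot separate the two captions, which is the failure mode the speakers describe.

```python
from collections import Counter

def indistinguishable(caption_a: str, caption_b: str) -> bool:
    """Return True if an order-insensitive model sees the captions as identical.

    A bag-of-words representation keeps word counts but discards word order,
    which is exactly the information that distinguishes the two captions.
    """
    return Counter(caption_a.lower().split()) == Counter(caption_b.lower().split())

pair = ("a man chasing a woman", "a woman chasing a man")
# Both captions contain the same words, so this model cannot tell them apart:
print(indistinguishable(*pair))  # True: the two prompts look the same to it
```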
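The "occasion" vs. "molecule" study mentioned above can be illustrated with a toy domain-spread count. The corpus below is invented purely for illustration, not the study's actual data.

```python
# Toy corpus of (domain, text) pairs; invented data for illustration only.
corpus = [
    ("news",    "the occasion called for a speech"),
    ("fiction", "it was a festive occasion indeed"),
    ("science", "the molecule binds to the receptor"),
    ("science", "a water molecule has two hydrogen atoms"),
    ("sports",  "the occasion of the final drew a crowd"),
]

def domain_spread(word: str) -> int:
    """Number of distinct domains whose documents contain the word."""
    return len({domain for domain, text in corpus if word in text.split()})

# "occasion" appears across three domains, "molecule" in only one:
print(domain_spread("occasion"), domain_spread("molecule"))  # 3 1
```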
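As a minimal sketch of what "predicts the next word" means (a toy bigram model, nothing like GPT-3's scale or architecture), the following picks the most frequent follower of the previous word using only co-occurrence counts, never meaning.

```python
from collections import Counter, defaultdict

# Tiny training text; real models train on web-scale corpora.
text = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it (a bigram table).
followers = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    followers[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word observed after `word`."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": chosen purely from counts, not meaning
```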