up:: 📥 Sources
type:: #📥/🎙
status:: #📥/🟥
tags:: #on/podcasts
topics::
Author:: The Daily
Title:: Elon Musk's Appetite for Destruction
URL:: "https://share.snipd.com/episode/1124f962-f849-4c47-a4bf-e9671e5ed6c0"
Reviewed Date:: 2023-02-27
Finished Year:: 2023
The Sunday Read: ‘Elon Musk’s Appetite for Destruction’
Episode metadata
- Episode title:: The Sunday Read: ‘Elon Musk’s Appetite for Destruction’
- Show:: The Daily
- Owner / Host:: The New York Times
- Episode link:: open in Snipd
- Episode publish date:: 2023-02-26
Show notes
> In February, the first lawsuit against Tesla for a crash involving its driver-assistance system, Autopilot, will go to trial. The slew of trials set to follow will be a costly fight that the company’s chief executive, Elon Musk, has vowed to take on in court. When Tesla released its Autopilot feature in October 2015, Musk touted the feature as “probably better” than a human driver. The reality, however, has proved different: On average, there is at least one Autopilot-related crash in the United States every day.
> While several of these accidents will feature in the upcoming trials, another camp of Tesla users who have fallen victim to Autopilot crashes is unwilling to take a negative stance, whether out of love for the brand or because they believe that accidents are a necessary evil in the process of perfecting the Autopilot software.
> Dave Key, whose 2015 Tesla Model S drifted out of its lane and slammed into the back of a parked police S.U.V., is of the latter camp.
> “As a society,” Key argued, “we choose the path to save the most lives.”
> This story was recorded by Audm. To hear more audio stories from publications like The New York Times, download Audm for iPhone or Android.
- Show notes link:: open website
- Tags: #podcasts #snipd
- Export date:: 2023-02-27T21:32
Snips
[14:48] Elon Musk's Self-Centered Philosophy
🎧 Play snip - 1min️ (13:16 - 14:53)
✨ Key takeaways
- What looks like narcissism and poor impulse control in Musk may instead be the fruit of a coherent philosophy: the greatest good for the greatest number.
- The most parsimonious explanation for Tesla's current challenges (the lawsuits, a crashing stock price, and an AI still capable of catastrophic failure) is its mercurial, brilliant, sophomoric chief executive.
📚 Transcript
Speaker 1
First, he reached out directly to someone who was harmed by one of his products, something it's hard to imagine the head of GM or Ford contemplating if only for legal reasons. Indeed, this email was entered into evidence after Riley sued Tesla. And then Musk rebuffed Riley. No vague "I'll look into it" or "we'll see what we can do." Riley received a hard no. Like Key, I want to resist Musk's tendency to make every story about him. Tesla is a big car company with thousands of employees. It existed before Elon Musk. It might exist after Elon Musk. But if you want a parsimonious explanation for the challenges the company faces in the form of the lawsuits, a crashing stock price, and an AI that still seems all too capable of catastrophic failure, you should look to its mercurial, brilliant, sophomoric chief executive. Perhaps there's no mystery here. Musk is simply a narcissist, and every reckless swerve he makes is meant solely to draw the world's attention. He seemed to endorse this theory in a tongue-in-cheek way during a recent deposition, when a lawyer asked him, "Do you have some kind of unique ability to identify narcissistic sociopaths?" and he replied, "You mean by looking in the mirror?" But what looks like self-obsession and poor impulse control might instead be the fruits of a coherent philosophy, one that Musk has detailed on many occasions. It's there in the email to Riley: the greatest good for the greatest number of people.
[18:16] Tesla's Autopilot is a Disaster
🎧 Play snip - 1min️ (16:44 - 18:19)
✨ Key takeaways
- On average, there is at least one Autopilot-related crash in the United States every day.
- Tesla is under investigation by the National Highway Traffic Safety Administration.
- Tesla declined to disclose that the car in its 2016 self-driving demo video crashed in the company's parking lot.
📚 Transcript
Speaker 1
In the United States last year, there were around 11 million road accidents, nearly 5 million injuries, and more than 40,000 deaths. Tesla's AI was meant to put an end to this bloodbath. Instead, on average, there is at least one autopilot-related crash in the United States every day, and Tesla is under investigation by the National Highway Traffic Safety Administration. Ever since autopilot was released in October 2015, Musk has encouraged drivers to think of it as more advanced than it was, stating in January 2016 that it was probably better than a human driver. That November, the company released a video of a Tesla navigating the roads of the Bay Area with the disclaimer, the person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself. Musk also rejected the name co-pilot in favor of autopilot. The fine print made clear that the technology was for driver assistance only, but that message received a fraction of the attention of Musk's announcements. A large number of drivers seemed genuinely confused about autopilot's capabilities. Tesla also declined to disclose that the car in the 2016 video crashed in the company's parking lot. Slavik's legal complaint doesn't hold back. Tesla's conduct was despicable and so contemptible that it would be looked down upon and despised by ordinary, decent people. The many claims of the pending lawsuits come back to a single theme.
[31:22] Tesla's AI Is Erratic, But It Gets the Job Done
🎧 Play snip - 2min️ (29:09 - 31:20)
✨ Key takeaways
- Tesla's AI is surprisingly erratic: neither nice nor predictable.
- After two years of complaints and enough data from drivers like Alford, the neural net finally handled an intersection that had repeatedly stumped it.
📚 Transcript
Speaker 1
Later, at a four-way stop, the car was too cautious. It waited too long, and the other two cars at the intersection drove off before we did. We talked about the old saying about safe driving: don't be nice, be predictable. For a computer, Tesla's AI was surprisingly erratic. It's not nice or predictable, Alford said. A few miles down the road, we reached the intersection from the video: a left turn onto East Shepherd Avenue from State Route 168. The traffic light sits right at the point where the city's newest developments end and open land begins. If we drove straight, we would immediately find ourselves surrounded by sagebrush, on the way up into the Sierra. To replicate the error that Alford uncovered, we needed to approach the intersection with a red left-turn arrow and a green light to continue straight. On our first pass, the arrow turned green at the last second. On the second pass, though, on an empty road, the timing was right: a red for our turn and green for everyone else. As we got closer, the car moved into the turning lane and started to slow. It sees the red, I said. No, Alford said. It always slows down a little here before plowing through. But this time it kept slowing. Alford couldn't believe it. It's still going to run the light, he said. But he was wrong. We came to a tidy stop right at the line. Alford was shocked. They fixed it, he said. That one I've been giving them an issue about for two years. We waited patiently until the light turned green and the Tesla drove smoothly onto Shepherd Avenue. No problem. It was as clear a demonstration of Musk's hypothesis as one could hope for. There was a situation that kept stumping the AI until, after enough data had been collected by dedicated drivers like Alford, the neural net figured it out. Repeat this risk-reward conversion X number of times and maybe Tesla will solve self-driving. Maybe even next year.
[37:34] The Cost-Benefit Analysis of Elon Musk's Autopilot and FSD
🎧 Play snip - 1min️ (36:05 - 37:34)
✨ Key takeaways
- The author asked Slavik, the plaintiff's attorney, whether the recent shift in public sentiment against Musk made his job in the courtroom any easier.
- Slavik said that if he were on the other side, he would be worried about it.
- Some of Musk's most questionable decisions, though, begin to make sense if seen as a result of a blunt utilitarian calculus.
📚 Transcript
Speaker 1
I asked Slavik, the plaintiff's attorney, whether the recent shift in public sentiment against Musk made his job in the courtroom any easier. I think at least there are more people who are skeptical of his judgment at this point than were before, he said. If I were on the other side, I'd be worried about it. Some of Musk's most questionable decisions, though, begin to make sense if seen as a result of a blunt utilitarian calculus. Last month, Reuters reported that Neuralink, Musk's medical device company, had caused the needless deaths of dozens of laboratory animals through rushed experiments. Internal messages from Musk made it clear that the urgency came from the top. We are simply not moving fast enough, he wrote. It is driving me nuts. The cost-benefit analysis must have seemed clear to him. Neuralink had the potential to cure paralysis, he believed, which would improve the lives of millions of future humans. The suffering of a smaller number of animals was worth it. This form of crude long-termism, in which the sheer size of future generations gives them added ethical weight, even shows up in Musk's statements about buying Twitter. He called Twitter a digital town square that was responsible for nothing less than preventing a new American civil war. I didn't do it to make more money, he wrote. I did it to try to help humanity, whom I love. Autopilot and FSD represent the culmination of this approach.
[42:07] Tesla's Autopilot: A False Sense of Safety
🎧 Play snip - 2min️ (39:52 - 42:13)
✨ Key takeaways
- Singer believes that even if human drivers and autonomous vehicles were equally deadly, we should prefer the AI, provided that the next software update, based on data from crash reports and near misses, makes the system even safer.
- Philip Koopman, the author of "How Safe Is Safe Enough?", objects to Tesla's practice of using untrained civilians as test drivers for an immature technology and argues that the company's safety data does not support its claims.
- Tesla's quarterly safety report compares Autopilot miles, driven mainly on limited-access highways, with all vehicle miles, which inflates Autopilot's apparent advantage; against highway-only numbers, the advantage drops significantly.
📚 Transcript
Speaker 1
Singer told me that even if Autopilot and human drivers were equally deadly, we should prefer the AI, provided that the next software update, based on data from crash reports and near misses, would make the system even safer. That's a little bit like surgeons doing experimental surgery, he said. Probably the first few times they do the surgery, they're going to lose patients, but the argument for that is they will save more patients in the long run. It was important, however, Singer added, that the surgeons get the informed consent of the patients. Does Tesla have the informed consent of its drivers? The answer might be different for different car owners. It would probably be different for Dave Key in 2018 than it is in 2022. But most customers are not aware of how flawed Autopilot is, said Philip Koopman, the author of How Safe Is Safe Enough? Measuring and Predicting Autonomous Vehicle Safety. The cars keep making really crazy, surprising mistakes, he said. Tesla's practice of using untrained civilians as test drivers for an immature technology is really egregious. Koopman also objects to Musk's supposed facts. One obvious problem with the data the company puts out in its quarterly safety report is that it directly compares Autopilot miles, which are mainly driven on limited-access highways, with all vehicle miles. You're using Autopilot on the safe miles, Koopman said, so of course it looks great, and then you're comparing it to not-Autopilot on the hard miles. In the third quarter of 2022, Tesla claimed that there was one crash for every 6.26 million miles driven using Autopilot, almost 10 times better than the U.S. baseline of one crash for every 652,000 miles. Crashes, however, are far more likely on surface streets than on the highway. One study from the Pennsylvania Department of Transportation showed that crashes were five times as common on local roads as on turnpikes. In comparing Autopilot numbers to highway numbers, Tesla's advantage drops significantly.
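A back-of-the-envelope check of the figures in this snip: the raw ratio Tesla reports really is close to ten to one, and the five-to-one local-road factor is the PennDOT number quoted in the transcript. The roughly one-quarter highway share of ordinary vehicle miles used below is only an illustrative assumption, not a figure from the article.

$$
\frac{6.26 \times 10^{6}\ \text{mi/crash (Autopilot)}}{6.52 \times 10^{5}\ \text{mi/crash (all U.S. driving)}} \approx 9.6
$$

If a fraction $f$ of baseline miles are highway miles and local roads see five times the per-mile crash rate $h$ of highways, the blended baseline rate is $fh + (1-f)\,5h = (5-4f)\,h$, so a highway-only baseline works out to about $652{,}000 \times (5-4f)$ miles per crash. With the illustrative $f \approx 0.25$, that is roughly one crash per 2.6 million highway miles, and Autopilot's apparent advantage shrinks from about 9.6x to about 2.4x, which is the sense in which "Tesla's advantage drops significantly."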
[43:43] The Dangers of Utilitarianism in the Age of Elon Musk
🎧 Play snip - 1min️ (42:13 - 43:41)
✨ Key takeaways
- Tesla's safety claims look even shakier when you try to control for the age of the car and the age of the driver.
- Most Tesla owners are middle-aged or older, which eliminates one risky pool of drivers (teenagers), and simply having a new car decreases your chance of an accident significantly.
- It's even possible that the number of Teslas in California, with its generally mild, dry weather, has skewed the numbers in Tesla's favor.
- An independent study that tried to correct for some of these biases suggested that Teslas crashed just as often when Autopilot was on as when it was off.
📚 Transcript
Speaker 1
Tesla's safety claims look even shakier when you try to control for the age of the car and the age of the driver. Most Tesla owners are middle-aged or older, which eliminates one risky pool of drivers, teenagers, and simply having a new car decreases your chance of an accident significantly. It's even possible that the number of Teslas in California, with its generally mild, dry weather, has skewed the numbers in its favor. An independent study that tried to correct for some of these biases suggested that Teslas crashed just as often when Autopilot was on as when it was off. That's always been a problem for utilitarianism, Singer told me. Because it doesn't have strict moral rules, people might think they can get away with doing the sums in ways that suit their purposes. Utilitarian thinking has led individuals to perform acts of breathtaking virtue. But putting this ethical framework in the hands of an industrialist presents certain dangers. True utilitarianism requires a careful balancing of all harms and benefits in the present and the future, with the patience to do this assessment and the patience to refrain from acting if the amount of suffering and death caused by pushing forward wasn't clear. Musk is using utilitarianism in a more limited way, arguing that as long as he's sure something will have a net benefit, he's permitted to do it right now.
[44:19] The Many Faces of Elon Musk
🎧 Play snip - 37sec️ (43:41 - 44:19)
✨ Key takeaways
- Over the past two decades, Musk has maneuvered himself into running multiple companies where he can plausibly claim to be working to preserve the future of humanity.
- With the stakes raised that high, all sorts of questionable behavior begins to look almost reasonable.
📚 Transcript
Speaker 1
In the past two decades, Musk has somehow maneuvered himself into running multiple companies where he can plausibly claim to be working to preserve the future of humanity. SpaceX can't just deliver satellites into low orbit, it's also going to send us to Mars. Tesla can't just build a solid electric car, it's going to solve the problem of self-driving. Twitter can't just be one more place where we gather to argue, it's one of the props holding up civilization. With the stakes suitably raised, all sorts of questionable behavior begins to look almost reasonable.
[47:19] The Autopilot System in Kevin George Aziz Riad's Tesla Was Likely Responsible for the Deaths of Two People
🎧 Play snip - 1min️ (45:42 - 47:21)
✨ Key takeaways
- On December 29, 2019, the same day a Tesla in Indiana got into a deadly crash with a parked fire truck, an offduty chauffeur named Kevin George Aziz Riyadh, was driving his grade 2016 Tesla Model S down the Gardena Freeway in suburban Los Angeles.
- It had been a long drive back from a visit to Orange County, and Riyadh had autopilot turned on.
- Shortly after midnight, the car passed under a giant sign that said, End Freeway signal ahead, in flashing yellow lights.
- The autopilot kept Riyadh's Tesla at a steady speed as it approached the stoplight that marked the end of the Freeway and the beginning of Artisiable of Art.
- According to a witness, the light was red, but the car drove straight through the intersection, striking a Honda Civic.
- Riyadh had only minor injuries, but the two people in the Civic, Hilderto Alquez Arlopes and Maria Guadalupe Nieves, died at the scene.
- Their families said that they were on a first date.
- Who was responsible for this accident? State officials have charged Riyadh with manslaughter and planned to prosecute him as if he were the sole actor behind the two deaths.
- The victim's families, meanwhile, have filed civil suits against both Riyadh and Tesla.
- Depending on the outcomes of the criminal and civil cases, the autopilot system could be judged, in effect, legally responsible,.
📚 Transcript
Speaker 1
On December 29, 2019, the same day a Tesla in Indiana got into a deadly crash with a parked fire truck, an off-duty chauffeur named Kevin George Aziz Riad was driving his gray 2016 Tesla Model S down the Gardena Freeway in suburban Los Angeles. It had been a long drive back from a visit to Orange County, and Riad had Autopilot turned on. Shortly after midnight, the car passed under a giant sign that said, End Freeway Signal Ahead, in flashing yellow lights. The Autopilot kept Riad's Tesla at a steady speed as it approached the stoplight that marked the end of the freeway and the beginning of Artesia Boulevard. According to a witness, the light was red, but the car drove straight through the intersection, striking a Honda Civic. Riad had only minor injuries, but the two people in the Civic, Gilberto Alcazar Lopez and Maria Guadalupe Nieves, died at the scene. Their families said that they were on a first date. Who was responsible for this accident? State officials have charged Riad with manslaughter and plan to prosecute him as if he were the sole actor behind the two deaths. The victims' families, meanwhile, have filed civil suits against both Riad and Tesla. Depending on the outcomes of the criminal and civil cases, the Autopilot system could be judged, in effect, legally responsible, not legally responsible, or both simultaneously. Not long ago, I went to see the spot where Riad's Tesla reportedly ran the red light.
Created with Snipd | Highlight & Take Notes from Podcasts