Short post about Tesla
- Posted by Michał ‘mina86’ Nazarewicz on 23rd of March 2025
Mark Rober’s video, in which he fooled a Tesla into driving into a fake road, has been making the rounds on the Internet recently. It questions Musk’s decision to abandon lidars and adds to the troubles Tesla has been facing. But it also hints at a more general issue with the automation of complex systems and artificial intelligence.

‘Klyne and I belong to two different generations,’ testifies Pirx in Lem’s 1971 short story Ananke. ‘When I was getting my wings, servo-mechanisms were more error-prone. Distrust becomes second nature. My guess is… he trusted in them to the end.’ The end being the Ariel spacecraft crashing, killing everyone on board. Had Klyne put less trust in the ship’s computer, could the catastrophe have been averted? We shall never know, mostly because the crash is a fictional story, but with the proliferation of so-called artificial intelligence, failures directly attributed to it have happened in the real world as well:
- Chatbot turns schizophrenic when exposed to Twitter. Who could have predicted that letting an AI learn from Twitter was a bad idea?
- AI recommends adding glue to pizza because it read about it on Reddit. Such hallucinations are surprisingly easy to introduce even without trying.
- Lawyers get sanctioned for letting AI invent fake cases. Strangely, this keeps happening. Surely, at some point lawyers will learn, right?
- Chatbot offers a non-existent discount which the company must honour.1 Bots of the past were limited and annoying to use, but that also meant they couldn’t hallucinate.
- Driverless car drags pedestrian 20 feet because she fell to the ground and became invisible to the car’s sensors.
What’s the conclusion from all these examples? Should we abandon automation and technology in general? Panic-sell Tesla stock? I’m not a financial advisor, so I won’t make stock recommendations,2 but on the technology side I do not believe that’s what the examples show us.
Just as there are many examples of AI failures, there are also examples of automation saving lives. The safety of air travel, for example, has been consistently improving, and that’s in part thanks to automation in the cockpit. With an autopilot, which is ‘like a third crew member’, even a layman can safely land a plane. Back on the ground, anti-lock braking systems (ABS) reduce the risk of on-road collisions, and even Rober’s video demonstrates that properly implemented autonomous emergency braking (AEB) prevents crashes.
However, it looks like the deployment of new technologies may be going too fast. Large Language Models (LLMs) are riddled with challenges which are yet to be addressed. It’s alright if one uses the technology knowing its limitations, checking the citations Le Chat provides, for example, but if the technology is pushed onto people less familiar with it, problems are sure to appear.
I still believe that real self-driving (not to be confused with Tesla’s Full Self-Driving (FSD) technology) has the capacity to save lives. For example, as Timothy B. Lee observes, ‘most Waymo crashes involve a Waymo vehicle scrupulously following the rules while a human driver flouts them, speeding, running red lights, careening out of their lanes, and so forth.’ I also don’t think it’s inevitable that driverless cars will destroy cities, as Jason Slaughter warns. [Added on 28th of March: citation of Lee’s article.]
But with AI systems finding their way into more and more products, we need to tread lightly and test things properly. Otherwise, like Krzysztof in Kieślowski’s 1989 film Dekalog: One, we are walking on thin ice, and insufficient testing and safety precautions may lead to many more failures.