rootusrootus a day ago

I'm on my second free FSD trial, which just started for me today. I gave it another shot, and it seems largely similar to the last free trial they gave. Fun party trick, surprisingly good, right up until it's not. A hallmark of AI everywhere is how great it is, and just how abruptly and catastrophically it occasionally fails.

Please, if you're going to try it, keep both hands on the wheel and your foot ready for the brake. When it goes off the rails, it usually does so in surprising ways with little warning and little time to correct. And since it's so good much of the time, you can get lulled into complacence.

I never really understand the comments from people who think it's the greatest thing ever and say it makes their drive less stressful. It does the opposite for me: entertaining, but exhausting to supervise.

  • tverbeure 19 hours ago

    I just gave it another try after my last failed attempt. (https://tomverbeure.github.io/2024/05/20/Tesla-FSD-First-and...)

    I still find it shockingly bad, especially in the way it reacts, or doesn’t, to the way things change around the car (think a car on the left in front of you that switches on its indicators to merge in front of you) or the way it makes the most random lane-changing decisions and changes its mind in the middle of that maneuver.

    Those don’t count as disengagements, but they’re jarring and drivers around you will rightfully question your behavior.

    And that’s all over just a few miles of driving in an easy environment of interstate or highway.

    I totally agree that it’s an impressive party trick, but it has no business being on the road.

    My experience with Waymo in SF couldn’t have been more different.

    • friendzis 20 minutes ago

      > I still find it shockingly bad, especially in the way it reacts, or doesn’t, to the way things change around the car (think a car on the left in front of you that switches on its indicators to merge in front of you) or the way it makes the most random lane-changing decisions and changes its mind in the middle of that maneuver.

      I have said it before, and I will say it again: it seems that this software does not possess permanence, neither object nor decision.
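
      To make that concrete, here is a rough sketch in Python of what object permanence means in a perception stack: a tracker that coasts an object for a few frames after the detector loses it, instead of letting it blink in and out of existence. Purely illustrative; the names and thresholds are made up and none of this is Tesla's actual code.

          from dataclasses import dataclass

          MAX_MISSED_FRAMES = 5  # made-up threshold: keep a lost object alive this long

          @dataclass
          class Track:
              track_id: int
              x: float            # last known position in the ego frame (metres)
              y: float
              missed: int = 0     # consecutive frames with no matching detection

          def update_tracks(tracks, detections, match_radius=2.0):
              """Naive nearest-neighbour tracker with persistence.

              `detections` is a list of (x, y) points from the current frame.
              Unmatched tracks are coasted for up to MAX_MISSED_FRAMES frames
              instead of being deleted immediately -- that is the permanence.
              """
              unmatched = list(detections)
              for t in tracks:
                  best = None
                  for d in unmatched:
                      if (d[0] - t.x) ** 2 + (d[1] - t.y) ** 2 <= match_radius ** 2:
                          best = d
                          break
                  if best is not None:
                      t.x, t.y = best
                      t.missed = 0
                      unmatched.remove(best)
                  else:
                      t.missed += 1  # no detection this frame: coast, don't drop yet
              tracks[:] = [t for t in tracks if t.missed <= MAX_MISSED_FRAMES]
              next_id = max((t.track_id for t in tracks), default=0) + 1
              for d in unmatched:  # spawn new tracks for detections nobody claimed
                  tracks.append(Track(next_id, d[0], d[1]))
                  next_id += 1
              return tracks

      Drop the `missed` counter and you get exactly the popping in and out of existence that people describe seeing on the in-car visualization.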

    • y-c-o-m-b 18 hours ago

      > it makes the most random lane-changing decisions and changes its mind in the middle of that maneuver.

      This happened to me during my first month of trialing FSD last year and was a big contributing factor for me not subscribing. I did NOT appreciate the mess the vehicle made in this type of situation. If I saw another driver doing the same, I'd seriously question if they were intoxicated.

    • sokoloff 18 hours ago

      > (think a car on the left in front of you that switches on its indicators to merge in front of you)

      That car is signaling an intention to merge into your lane once it is safe for them to do so. What does the Tesla do (or not do) in this case that's bad?

      • tverbeure 7 hours ago

        What I expect it to do is to be a courteous driver, and back off a little bit to signal to the car in front that I got the message and that it's safe to merge.

        FSD is already defensive to a fault, with frequent stop-and-go indecision about when to merge onto a highway, but that's a whole other story.

        A major part of safe driving is about being predictable. You either commit and claim your right of way, or you don't. In this situation, both can be signaled easily to the other party by being a bit of a jerk (e.g. accelerating to close the gap and prevent somebody else from merging) or the opposite. Both are better than not doing anything at all and keeping the other dangling in a state of uncertainty.

        FSD is in an almost permanent state of being indecisive and unpredictable. It behaves like a scared teenager with a learner's permit. Again, totally different from my experience with Waymo in the urban jungle of San Francisco, which is a defensive but confident driver.

      • cma 17 hours ago

        Defensive driving is to assume they might not check their blind spot, etc., and just generally to ease off in this situation if they would end up merging in tight were they to begin merging now.

        • tverbeure 14 hours ago

          That’s the issue: I would immediately slow down a little bit to let the other car merge. FSD seems to notice something and eventually slows down, but the action is too subtle (if it happens at all) to signal to the other driver that you’re letting them merge.

      • hotspot_one 16 hours ago

        > That car is signaling an intention to merge into your lane once it is safe for them to do so.

        Only under the assumption that the driver was trained in the US, to follow US traffic law, and is following that training.

        For example, in the EU, you switch on the indicators when you start the merge; the indicator shows that you ARE moving.

        • sokoloff 16 hours ago

          That seems odd to the point of uselessness, and does not match the required training I received in Germany from my work colleagues at Daimler prior to being able to sign out company cars.

          https://www.gesetze-im-internet.de/stvo_2013/__9.html seems to be the relevant law in Germany, which Google translates to "(1) Anyone wishing to turn must announce this clearly and in good time; direction indicators must be used."

          • johnisgood 38 minutes ago

            I think the moral of the story is that cars may or may not turn their blinkers on. If they do, the self-driving system should catch that just as easily and expect the car to switch lanes (and react with extreme caution).

          • throw4950sh06 11 hours ago

            Maybe the guy was talking about the reality, not the theory. From my autobahn travels it seems like the Germans don't know how to turn on the blinkers.

            • xattt 10 hours ago

              > … the Germans don’t know how to turn on the blinkers.

              [Insert nationality/regional area here] don’t know how to turn on the blinkers.

              • throw4950sh06 4 hours ago

                I wouldn't say so. It's a very marked difference with a sharp change the moment I drive through the border.

                • xattt 14 minutes ago

                  I’m only saying this from my experience in Canada where every region thinks its drivers are the worst.

        • Zanfa 14 hours ago

          > For example, in the EU, you switch on the indicators when you start the merge; the indicator shows that you ARE moving.

          In my EU country it's theoretically at least 3 seconds before initiating the move.

          • johnisgood 33 minutes ago

            As I mentioned in my other comment, 1 second is negligible; I would even dare to say that 3 seconds is, too. For a computer it should not be, however.

        • valval 5 hours ago

          For anyone confused, this person’s statement about the EU is total bs.

          • rcxdude an hour ago

            It's what I was taught: you switch on your indicators when you have checked that you are clear to merge and you have effectively committed. I always assume that someone who has put their indicators on is going to move according to them, whether it's clear or not.

            • lbschenkel 9 minutes ago

              I don't doubt that it's the way you have been taught, but it doesn't make any sense. The whole point of blinkers/indicator lights in cars is to signal your intentions before you act on them: if you're going to signal at the same time as you do the action you're signalling, you might as well not bother.

            • johnisgood 37 minutes ago

              It is what I see in practice in Eastern Europe. They signal as they are shifting lanes. Even if they turn the blinker on and then start moving 1 second later, it could be considered the same thing, as 1 second is negligible.

              Thus "the indicator shows that you ARE moving" is correct, at least in practice.

    • avar 4 hours ago

      [flagged]

      • vkou an hour ago

        There are degrees of being a shitty human being.

        Using your platform and millions of followers to publicly shit on some random person who pissed you off is a degree of it.

        Being a colossal hypocrite with your 'free speech' platform, or lying to your customers is something else.

        Full mask-off throwing millions of dollars towards electing a convicted conman who is unabashedly corrupt, vindictive, nepotistic, already has a failed coup under his belt, and is running on a platform of punishing anyone who isn't a sycophant is... Also something else.

        • whoitwas 41 minutes ago

          I'm a bit more cynical and see his turn as a business move. He already has a considerable market captured, so he went full wackjob to capture the other market.

          Apparently, this doesn't reflect reality and he actually went crazy because one of his kids is trans. I have no idea because I don't know him.

  • darknavi a day ago

    You slowly build a relationship with it and understand where it will fail.

    I drive my 20-30 minute commutes largely with FSD, as well as our 8-10 hour road trips. It works great, but 100% needs to be supervised and is basically just nicer cruise control.

    • eschneider 19 hours ago

      "You slowly build a relationship with it and understand where it will fail."

      I spent over a decade working on production computer vision products. You think you can do this, and for some percentage of failures you can. The thing is, there will ALWAYS be some percentage of failure cases where you really can't perceive anything different from a success case.

      If you want to trust your life to that, fine, but I certainly wouldn't.

      • sandworm101 19 hours ago

        Or until a software update quietly resets the relationship and introduces novel failure modes. There is little more dangerous on the road than false confidence.

        • johnisgood 31 minutes ago

          Exactly. You may learn its patterns, but a software update could fuck it all up in a zillion different ways.

      • peutetre 13 hours ago

        Elon Musk is a technologist. He knows a lot about computers. The last thing Musk would do is trust a computer program:

        https://www.nbcnews.com/tech/tech-news/musk-pushes-debunked-...

        So I guess that's game over for full self-driving.

        • llamaimperative 13 hours ago

          Oooo maybe he'll get a similar treatment as Fox did versus Dominion.

        • valval 5 hours ago

          Yeah, he’s just dedicating his life to something that he knows won’t even work. What are you on about?

          • voganmother42 5 hours ago

            Everyone else’s life seems to be completely irrelevant

    • coffeefirst a day ago

      This feels like the most dangerous possible combination (not for you, just to have on the road in large numbers).

      Good enough that the average user will stop paying attention, but not actually good enough to be left alone.

      And when the machine goes to do something lethally dumb, you have 5 seconds to notice and intervene.

      • jvolkman 21 hours ago

        This is what Waymo realized a decade ago and what helped define their rollout strategy: https://youtu.be/tiwVMrTLUWg?t=247&si=Twi_fQJC7whg3Oey

        • nh2 20 hours ago

          This video is great.

          It looks like Waymo really understood the problem.

          It explains concisely why it's a bad idea to roll out incremental progress, how difficult the problem really is, and why you should really throw all the sensors you can at it.

          I also appreciate the "we don't know when it's going to be ready" attitude. It shows they have a better understanding of what their task actually is than anybody who claims "next year" every year.

          • trompetenaccoun 19 hours ago

            All their sensors didn't prevent them from crashing into a stationary object. You'd think that would be the absolute easiest thing to avoid, especially with both radar and lidar on board. Accidents like that show that the training data and software will be much more important than the number of sensors.

            https://techcrunch.com/2024/06/12/waymo-second-robotaxi-reca...

            • rvnx 18 hours ago

              The issue was fixed. They're now handling 100'000 trips per week, and all seems to have gone well over the last 4 months, which is about 1.5 million trips.

              • trompetenaccoun 17 hours ago

                So they had a "better understanding" of the problem, as the other user put it, but their software was still flawed and needed fixing. That's my point. This happened two weeks ago btw: https://www.msn.com/en-in/autos/news/waymo-self-driving-car-...

                I don't mean Waymo is bad or unsafe, it's pretty cool. My point is about true automation needing data and intelligence. A lot more data than we currently have, because the problem is in the "edge" cases, the kind of situation the software has never encountered. Waymo is in the lead for now but they have fewer cars on the road, which means less data.

              • jraby3 17 hours ago

                Any idea how many accidents and how many fatalities? And how that compares to human drivers?

          • yborg 20 hours ago

            You don't get a $700B market cap by telling investors "We don't know."

            • rvnx 19 hours ago

              Ironically, Robotaxis from Waymo are actually working really well. It's a true unsupervised system, very safe, used in production, where the manufacturer takes the full responsibility.

              So the gradual rollout strategy is actually great.

              Tesla wants to do "all or nothing", and ends up with nothing for now (see Europe, where FSD has been sold since 2016 but is "pending regulatory approval", when actually the problem is that the tech is not finished yet, sadly).

              It's genuinely a difficult problem to solve, so it's better to do it step-by-step than a "big-bang deploy".

              • nh2 15 hours ago

                > So the gradual rollout strategy is actually great.

                I think you misunderstood, or it's a terminology problem.

                Waymo's point in the video is that, in contrast to Tesla, they are _not_ doing a gradual rollout of seemingly-working-but-still-often-catastrophically-failing tech.

                See e.g. minute 5:33 -> 6:06. They are stating that they are targeting directly the shown upper curve of safety, and that they are not aiming for the "good enough that the average user will stop paying attention, but not actually good enough to be left alone".

                • espadrine 27 minutes ago

                  Terminology.

                  Since they targeted very low risk, they did a geographically-segmented rollout, starting with Phoenix, which is one of the easiest places to drive: a lot of photons for visibility, very little rain, wide roads.

              • mattgreenrocks 18 hours ago

                Does Tesla take full responsibility for FSD incidents?

                It seemed like most players in tech a few years ago were using legal shenanigans to dodge liability here, which, to me, indicates a lack of seriousness toward the safety implications.

                • valval 5 hours ago

                  What does that mean? Tesla’s system isn’t unsupervised, so why would they take responsibility?

                  • x3ro an hour ago

                    I don't know, maybe because they call it "Full Self-Driving"? :)

            • zbentley 18 hours ago

              Not sure how tongue-in-cheek that was, but I think your statement is the heart of the problem. Investment money chases confidence and moonshots rather than backing organizations that pitch a more pragmatic (read: asterisks and unknowns) approach.

      • ricardobeat 19 hours ago

        Five seconds is a long time in driving; usually you’ll need to react in under 2 seconds in situations where it disengages, and those never happen while going straight.

        • theptip 19 hours ago

          Not if you are reading your emails…

    • lolinder 21 hours ago

      When an update comes out does that relationship get reset (does it start failing on things that used to work), or has it been a uniform upward march?

      I'm thinking of how every SaaS product I ever have to use regularly breaks my workflow to make 'improvements'.

      • bdndndndbve 20 hours ago

        I wouldn't take OP's word for it if they really believe they know how it's going to react in every situation in the first place. Studies have shown that people grossly overestimate their own ability to pay attention.

      • xur17 20 hours ago

        For me it does, but only somewhat. I'm much more cautious / aware for the first few drives while I figure it out again.

        I also feel like it takes a bit (5-10 minutes of driving) for it to recalibrate after an update, and it's slightly worse than usual at the very beginning. I know they have to calibrate the cameras to the car, so it might be related to that, or it could just be me getting used to its quirks.

    • sumodm 19 hours ago

      Something along these lines is the real danger. People will understand common failure modes and assume they have understood its behavior for most scenarios. Unlike common deterministic and even some probabilistic systems, where behavior boundaries are well behaved, there could be discontinuities in rarely seen parts of the boundary. And these 'rarer' parts need not be obvious to us humans, since a few pixel changes might cause wrinkles.

      *vocabulary use is for a broad stroke explanation.

  • 650REDHAIR 18 hours ago

    This was my experience as well. It tried to drive us (me, my wife, and my FIL) into a tree on a gentle low speed uphill turn and I’ll never trust it again.

  • jerb 15 hours ago

    But it’s clearly statistically much safer (https://www.tesla.com/VehicleSafetyReport): 7 million miles before an accident with FSD vs. 1 million when disengaged. I agree I didn't like the feel of FSD either, but the numbers speak for themselves.

mcintyre1994 19 minutes ago

Something I find weird about riding in a Tesla is that they have a mode that a bunch of Uber drivers seem to use where it shows a sort of diagram on the screen of what the car perceives to be its surroundings. This seems to be mostly bad - cars jump in and out, things appear from nowhere right next to the car, things randomly disappear. If that's produced using the same inputs the car uses for self driving then I'm not surprised it has all these issues.

  • jdblair 12 minutes ago

    Here in Amsterdam, this view constantly shows bicycles approaching from the sides, flickering and disappearing, but most of the time the bicycles are parked, completely stationary, without riders.

Fomite 20 hours ago

"Driver is mostly disengaged, but then must intervene in a sudden fail state" is also one of the most dangerous types of automation due to how long it takes the driver to reach full control as well.

  • drowsspa 18 hours ago

    Yeah, I don't drive, but I would think it would be worse than actually paying attention all the time.

    • rcxdude an hour ago

      It's more like being a driving instructor, which has a higher effort and skill bar than just driving.

    • lopkeny12ko 16 hours ago

      You are required to pay attention all the time. That's what the "supervised" in "FSD (supervised)" means.

      • freejazz 14 hours ago

        FSD stands for Fully Supervised Driving, right?

        • noapologies an hour ago

          Same energy as

          Unlimited Data!! (up to 100GB)

        • dhdaadhd 14 hours ago

          yeah, that sounds like Elon’s marketing to me.

    • pessimizer 18 hours ago

      It's also a problem that gets worse as the software gets better. Having to intervene once every 5 minutes is a lot easier than having to intervene once every 5 weeks. If lack of intervention causes an accident, I'd bet on the 5-minute car avoiding an accident longer than the 5-week car over any span of time longer than 10 weeks.

      • jakub_g 17 hours ago

        I feel like the full self driving cars should have a "budget". Every time you drive, say, 1000 km in FSD, you then need to drive 100 km in "normal" mode to keep sharp. Or whatever the ratio / exact numbers TBD. You can reset the counter upfront by driving smaller mileage more regularly.
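
        Roughly what that proposal looks like as code (a sketch only: the 1000/100 ratio is just the number from this comment, and `FsdBudget` is a made-up name, not something any car actually implements):

            class FsdBudget:
                """Tracks manual-driving 'debt' accrued by driving on FSD."""

                RATIO = 0.1        # 100 manual km owed per 1000 FSD km, per the comment
                MAX_DEBT_KM = 100  # once this much debt accrues, FSD is locked out

                def __init__(self):
                    self.manual_owed_km = 0.0

                def log_fsd_km(self, km: float):
                    self.manual_owed_km += km * self.RATIO

                def log_manual_km(self, km: float):
                    # Manual driving pays down the debt; driving more than owed
                    # banks credit, i.e. "reset the counter upfront".
                    self.manual_owed_km = max(0.0, self.manual_owed_km - km)

                def fsd_allowed(self) -> bool:
                    return self.manual_owed_km < self.MAX_DEBT_KM

            budget = FsdBudget()
            budget.log_fsd_km(1000)      # now owes 100 manual km
            print(budget.fsd_allowed())  # False until some manual driving is logged
            budget.log_manual_km(100)
            print(budget.fsd_allowed())  # True again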

        • krisoft an hour ago

          That's not solving the right problem. That keeps you sharp at driving, but does not keep you sharp at supervising. Even if the car drives you 1000 km flawlessly, it can still kill you with a random erratic bug on the 1001st km (or on the 1234th), and that is where people will zone out. Keeping people driving will keep them able to drive, but won't make them less zoned out when they are not driving.

        • Dylan16807 5 hours ago

          Just as driving practice?

          It's not going to help the problem of keeping up vigilance when monitoring a level 3 system.

TheAlchemist a day ago

Tesla released a promotional video in 2016 saying that with FSD a human driver is not necessary and that "The person in the driver's seat is only there for legal reasons". The video was staged, as we learned in 2022.

2016, folks... Even with today's FSD, which is several orders of magnitude better than the one in the video, you would still probably have a serious accident within a week (and I'm being generous here) if you didn't sit in the driver's seat.

How Trevor Milton got sentenced for fraud and the people responsible for this were not is a mystery to me.

  • 1f60c 20 hours ago

    AFAIK the owner's manual says you have to keep your hands on the wheel and be ready to take over at all times, but Elon Musk and co. love to pretend otherwise.

    • Flameancer 18 hours ago

      This part doesn’t seem to be common knowledge. I don’t own a Tesla, but I have been in a few. From my understanding, the feature has always said it was in beta and that it still required you to have your hands on the wheel.

      I like the idea of FSD, but I think we should have a serious talk about the safety implications of making this more broadly available, and also about building a mesh network so FSD vehicles can communicate. I’m not well versed in the tech, but I feel like it would be safer to have more cars on the road that can communicate and make decisions together than separate cars existing in a vacuum, each having to make decisions on its own.

      • y-c-o-m-b 17 hours ago

        I've wondered about the networked vehicle communication for a while. It doesn't even need to be FSD. I might be slightly wrong on this, but I would guess most cars going back at least a decade can have their software/firmware modified to do this if the manufacturers so choose. I imagine it would improve the reliability and reaction-times of FSD considerably.

  • throw627004 a day ago

    [flagged]

    • mglz a day ago

      > Why is he trying to buy a president in the first place?

      Because he can make more money under one than the other.

      • throw627004 21 hours ago

        Obviously yeah, but I do think his odd fervor in this allows us to speculate that there are some threats to his businesses that are not well understood publicly, and that would be solved by becoming a sort of American oligarch.

        • throw627004 an hour ago

          It's really not much of a stretch.

          In addition to the risks to Tesla raised upthread, SpaceX needs an ambitious space program, and Mars program specifically along the lines of Musk's ideas.

          Capturing the US government is a great way to get there.

  • sschueller a day ago

    [flagged]

    • nemo44x a day ago

      He’s not doing anything activists haven’t done for years to get out the vote. In college, famous rock and hip-hop groups would come on campus to play shows that had voter registration tables at the entry, lots of messaging about who to vote for, and endless recruiting to volunteer/phone bank/canvass for some group that was supporting the event.

      Activism cuts both ways.

      • matwood 21 hours ago

        Direct payments do seem to be illegal in a way that having a rally or concert or canvassing are not.

        https://electionlawblog.org/?p=146397

        • nemo44x 20 hours ago

          You’re not obligated to do anything like vote or vote a certain way. Money is speech and it’s an advertisement.

          That’s just some random blog. I’m sure Musk’s lawyers understand what they’re doing.

          Frankly I find it inspiring he cares enough about our democracy to encourage people to participate in it at great expense of his own. You love to see innovation in turning out voters who may not otherwise have their voices heard.

          • shadowfacts 20 hours ago

            It's not a random blog making conjectures: Rick Hasen is a law professor who is an expert in this area and, moreover, he cites specific statutes and DOJ information that's not all that ambiguous.

            • nemo44x 19 hours ago

              He gets basic facts wrong in his blog though. For instance the rewards are for referring people to sign a petition that says you support 1a and 2a. You need to be registered for your voice to count. He’s not paying them to register but rather to refer registered people to sign it. So it’s up to an individual to find registered voters to sign it so they can collect their $47 bounty per referred signee.

              • matwood 17 hours ago

                > He gets basic facts wrong in his blog though.

                No, he doesn't. He references the $47 as 'murky legality'. What he's stating is clearly illegal is the $1M lottery also announced by Musk.

                And Musk does what he wants regardless of lawyers. Remember when he tried to back out of buying Twitter...

            • rvnx 19 hours ago

              Yes but he didn't take into consideration that laws don't apply when you are a billionaire and that you hold both state secrets (via DoD/Starlink) and connections to foreign countries.

              So Musk will be fine, especially if Trump wins.

          • CamperBob2 19 hours ago

            That’s just some random blog. I’m sure Musk’s lawyers understand what they’re doing.

            I'm sure his lawyers know what they're doing, but did that stop their client from calling a cave diver a pedophile for objecting to his submarine design?

            Musk is basically a valueless chaos monkey with a perfect 18 score in Luck. Even he doesn't know what he'll do, say, or believe next. He has the luxury of not caring because it doesn't matter anyway; he'll just continue to get away with things that would shut the rest of us down for good. Not surprising that he's found a kindred spirit in Trump.

      • sschueller 21 hours ago

        That may be, but were they required to sign a PAC's pledge to enter such a concert? I think this might be over the line, but a court has to decide this.

      • lawn 21 hours ago

        Paying someone to vote a certain way is in fact illegal though.

        • nemo44x 20 hours ago

          You can vote for whomever you’d like. His PAC isn’t asking you to prove you voted or voted for a particular person to get the money. They’re just generating buzz and interest in the candidate they feel is better.

bastawhiz 2 days ago

Lots of people are asking how good the self driving has to be before we tolerate it. I got a one month free trial of FSD and turned it off after two weeks. Quite simply: it's dangerous.

- It failed with a cryptic system error while driving

- It started making a left turn far too early that would have scraped the left side of the car on a sign. I had to manually intervene.

- In my opinion, the default setting accelerates way too aggressively. I'd call myself a fairly aggressive driver and it is too aggressive for my taste.

- It tried to make way too many right turns on red when it wasn't safe to. It would creep into the road, almost into the path of oncoming vehicles.

- It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

- It would switch lanes to go faster on the highway, but then missed an exit on at least one occasion because it couldn't make it back into the right lane in time. Stupid.

After the system error, I lost all trust in FSD from Tesla. Until I ride in one and feel safe, I can't have any faith that this is a reasonable system. Hell, even autopilot does dumb shit on a regular basis. I'm grateful to be getting a car from another manufacturer this year.

  • TheCleric 2 days ago

    > Lots of people are asking how good the self driving has to be before we tolerate it.

    There’s a simple answer to this. As soon as it’s good enough for Tesla to accept liability for accidents. Until then if Tesla doesn’t trust it, why should I?

    • ndsipa_pomu a day ago

      > As soon as it’s good enough for Tesla to accept liability for accidents.

      That makes a lot of sense and not just from a selfish point of view. When a person drives a vehicle, then the person is held responsible for how the vehicle behaves on the roads, so it's logical that when a machine drives a vehicle that the machine's manufacturer/designer is held responsible.

      It's a complete con that Tesla is promoting their autonomous driving but also having their vehicles suddenly switch to non-autonomous driving, which they claim moves the responsibility to the human in the driver's seat. Presumably, the idea is that the human should have been watching and approving everything that the vehicle has done up to that point.

      • andrewaylett a day ago

        The responsibility doesn't shift; it always lies with the human. One problem is that humans are notoriously poor at maintaining attention when supervising automation.

        Until the car is ready to take over as legal driver, it's foolish to set the human driver up for failure in the way that Tesla (and the humans driving Tesla cars) do.

        • mannykannot 18 hours ago

          > The responsibility doesn't shift, it always lies with the human.

          Indeed, and that goes for the person or persons who say that the products they sell are safe when used in a certain way.

        • f1shy a day ago

          What?! So if there is a failure and the car goes full throttle (not an autonomous car), it is my responsibility?! You are pretty wrong!!!

          • kgermino 21 hours ago

            You are responsible (legally, contractually, morally) for supervising FSD today. If the car decides to stomp on the throttle, you are expected to be ready to hit the brakes.

            The whole point is that this is somewhat of an unreasonable expectation, but it’s what Tesla expects you to do today.

            • f1shy 19 hours ago

              My example was clearly NOT about autonomous driving, because the previous comment seems to imply you are responsible for everything.

            • FireBeyond 19 hours ago

              > If the car decided to stomp on the throttle you are expected to be ready to hit the brakes.

              Didn't Tesla have an issue a couple of years ago where pressing the brake did not disengage any throttle? i.e. if the car has a bug and puts throttle to 100% and you stand on the brake, the car should say "cut throttle to 0", but instead, you just had 100% throttle, 100% brake?
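
              For what it's worth, the behavior described there as the expected one is usually called brake override, and the core logic is trivial. A generic sketch, not Tesla's code or a claim about what their firmware actually does:

                  def throttle_output(requested_throttle: float, brake_pressed: bool) -> float:
                      """Brake override: any brake application forces throttle to zero,
                      so a stuck or buggy 100% throttle request cannot fight the brakes."""
                      return 0.0 if brake_pressed else requested_throttle

                  print(throttle_output(1.0, brake_pressed=True))   # 0.0
                  print(throttle_output(0.3, brake_pressed=False))  # 0.3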

              • blackeyeblitzar 16 hours ago

                If it did, it wouldn’t matter. Brakes are required to be stronger than engines.

                • FireBeyond 15 hours ago

                  That makes no sense. Yes, they are. But brakes are going to be more reactive and performant with the throttle at 0 than 100.

                  You can't imagine that the stopping distances will be the same.

          • xondono 20 hours ago

            Autopilot, FSD, etc.. are all legally classified as ADAS, so it’s different from e.g. your car not responding to controls.

            The liability lies with the driver, and all Tesla needs to prove is that input from the driver will override any decision made by the ADAS.

      • f1shy a day ago

        >> When a person drives a vehicle, then the person is held responsible for how the vehicle behaves on the roads, so it's logical that when a machine drives a vehicle that the machine's manufacturer/designer is held responsible.

        Never really understood the supposed dilemma. What happens when the brakes fail because of bad quality?

        • ndsipa_pomu 20 hours ago

          > What happens when the brakes fail because of bad quality?

          Depends on the root cause of the failure. Manufacturing faults would put the liability on the manufacturer; installation mistakes would put the liability on the mechanic; using them past their useful life would put the liability on the owner for not maintaining them in working order.

        • arzig a day ago

          Then this would be manufacturing liability because they are not fit for purpose.

    • jefftk a day ago

      Note that Mercedes does take liability for accidents with their (very limited level) level 3 system: https://www.theverge.com/2023/9/27/23892154/mercedes-benz-dr...

      • f1shy a day ago

        Yes. That is the only way. That being said, I want to see the first incidents and how they are resolved.

      • iknowstuff 12 hours ago

        It's pathetic: <40 mph, following a vehicle directly ahead, basically only usable in stop-and-go traffic.

        https://www.notebookcheck.net/Tesla-vs-Mercedes-self-driving...

        • jefftk 12 hours ago

          The Mercedes system is definitely, as I said, very limited. But within its operating conditions the Mercedes system is much more useful: you can safely and legally read, work, or watch a movie while in the driver's seat, literally not paying any attention to the road.

    • genocidicbunny 2 days ago

      I think this is probably both the most concise and most reasonable take. It doesn't require anyone to define some level of autonomy or argue about specific edge cases of how the self driving system behaves. And it's easy to apply this principle to not only Tesla, but to all companies making self driving cars and similar features.

    • bdcravens 2 days ago

      The liability for killing someone can include prison time.

      • TheCleric 2 days ago

        Good. If you write software that people rely on with their lives, and it fails, you should be held liable for that criminally.

        • hibikir 20 hours ago

          Remember that this is neural networks doing the driving, more than old expert systems: what makes a crash happen is a network that fails to read an image correctly, or a network that fails to capture what is going on when melding input from different sensors.

          So the blame won't be on a guy who got an if statement backwards, but on signing off on stopping training, failing to have certain kinds of pictures in the training set, or some other similar, higher-order problem. Blame will be incredibly nebulous.

          • snovv_crash an hour ago

            This is the difference between a Professional Engineer (ie. the protected term) and everyone else who calls themselves engineers. They can put their signature on a system that would then hold them criminally liable if it fails.

            Bridges, elevators, buildings, ski lifts etc. all require a professional engineer to sign off on them before they can be built. Maybe self driving cars need the same treatment.

        • sashank_1509 17 hours ago

          Do we send Boeing engineers to jail when their plane crashes?

          Intention matters when passing criminal judgement. If a mother causes the death of her baby due to some poor decision (say, feeding her something contaminated), no one proposes or tries to jail the mother, because they know the intention was the opposite.

          • davkan 3 hours ago

            This is why we have criminal negligence. Did the mother open a sealed package from the grocery store or did she find an open one on the ground?

            Harder to apply to software, but maybe there should be some legal liability involved when a sysadmin uses admin/admin and health information is leaked.

            Some employees from Boeing absolutely should be in jail regarding the MCAS system and the hundreds of people who died as a result. But the actions there go beyond negligence anyway.

        • bdcravens a day ago

          Assuming there's the kind of guard rails as in other industries where this is true, absolutely. (In other words, proper licensing and credentialing, and the ability to prevent a deployment legally)

          I would also say that if something gets signed off on by management, that carries an implicit transfer of accountability up the chain from the individual contributor to whoever signed off.

        • beej71 a day ago

          And such coders should carry malpractice insurance.

        • mensetmanusman a day ago

          Software requires hardware that can bit flip with gamma rays.

          • aaronmdjones a day ago

            Which is why hardware used to run safety-critical software is made redundant.

            Take the Boeing 777 Primary Flight Computer for example. This is a fully digital fly-by-wire aircraft. There are 3 separate racks of equipment housing identical flight computers; 2 in the avionics bay underneath the flight deck, 1 in the aft cargo section. Each flight computer has 3 separate processors, spanning 3 dissimilar instruction set architectures, running the same software built by 3 separate compilers. Each flight computer catches instances of the software not agreeing about an action to be undertaken and resolves them by majority vote. The processor that makes these decisions is different in each flight computer.

            The power systems that provide each flight computer are also fully redundant; each computer gets power from a power supply assembly, which receives 2 power feeds from 3 separate power supplies; no 2 power supply assemblies share the same 2 sources of power. 2 of the 3 power systems (L engine generator, R engine generator, and the hot battery bus) would have to fail and the APU would have to be unavailable in order to knock out 1 of the 3 computers.

            This system has never failed in 30 years of service. There's still a primary flight computer disconnect switch on the overhead panel in the cockpit, taking the software out of the loop, to logically connect all of your control inputs to the flight surface actuators. I'm not aware of it ever being used (edit: in a commercial flight).
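
            To illustrate just the voting part, here is a toy Python sketch of 2-out-of-3 median voting over redundant channels; real avionics voting logic is of course far more involved than this:

                from statistics import median

                def vote(channel_a: float, channel_b: float, channel_c: float,
                         tolerance: float = 0.5):
                    """Median of three redundant channels: a simple 2-out-of-3 vote
                    for continuous signals, since one faulty channel can never drag
                    the output away from the two healthy ones. Also reports which
                    channels disagree with the voted value by more than `tolerance`."""
                    voted = median([channel_a, channel_b, channel_c])
                    faulty = [name for name, value in
                              (("A", channel_a), ("B", channel_b), ("C", channel_c))
                              if abs(value - voted) > tolerance]
                    return voted, faulty

                # Example: channel B has gone bad; the voted output ignores it.
                print(vote(10.1, 55.0, 9.9))  # (10.1, ['B'])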

            • mensetmanusman 20 hours ago

              You can’t guarantee the hardware was properly built.

              • aaronmdjones 19 hours ago

                Unless Intel, Motorola, and AMD all conspire to give you a faulty processor, you will get a working primary flight computer.

                Besides, this is what flight testing is for. Aviation certification authorities don't let an aircraft serve passengers unless you can demonstrate that all of its safety-critical systems work properly and that it performs as described.

                I find it hard to believe that automotive works much differently in this regard, which is what things like crumple zone crash tests are for.

          • chgs a day ago

            You can control for that. Multiple machines doing rival calculations, for example.

          • rvnx a day ago

            [flagged]

        • dansiemens a day ago

          Are you suggesting that individuals should carry that liability?

          • izacus a day ago

            The ones that are identified as making decisions leading to death, yes.

            It's completely normal in other fields where engineers build systems that can kill.

            • Dylan16807 5 hours ago

              That's liability for defective design, not any time it fails as suggested above.

            • A4ET8a8uTh0 a day ago

              Pretty much. Fuck. I just watched higher-ups sign off on a project going into production that I know for a fact has defects all over the place, despite our very explicit "don't do it" (not quite Tesla-level consequences, but still resulting in real issues for real people). The sooner we start putting people in jail for knowingly approving half-baked software, the sooner it will improve.

              • IX-103 21 hours ago

                Should we require Professional Engineers to sign off on such projects the same way they are required to for other safety critical infrastructure (like bridges and dams)? The Professional Engineer that signed off is liable for defects in the design. (Though, of course, if the design is not followed then liability can shift back to the company that built it)

                • A4ET8a8uTh0 14 hours ago

                  I hesitate, because I shudder at government deciding which algorithm is best for a given scenario (because that is effectively where it would go). Maybe the distinction is the moment money changes hands based on the product?

                  I am not an engineer, but I have watched clearly bad decisions take place from a technical perspective, where a person with a title that went to their head and a bonus not aligned with the right incentives messed things up for us. Maybe some professionalization of software engineering is in order.

                  • snovv_crash an hour ago

                    This isn't a matter of the government saying what you need to do. This is a matter of being held criminally liable if people get hurt.

        • ekianjo a day ago

          How is that working with Boeing?

          • mlinhares a day ago

            People often forget corporations don’t go to jail. Murder when you’re not a person ends up with a slap.

        • bossyTeacher a day ago

          Doesn't seem to happen in the medical and airplane industries, otherwise, Boeing would most likely not exist as a company anymore.

          • jsvlrtmred 21 hours ago

            Perhaps one can debate whether it happens often enough or severely enough, but it certainly happens. For example, and only the first one to come to mind - the president of PIP went to jail.

        • dmix a day ago

          Drug companies and the FDA (circa 1906) play a very dangerous and delicate dance all the time releasing new drugs to the public. But for over a century now we've managed to figure it out without holding pharma companies criminally liable for every death.

          > If you write software that people rely on with their lives, and it fails, you should be held liable for that criminally.

          It's easier to type those words on the internet than to make them policy IRL. That sort of policy would likely result in a) killing off all commercial efforts to solve traffic deaths via technology, along with vast amounts of other semi-autonomous technology like farm equipment, or b) government/car companies mandating filming the driver every time they turn it on, because it's technically supposed to be human-assisted autopilot in these testing stages (outside restricted pilot programs like Waymo taxis). Those distinctions would matter in a criminal courtroom, even if humans can't be relied upon to always follow the instructions on the bottle's label.

          • ywvcbk a day ago

            > criminally liable for every death.

            The fact that people generally consume drugs voluntarily and make that decision after being informed about most of the known risks probably mitigates that to some extent. Being killed by someone else’s FSD car seems to be very different

            • sokoloff a day ago

              Imagine that in 2031, FSD cars could exactly halve all aspects of auto crashes (minor, major, single car, multi car, vs pedestrian, fatal/non, etc.)

              Would you want FSD software to be developed or not? If you do, do you think holding devs or companies criminally liable for half of all crashes is the best way to ensure that progress happens?

              • ywvcbk a day ago

                From a utilitarian perspective, sure, you might be right, but how do you exempt those companies from civil liability and make it impossible for victims/their families to sue the manufacturer? It might be legally tricky (a driver/owner can explicitly or implicitly agree to the EULA or other agreements, but imposing that on third parties wouldn't be right).

                • Majromax 21 hours ago

                  > how do you exempt those companies from civil liability and make it impossible for victims/their families to sue the manufacturer?

                  I don't think anyone in this thread has talked about an exemption from civil liability (sue for money), just criminal liability (go to jail).

                  Civil liability is the far less controversial issue because it's transferred all the time: governments even mandate that drivers carry insurance for this purpose.

                  With civil liability transfer, imperfect FSD can still make economic sense. Just as an insurance company needs to collect enough premium to pay claims, the FSD manufacturer would need to reserve enough revenue to pay its expected claims. In this case, FSD doesn't even need to be better than humans to make economic sense, in the same way that bad drivers can still buy (expensive) insurance.
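
                  As a back-of-the-envelope illustration of what "reserve enough revenue to pay its expected claims" means (every number below is invented, purely to show the arithmetic):

                      # Hypothetical figures, chosen only to illustrate the reserving math.
                      miles_per_car_per_year = 12_000
                      crashes_per_million_miles = 0.5  # assumed at-fault rate with FSD engaged
                      average_claim_cost = 40_000      # assumed average payout per at-fault crash

                      expected_claims_per_car = (miles_per_car_per_year / 1_000_000
                                                 * crashes_per_million_miles
                                                 * average_claim_cost)
                      print(f"reserve per car per year: ${expected_claims_per_car:,.0f}")
                      # -> reserve per car per year: $240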

                  • ywvcbk 21 hours ago

                    > just criminal liability (go to jail).

                    That just seems like a theoretical possibility (even if that). I don’t see how any engineer or even someone in management could go to jail unless intent or gross negligence can be proven.

                    > drivers carry insurance for this purpose.

                    The mandatory limit is extremely low in many US states.

                    > expected claims

                    That seems like the problem. It might take a while until we reach an equilibrium of some sort.

                    > that bad drivers can still buy

                    That’s still capped by the amount of coverage + total assets held by that bad driver. In Tesla’s case there is no real limit (without legislation/established precedent). Juries/courts would likely be influenced by that fact as well.

                  • DennisP 20 hours ago

                    In fact, if you buy your insurance from Tesla, you effectively do put civil responsibility for FSD back in their hands.

              • blackoil a day ago

                Say cars have near-zero casualties in the northern hemisphere but occasionally fail for cars driving topsy-turvy in the south. If the company knew about it and chose to ignore it because of profits, then yes, they should be charged criminally.

            • ekianjo a day ago

              > make that decision after being informed about most of the known risks

              Like for the COVID-19 vaccines? Experimental yet given to billions without ever showing them a consent form.

              • ywvcbk a day ago

                Yes, but worse. Nobody physically forced anyone to get vaccinated so you still had some choice. Of course legally banning individuals from using public roads or sidewalks unless they give up their right to sue Tesla/etc. might be an option.

          • hilsdev a day ago

            We should hold Pharma companies liable for every death. They make money off the success cases. Not doing so is another example of privatized profits and socialized risks/costs. Something like a program with reduced costs for those willing to sign away liability could help balance the social good vs. risk analysis.

          • ryandrake a day ago

            Your take is understandable and not surprising on a site full of software developers. Somehow, the general software industry has ingrained this pessimistic and fatalistic dogma that says bugs are inevitable and there’s nothing you can do to prevent them. Since everyone believes it, it is a self-fulfilling prophecy and we just accept it as some kind of law of nature.

            Holding software developers (or their companies) liable for defects would definitely kill off a part of the industry: the very large part that YOLOs code into production and races to get features released without rigorous and exhaustive testing. And why don’t they spend 90% of their time testing and verifying and proving their software has no defects? Because defects are inevitable and they’re not held accountable for them!

            • everforward a day ago

              It is true of every field I can think of. Food gets salmonella and what not frequently. Surgeons forget sponges inside of people (and worse). Truckers run over cars. Manufacturers miss some failures in QA.

              Literally everywhere else, we accept that the costs of 100% safety are just unreasonably high. People would rather have a mostly safe device for $1 than a definitely safe one for $5. No one wants to pay to have every head of lettuce tested for E Coli, or truckers to drive at 10mph so they can’t kill anyone.

              Software isn’t different. For the vast majority of applications where the costs of failure are low to none, people want it to be free and rapidly iterated on even if it fails. No one wants to pay for a formally verified Facebook or DoorDash.

              • kergonath a day ago

                > Literally everywhere else, we accept that the costs of 100% safety are just unreasonably high.

                Yes, but also in none of these situations would the consumer/customer/patient be held responsible. I don’t expect a system to be perfect, but I won’t accept any liability if it malfunctions as I use it the way it is intended. And even worse, I would not accept that the designers evade their responsibilities if it kills someone I know.

                As the other poster said, I am happy to consider it safe enough the day the company accepts to own its issues and the associated responsibility.

                > No one wants to pay for a formally verified Facebook or DoorDash.

                This is untenable. Does nobody want a formally verified avionics system in their airliner, either?

                • everforward 21 hours ago

                  You could be held liable if it impacts someone else. A restaurant serving improperly cooked chicken that gives people E Coli is liable. Private citizens may not have that duty, I’m not sure.

                  You would likely also be liable if you overloaded an electrical cable, causing a fire that killed someone.

                  “Using it in the way it was intended” is largely circular reasoning; of course it wasn’t intended to hurt anyone, so any usage that does hurt someone was clearly unintended. People frequently harm each other by misusing items in ways they didn’t realize were misuses.

                  > This is untenable. Does nobody want a formally verified avionics system in their airliner, either?

                  Not for the price it would cost. Airbus is the pioneer here, and even they apply formal verification sparingly. Here’s a paper from a few years ago about it, and how it’s untenable to formally verify the whole thing: https://www.di.ens.fr/~delmas/papers/fm09.pdf

                  Software development effort generally tends to scale superlinearly with complexity. I am not an expert, but the impression I get is that formal verification grows exponentially with complexity to the point that it is untenable for most things beyond research and fairly simple problems. It is a huge pain in the ass to do something like putting time bounds around reading a config file.

                  IO also sucks in formal verification from what I hear, and that’s like 80% of what a plane does. Read these 300 signals, do some standard math, output new signals to controls.

                  These things are much easier to do with tests, but tests only check for scenarios you've thought of already.

                  • kergonath 13 hours ago

                    > You could be held liable if it impacts someone else. A restaurant serving improperly cooked chicken that gives people E Coli is liable. Private citizens may not have that duty, I'm not sure.

                    > You would likely also be liable if you overloaded an electrical cable, causing a fire that killed someone.

                    Right. But neither of these examples are following guidelines or proper use. If I turn the car into people on the pavement, I am responsible. If the steering wheel breaks and the car does it, then the manufacturer is responsible (or the mechanic, if the steering wheel was changed). The question at hand is whose responsibility it is if the car’s software does it.

                    > “Using it in the way it was intended” is largely circular reasoning; of course it wasn’t intended to hurt anyone, so any usage that does hurt someone was clearly unintended.

                    This is puzzling. You seem to be conflating use and consequences and I am not quite sure how you read that in what I wrote. Using a device normally should not make it kill people, I guess at least we can agree on that. Therefore, if a device kills people, then it is either improper use (and the fault of the user), or a defective device, at which point it is the fault of the designer or manufacturer (or whoever did the maintenance, as the case might be, but that’s irrelevant in this case).

                    Each device has a manual and a bunch of regulations about its expected behaviour and standard operating procedures. There is nothing circular about it.

                    > Not for the price it would cost.

                    Ok, if you want to go full pedantic, note that I wrote “want”, not “expect”.

            • tsimionescu a day ago

              > And why don’t they spend 90% of their time testing and verifying and proving their software has no defects? Because defects are inevitable and they’re not held accountable for them!

              For a huge part of the industry, the reason is entirely different. It is because software that mostly works today but has defects is much more valuable than software that always works and has no defects 10 years from now. Extremely well informed business customers will pay for delivering a buggy feature today rather than wait two more months for a comprehensively tested feature. This is the reality of the majority of the industry: consumers care little about bugs (below some defect rate) and care far more about timeliness.

              This of course doesn't apply to critical systems like automatic drivers or medical devices. But the vast majority of the industry is not building these types of systems.

            • ywvcbk a day ago

              Punishing individual developers is of course absurd (unless intent can be proven). The company itself and the upper management, on the other hand? Would make perfect sense.

              • chgs a day ago

                You have one person in that RACI accountable box. That’s the engineer signing it off as fit. They are held accountable, including with jail if required.

            • viraptor a day ago

              > that says bugs are inevitable and there’s nothing you can do to prevent them

              I don't think people believe this as such. It may be the short way to write it, but what devs actually mean is "bugs are inevitable at the funding/time available". I often say "bugs are inevitable" when in practice it means "you're not going to pay a team for formal specification, validated implementation and enough reliable hardware".

              Which business will agree to making the process 5x longer and require extra people? Especially if they're not forced there by regulation or potential liability?

        • viraptor a day ago

          That's a dangerous line and I don't think it's correct. Software I write shouldn't be relied on in critical situations. If someone makes that decision then it's on them not on me.

          The line should be where a person tells others that they can rely on the software with their lives - as in the integrator for the end product. Even if I was working on the software for self driving, the same thing would apply - if I wrote some alpha level stuff for the internal demonstration and some manager decided "good enough, ship it", they should be liable for that decision. (Because I wouldn't be able to stop them / may have already left by then)

          • kergonath a day ago

            It’s not that complicated or outlandish. That’s how most engineering fields work. If a building collapses because of design flaws, then the builders and architects can be held responsible. Hell, if a car crashes because of a design or assembly flaw, the manufacturer is held responsible. Why should self-driving software be any different?

            If the software is not reliable enough, then don’t use it in a context where it could kill people.

            • krisoft a day ago

              I think the example here is that the designer draws a bridge for a railway model, and someone decides to use the same design and sends real locomotives across it. Is the original designer (who neither intended nor could have foreseen this) liable in your understanding?

              • ndsipa_pomu a day ago

                That's a ridiculous argument.

                If a construction firm takes an arbitrary design and then tries to build it in a totally different environment and for a different purpose, then the construction firm is liable, not the original designer. It'd be like Boeing taking a child's paper aeroplane design and making a passenger jet out of it and then blaming the child when it inevitably fails.

                • wongarsu a day ago

                  Or alternatively, if Boeing uses wood screws to attach an airplane door and a screw fails, that's on Boeing, not the airline, pilot, or screw manufacturer. But if it's sold as an aerospace-grade attachment bolt, with provisions for safety wire and a spec sheet suggesting the required loads are within design parameters, then it's the bolt manufacturer's fault when it fails, and they might have to answer for any deaths resulting from that. Unless Boeing knew or should have known that the bolts weren't actually as good as claimed, in which case the buck passes back to them.

                  Of course that's wildly oversimplifying, and multiple entities can be at fault at once. My point is that these are normal things considered in regular engineering and manufacturing.

                • krisoft a day ago

                  > That's a ridiculous argument.

                  Not making an argument. Asking a clarifying question about someone else’s.

                  > It'd be like Boeing taking a child's paper aeroplane design and making a passenger jet out of it and then blaming the child when it inevitably fails.

                  Yes exactly. You are using the same example I used to say the same thing. So which part of my message was ridiculous?

                  • ndsipa_pomu 21 hours ago

                    If it's not an argument, then you're just misrepresenting your parent poster's comment by introducing a scenario that never happens.

                    If you didn't intend your comment as a criticism, then you phrased it poorly. Do you actually believe that your scenario happens in reality?

                    • krisoft 15 hours ago

                      > you're just misrepresenting your parent poster's comment

                      I did not represent or misrepresent anything. I have asked a question to better understand their thinking.

                      > If you didn't intend your comment as a criticism, then you phrased it poorly.

                      Quite probably. I will have to meditate on it.

                      > Do you actually believe that your scenario happens in reality?

                      With railway bridges? Never. It would ring alarm bells for everyone from the fabricators to the locomotive engineer.

                      With software? All the time. Someone publishes some open source code, someone else at a corporation bolts that open source code into some application, and now the former "toy train bridge" is a load-bearing key component of something the original developer could never have imagined or planned for.

                      This is not theoretical. Very often I’m the one doing the bolting.

                      And to be clear: my opinion is that the liability should fall on whoever integrated the code and certified it as fit for some safety-critical purpose. As an example, if you publish leftpad and I put it into a train brake controller, it is my job to make sure it is doing the right thing. If the train crashes, you as the author of leftpad bear no responsibility, but I, as the manufacturer of discount train brakes, do.

                    • lcnPylGDnU4H9OF 20 hours ago

                      It was not a misrepresentation of anything. They were just restating the worry that was stated in the GP comment. https://news.ycombinator.com/item?id=41892572

                      And the only reason the commenter I linked to had that response is because its parent comment was slightly careless in its phrasing. Probably just change “write” to “deploy” to capture the intended meaning.

              • kergonath a day ago

                Someone, at some point signed off on this being released. Not thinking things through seriously is not an excuse to sell defective cars.

              • f1shy a day ago

                Are you serious?! You must be trolling!

                • krisoft 21 hours ago

                  I assure you I am not trolling. You appear to have misread my message.

                  Take a deep breath. Read my message one more time carefully. Notice the question mark at the end of the last sentence. Think about it. If after that you still think I’m trolling you or anyone else I will be here and happy to respond to your further questions.

          • presentation a day ago

            To be fair, maybe the software you write shouldn't be relied on in critical situations, but in this case the only places this software could be used are critical situations.

            • viraptor a day ago

              Ultimately - yes. But as I mentioned, the fact it's sold as ready for critical situations doesn't mean the developers thought/said it's ready.

              • gmueckl a day ago

                But someone slapped that label on it and made a pinky promise that it's true. That person needs to accept liability if things go wrong. If person A is loud and clear that something isn't ready, but person B tells the customer otherwise, B is at fault.

                Look, there are well established procedures in a lot of industries where products are relied on to keep people safe. They all require quite rigorous development and certification processes and sneaking untested alpha quality software through such a process would be actively malicious and quite possibly criminal in and of itself, at least in some industries.

                • viraptor a day ago

                  This is the beginning of the thread https://news.ycombinator.com/item?id=41891164

                  You're in violent agreement with me ;)

                  • latexr a day ago

                    No, the beginning of the thread is earlier. And with that context it seems clear to me that the “you” in the post you linked means “the company”, not “the individual software developer”. No one else in your replies seems confused by that, we all understand self-driving software wasn’t written by a single person that has ultimate decision power within a company.

                    • viraptor a day ago

                      If the message said "you release software", or "approve" or "produce", or something like that, sure. But it said "you write software" - and I don't think that can apply to a company, because writing is what individuals do. But yeah, maybe that's not what the author meant.

                      • latexr a day ago

                        > and I don't think that can apply to a company, because writing is what individuals do.

                        By that token, no action could ever apply to a company—including approving, producing, or releasing—since it is a legal entity, a concept, not a physical thing. For all those actions there was a person actually doing it in the name of the company.

                        It’s perfectly normal to say, for example, “GenericCorp wrote a press-release about their new product”.

              • elric a day ago

                I think it should be fairly obvious that it's not the individual developers who are responsible/liable. In critical systems there is a whole chain of liability. That one guy in Nebraska who thanklessly maintains some open source lib that BigCorp is using in their car should obviously not be liable.

                • f1shy a day ago

                  It depends. If you write bad software and skip reviews and processes, you may be liable. Even if you are told to do something, if you know it is wrong, you should say so. Right now I'm in the middle of s*t because I spoke up.

                  • Filligree 19 hours ago

                    > Right now I'm in the middle of s*t because I spoke up.

                    And you believe that, despite experiencing what happens if you speak up?

                    We shouldn’t simultaneously require people to take heroic responsibility, while also leaving them high and dry if they do.

                    • f1shy 18 hours ago

                      I do believe I am responsible. I recognize I'm now in a position where I can speak up without fear. If I got fired I would throw a party, tbh.

          • sigh_again 19 hours ago

            >Software I write shouldn't be relied on in critical situations.

            Then don't write software to be used in things that are literally always critical situations, like cars.

        • _rm a day ago

          What a laugh, would you take that deal?

          Upside: you get paid a 200k salary, if all your code works perfectly. Downside: if it doesn't, you go to prison.

          The users aren't compelled to use it. They can choose not to. They get to choose their own risks.

          The internet is a gold mine of creatively moronic opinions.

          • chgs a day ago

            We need far more regulation of the software industry; far too many people working in it fail to understand the scope of what they do.

            Civil engineer kills someone with a bad building, jail. Surgeon removes the wrong lung, jail. Computer programmer kills someone, “oh well it’s your own fault”.

            • caddemon 21 hours ago

              I've never heard of a surgeon going to jail over a genuine mistake even if it did kill someone. I'm also not sure what that would accomplish - take away their license to practice medicine sure, but they're not a threat to society more broadly.

          • thunky a day ago

            You can go to prison or die for being a bad driver, yet people choose to drive.

            • _rm a day ago

              Arguing for the sake of it; you wouldn't take that risk/reward.

              Most code has bugs from time to time even when highly skilled developers are being careful. None of them would drive if the fault rate was similar and the outcome was death.

              • notahacker a day ago

                Or to put it even more straightforwardly: people who choose to drive rarely expect to cover more than a few tens of thousands of miles per year. People who choose to write an autonomous vehicle's code potentially "drive" a billion miles per year, encounter far more edge cases that the code is expected to handle in a non-dangerous manner, and have to handle them via advance planning and interactions with a lot of other people's code.

                The only practical way around this which permits autonomous vehicles (which are apparently dependent on much more complex and intractable codebases than, say, avionics) is a much higher threshold of criminal responsibility than the "the serious consequences resulted from the one-off execution of a dangerous manoeuvre which couldn't be justified in context" standard which sends human drivers to jail. And of course that double standard will be problematic if "willingness to accept liability" is the only safety threshold.

              • 7sidedmarble 17 hours ago

                I don't think anyone's seriously suggesting people be held accountable for bugs which are ultimately accidents. But if you knowingly sign off on, oversee, or are otherwise directly responsible for the construction of software that you know has a good chance of killing people, then yes, there should be consequences for that.

            • ukuina a day ago

              Systems evolve to handle such liability: Drivers pass theory and practical tests to get licensed to drive (and periodically thereafter), and an insurance framework that gauges your risk-level and charges you accordingly.

              • kergonath a day ago

                Requiring formal licensing and possibly insurance for developers working on life-critical systems is not that outlandish. On the contrary, that is already the case in serious engineering fields.

              • ekianjo a day ago

                And yet tens of thousands of people die on the roads right now every year. Working well?

          • moralestapia a day ago

            Read the site rules.

            And also, of course some people would take that deal, and of course some others wouldn't. Your argument is moot.

      • lowbloodsugar 17 hours ago

        And corporations are people now, so Tesla can go to jail.

      • renegade-otter a day ago

        In the United States? Come on. Boeing executives are not in jail - they are getting bonuses.

        • f1shy a day ago

          But some little guy down the line will pay for it. Look up the Eschede ICE accident.

          • renegade-otter a day ago

            There are many examples.

            The Koch brothers, famous "anti-regulatory state" warriors, have fought oversight so hard that their gas pipelines were allowed to be barely intact.

            Two teens get into a truck, turn the ignition key - and the air explodes:

            https://www.southcoasttoday.com/story/news/nation-world/1996...

            Does anyone go to jail? F*K NO.

            • IX-103 21 hours ago

              To be fair, the teens knew about the gas leak and started the truck in an attempt to get away. Gas leaks like that shouldn't happen easily, but people near pipelines like that should also be made aware of the risks of gas leaks, as some leaks are inevitable.

              • 8note 15 hours ago

                As an alternative though, the company also failed at handling the gas leak once it started. They could have had people all over the place guiding people out and away from the leak safely, and keeping the public away while the leak was fixed.

                Or, they could buy sufficient buffer land around the pipeline such that the gas leak will be found and stopped before it could explode down the road

    • theptip 19 hours ago

      Presumably that is exactly when their taxi service rolls out?

      While this has a dramatic rhetorical flourish, I don’t think it’s a good proxy. Even if it was safer, it would be an unnecessarily high burden to clear. You’d be effectively writing a free insurance policy which is obviously not free.

      Just look at total accidents / deaths per mile driven, it’s the obvious and standard metric for measuring car safety. (You need to be careful to not stop the clock as soon as the system disengages of course. )
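
      One way to make that last caveat concrete is a small sketch like the following (Python, with hypothetical record fields and an invented 5-second attribution window), where a crash that happens shortly after a disengagement still counts against the system rather than stopping the clock:

        def crashes_per_million_miles(events, window_s=5.0):
            # events: hypothetical records with 'miles_engaged', 'crashed' (bool),
            # 'engaged_at_crash' (bool), 'seconds_since_disengagement' (float or None)
            miles = sum(e["miles_engaged"] for e in events)
            attributed = 0
            for e in events:
                if not e["crashed"]:
                    continue
                recently_disengaged = (e["seconds_since_disengagement"] is not None
                                       and e["seconds_since_disengagement"] <= window_s)
                if e["engaged_at_crash"] or recently_disengaged:
                    attributed += 1  # don't stop the clock at the moment of disengagement
            return 1e6 * attributed / miles if miles else float("nan")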

    • mrjin a day ago

      Even if it does, can it resurrect the deceased?

      • LadyCailin a day ago

        But people driving manually kill people all the time too. The bar for self driving isn’t «does it never kill anyone», it’s «does it kill people less than manual driving». We’re not there yet, and Tesla’s «FSD» is marketing bullshit, but we certainly will be there one day, and at that point, we need to understand what we as a society will do when a self driving car kills someone. It’s not obvious what the best solution is there, and we need to continue to have societal discussions to hash that out, but the correct solution definitely isn’t «don’t use self driving».

        • amelius a day ago

          No, because every driver thinks they are better than average.

          So nobody will accept it.

          • Dylan16807 5 hours ago

            The level where someone personally uses it and the level where they accept it being on the road are different. Beating the average driver is all about the latter.

            Also I will happily use self driving that matches the median driver in safety.

          • the8472 a day ago

            I expect insurance to figure out the relative risks and put a price sticker on that decision.

          • A4ET8a8uTh0 a day ago

            Assuming I understand the argument flow correctly, I think I disagree. If there is one thing that the past few decades have confirmed quite conclusively, it is that people will trade a lot of control and sense away in the name of convenience. The moment FSD reaches that sweet spot of 'take me home -- I am too drunk to drive' of reliability, I think it would be accepted; maybe even required by law. It does not seem there.

        • Majromax 21 hours ago

          > The bar for self driving isn’t «does it never kill anyone», it’s «does it kill people less than manual driving».

          Socially, that's not quite the standard. As a society, we're at ease with auto fatalities because there's often Someone To Blame. "Alcohol was involved in the incident," a report might say, and we're more comfortable even though nobody's been brought back to life. Alternatively, "he was asking for it, walking at night in dark clothing, nobody could have seen him."

          This is an emotional standard that speaks to us as human, story-telling creatures that look for order in the universe, but this is not a proper actuarial standard. We might need FSD to be manifestly safer than even the best human drivers before we're comfortable with its universal use.

          • LadyCailin 3 hours ago

            That may be true, but I think I personally would find it extremely hard to argue against when the numbers are clearly showing that it’s safer. I think once the numbers are unambiguously showing that autopilots are safer, it will be super hard for people to argue against it. Of course there is a huge intermediate state where the numbers aren’t clear (or at least not clear to the average person), and during that stage, emotions may rule the debate. But if the underlying data is there, I’m certain car companies can change the narrative - just look at how much America hates public transit and jaywalkers.

    • concordDance 2 days ago

      Whats the current total liability cost for all Tesla drivers?

      The average for all USA cars seems to be around $2000/year, so even if FSD was half as dangerous Tesla would still be paying $1000/year equivalent (not sure how big insurance margins are, assuming nominal) per car.

      Now, if legally the driver could avoid paying insurance for the few times they want/need to drive themselves (e.g. snow? Dunno what FSD supports atm) then it might make sense economically, but otherwise I don't think it would work out.
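
      A back-of-the-envelope version of that arithmetic, using only the rough numbers above (all of them assumptions, not real figures):

        avg_liability_cost = 2000      # USD/year, rough US average assumed above
        fsd_relative_risk = 0.5        # assume FSD is half as dangerous as a human
        fsd_share_of_driving = 0.9     # assume the owner still drives 10% themselves

        tesla_cost_per_car = avg_liability_cost * fsd_relative_risk * fsd_share_of_driving
        owner_residual_cost = avg_liability_cost * (1 - fsd_share_of_driving)

        print(tesla_cost_per_car)    # 900.0  -> what Tesla would have to absorb per car/year
        print(owner_residual_cost)   # 200.0  -> what the owner still pays for manual miles
        # Unless the owner's own premium actually drops by what Tesla takes on,
        # the economics don't work out, which is the point of the comment above.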

      • Retric 2 days ago

        Liability alone isn’t nearly that high.

        Car insurance payments include people stealing your car, uninsured motorists, rental cars, and other issues not the drivers fault. Further insurance payments also include profits for the insurance company, advertising, billing, and other overhead from running a business.

        Also, if Tesla was taking on these risks you’d expect your insurance costs to drop.

        • ywvcbk a day ago

          How much would every death or severe injury caused by FSD cost Tesla? We probably won’t know anytime soon, but since (unlike almost anyone else) they can afford to pay out virtually unlimited amounts, courts will presumably take that into account.

        • TheCleric 2 days ago

          Yeah any automaker doing this would just negotiate a flat rate per car in the US and the insurer would average the danger to make a rate. This would be much cheaper than the average individual’s cost for liability on their insurance.

          • ywvcbk a day ago

            What if someone gets killed because of some clear bug/error and the jury decides to award 100s of millions just for that single case? I’m not sure it’s trivial for insurance companies to account for that sort of risk.

            • kalenx a day ago

              It is trivial and they've done it for ages. It's called reinsurance.

              Basically (_very_ basically, there's more to it) the insurance company insures itself against large claims.
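
              A minimal sketch of the excess-of-loss idea, with entirely made-up retention and cap figures:

                def split_claim(claim, retention=5e6, cap=200e6):
                    """Hypothetical excess-of-loss split of one large claim (USD)."""
                    insurer_pays = min(claim, retention)
                    reinsurer_pays = min(max(claim - retention, 0.0), cap - retention)
                    uncovered = max(claim - cap, 0.0)  # anything above the cap falls back on the insurer
                    return insurer_pays, reinsurer_pays, uncovered

                # A hypothetical $150M jury award: the insurer keeps $5M,
                # the reinsurer covers the remaining $145M.
                print(split_claim(150e6))  # (5000000.0, 145000000.0, 0.0)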

              • ywvcbk a day ago

                I’m not sure Boeing etc. could have insured any liability risk resulting from engineering/design flaws in their vehicles?

            • ndsipa_pomu a day ago

              Not trivial, but that is exactly the kind of thing that successful insurance companies factor into their premiums, or specifically exclude those scenarios (e.g. not covering war zones for house insurance).

          • ryandrake a day ago

            Somehow I doubt those savings would be passed along to the individual car buyer. Surely buying a car insured by the manufacturer would be much more expensive than buying the car plus your own individual insurance, because the car company would want to profit from both.

          • thedougd a day ago

            And it would be supplementary to the driver’s insurance, only covering incidents that happen while FSD is engaged. Arguably they would self insure and only purchase insurance for Tesla as a back stop to their liability, maybe through a reinsurance market.

      • ywvcbk a day ago

        Also I wouldn’t be surprised if any potential wrongful death lawsuits could cost Tesla several magnitudes more than the current average.

    • tiahura a day ago

      I think that’s implicit in the promise of the upcoming-any-year-now unattended full self driving.

    • renewiltord a day ago

      This is how I feel about nuclear energy. Every single plant should need to form a full insurance fund dedicated to paying out if there’s trouble. And the plant should have strict liability: anything that happens from materials it releases is its responsibility.

      But people get upset about this. We need corporations to take responsibility.

      • ndsipa_pomu a day ago

        That's not a workable idea as it'd just encourage corporations to obfuscate the ownership of the plant (e.g. shell companies) and drastically underestimate the actual risks of catastrophes. Ultimately, the government will be left holding the bill for nuclear catastrophes, so it's better to just recognise that and get the government to regulate the energy companies.

        • f1shy a day ago

          The problem I see there is that if “corporations are responsible” then no one is. That is, no real person has the responsibility, and acts accordingly.

      • idiotsecant a day ago

        While we're at it, why not apply the same standard to coal and natural gas plants? For some reason, when we start talking about nuclear plants we all of a sudden become averse to the idea of unfunded externalities, but when we're talking about 'old' tech that has been steadily irradiating your community and changing the gas composition of the entire planet, it becomes less concerning.

        • moooo99 a day ago

          I think it is a matter of perceived risk.

          Realistically speaking, nuclear power is pretty safe. In the history of nuclear power, there were two major incidents. Considering the number of nuclear power plants around the planet, that is pretty good. However, as those two accidents demonstrated, the potential fallout of those incidents is pretty severe and widespread. I think this massively contributes to the perceived risks. The warnings towards the public were pretty clear. I remember my mom telling stories from the time the Chernobyl incident became known to the public and people became worried about the produce they usually had from their gardens. Meanwhile, everything that has been done to address the hazards of fossil based power generation is pretty much happening behind the scenes.

          With coal and natural gas, it seems like people perceive the risks as more abstract. The radioactive emissions of coal power plants have been known for a while, and the (potential) dangers of fine particulate matter resulting from combustion are somewhat well known nowadays as well. However, the effects of those dangers seem much more abstract and delayed, leading people to not be as worried about them. It also shows on a smaller, more individual scale: people still buy ICE cars at large and install gas stoves in their houses despite induction being readily available and at times even cheaper.

          • pyrale a day ago

            > However, the effects of those danger seem much more abstract and delayed, leading people to not be as worried about it.

            Climate change is very visible in the present day to me. People are protesting about it frequently enough that it's hard to claim they are not worried.

            • moooo99 21 hours ago

              Climate change is certainly visible, although the extent to which areas are affected varies wildly. However, there are still shockingly many people who have a hard time attributing ever-increasing natural disasters and more extreme weather patterns to climate change.

          • brightball a day ago

            During power outages, having natural gas in your home is a huge benefit. Many in my area just experienced it with Helene.

            You can still cook. You can still get hot water. If you have gas logs you still have a heat source in the winter too.

            These trade offs are far more important to a lot of people.

            • moooo99 21 hours ago

              Granted, that is a valid concern if power outages are more frequent in your area. I have never experienced a power outage personally, so that is nothing I ever thought of. However, I feel like with solar power and battery storage systems becoming increasingly widespread, this won't be a major concern for much longer

              • brightball 14 hours ago

                They aren’t frequent but in the last 15-16 years there have been 2 outages that lasted almost 2 weeks in some areas around here. The first one was in the winter and the only gas appliance I had was a set of gas logs in the den.

                It heated my whole house and we used a pan to cook over it. When we moved the first thing I did was install gas logs, gas stove and a gas water heater.

                It’s nice to have options and backup plans. That’s one of the reasons I was a huge fan of the Chevy Volt when it first came out. I could easily take it on a long trip but still averaged 130mpg over 3 years (twice). Now I’ve got a Tesla and when there are fuel shortages it’s also really nice.

                A friend of ours owns a cybertruck and was without power for 9 days, but just powered the whole house with the cybertruck. Every couple of days he’d drive to a supercharger station to recharge.

        • renewiltord 20 hours ago

          Sure, we can have a carbon tax on everything. That's fine. And then the nuclear plant has to pay for a Pripyat-sized exclusion zone around it. Just like the guy said about Tesla. All fair.

  • rainsford 21 hours ago

    Arguably the problem with Tesla self-driving is that it's stuck in an uncanny valley of performance where it's worse than better performing systems but also worse from a user experience perspective than even less capable systems.

    Less capable driver assistance type systems might help the driver out (e.g. adaptive cruise control), but leave no doubt that the human is still driving. Tesla, though, goes far enough that it effectively takes over driving from the human, yet it isn't reliable enough that the human can stop paying attention; you still have to be ready to take over at a moment's notice. This seems like the worst of all possible worlds, since you are disengaged from driving yet still required to maintain alertness.

    Autopilots in airplanes are much the same way, pilots can't just turn it on and take a nap. But the difference is that nothing an autopilot is going to do will instantly crash the plane, while Tesla screwing up will require split second reactions from the driver to correct for.

    I feel like the real answer to your question is that having reasonable confidence in self-driving cars beyond "driver assistance" type features will ultimately require a car that will literally get from A to B reliably even if you're taking a nap. Anything close to that but not quite there is in my mind almost worse than something more basic.

  • suggeststrongid an hour ago

    > I'd call myself a fairly aggressive driver

    This is puzzling. It’s as if it was said without apology. How about not endangering others on the road with manual driving before trying out self driving?

  • dreamcompiler 2 days ago

    > It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

    This is what bugs me about ordinary autopilot. Autopilot doesn't switch lanes, but I like to slow down or speed up as needed to allow merging cars to enter my lane. Autopilot never does that, and I've had some close calls with irate mergers who expected me to work with them. And I don't think they're wrong.

    Just means that when I'm cruising in the right lane with autopilot I have to take over if a car tries to merge.

    • kelnos a day ago

      While I certainly wouldn't object to how you handle merging cars (it's a nice, helpful thing to do!), I was always taught that if you want to merge into a lane, you are the sole person responsible for making that possible and making that safe. You need to get your speed and position right, and if you can't do that, you don't merge.

      (That's for merging onto a highway from an entrance ramp, at least. If you're talking about a zipper merge due to a lane ending or a lane closure, sure, cooperation with other drivers is always the right thing to do.)

      • llamaimperative a day ago

        More Americans should go drive on the Autobahn. Everyone thinks the magic is “omg no speed limits!” which is neat but the really amazing thing is that NO ONE sits in the left hand lane and EVERYONE will let you merge immediately upon signaling.

        It’s like a children’s book explanation of the nice things you can have (no speed limits) if everyone could just stop being such obscenely selfish people (like sitting in the left lane or preventing merges because of some weird “I need my car to be in front of their car” fixation).

        • rvnx a day ago

          Tesla FSD on German Autobahn = most dangerous thing ever. The car has never seen this rule and it's not ready for a 300km/h car behind you.

          • FeepingCreature an hour ago

            To be fair, Tesla FSD on German Autobahn = impossible because it's not released yet, precisely because it's not trained for German roads.

      • macNchz a day ago

        At least in the northeast/east coast US there are still lots of old parkways without modern onramps, where moving over to let people merge is super helpful. Frequently these have bad visibility and limited room to accelerate if any at all, so doing it your way is not really possible.

        For example:

        I use this onramp fairly frequently. It’s rural and rarely has much traffic, but when there is you can get stuck for a while trying to get on because it’s hard to see the coming cars, and there’s not much room to accelerate (unless people move over, which they often do). https://maps.app.goo.gl/ALt8UmJDzvn89uvM7?g_st=ic

        Preemptively getting in the left lane before going under this bridge is a defensive safety maneuver I always make—being in the right lane nearly guarantees some amount of conflict with merging traffic.

        https://maps.app.goo.gl/PumaSM9Bx8iyaH9n6?g_st=ic

      • lolinder a day ago

        I was taught that in every situation you should act as though you are the sole person responsible for making the interaction safe.

        If you're the one merging? It's on you. If you're the one being merged into? Also you.

        If you assume that every other driver has a malfunctioning vehicle or is driving irresponsibly then your odds of a crash go way down because you assume that they're going to try to merge incorrectly.

      • rainsford 21 hours ago

        > You need to get your speed and position right, and if you can't do that, you don't merge.

        I agree, but my observation has been that the majority of drivers are absolutely trash at doing that and I'd rather they not crash into me, even if would be their fault.

        Honestly I think Tesla's self-driving technology is long on marketing and short on performance, but it really helps their case that a lot of the competition is human drivers who are completely terrible at the job.

      • lotsofpulp a day ago

        >cooperation with other drivers is always the right thing to do

        Correct, including when the other driver may not have the strictly interpreted legal right of way. You don't know if their vehicle is malfunctioning, or if the driver is malfunctioning, or if they are being overly aggressive or distracted on their phone.

        But most of the time, on an onramp to a highway, people on the highway in the lane that is being merged into need to be taking into account the potential conflicts due to people merging in from the acceleration lane. Acceleration lanes can be too short, other cars may not have the capability to accelerate quickly, other drivers may not be as confident, etc.

        So while technically, the onus is on people merging in, a more realistic rule is to take turns whenever congestion appears, even if you have right of way.

    • dham 13 hours ago

      Autopilot is just adaptive cruise control with lane keeping. Literally every car has this now. I don't see people on Toyota, Honda, or Ford forums complaining that a table-stakes feature doesn't adjust speed or change lanes as a car is merging in. Do you know how insane that sounds? I'm assuming you're in software since you're on Hacker News.

      • Dylan16807 5 hours ago

        It sounds zero insane. Adaptive cruise control taking into account merging would be great. And it's valid to complain about automations that make your car worse at cooperating.

      • twoWhlsGud 9 hours ago

        My Audi doesn't advertise its predictive cruise control as Full Self Driving. So expectations are more controlled...

        • Dylan16807 5 hours ago

          They're not talking about FSD.

    • japhyr a day ago

      > Just means that when I'm cruising in the right lane with autopilot I have to take over if a car tries to merge.

      Which brings it right back to the original criticism of Tesla's "self driving" program. What you're describing is assisted driving, not anything close to "full self driving".

    • bastawhiz a day ago

      Agreed. Automatic lane changes are the only feature of enhanced autopilot that I think I'd be interested in, solely for this reason.

  • modeless 2 days ago

    Tesla jumped the gun on the FSD free trial earlier this year. It was nowhere near good enough at the time. Most people who tried it for the first time probably share your opinion.

    That said, there is a night and day difference between FSD 12.3 that you experienced earlier this year and the latest version 12.6. It will still make mistakes from time to time but the improvement is massive and obvious. More importantly, the rate of improvement in the past two months has been much faster than before.

    Yesterday I spent an hour in the car over three drives and did not have to turn the steering wheel at all except for parking. That never happened on 12.3. And I don't even have 12.6 yet, this is still 12.5; others report that 12.6 is a noticeable improvement over 12.5. And version 13 is scheduled for release in the next two weeks, and the FSD team has actually hit their last few release milestones.

    People are right that it is still not ready yet, but if they think it will stay that way forever they are about to be very surprised. At the current rate of improvement it will be quite good within a year and in two or three I could see it actually reaching the point where it could operate unsupervised.

    • wstrange 2 days ago

      I have a 2024 Model 3, and it's a great car. That being said, I'm under no illusion that the car will ever be self driving (unsupervised).

      12.5.6 still fails to read very obvious signs for 30 km/h playground zones.

      The current vehicles lack sufficient sensors, and likely do not have enough compute power and memory to cover all edge cases.

      I think it's a matter of time before Tesla faces a lawsuit over continual FSD claims.

      My hope is that the board will grow a spine and bring in a more focused CEO.

      Hats off to Elon for getting Tesla to this point, but right now they need a mature (and boring) CEO.

    • jvanderbot 2 days ago

      I have yet to see a difference. I let it highway drive for an hour and it cut off a semi, coming within 9 to 12 inches of the bumper for no reason. I heard about that one believe me.

      It got stuck in a side street trying to get to a target parking lot, shaking the wheel back and forth.

      It's no better so far and this is the first day.

      • modeless 2 days ago

        You have 12.6?

        As I said, it still makes mistakes and it is not ready yet. But 12.3 was much worse. It's the rate of improvement I am impressed with.

        I will also note that the predicted epidemic of crashes from people abusing FSD never happened. It's been on the road for a long time now. The idea that it is "irresponsible" to deploy it in its current state seems conclusively disproven. You can argue about exactly what the rate of crashes is but it seems clear that it has been at the very least no worse than normal driving.

        • jvanderbot 2 days ago

            Hm. I thought that was the latest release, but it looks like no. But there seem to be no improvements from the last trial, so maybe 12.6 is magically better.

          • modeless 2 days ago

            A lot of people have been getting the free trial with 12.3 still on their cars today. Tesla has really screwed up on the free trial for sure. Nobody should be getting it unless they have 12.6 at least.

            • jvanderbot 2 days ago

                I have 12.5. Maybe 12.6 is better, but I've heard that before.

                Don't get me wrong: without a concerted data team building maps a priori, this is pretty incredible. But from a pure performance standpoint it's a shaky product.

              • KaoruAoiShiho 2 days ago

                The latest version is 12.5.6, I think he got confused by the .6 at the end. If you think that's bad then there isn't a better version available. However it is a dramatic improvement over 12.3, don't know how much you tested on it.

                • modeless 2 days ago

                  You're right, thanks. One of the biggest updates in 12.5.6 is transitioning the highway Autopilot to FSD. If he has 12.5.4 then it may still be using the old non-FSD Autopilot on highways which would explain why he hasn't noticed improvement there; there hasn't been any until 12.5.6.

      • hilux 2 days ago

        > ... coming within 9 to 12 inches of the bumper for no reason. I heard about that one believe me.

        Oh dear.

        Glad you're okay!

      • eric_cc 2 days ago

        Is it possible you have a lemon? Genuine question. I’ve had nothing but positive experiences with FSD for the last several months and many thousands of miles.

        • ben_w 2 days ago

          I've had nothing but positive experiences with ChatGPT-4o; that doesn't make people wrong to criticise either one for modelling its training data too closely and generalising too little when it's used for something where the inference domain is too far outside the training domain.

        • kelnos a day ago

          If the incidence of problems is some relatively small number, like 5% or 10%, it's very easily possible that you've never personally seen a problem, but overall we'd still consider that the total incidence of problems is unacceptable.

          Please stop presenting arguments of the form "I haven't seen problems so people who have problems must be extreme outliers". At best it's ignorant, at worst it's actively in bad faith.
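
          A quick illustration of why individual clean experiences prove little (the numbers are purely hypothetical, and independence between owners is assumed):

            p_affected = 0.05               # suppose 5% of owners ever hit a serious problem
            print((1 - p_affected) ** 1)    # 0.95  -> one happy owner is the expected case
            print((1 - p_affected) ** 20)   # ~0.36 -> even 20 clean anecdotes are unsurprising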

        • londons_explore a day ago

          I suspect the performance might vary widely depending on if you're on a road in california they have a lot of data on, or if its a road FSD has rarely seen before.

        • dham 13 hours ago

          A lot of haters mistake safety-critical disengagements for "oh, the car is doing something I don't like or wouldn't do".

          If you treat the car like it's a student driver or someone else driving, disengagements will go down. If you treat it like you're the one driving, there's always something to complain about.

    • latexr a day ago

      > At the current rate of improvement it will be quite good within a year and in two or three I could see it actually reaching the point where it could operate unsupervised.

      That’s not a reasonable assumption. You can’t just extrapolate “software rate of improvement”, that’s not how it works.

      • modeless 19 hours ago

        The timing of the rate of improvement increasing corresponds with finishing their switch to end-to-end machine learning. ML does have scaling laws actually.

        Tesla collects their own data, builds their own training clusters with both Nvidia hardware and their own custom hardware, and deploys their own custom inference hardware in the cars. There is no obstacle to them scaling up massively in all dimensions, which basically guarantees significant progress. Obviously you can disagree about whether that progress will be enough, but based on the evidence I see from using it, I think it will be.

    • snypher 2 days ago

      So just a few more years of death and injury until they reach a finished product?

      • Peanuts99 21 hours ago

        If this is what society has to pay to improve Tesla's product, then perhaps they should have to share the software with other car manufacturers too.

        Otherwise every car brand will have to kill a whole heap of people too until they manage to make a FSD system.

        • modeless 19 hours ago

          Elon has said many times that they are willing to license FSD but nobody else has been interested so far. Clearly that will change if they reach their goals.

          Also, "years of death and injury" is a bald-faced lie. NHTSA would have shut down FSD a long time ago if it were happening. The statistics Tesla has released to the public are lacking, it's true, but they cannot hide things from the NHTSA. FSD has been on the road for years and a billion miles and if it was overall significantly worse than normal driving (when supervised, of course) the NHTSA would know by now.

          The current investigation is about performance under specific conditions, and it's possible that improvement is possible and necessary. But overall crash rates have not reflected any significant extra danger by public use of FSD even in its primitive and flawed form of earlier this year and before.

      • quailfarmer a day ago

        If the answer was yes, presumably there’s a tradeoff where that deal would be reasonable.

      • londons_explore a day ago

          So far, data points to it having far fewer crashes than a human alone. Tesla's data shows that, but 3rd-party data seems to imply the same.

        • llamaimperative a day ago

          Tesla does not release the data required to substantiate such a claim. It simply doesn’t and you’re either lying or being lied to.

          • londons_explore 21 hours ago
            • rainsford 21 hours ago

              That data is not an apples to apples comparison unless autopilot is used in exactly the same mix of conditions as human driving. Tesla doesn't share that in the report, but I'd bet it's not equivalent. I personally tend to turn on driving automation features (in my non-Tesla car) in easier conditions and drive myself when anything unusual or complicated is going on, and I'd bet most drivers of Teslas and otherwise do the same.

              This is important because I'd bet similar data on the use of standard, non-adaptive cruise control would similarly show it's much safer than human drivers. But of course that would be because people use cruise control most in long-distance highway driving outside of congested areas, where you're least likely to have an accident.
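
              A toy example of that selection effect, with invented numbers: even if the system were exactly as crash-prone as a human in every condition, engaging it mostly on easy miles makes its aggregate rate look several times better.

                crash_rate = {"easy": 1.0, "hard": 10.0}    # per million miles, same for both (invented)
                human_mix  = {"easy": 0.50, "hard": 0.50}   # humans drive in all conditions
                system_mix = {"easy": 0.95, "hard": 0.05}   # system engaged mostly in easy conditions

                def aggregate(mix):
                    return sum(mix[c] * crash_rate[c] for c in crash_rate)

                print(aggregate(human_mix))   # 5.5 crashes per million miles
                print(aggregate(system_mix))  # 1.45, looks ~4x safer with zero real difference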

            • FireBeyond 17 hours ago

              No, it releases enough data to actively mislead you (because there is no way Tesla's data people are unaware of these factors):

              The report measures accidents in FSD mode. Qualifiers to FSD mode: the conditions, weather, road, location, traffic all have to meet a certain quality threshold before the system will be enabled (or not disable itself). Compare Sunnyvale on a clear spring day to Pittsburgh December nights.

              There's no qualifier to the "comparison": all drivers, all conditions, all weather, all roads, all location, all traffic.

              It's not remotely comparable, and Tesla's data people are not that stupid, so it's willfully misleading.

              This report does not include fatalities. It also doesn't consider any incident where there was not airbag deployment to be an accident. Sounds potentially reasonable until you consider:

              - first gen airbag systems were primitive: collision exceeds threshold, deploy. Currently, vehicle safety systems consider duration of impact, speeds, G-forces, amount of intrusion, angle of collision, and a multitude of other factors before deciding what, if any, systems to fire (seatbelt tensioners, airbags, etc.) So hit something at 30mph with the right variables? Tesla: "this is not an accident".

              - Tesla also does not consider "incident was so catastrophic that airbags COULD NOT deploy*" to be an accident, because "airbags didn't deploy". This umbrella also includes the egregious case of "systems failed to deploy for any reason, up to and including poor assembly line quality control": also not an accident, also "not counted".

            • llamaimperative 19 hours ago

              Per the other comment: no, they don't. This data is not enough to evaluate its safety. This is enough data to mislead people who spend <30 seconds thinking about the question though, so I guess that's something (something == misdirection and dishonesty).

              You've been lied to.

        • rvnx a day ago

          It disconnects in dangerous situations, so there is a disengagement roughly every 33 to 77 miles driven (depending on the version), versus 400,000 miles for a human.

      • the8472 20 hours ago

        We also pay this price with every new human driver we train, again and again.

        • dham 13 hours ago

          You won't be able to bring logic to people with Elon derangement syndrome.

    • m463 a day ago

      > the rate of improvement in the past two months has been much faster than before.

      I suspect the free trials let tesla collect orders of magnitude more data on events requiring human intervention. If each one is a learning event, it could exponentially improve things.

      I tried it on a loaner car and thought it was pretty good.

      One bit of feedback I would give tesla - when you get some sort of FSD message on the center screen, make the text BIG and either make it linger more, or let you recall it.

      For example, it took me a couple tries to read the message that gave instructions on how to give tesla feedback on why you intervened.

      EDIT: look at this graph

      https://electrek.co/wp-content/uploads/sites/3/2024/10/Scree...

    • jeffbee 2 days ago

      If I had a dime for every hackernews who commented that FSD version X was like a revelation compared to FSD version X-ε I'd have like thirty bucks. I will grant you that every release has surprisingly different behaviors.

      Here's an unintentionally hilarious meta-post on the subject https://news.ycombinator.com/item?id=29531915

      • modeless 2 days ago

        Sure, plenty of people have been saying it's great for a long time, when it clearly was not (looking at you, Whole Mars Catalog). I was not saying it was super great back then. I have consistently been critical of Elon for promising human level self driving "next year" for like 10 years in a row and being wrong every time. He said it this year again and I still think he's wrong.

        But the rate of progress I see right now has me thinking that it may not be more than two or three years before that threshold is finally reached.

        • ben_w 2 days ago

          The most important lesson I've had from me incorrectly predicting in 2009 that we'd have cars that don't come with steering wheels in 2018, and thinking that the progress I saw each year up to then was consistent with that prediction, is that it's really hard to guess how long it takes to walk the fractal path that is software R&D.

          How far are we now, 6 years later than I expected?

          Dunno.

          I suspect it's gonna need an invention on the same level as Diffusion or Transformer models to be able to handle all the edge cases we humans can handle, and that might mean we only get it with human-level AGI.

          But I don't know that, it might be we've already got all we need architecture-wise and it's just a matter of scale.

          Only thing I can be really sure of is we're making progress "quite fast" in a non-objective use of the words — it's not going to need a re-run of 6 million years of mammalian evolution or anything like that, but even 20 years wall clock time would be a disappointment.

          • modeless 2 days ago

            Waymo went driverless in 2020, maybe you weren't that far off. Predicting that in 2009 would have been pretty good. They could and should have had vehicles without steering wheels anytime since then, it's just a matter of hardware development. Their steering wheel free car program was derailed when they hired traditional car company executives.

            • ben_w a day ago

              Waymo for sure, but I meant also without any geolock etc., so I can't claim credit for my prediction.

              They may well best Tesla to this, though.

              • IX-103 19 hours ago

                Waymo is using full lidar and other sensors, whereas Tesla is relying on pure vision systems (to the point of removing radar on newer models). So they're solving a much harder problem.

                As for whether it's worthwhile to solve that problem when having more sensors will always be safer, that's another issue...

                • ben_w 18 hours ago

                  Indeed.

                  While it ought to be possible to solve for just RGB… making it needlessly hard for yourself is a fun hack-day side project, not a valuable business solution.

      • kylecordes 18 hours ago

        On one hand, it really has gotten much better over time. It's quite impressive.

        On the other hand, I fear/suspect it is asymptotically, rather than linearly, approaching good enough to be unsupervised. It might get halfway there, each year, forever.

      • Laaas 2 days ago

        Doesn’t this just mean it’s improving rapidly which is a good thing?

        • jeffbee a day ago

          No, the fact that people say FSD is on the verge of readiness constantly for a decade means there is no widely shared benchmark.

    • delusional 2 days ago

      > That said, there is a night and day difference between FSD 12.3 that you experienced earlier this year and the latest version 12.6

      >And I don't even have 12.6 yet, this is still 12.5;

      How am I supposed to take anything you say seriously when your only claim is a personal anecdote that doesn't even apply to your own argument? Please, think about what you're writing, and please stop repeating information you heard on YouTube as if it's fact.

      This is one of the reasons (among many) that I can't take Tesla boosters seriously. I have absolutely zero faith in your anecdote that you didn't touch the steering wheel. I bet it's a lie.

      • jsjohnst a day ago

        > I have absolutely zero faith in your anecdote that you didn't touch the steering wheel. I bet it's a lie.

        I’m not GP, but I can share video showing it driving across residential, city, highway, and even gravel roads all in a single trip without touching the steering wheel a single time over a 90min trip (using 12.5.4.1).

        • jsjohnst a day ago

          And if someone wants to claim I’m cherry-picking the video, I’m happy to shoot a new video with this post visible on an iPad in the seat next to me. Is it autonomous? Hell no. Can it drive in Manhattan? Nope. But can it do >80% of my regular city (suburb outside NYC) and highway driving? Yep.

      • eric_cc 2 days ago

        I can second this experience. I rarely touch the wheel anymore. I’d say I’m 98% FSD. I take over in school zones, parking lots, and complex construction.

      • modeless 2 days ago

        The version I have is already a night and day difference from 12.3 and the current version is better still. Nothing I said is contradictory in the slightest. Apply some basic reasoning, please.

        I didn't say I didn't touch the steering wheel. I had my hands lightly touching it most of the time, as one should for safety. I occasionally used the controls on the wheel as well as the accelerator pedal to adjust the set speed, and I used the turn signal to suggest lane changes from time to time, though most lane choices were made automatically. But I did not turn the wheel. All turning was performed by the system. (If you turn the wheel manually the system disengages). Other than parking, as I mentioned, though FSD did handle some navigation into and inside parking lots.

    • josefx a day ago

      > it will be quite good within a year

      The regressions are getting worse. For the first release announcement it was only hitting regulatory hurdles, and now the entire software stack is broken? They should fire whoever is in charge and restore the state Elon tried to release a decade ago.

    • bastawhiz 2 days ago

      > At the current rate of improvement it will be quite good within a year

      I'll believe it when I see it. I'm not sure "quite good" is the next step after "feels dangerous".

      • rvnx a day ago

        "Just round the corner" (2016)

        • FireBeyond 15 hours ago

          Musk in 2016 (these are quotes, not paraphrases): "Self driving is a solved problem. We are just tuning the details."

          Musk in 2021: "Right now our highest priority is working on solving the problem."

    • misiti3780 2 days ago

      I have the same experience; 12.5 is insanely good. HN is full of people that don't want self-driving to succeed for some reason. Fortunately, it's clear as day to some of us that Tesla's approach will work.

      • kelnos a day ago

        > HN is full of people that dont want self driving to succeed for some reason.

        I would love for self-driving to succeed. I do long-ish car trips several times a year, and it would be wonderful if instead of driving, I could be watching a movie or working on something on my laptop.

        I've tried Waymo a few times, and it feels like magic, and feels safe. Their record backs up that feeling. After everything I've seen and read and heard about Tesla, if I got into a Tesla with someone who uses FSD, I'd ask them to drive manually, and probably decline the ride entirely if they wouldn't honor my request.

        > fortunately, it's clear as day to some of us that tesla approach will work

        And based on my experience with Tesla FSD boosters, I expect you're basing that on feelings, not on any empirical evidence or actual understanding of the hardware or software.

      • ethbr1 2 days ago

        Curiosity about why they're against it, and articulating why you think it will work, would be more helpful.

        • misiti3780 2 days ago

          It's evident to Tesla drivers using Full Self-Driving (FSD) that the technology is rapidly improving and will likely succeed. The key reason for this anticipated success is data: any reasonably intelligent observer recognizes that training exceptional deep neural networks requires vast amounts of data, and Tesla has accumulated more relevant data than any of its competitors. Tesla recently held a robotaxi event, explicitly informing investors of their plans to launch an autonomous competitor to Uber. While Elon Musk's timeline predictions and politics may be controversial, his ability to achieve results and attract top engineering and management talent is undeniable.

          • kelnos a day ago

            > It's evident to Tesla drivers using Full Self-Driving (FSD) that the technology is rapidly improving and will likely succeed

            Sounds like Tesla drivers have been at the Kool-Aid then.

            But to be a bit more serious, the problem isn't necessarily that people don't think it's improving (I do believe it is) or that they will likely succeed (I'm not sure where I stand on this). The problem is that every year Musk says the next year will be the Year of FSD. And every next year, it doesn't materialize. This is like the Boy Who Cried Wolf; Musk has zero credibility with me when it comes to predictions. And that loss of credibility affects my feeling as to whether he'll be successful at all.

            On top of that, I'm not convinced that autonomous driving that only makes use of cameras will ever be reliably safer than human drivers.

            • modeless 19 hours ago

              I have consistently been critical of Musk for this over the many years it's been happening. Even right now, I don't believe FSD will be unsupervised next year like he just claimed. And yet, I can see the real progress and I am convinced that while it won't be next year, it could absolutely happen within two or three years.

              One of these years, he is going to be right. And at that point, the fact that he was wrong for a long time won't diminish their achievement. As he likes to say, he specializes in transforming technology from "impossible" to "late".

              > I'm not convinced that autonomous driving that only makes use of cameras will ever be reliably safer than human drivers.

              Believing this means that you believe AIs will never match or surpass the human brain. Which I think is a much less common view today than it was a few years ago. Personally I think it is obviously wrong. And also I don't believe surpassing the human brain in every respect will be necessary to beat humans in driving safety. Unsupervised FSD will come before AGI.

          • Animats a day ago

            > and Tesla has accumulated more relevant data than any of its competitors.

            Has it really? How much data is each car sending to Tesla HQ? Anybody actually know? That's a lot of cell phone bandwidth to pay for, and a lot of data to digest.

            Vast amounts of data about routine driving are not all that useful, anyway. A "highlights reel" of interesting situations is probably more valuable for training. Waymo has shown some highlights reels like that, such as the one where someone in a powered wheelchair is chasing a duck in the middle of a residential street.

            • jeffbee 18 hours ago

              Anyone who believes Tesla beats Google because they are better at collecting and handling data can be safely ignored.

              • ethbr1 6 hours ago

                The argument wouldn't be "better at" but simply "more".

                Sensor platforms deployed at scale, that you have the right to take data from, are difficult to replicate.

          • ryandrake a day ago

            Then why have we been just a year or two away from actual working self-driving, for the last 10 years? If I told my boss that my project would be done in a year, and then the following year said the same thing, and continued that for years, that’s not what “achieving results” means.

          • llamaimperative 21 hours ago

            The crux of the issue is that your interpretation of performance cannot be trusted. It is absolutely irrelevant.

            Even a system that is 99% reliable will honestly feel very, very good to an individual operator, but would result in huge loss of life when scaled up.

            Tesla can earn more trust by releasing the data necessary to evaluate the system’s performance. The fact that they do not is far more informative than a bunch of commentators saying “hey it’s better than it was last month!” for the last several years, even if it is true that it’s getting better and even if it’s true it’s hypothetically possible to get to the finish line.
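
            To make that concrete, here's a rough back-of-envelope sketch in Python. Every number below is invented purely for illustration (none of it comes from Tesla); the point is only that a failure rate that feels rare to one driver multiplies into a large absolute number across a fleet.

              # Back-of-envelope: how a "feels fine to me" failure rate scales fleet-wide.
              # All numbers are illustrative assumptions, not measured values.
              miles_per_driver_per_year = 12_000      # typical US annual mileage
              fleet_size = 400_000                    # hypothetical number of active users
              miles_between_serious_errors = 10_000   # "99.99% of my miles were fine"

              fleet_miles = miles_per_driver_per_year * fleet_size
              serious_errors_per_year = fleet_miles / miles_between_serious_errors

              print(f"Fleet miles per year: {fleet_miles:,}")                   # 4,800,000,000
              print(f"Serious errors per year: {serious_errors_per_year:,.0f}") # 480,000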

          • KaiserPro 19 hours ago

            Tesla's sensor suite does not support safe FSD.

            It relies on inferred depth from a single point of view. This means that the depth/positioning info for the entire world is noisy.

            From a safety-critical point of view it's also bollocks, because a single birdshit/smear/raindrop/oil patch can render the entire system inoperable. Does it degrade safely? Does it fuck.

            > recognizes that training exceptional deep neural networks requires vast amounts of data,

            You missed "good" data. Recording generic drivers' journeys isn't going to yield good data, especially if the people driving aren't very good. You need a bunch of decent drivers doing specific scenarios.

            Moreover, that data isn't easily generalisable to other sensor suites. Add another camera? Yeah nah, new model.

            > Tesla recently held a robotaxi event, explicitly informing investors of their plans

            When has Musk ever delivered on time?

            > his ability to achieve results

            Most of those results aren't that great. Tesla isn't growing anymore; it's reliant on state subsidies to be profitable. They still only ship 400k units a quarter, which is tiny compared to VW's 2.2 million.

            > attract top engineering and management talent is undeniable

            Most of the decent computer-vision people are not at Tesla. Hardware-wise, their factories aren't fun places to be. He's a dick to work for, capricious and vindictive.

      • FireBeyond 15 hours ago

        I would love self-driving to succeed. I should be a Tesla fan, because I'm very much a fan of geekery and tech anywhere and everywhere.

        But no. I want self-driving to succeed, and when it does (which I don't think is that soon, because the last 10% takes 90% of the time), I don't think Tesla or their approach will be the "winner".

      • eric_cc 2 days ago

        Completely agree. It’s very strange. But honestly it’s their loss. FSD is fantastic.

        • llamaimperative 21 hours ago

          Very strange not wanting poorly controlled 4,000 lb steel cages driving around at 70 mph, stewarded by people who call "only had to stop it from killing me 4 times today!" a great success.

    • seizethecheese 2 days ago

      If this is the case, the calls for heavy regulation in this thread will lead to many more deaths than otherwise.

  • frabjoused 2 days ago

    The thing that doesn't make sense is the numbers. If it is dangerous in your anecdotes, why don't the reported numbers show more accidents when FSD is on?

    When I did the trial on my Tesla, I also noted these kinds of things and felt like I had to take control.

    But at the end of the day, only the numbers matter.

    • timabdulla 2 days ago

      > If it is dangerous in your anecdotes, why don't the reported numbers show more accidents when FSD is on?

      Even if it is true that the data show that with FSD (not Autopilot) enabled, drivers are in fewer crashes, I would be worried about other confounding factors.

      For instance, I would assume that drivers are more likely to engage FSD in situations of lower complexity (less traffic, little construction or other impediments, overall lesser traffic flow control complexity, etc.) I also believe that at least initially, Tesla only released FSD to drivers with high safety scores relative to their total driver base, another obvious confounding factor.

      Happy to be proven wrong though if you have a link to a recent study that goes through all of this.

      • valval 2 days ago

        Either the system causes less loss of life than a human driver or it doesn’t. The confounding factors don’t matter, as Tesla hasn’t presented a study on the subject. That’s in the future, and all stats that are being gathered right now are just that.

        • unbrice 2 days ago

          > Either the system causes less loss of life than a human driver or it doesn’t. The confounding factors don’t matter.

          Confounding factors are what allow one to tell apart "the system causes less loss of life" from "the system causes more loss of life yet it is only enabled in situations where fewer lives are lost".
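
          A minimal toy example of that selection effect, with every number invented purely for illustration: a system can be worse than humans in every road condition and still post a better headline rate if it is only engaged on the easy miles.

            # Toy illustration of a confounder (all numbers invented).
            # Crash rates per million miles, split by road difficulty.
            human  = {"easy": 2.0, "hard": 10.0}
            system = {"easy": 2.2, "hard": 12.0}   # worse than humans in BOTH conditions

            # Humans drive a mix of both; the system is mostly engaged on easy roads.
            human_mix  = {"easy": 0.50, "hard": 0.50}
            system_mix = {"easy": 0.95, "hard": 0.05}

            human_rate  = sum(human[k] * human_mix[k] for k in human)     # 6.0
            system_rate = sum(system[k] * system_mix[k] for k in system)  # 2.69

            print(human_rate, system_rate)  # headline number favors the system anyway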

        • kelnos a day ago

          No, that's absolutely not how this works. Confounding factors are things that make your data not tell you what you are actually trying to understand. You can't just hand-wave that away, sorry.

          Consider: what I expect is actually true based on the data is that Tesla FSD is as safe or safer than the average human driver, but only if the driver is paying attention and is ready to take over in case FSD does something unsafe, even if FSD doesn't warn the driver it needs to disengage.

          That's not an autonomous driving system. Which is potentially fine, but the value prop of that system is low to me: I have to pay just as much attention as if I were driving manually, with the added problem that my attention is going to start to wander because the car is doing most of the work, and the longer the car successfully does most of the work, the more I'm going to unconsciously believe I can allow my attention to slip.

          I do like current common ADAS features because they hit a good sweet spot: I still need to actively hold onto the wheel and handle initiating lane changes, turns, stopping and starting at traffic lights and stop signs, etc. I look at the ADAS as a sort of "backup" to my own driving, and not as what's primarily in control of the car. In contrast, Tesla FSD wants to be primarily in control of the car, but it's not trustworthy enough to do that without constant supervision.

          • valval 3 hours ago

            Like I said, the time for studies is in the future. FSD is a product in development, and they know which stats they need to monitor in order to track progress.

            You’re arguing for something that: 1. Isn’t under contention and 2. Isn’t rooted in the real world.

            You’re right FSD isn’t an autonomous driving system. It’s not meant to be, right now.

    • rvnx 2 days ago

      There is an easy way to know what is really behind the numbers: look who is paying in case of accident.

      You have a Mercedes, Mercedes takes responsibility.

      You have a Tesla, you take the responsibility.

      Says a lot.

      • sebzim4500 2 days ago

        Mercedes had the insight that if no one is able to actually use the system then it can't cause any crashes.

        Technically, that is the easiest way to get a perfect safety record and journalists will seemingly just go along with the charade.

      • tensor 2 days ago

        You have a Mercedes, and you have a system that works virtually nowhere.

        • therouwboat 2 days ago

          Better that way than "Oh it tried to run red light, but otherwise it's great."

          • tensor 2 days ago

            "Oh we tried to build it but no one bought it! So we gave up." - Mercedes before Tesla.

            Perhaps FSD isn't ready for city streets yet, but it's great on the highways, and I'd 1000x prefer we make progress rather than settle for the status-quo garbage that the legacy makers put out. Also, human drivers are the most dangerous by far; we need to make progress to eventually phase them out.

            • meibo a day ago

              Two-ton blocks of metal going 80 mph next to me on the highway are not where I'd want people to go "fuck it, let's just do it" with their new tech. Human drivers might be dangerous, but adding more danger and unpredictability on top just because we can skip a few steps in the engineering process is crazy.

              Maybe you have a deathwish, but I definitely don't. Your choices affect other humans in traffic.

              • tensor 14 hours ago

                It sounds like you are the one with a deathwish, because objectively by the numbers Autopilot on the highway has greatly reduced death. So you are literally advocating for more death.

                You have two imperfect systems for highway driving: Autopilot with human oversight, and humans. The first has far far less death. Yet you are choosing the second.

      • diebeforei485 a day ago

        While I don't disagree with your point in general, it should be noted that there is more to taking responsibility than just paying. Even if Mercedes Drive Pilot was enabled, anything that involves court appearances and criminal liability is still your problem if you're in the driver's seat.

    • jsight 2 days ago

      Because it is bad enough that people really do supervise it. I see people who say that wouldn't happen because the drivers become complacent.

      Maybe that could be a problem with future versions, but I don't see it happening with 12.3.x. I've also heard that driver attention monitoring is pretty good in the later versions, but I have no first hand experience yet.

      • valval 2 days ago

        Very good point. The product that requires supervision and tells the user to keep their hands on the wheel every 10 seconds is not good enough to be used unsupervised.

        I wonder how things are inside your head. Are you ignorant or affected by some strong bias?

        • jsight a day ago

          Yeah, it definitely isn't good enough to be used unsupervised. TBH, they've switched to eye and head tracking as the primary mechanism of attention monitoring now. It seems to work pretty well, now that I've had a chance to try it.

          I'm not quite sure what you meant by your second paragraph, but I'm sure I have my blind spots and biases. I do have direct experience with various versions of 12.x though (12.3 and now 12.5).

    • kelnos a day ago

      Agree that only the numbers matter, but only if the numbers are comprehensive and useful.

      How often does an autonomous driving system get the driver into a dicey situation, but the driver notices the bad behavior, takes control, and avoids a crash? I don't think we have publicly-available data on that at all.

      You admit that you ran into some of these sorts of situations during your trial. Those situations are unacceptable. An autonomous driving system should be safer than a human driver, and should not make mistakes that a human driver would not make.

      Despite all the YouTube videos out there of people doing unsafe things with Tesla FSD, I expect that most people that use it are pretty responsible, are paying attention, and are ready to take over if they notice FSD doing something wrong. But if people need to do that, it's not a safe, successful autonomous driving system. Safety means everyone can watch TV, mess around on their phone, or even take a nap, and we still end up with a lower crash rate than with human drivers.

      The numbers that are available can't tell us if that would be the case. My belief is that we're absolutely not there.

    • bastawhiz 2 days ago

      Is Tesla required to report system failures or the vehicle damaging itself? How do we know they're not optimizing for the benchmark (what they're legally required to report)?

      • rvnx 2 days ago

        If the question is “was FSD activated at the time of the accident: yes/no”, they can legally claim no, for example if FSD happens to disconnect half a second before a dangerous situation (e.g. glare obstructing the cameras), which may coincide exactly with the times of some accidents.

        • diebeforei485 a day ago

          > To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.

          Scroll down to Methodology at https://www.tesla.com/VehicleSafetyReport

          • rvnx a day ago

            This is for Autopilot, which is the car-following system for highways. If you are in cruise control and staying in your lane, not much is supposed to happen.

            The FSD numbers are much more hidden.

            The general accident rate is about 1 per 400,000 miles driven.

            FSD has one “critical disengagement” (i.e. a would-be accident if the human or safety braking didn’t intervene) every 33 miles driven.

            That means that to reach unsupervised driving at human quality, they would need to improve it roughly 10,000-fold in a few months. Not saying it is impossible, just highly optimistic. In 10 years we may be there, but in 2 months it sounds a bit overpromising.
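
            For what it's worth, the back-of-envelope arithmetic behind that factor, using the figures quoted above and treating a critical disengagement as a proxy for a would-be accident (itself a rough assumption):

              # Rough arithmetic behind the "~10,000x" figure, using the numbers quoted above.
              human_miles_per_accident = 400_000
              fsd_miles_per_critical_disengagement = 33

              improvement_needed = human_miles_per_accident / fsd_miles_per_critical_disengagement
              print(f"{improvement_needed:,.0f}x")   # ~12,121x, i.e. on the order of 10,000x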

      • Uzza 2 days ago

        All manufacturers have for some time been required by regulators to report any crash where an autonomous or partially autonomous system was active within 30 seconds of the impact.

        • bastawhiz a day ago

          My question is better rephrased as "what is legally considered an accident that needs to be reported?" If the car scrapes a barricade or curbs it hard but the airbags don't deploy and the car doesn't sense the damage, clearly they don't. There's a wide spectrum of issues up to the point where someone is injured or another car is damaged.

          • kelnos a day ago

            And not to move the goalposts, but I think we should also be tracking any time the human driver feels they need to take control because the autonomous system did something they didn't believe was safe.

            That's not a crash (fortunately!), but it is a failure of the autonomous system.

            This is hard to track, though, of course: people might take over control for reasons unrelated to safety, or people may misinterpret something that's safe as unsafe. So you can't just track this from a simple "human driver took control".

    • nkrisc 2 days ago

      What numbers? Who’s measuring? What are they measuring?

    • akira2501 2 days ago

      You can measure risks without having to witness disaster.

    • johnneville 2 days ago

      Are there even transparently reported numbers available?

      For whatever does exist, it is also easy to imagine how the numbers could be misleading. For instance, I've disengaged FSD when I noticed I was about to be in an accident. If I couldn't recover in time, the accident would not have happened while FSD was on and, depending on the metric, would not be reported as an FSD-induced accident.

    • kybernetikos a day ago

      > But at the end of the day, only the numbers matter.

      Are these the numbers reported by tesla, or by some third party?

    • ForHackernews 2 days ago

      Maybe other human drivers are reacting quickly and avoiding potential accidents from dangerous computer driving? That would be ironic, but I'm sure it's possible in some situations.

    • lawn 2 days ago

      > The thing that doesn't make sense is the numbers.

      Oh? Who is presenting the numbers?

      Does a crash that fails to trigger the airbags even get counted as a crash?

      What about the car turning off FSD right before a crash?

      How about adjusting for factors such as age of driver and the type of miles driven?

      The numbers don't make sense because they're not good comparisons and are made to make Tesla look good.

    • gamblor956 2 days ago

      The numbers collected by the NHTSA and insurance companies do show that FSD is dangerous... that's why the NHTSA started investigating, and it's why most insurance companies won't insure Tesla vehicles or will charge significantly higher rates.

      Also, Tesla is known to disable self-driving features right before collisions to give the appearance of driver fault.

      And the coup de grace: if Tesla's own data showed that FSD was actually safer, they'd be shouting it from the moon, using that data to get self-driving permits in CA, and offering to assume liability if FSD actually caused an accident (like Mercedes does with its self driving system).

    • throwaway562if1 2 days ago

      AIUI the numbers are for accidents where FSD is in control. Which means if it does a turn into oncoming traffic and the driver yanks the wheel or slams the brakes 500ms before collision, it's not considered a crash during FSD.

      • Uzza 2 days ago

        That is not correct. Tesla counts any accident within 5 seconds of Autopilot/FSD turning off as the system being involved. Regulators extend that period to 30 seconds, and Tesla must comply with that when reporting to them.

        • kelnos a day ago

          How about when it turns into oncoming traffic, the driver yanks the wheel, manages to get back on track, and avoids a crash? Do we know how often things like that happen? Because that's also a failure of the system, and that should affect how reliable and safe we rate these things. I expect we don't have data on that.

          Also how about: it turns into oncoming traffic, but there isn't much oncoming traffic, and that traffic swerves to get out of the way, before FSD realizes what it's done and pulls back into the correct lane. We certainly don't have data on that.

      • concordDance 2 days ago

        Several people in this thread have been saying this or similar. It's incorrect, from Tesla:

        "To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact"

        https://www.tesla.com/en_gb/VehicleSafetyReport

        Situations which inevitably cause a crash more than 5 seconds later seem like they would be extremely rare.

        • rvnx a day ago

          This is for Autopilot, not FSD, which is an entirely different product.

  • mike_d 2 days ago

    > Lots of people are asking how good the self driving has to be before we tolerate it.

    When I feel as safe as I do sitting in the back of a Waymo.

  • dchichkov 2 days ago

    > I'm grateful to be getting a car from another manufacturer this year.

    I'm curious, what is the alternative that you are considering? I've been delaying an upgrade to electric for some time. And now, a car manufacturer that is contributing to the making of another Jan 6th, 2021 is not an option, in my opinion.

    • bastawhiz a day ago

      I've got a deposit on the Dodge Charger Daytona EV

    • lotsofpulp a day ago

      I also went into car shopping with that opinion, but the options are bleak in terms of other carmakers' software. For some reason, if you want basic software features of a Tesla, the other carmakers want an extra $20k+ (and still don't have some).

      A big example: why do the other carmakers not yet offer camera recording on their cars? They are all using cameras all around, but only Tesla makes the footage available to you in case you want it. Bizarre. And then they want to charge you an extra $500+ for one dash cam on the windshield.

      I even had Carplay/Android Auto as a basic requirement, but I was willing to forgo that after trying out the other brands. And not having to spend hours at a dealership doing paperwork was amazing. Literally bought the car on my phone and was out the door within 15 minutes on the day of my appointment.

      • bink 18 hours ago

        Rivian also allows recording drives to an SSD. They also just released a feature where you can view the cameras while it's parked. I'm kinda surprised other manufacturers aren't allowing that.

        • lotsofpulp 18 hours ago

          Rivians start at $30k more than Teslas, and while they may be nice, they don’t have the track record yet that Tesla does, and there is a risk the company goes bust since it is currently losing a lot of money.

  • browningstreet a day ago

    Was this the last version, or the version released today?

    I’ve been pretty skeptical of FSD and didn’t use the last version much. Today I used the latest test version, enabled yesterday, and rode around SF, to and from GGP, and it did really well.

    Waymo well? Almost. But whereas I haven’t ridden Waymo on the highway yet, FSD got me from Hunters Point to the east bay with no disruptions.

    The biggest improvement I noticed was its optimization of highway progress: it'll change lanes, nicely, when the lane you're in is slower than the surrounding lanes. And when you're in the fast/passing lane, it'll return to the next closest lane.

    Definitely better than the last release.

    • bastawhiz a day ago

      I'm clearly not using the FSD today because I refused to complete my free trial of it a few months ago. The post of mine that you're responding to doesn't mention my troubles with Autopilot, which I highly doubt are addressed by today's update (see my other comment for a list of problems). They need to really, really prove to me that Autopilot is working reliably before I'd even consider accepting another free trial of FSD, which I doubt they'd do anyway.

  • anonu 21 hours ago

    My experience has been directionally the same as yours but not of the same magnitude. There's a lot of room for improvement, but it's still very good. I'm in a slightly suburban setting... I suspect you're in a denser location than me, in which case your experience may be different.

    • amelius 21 hours ago

      Their irresponsible behavior says enough. Even if they fix all their technical issues, they are not driven by a safety culture.

      The first question that comes to their minds is not "how can we prevent this accident?" but it's "how can we further inflate this bubble?"

  • herdcall a day ago

    Same here, but I tried the new 12.5.4.1 yesterday and the difference is night and day. It was near flawless except for some unexplained slowdowns, and you don't even need to hold the steering wheel anymore (it detects attention by watching your face). They are clearly improving rapidly.

    • lolinder a day ago

      How many miles have you driven since the update yesterday? OP described a half dozen different failure modes in a variety of situations that seem to indicate quite extensive testing before they turned it off. How far did you drive the new version and in what circumstances?

      • AndroidKitKat 21 hours ago

        I recently took a 3000 mile road trip on 12.5.4.1 on a mix of interstate, country roads, and city streets and there were only a small handful of instances where I felt like FSD completely failed. It's certainly not perfect, but I have never had the same failures that the original thread poster had.

  • averageRoyalty a day ago

    I'm not disagreeing with your experience. But if it's as bad as you say, why aren't we seeing tens or hundreds of FSD fatalities per day or at least per week? Even if only 1000 people globally have it on, these issues sound like we should be seeing tens per week.

    • bastawhiz 19 hours ago

      Perhaps having more accidents doesn't mean more fatal accidents.

  • pbasista 2 days ago

    > I'm grateful to be getting a car from another manufacturer this year.

    I have no illusions about Tesla's ability to deliver an unsupervised self-driving car any time soon. However, as far as I understand, their autosteer system, in spite of all its flaws, is still the best out there.

    Do you have any reason to believe that there actually is something better?

    • bastawhiz a day ago

      Autopilot has not been good. I have a cabin four hours from my home and I've used autopilot for long stretches on the highway. Some of the problems:

      - Certain exits are not detected as such and the car violently veers right before returning to the lane. I simply can't believe they don't have telemetry to remedy this.

      - Sometimes the GPS becomes miscalibrated. This makes the car think I'm taking an exit when I'm not, causing the car to abruptly reduce its speed to the speed of the ramp. It does not readjust.

      - It frequently slows for "emergency lights" that don't exist.

      - If traffic comes to a complete stop, the car accelerates way too hard and brakes hard when the car in front moves any substantial amount.

      At this point, I'd rather have something less good than something which is an active danger. For all intents and purposes, my Tesla doesn't have reliable cruise control, period.

      Beyond that, though, I simply don't have trust in Tesla software. I've encountered so many problems at this point that I can't possibly expect them to deliver a product that works reliably at any point in the future. What reason do I have to believe things will magically improve?

      • absoflutely a day ago

        I'll add that it randomly brakes hard on the interstate because it thinks the speed limit drops to 45. There aren't speed limit signs anywhere nearby on different roads that it could be mistakenly reading either.

        • bastawhiz a day ago

          I noticed that this happens when the triangle on the map is slightly offset from the road, which I've attributed to miscalibrated GPS. It happens consistently when I'm in the right lane and pass an exit when the triangle is ever so slightly misaligned.

    • throwaway314155 2 days ago

      I believe they're fine with losing auto steering capabilities, based on the tone of their comment.

  • kingkongjaffa a day ago

    > right turns on red

    This is an idiosyncrasy of the US (and maybe other places too), and I wonder if it's easier to do self-driving at junctions in countries without this rule.

    • dboreham 19 hours ago

      Only some states allow turn on red, and it's also often overridden by a road sign that forbids. But for me the ultimate test of AGI is four-or-perhaps-three-or-perhaps-two way stop intersections. You have to know whether the other drivers have a stop sign or not in order to understand how to proceed, and you can't see that information. As an immigrant to the US this baffles me, but my US-native family members shrug like there's some telepathy way to know. There's also a rule that you yield to vehicles on your right at uncontrolled intersections (if you can determine that it is uncontrolled...) that almost no drivers here seem to have heard of. You have to eye-ball the other driver to determine whether or not they look like they remember road rules. Not sure how a Tesla will do that.

      • bink 18 hours ago

        If it's all-way stop there will often be a small placard below the stop sign. If there's no placard there then (usually) cross traffic doesn't stop. Sometimes there's a placard that says "two-way" stop or one that says "cross traffic does not stop", but that's not as common in my experience.

  • mrjin a day ago

    I would not even try. The reason is simple: there is absolutely no ability to understand in any of the current self-proclaimed autonomous-driving approaches, no matter how well they market them.

  • geoka9 2 days ago

    > It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

    I've been on the receiving end of this with the offender being a Tesla so many times that I figured it must be FSD.

    • bastawhiz a day ago

      Probably autopilot, honestly.

  • heresie-dabord a day ago

    > After the system error, I lost all trust in FSD from Tesla.

    May I ask how this initial trust was established?

    • bastawhiz 19 hours ago

      The numbers that are reported aren't abysmal, and people have anecdotally said good things. I was willing to give it a try while being hyper vigilant.

  • paulcole 2 days ago

    > Until I ride in one and feel safe, I can't have any faith that this is a reasonable system

    This is probably the worst way to evaluate self-driving for society though, right?

    • bastawhiz a day ago

      Why would I be supportive of a system that has actively scared me for objectively scary reasons? Even if it's the worst reason, it's not a bad reason.

      • paulcole a day ago

        How you feel while riding isn’t an objective thing. It’s entirely subjective. You and I can sit side by side and feel differently about the same experience.

        I don’t see how this is in any way objective besides the fact that you want it to be objective.

        You can support things for society that scare you and feel unsafe because you can admit your feelings are subjective and the thing is actually safer than it feels to you personally.

        • bastawhiz 19 hours ago

          I also did write about times when the car would have damaged itself or likely caused an accident, and those are indeed objective problems.

          • paulcole 18 hours ago

            > It failed with a cryptic system error while driving

            I’ll give you this one.

            > In my opinion, the default setting accelerates way too aggressively. I'd call myself a fairly aggressive driver and it is too aggressive for my taste

            Subjective.

            > It started making a left turn far too early that would have scraped the left side of the car on a sign. I had to manually intervene.

            Since you intervened and don’t know what would’ve happened, subjective.

            > It tried to make way too many right turns on red when it wasn't safe to. It would creep into the road, almost into the path of oncoming vehicles

            Subjective.

            > It would switch lanes to go faster on the highway, but then missed an exit on at least one occasion because it couldn't make it back into the right lane in time. Stupid.

            Objective.

            You’ve got some fair complaints but the idea that feeling safe is what’s needed remains subjective.

  • thomastjeffery 2 days ago

    It's not just about relative safety compared to all human driving.

    We all know that some humans are sometimes terrible drivers!

    We also know what that looks like: Driving too fast or slow relative to surroundings. Quickly turning every once in a while to stay in their lane. Aggressively weaving through traffic. Going through an intersection without spending the time to actually look for pedestrians. The list goes on..

    Bad human driving can be seen. Bad automated driving is invisible. Do you think the people who were about to be hit by a Tesla even realized that was the case? I sincerely doubt it.

    • bastawhiz 2 days ago

      > Bad automated driving is invisible.

      I'm literally saying that it is visible, to me, the passenger. And for reasons that aren't just bad vibes. If I'm in an Uber and I feel unsafe, I'll report the driver. Why would I pay for my car to do that to me?

      • thomastjeffery a day ago

        We are talking about the same thing: unpredictability. If you and everyone else can't predict what your car will do, then that seems objectively unsafe to me. It also sounds like we agree with each other.

      • wizzwizz4 2 days ago

        GP means that the signs aren't obvious to other drivers. We generally underestimate how important psychological modelling is for communication, because it's transparent to most of us under most circumstances, but AI systems have very different psychology to humans. It is easier to interpret the body language of a fox than a self-driving car.

  • concordDance 2 days ago

    This would be more helpful with a date. Was this in 2020 or 2024? I've been told FSD had a complete rearchitecting.

  • eric_cc 2 days ago

    That sucks that you had that negative experience. I’ve driven thousands of miles in FSD and love it. Could not imagine going back. I rarely need to intervene and when I do it’s not because the car did something dangerous. There are just times I’d rather take over due to cyclists, road construction, etc.

    • itsoktocry 2 days ago

      These "works for me!" comments are exhausting. Nobody believes you "rarely intervene", otherwise Tesla themselves would be promoting the heck out of the technology.

      Bring on the videos of you in the passenger seat on FSD for any amount of time.

      • eric_cc a day ago

        It’s the counter-point to the “it doesn’t work for me” posts. Are you okay with those ones?

        • kelnos a day ago

          I think the problem with the "it works for me" type posts is that most people reading them think the person writing it is trying to refute what the person with the problem is saying. As in, "it works for me, so the problem must be with you, not the car".

          I will refrain from commenting on whether or not that's a fair assumption to make, but I think that's where the frustration comes from.

          I think when people make "WFM" posts, it would go a long way to acknowledge that the person who had a problem really did have a problem, even if implicitly.

          "That's a bummer; I've driven thousands of miles using FSD, and I've felt safe and have never had to intervene. I wonder what's different about our travel that's given us such different experiences."

          That kind of thing would be a lot more palatable, I think, even if you might think it's silly/tiring/whatever to have to do that every time.

      • omgwtfbyobbq a day ago

        I can see it. How FSD performs depends on the environment. In some places it's great, in others I take over relatively frequently, although it's usually because it's being annoying, not because it poses any risk.

        Being in the passenger seat is still off limits for obvious reasons.

      • Der_Einzige 8 hours ago

        Thank god someone else said it.

        I want some of these Tesla bulls to PROVE that they are actually "not intervening". I think the ones who claim they aren't doing anything for hours are liars.

    • windexh8er 2 days ago

      I don't believe this at all. I don't own one, but I know about a half dozen people who got suckered into paying for FSD. None of them use it, and 3 of them have stated it's put them in dangerous situations.

      I've ridden in an X, S and Y with it on. Talk about vomit-inducing when letting it drive during "city" driving. I don't doubt it's OK for highway driving, but Ford Blue Cruise and GM's Super Cruise are better there.

      • eric_cc a day ago

        You can believe what you want to believe. It works fantastic for me whether you believe it or not.

        I do wonder if people who have wildly different experiences than I have are living in a part of the country that, for one reason or another, Tesla FSD does not yet do as well in.

        • kelnos a day ago

          I think GP is going too far in calling you a liar, but I think for the most part your FSD praise is just kinda... unimportant and irrelevant. GP's aggressive attitude notwithstanding, I think most reasonable people will agree that FSD handles a lot of situations really well, and believe that some people have travel routes where FSD always handles things well.

          But ok, great, so what? If that wasn't the case, FSD would be an unmitigated disaster with a body count in the tens of thousands. So in a comment thread about someone talking about the problems and unsafe behavior they've seen, a "well it works for me" reply is just annoying noise, and doesn't really add anything to the discussion.

          • eric_cc 14 hours ago

            Open discussion and sharing different experiences with technology is “annoying noise” to you but not to me. Should slamming technology that works great for others receive no counterpoints and just become an echo chamber, or what?

    • bastawhiz a day ago

      I'm glad for you, I guess.

      I'll say the autopark was kind of neat, but parking has never been something I have struggled with.

    • phito 17 hours ago

      I hope I never have to share the road with you. Oh wait, I won't; this craziness is illegal here.

  • potato3732842 2 days ago

    If you were a poorer driver who did these things, you wouldn't find these faults so damning, because it'd only be, say, 10% dumber than you rather than 40% or whatever (just making up those numbers).

    • bastawhiz 2 days ago

      That just implies FSD is as good as a bad driver, which isn't really an endorsement.

      • potato3732842 a day ago

        I agree it's not an endorsement but we allow chronically bad drivers on the road as long as they're legally bad and not illegally bad.

        • kelnos a day ago

          We do that for reasons of practicality: the US is built around cars. If we were to revoke the licenses of the 20% worst drivers, most of those people would be unable to get to work and end up homeless.

          So we accept that there are some bad drivers on the road because the alternative would be cruel.

          But we don't have to accept bad software drivers.

          • potato3732842 14 hours ago

            Oh, I'm well aware how things work.

            But we should look down on them and speak poorly of them same as we look down on and speak poorly of everyone else who's discourteous in public spaces.

  • dekhn 2 days ago

    I don't think you're supposed to merge left when people are merging on the highway into your lane- you have right of way. I find even with the right of way many people merging aren't paying attention, but I deal with that by slightly speeding up (so they can see me in front of them).

    • bastawhiz 2 days ago

      Just because you have the right of way doesn't mean the correct thing to do is to remain in the lane. If remaining in your lane is likely to make someone else do something reckless, you should have been proactive. Not legally, for the sake of being a good driver.

      • dekhn 2 days ago

        Can you point to some online documentation that recommends changing lanes in preference to speeding up when a person is merging at too slow a speed? What I'm doing is following CHP guidance in this post: https://www.facebook.com/chpmarin/posts/lets-talk-about-merg... """Finally, if you are the vehicle already traveling in the slow lane, show some common courtesy and do what you can to create a space for the person by slowing down a bit or speeding up if it is safer. """

        (you probably misinterpreted what I said. I do sometimes change lanes, even well in advance of a merge I know is prone to problems, if that's the safest and most convenient. What I am saying is the guidance I have read indicates that staying in the same lane is generally safer than changing lanes, and speeding up into an empty space is better for everybody than slowing down, especially because many people who are merging will keep slowing down more and more when the highway driver slows for them)

        • jazzyjackson a day ago

          I read all this thread and all I can say is not everything in the world is written down somewhere

        • bastawhiz a day ago

          > recommends changing lanes in preference to speeding up when a person is merging at too slow a speed

          It doesn't matter, Tesla does neither. It always does the worst possible non-malicious behavior.

    • sangnoir 2 days ago

      You don't have a right of way over a slow moving vehicle that merged ahead of you. Most ramps are not long enough to allow merging traffic to accelerate to highway speeds before merging, so many drivers free up the right-most lane for this purpose (by merging left)

      • SoftTalker 2 days ago

        If you can safely move left to make room for merging traffic, you should. It’s considerate and reduces the chances of an accident.

      • dekhn 2 days ago

        Since a number of people are giving pushback, can you point to any (California-oriented) driving instructions consistent with this? I'm not seeing any. I see people saying "it's courteous", but when I'm driving I'm managing hundreds of variables, and changing lanes is often risky, given motorcycles lane-splitting at high speed (quite common).

        • sangnoir a day ago

          It's not just courteous, it's self-serving; AFAIK it's a self-emergent phenomenon. If you're driving at 65 mph and anticipate a slowdown in your lane due to merging traffic, do you stay in your lane and slow down to 40 mph, or do you change lanes (if it's safe to do so) and maintain your speed?

          Texas highways allow for much higher merging speeds at the cost of far larger (in land area) 5-level interchanges, rather than the 35 mph offramps and onramps common in California.

          Any defensive driving course (which falls under instruction, IMO) states that you don't always have to exercise your right of way, and indeed it may be unsafe to do so in some circumstances. Anticipating the actions of other drivers around you and avoiding potentially dangerous situations are the other aspects of being a defensive driver, and those concepts are consistent with freeing up the lane slower-moving vehicles are merging onto when it's safe to do so.

        • davidcalloway 2 days ago

          Definitely not California but literally the first part of traffic law in Germany says that caution and consideration are required from all partaking in traffic.

          Germans are not known for poor driving.

          • dekhn a day ago

              Right, but the "consideration" here is the person merging onto the highway actually paying attention and adjusting, rather than pointedly not even looking (a very common merging behavior where I live). Changing lanes isn't without risk even on a clear day with good visibility. Seems like my suggestion of slowing down or speeding up makes perfect sense because it's less risky overall, and is still considerate.

            Note that I personally do change lanes at times when it's safe, convenient, I am experienced with the intersection, and the merging driver is being especially unaware.

            • watwut a day ago

                Consideration also means making space for a slower car wanting to merge, and Germans do it.

      • potato3732842 2 days ago

        Most ramps are more than long enough to accelerate close enough to traffic speed if one wants to, especially in most modern vehicles.

        • wizzwizz4 2 days ago

          Unless the driver in front of you didn't.

deergomoo 18 hours ago

This is an opinion almost certainly based more in emotion than logic, but I don't think I could trust any sort of fully autonomous driving system that didn't involve communication with transmitters along the road itself (like a glideslope and localiser for aircraft approaches) and with other cars on the road.

Motorway driving sure, there it's closer to fancy cruise control. But around town, no thank you. I regularly drive through some really crappily designed bits of road, like unlabelled approaches to multi-lane roundabouts where the lane you need to be in for a particular exit sorta just depends on what the people in front and to the side of you happen to have chosen. If it's difficult as a human to work out what the intent is, I don't trust a largely computer vision-based system to work it out.

The roads here are also in a terrible state, and the lines on them even moreso. There's one particular patch of road where the lane keep assist in my car regularly tries to steer me into the central reservation, because repair work has left what looks a bit like lane markings diagonally across the lane.

  • sokoloff 18 hours ago

    > didn't involve communication with transmitters along the road itself (like a glideslope and localiser for aircraft approaches) and with other cars on the road

    There will be a large number of non-participating vehicles on the road for at least another 50 years. (The average age of a car in the US is a little over 12 years and rising. I doubt we'll see a comms-based standard emerge and be required equipment on new cars for at least another 20 years.)

    • lukan 17 hours ago

      "There will be a large number of non-participating vehicles on the road for at least another 50 years."

      I think so too, but I also think that if we really wanted to, all it would take is a GPS device with an internet connection, like a smartphone, to turn a normal car into a realtime-connected one.

      But I also think we need to work out some social and institutional issues first.

      Currently I would not like my position to be available in real time to some obscure agency.

    • stouset 13 hours ago

      Hell, ignore vehicles. What about pedestrians, cyclists, animals, construction equipment, potholes, etc?

  • sva_ 17 hours ago

    I agree with you about the trust issues and feel similarly, but also feel like the younger generations who grow up with these technologies might be less skeptical about adopting them.

    I've been kind of amazed how much younger people take some newer technologies for granted, the ability of humans to adapt to changes is marvelous.

  • vmladenov 16 hours ago

    Once insurance requires it or makes you pay triple to drive manually, that will likely be the tipping point for many people.

  • emmelaich 17 hours ago

    Potential problem with transmitters is that they could be faked.

    You could certainly never rely on them alone.

    • wtallis 17 hours ago

      There are lots of other areas where intentionally violating FCC regulations to transmit harmful signals is already technologically feasible and cheap, but hasn't become a widespread problem in practice. Why would it be any worse for cars communicating with each other? If anything, having lots of cars on the road logging what they receive from other cars (spoofed or otherwise) would make it too easy to identify which signals are fake, thwarting potential use cases like insurance fraud (since it's safe to assume the car broadcasting fake data is at fault in any collision).

      • johnisgood 16 hours ago

        I agree, the problem has been solved.

        If a consensus mechanism similar to those used in blockchain were implemented, vehicles could cross-reference the data they receive with data from multiple other vehicles. If inconsistencies are detected (for example, a car reporting a different speed than what others are observing), that data could be flagged as potentially fraudulent.

        Just as blockchain technologies can provide a means of verifying the authenticity of transactions, a network of cars could establish a decentralized validation process for the data they exchange. If one car broadcasts false data, the consensus mechanism among the surrounding vehicles would allow for the identification of this "anomaly", similar to how fraudulent transactions can be identified and rejected in a blockchain system.

        What you mentioned regarding insurance could be used as a deterrent, too, along with laws making it illegal to spoof relevant data.

        In any case, privacy is going to take a toll here, I believe.
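
        As a minimal sketch of the cross-referencing idea (not any real V2V protocol; the function name and thresholds are made up for illustration), each car could compare a neighbour's self-reported speed against what several independent observers report and flag outliers:

          # Minimal sketch: flag a vehicle whose self-reported speed disagrees
          # with what several independent observers measure. All names and
          # thresholds here are invented for illustration.
          from statistics import median

          def is_suspicious(self_reported_kmh, observed_kmh, tolerance_kmh=10, min_observers=3):
              """True if the self-report is far from the observers' consensus."""
              if len(observed_kmh) < min_observers:
                  return False  # not enough independent data to judge
              consensus = median(observed_kmh)
              return abs(self_reported_kmh - consensus) > tolerance_kmh

          # A car broadcasts 60 km/h, but five nearby cars measure it at ~95 km/h.
          print(is_suspicious(60, [94, 96, 97, 93, 95]))  # True -> flag or ignore the broadcast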

        • 15155 16 hours ago

          This is a complicated, technical solution looking for a problem.

          Simple, asymmetrically-authenticated signals and felonies for the edge cases solve this problem without any futuristic computer wizardry.
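
          For what it's worth, the signing part really is simple; here's a sketch of signing and verifying a broadcast message with an Ed25519 key using Python's cryptography package (key distribution, certificates, and revocation are the genuinely hard parts and are not shown):

            # Sketch: asymmetrically authenticated V2V message (Ed25519 via the
            # 'cryptography' package). Key distribution/revocation is omitted.
            from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
            from cryptography.exceptions import InvalidSignature

            private_key = Ed25519PrivateKey.generate()   # would live in the car's secure element
            public_key = private_key.public_key()        # published via some PKI

            message = b"vehicle=ABC123 speed_kmh=96 heading=270 ts=1700000000"
            signature = private_key.sign(message)

            try:
                public_key.verify(signature, message)    # raises if forged or tampered with
                print("accepted")
            except InvalidSignature:
                print("rejected")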

          • johnisgood 15 hours ago

            I did not intend to state that we ought to use a blockchain at all, for what it is worth. Vehicles should cross-reference the data they receive with data from multiple other vehicles and detect inconsistencies; any consensus mechanism could work, if we can even call it that.

  • michaelt 18 hours ago

    > If it's difficult as a human to work out what the intent is, I don't trust a largely computer vision-based system to work it out.

    Most likely, every self-driving car company will send drivers down every road in the country, recording everything they see. Then they'll have human labellers figure out any junctions where the road markings are ambiguous.

    They've had sat nav maps covering every road for decades, and the likes of Google Street View, so to have a detailed map of every junction is totally possible.

    • deergomoo 18 hours ago

      In that case I hope they're prepared to work with local authorities to immediately update the map every time road layouts change, temporarily or permanently. Google Maps gets lane guidance wrong very often in my experience, so that doesn't exactly fill me with confidence.

      • crazygringo 18 hours ago

        I kind of assumed that already happened. Does it not? Is anyone pushing for it?

        Honestly it seems like it ought to be federal law by now that municipalities need to notify a designated centralized service of all road/lane/sign/etc. changes in a standardized format, that all digital mapping providers can ingest from.

        Is this not a thing? If not, is anyone lobbying for it? Is there opposition?

        • jjav 18 hours ago

          > I kind of assumed that already happened.

          Road layout can change daily, sometimes multiple times per day. Sometimes in a second, like when a tree falls on a lane and now you have to reroute on the oncoming lane for some distance, etc.

        • fweimer 17 hours ago

          Coordinating roadwork is challenging in most places, I think. Over here, it's apparently cheaper to open up a road multiple times in a year, rather than coordinating all the different parties that need underground access in the foreseeable future.

        • lukan 17 hours ago

          "Honestly it seems like it ought to be federal law by now that municipalities need to notify a designated centralized service of all road/lane/sign/etc. changes in a standardized format, that all digital mapping providers can ingest from"

          Why not just notify everyone and make that data openly available?

      • tjpnz 18 hours ago

        And the contractors employed by the local authorities to do roadworks big and small.

massysett 2 days ago

"Tesla says on its website its FSD software in on-road vehicles requires active driver supervision and does not make vehicles autonomous."

Despite it being called "Full Self-Driving."

Tesla should be sued out of existence.

  • hedora 2 days ago

    Our non-Tesla has steering assist. In my 500 miles of driving before I found the buried setting that let me completely disable it, the active safety systems never made it more than 10-20 miles without attempting to actively steer the car left-of-center or into another vehicle, even when it was "turned off" via the steering wheel controls.

    When it was turned on according to the dashboard UI, things were even worse. It'd disengage less than every ten miles. However, there wasn't an alarm when it disengaged, just a tiny gray blinking icon on the dash. A second or so after the blinking, it'd beep once and then pull crap like attempt a sharp left on an exit ramp that curved to the right.

    I can't imagine this model kills fewer people per mile than Tesla FSD.

    I think there should be a recall, but it should hit pretty much all manufacturers shipping stuff in this space.

    • noapologies 2 days ago

      I'm not sure how any of this is related to the article. Does this non-Tesla manufacturer claim that their steering assist is "full self driving"?

      If you believe their steering assist kills more people than Tesla FSD then you're welcome, encouraged even, to file a report with the NHTSA here [1].

      [1] https://www.nhtsa.gov/report-a-safety-problem

    • HeadsUpHigh 2 days ago

      I've had a similar experience with a Hyundai with steering assist. It would get confused by messed-up road markings all the time. Meanwhile it had no problem climbing an unmarked road curb. And it would constantly try to nudge the steering wheel, meaning I had to put force into holding it in place all the time, which was extra fatigue.

      Oh and it was on by default, meaning I had to disable it every time I turned the car on.

      • shepherdjerred 2 days ago

        What model year? I'm guessing it's an older one?

        My Hyundai is a 2021 and I have to turn on the steering assist every time which I find annoying. My guess is that you had an earlier model where the steering assist was more liability than asset.

        It's understandable that earlier versions of this kind of thing wouldn't function as well, but it is very strange that they would have it on by default.

        • HeadsUpHigh 4 hours ago

          >What model year? I'm guessing it's an older one?

          Not 100% sure which year since it wasn't mine; I think around 2018, +/- 2 years. It was good at following brightly painted white lines and nothing else. I didn't mind the beeping and the vibration when I drifted onto a line, but it wanted to actively steer the wheel, which was infuriating. I wouldn't mind it if it were just a suggestion.

    • shepherdjerred 2 days ago

      My Hyundai has a similar feature and it's excellent. I don't think you should be painting with such a broad brush.

    • gamblor956 2 days ago

      If what you say is true, name the car model and file a report with the NHTSA.

  • m463 2 days ago

    I believe it's called "Full Self Driving (Supervised)"

    • tsimionescu a day ago

      The correct name would be "Not Self Driving". Or, at least, Partial Self Driving.

    • maeil 2 days ago

      The part in parentheses has only recently been added.

      • tharant 2 days ago

        Prior to that, FSD was labeled ‘Full Self Driving (Beta)’ and enabling it triggered a modal that required two confirmations explaining that the human driver must always pay attention and is ultimately responsible for the vehicle. The feature also had/has active driver monitoring (via both vision and steering-torque sensors) that would disengage FSD if the driver ignored the loud audible alarm to “Pay attention”. Since changing the label to ‘(Supervised)’, the audible nag is significantly reduced.

        • tsimionescu a day ago

          The problem is not so much the lack of disclaimers, it is the advertising. Tesla is asking for something like $15,000 for access to this "beta", and you don't get two modal dialogs before you sign up for that.

          This is called "false advertising", and even worse - recognizing revenue on a feature you are not delivering (a beta is not a delivered feature) is not GAAP-compliant.

        • rty32 20 hours ago

          Do they have warnings as big as the "full self driving" text in advertisements? And if it is NOT actually full self driving, why call it full self driving?

          That's just false advertising. You can't get around that.

          I can't believe our current laws let Tesla get away with that.

      • rsynnott 2 days ago

        And is, well, entirely contradictory. An absolute absurdity; what happens when the irresistible force of the legal department meets the immovable object of marketing.

  • fhdsgbbcaA 2 days ago

    “Sixty percent of the time, it works every time”

  • fallingknife 2 days ago

    [flagged]

    • vineyardmike 2 days ago

      Magic isn’t real. No one should be confused that the eraser isn’t magic.

      Fully self driving cars are real. Just not made by Tesla.

      • bqmjjx0kac 2 days ago

        What's the verdict on X-Ray Specs?

        • largbae 2 days ago

          I think everyone just gave up and went to pornhub

    • jazzyjackson 2 days ago

      Have you used one? They basically do what they say, at least, which is erase things.

    • BolexNOLA 2 days ago

      Nobody who buys a magic eraser thinks it’s literally a magical object or in any way utilizes magic. It’s not comparable.

      • kyriakos 2 days ago

        Even if it were, at least it won't kill you or anyone around you for using it.

      • fallingknife 2 days ago

        Just like nobody who buys FSD actually thinks it's really self driving.

        • BolexNOLA 2 days ago

          Surely you can understand why “magic” and “fully self driving” have different levels of plausibility?

          In 2024 if you tell me a car is “fully self driving” it’s pretty reasonable of me to think it’s a fully self driving car given the current state of vehicle technology. They didn’t say “magic steering” or something clearly ridiculous to take at face value. It sounds like what it should be able to do. Especially with “full” in the name. Just call it “assisted driving” or hell “self driving.” The inclusion of “fully” makes this impossible to debate in good faith.

  • systemvoltage 2 days ago

    [flagged]

    • mbernstein 2 days ago

      Nuclear power adoption is the largest force to combat climate change.

      • Retric 2 days ago

        Historically, hydro has prevented far more CO2 than nuclear by a wide margin. https://ourworldindata.org/grapher/electricity-prod-source-s...

        Looking forward, nuclear isn't moving the needle. Solar grew more in 2023 alone than nuclear has grown since 1995. Worse, nuclear can't ramp up significantly in the next decade simply due to construction bottlenecks. 40 years ago nuclear could have played a larger role, but we wasted that opportunity.

        It’s been helpful, but suggesting it’s going to play a larger role anytime soon is seriously wishful thinking at this point.

        • dylan604 2 days ago

          > Historically, hydro has

          done harm to the ecosystems where they are installed. This is quite often overlooked and brushed aside.

          There is no single method of generating electricity without downsides.

          • Retric 2 days ago

            We’ve made dams long before we knew about electricity. At which point tacking hydropower to a dam that would exist either way has basically zero environmental impact.

            Pure hydropower dams definitely do have significant environmental impact.

            • dylan604 2 days ago

              I just don't get the premise of your argument. Are you honestly saying that stopping the normal flow of water has no negative impact on the ecosystem? What about the area behind the dam that is now flooded? What about the area in front of the dam where there is now no way to traverse back up stream?

              Maybe you're just okay with and willing to accept that kind of change. That's fine, just as some people are okay with the risk of nuclear or the use of land for solar/wind. But to flat out deny that it has any impact is dishonest discourse at best.

              • Retric 2 days ago

                It’s the same premise as rooftop solar. You’re building a home anyway so adding solar panels to the roof isn’t destroying pristine habitat.

                People build dams for many reasons not just electricity.

                Having a reserve of rainwater is a big deal in California, Texas, etc. Letting millions of cubic meters more water flow into the ocean would make the water problems much worse in much of the world. Flood control is similarly a serious concern. Blaming 100% of the issues from dams on Hydropower is silly if outlawing hydropower isn’t going to remove those dams.

              • chgs a day ago

                You are asserting building a dam has downsides. That’s correct (there are upsides too - flood control, fresh water storage etc)

                However you are conflating dam building with hydro generation.

        • masklinn a day ago

          > Historically, hydro has prevented for more CO2 than nuclear by a wide margin.

          Hydro is not evenly distributed and mostly tapped out outside of a few exceptions. Hydro literally can not solve the issue.

          Even less so as AGW starts running meltwater sources dry.

          • Retric a day ago

            I wasn’t imply it would, just covering the very short term.

            Annual production from nuclear is getting passed by wind in 2025 and possibly 2024. So just this second it’s possibly #1 among wind, solar and nuclear but they are all well behind hydro.

        • mbernstein 2 days ago

          History is a great reference, but it doesn't solve our problems now. Just because hydro has prevented more CO2 until now doesn't mean hydro plus solar is the combination that delivers abundant, clean energy. There are power storage challenges, and storage mechanisms aren't carbon neutral. Even if we assume that nuclear, wind, and solar (without storage) all have the same carbon footprint - I believe nuclear is lower than solar and pretty much equivalent to wind - you have to add the storage mechanisms for scenarios where there's no wind, sun, or water.

          All of the above are significantly better than burning gas or coal - but nuclear is the clear winner from a CO2 and general availability perspective.

          • Retric 2 days ago

            Seriously scaling nuclear would involve batteries. Nuclear has issues being cost effective even at 80+% capacity factors. When you start talking sub-40% capacity factors, the cost per kWh spirals.

            The full cost of operating a nuclear reactor for just 5 hours per day is higher than that of a plant running at an 80% capacity factor and charging batteries.
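
            A toy illustration of that capacity-factor math, with all numbers invented round figures rather than real plant economics:

              # Fixed annual cost spread over delivered kWh; all numbers are placeholders.
              fixed_annual_cost = 500_000_000   # $/year (capital + staffing, hypothetical)
              fuel_cost_per_kwh = 0.007         # $/kWh, hypothetical
              capacity_kw = 1_000_000           # a 1 GW plant
              hours_per_year = 8760

              for capacity_factor in (0.90, 0.40, 5 / 24):   # baseload, mid, "5 hours a day"
                  kwh_delivered = capacity_kw * hours_per_year * capacity_factor
                  cost_per_kwh = fixed_annual_cost / kwh_delivered + fuel_cost_per_kwh
                  print(f"capacity factor {capacity_factor:.0%}: ~${cost_per_kwh:.3f}/kWh")

            The mostly-fixed cost gets spread over fewer kWh as the capacity factor drops, which is why the per-kWh cost roughly quadruples in the low-utilization case with these toy numbers.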

            • mbernstein 2 days ago

              > Seriously scaling nuclear would involve batteries. Nuclear has issues being cost effective at 80+% capacity factors.

              I assume you mean that sub 80% capacity nuclear has issues being cost effective (which I agree is true).

              You could pair the baseload nuclear with renewables during peak times and reduce battery dependency for scaling and maintaining higher utilization.

              • Retric 2 days ago

                I meant even if you’re operating nuclear as baseload power looking forward the market rate for electricity looks rough without significant subsidies.

                Daytime you’re facing solar head to head which is already dropping wholesale rates. Off peak is mostly users seeking cheap electricity so demand at 2AM is going to fall if power ends up cheaper at noon. Which means nuclear needs to make most of its money from the duck curve price peaks. But batteries are driving down peak prices.

                Actually cheap nuclear would make this far easier, but there’s no obvious silver bullet.

        • UltraSane 2 days ago

          That just goes to show how incredibly short-sighted humanity is. We knew about the risk of massive CO2 emissions from burning fossil fuels but just ignored it while irrationally demonizing nuclear energy because it is scawy. If humans were sane and able to plan, Earth would be getting 100% of its electricity from super-efficient 7th-generation nuclear reactors.

          • mbernstein 2 days ago

            When talking to my parents, I hear a lot about Jane Fonda and the China Syndrome as far as the fears of nuclear power.

            She's made the same baseless argument for a long time: "Nuclear power is slow, expensive — and wildly dangerous"

            https://ourworldindata.org/nuclear-energy#:~:text=The%20key%....

            CO2 issues aside, it's just outright safer than all forms of coal and gas and about as safe as solar and wind, all three of which are a bit safer than hydro (still very safe).

            • chgs a day ago

              She’s two thirds right. It’s slow and expensive.

          • Retric 2 days ago

            I agree costs could have dropped significantly, but I doubt 100% nuclear was ever going to happen.

            Large scale dams will exist to store water, tacking hydroelectric on top of them is incredibly cost effective. Safety wise dams are seriously dangerous, but they also save a shocking number of lives by reducing flooding.

          • valval 2 days ago

            There was adequate evidence that nuclear is capable of killing millions of people and causing large scale environmental issues.

            It’s still not clear today what effect CO2 or fossil fuel usage has on us.

            • UltraSane a day ago

              Nuclear reactors are not nuclear bombs. Nuclear reactors are very safe on a joules-per-death basis.

      • porphyra 2 days ago

        I think solar is a lot cheaper than nuclear, even if you factor in battery storage.

      • ivewonyoung 2 days ago

        Are you proposing that cars should have nuclear reactors in them?

        Teslas run great on nuclear power, unlike fossil fuel ICE cars.

        • mbernstein 2 days ago

          Of course not.

          • ivewonyoung 2 days ago

            A world where nuclear power helped with climate change would also be a world where Teslas eliminate a good chunk of harmful pollution by allowing cars to be powered by nuclear, so I'm not sure what point you were trying to make.

            Even at this minute, Teslas are moving around powered by nuclear power.

    • gamblor956 2 days ago

      Every year Musk personally flies enough in his private jet to undo the emissions savings of over 100,000 EVs...

      Remember that every time you get in your Tesla that you're just a carbon offset for a spoiled billionaire.

      • enslavedrobot 2 days ago

        Hmmmm, the average car uses 489 gallons a year. A large private jet uses 500 gallons an hour. There are 8,760 hours in a year.

        So if Elon lived in a jet that flew 24/7, you'd only be very wrong. Since that's obviously not the case, you're colossally and completely wrong.

        Remember that the next time you try to make an argument that Tesla is not an incredible force for decarbonization.
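
        A quick back-of-the-envelope check of that, treating jet fuel and gasoline gallons as roughly comparable (a simplification) and using 8,760 hours per year:

          # Figures from the comment above; the 100,000-EV claim is the one being tested.
          gallons_per_car_per_year = 489        # average car
          jet_gallons_per_hour = 500            # large private jet
          hours_per_year = 365 * 24             # 8,760

          fleet_gallons = gallons_per_car_per_year * 100_000           # ~48.9M gal/year
          jet_gallons_nonstop = jet_gallons_per_hour * hours_per_year  # ~4.38M gal/year

          print(f"100k cars burn ~{fleet_gallons / 1e6:.1f}M gal/year")
          print(f"jet flying 24/7 burns ~{jet_gallons_nonstop / 1e6:.2f}M gal/year")
          print(f"ratio: ~{fleet_gallons / jet_gallons_nonstop:.0f}x")

        Even a jet in the air around the clock comes out roughly an order of magnitude short of the fuel that 100,000 average cars burn in a year.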

        • roca a day ago

          Not Tesla exactly, but Musk has gone all-in trying to get a man elected to be US President who consistently says climate change is a hoax, or words to that effect.

          • enslavedrobot 15 hours ago

            US oil production under the current administration is at 13.5M barrels per day. The highest ever. The US is shitting the bed on the energy transition. Meanwhile global solar cell production is slated to hit 2TW/year by the end of 2025 @ under 10cents/watt. China, the land of coal, is on track to hit net zero before the US. Both parties and all levels of government have a disgraceful record on climate change.

            PS: For context 2TW of solar can generate about 10% of global electricity. Production capacity will not stop at 2TW. All other forms of electricity are basically doomed, no matter what the GOP says about climate change.

        • briansm 2 days ago

          I think you missed the 'EV' part of the post.

      • valval 2 days ago

        As opposed to all the other execs whose companies aren’t a force to combat climate change and still fly their private jets.

        But don’t get me wrong, anyone and everyone can fly their private jets if they can afford such things. They will already have generated enough taxes at that point that they’re offsetting thousands or millions of Prius drivers.

        • gamblor956 2 days ago

          As opposed to all the other execs

          Yes, actually.

          Other execs fly as needed because they recognize that in this wondrous age of the internet, teleconferencing can replace most in-person meetings. Somehow, only a supposed technology genius like Elon Musk thinks that in-person meetings are required for everything.

          Other execs also don't claim to be trying to save the planet while doing everything in their power to exploit its resources or destroy natural habitats.

    • gitaarik 2 days ago

      As I understand it, electric cars are more polluting than non-electric ones: first, their manufacturing and resource footprint is larger, and second, because they are heavier (due to the batteries), their tires wear down much faster and need more frequent replacement, supposedly by enough that their emission-free driving doesn't compensate for it.

      Besides, electric vehicles still seem to be very impractical compared to normal cars, because they can't drive very far without needing a lengthy recharge.

      So I think the eco-friendliness of electric vehicles is maybe like the full self-driving system: nice promises but no delivery.

      • theyinwhy 2 days ago

        That has been falsified by more studies than I can keep track of. And yes, if you charge your electric car with electricity produced by oil, the climate effect will be non-optimal.

      • djaychela a day ago

        Pretty much everything you've said here isn't true. You are just repeating tropes that are fossil fuel industry FUD.

  • ivewonyoung 2 days ago

    [flagged]

    • rsynnott 2 days ago

      It’s really unfortunate that puffery survived as a common law defence. It’s really from an earlier era, when fraud was far more acceptable and people were more conditioned to assume that vendors were outright lying to them; it has no place in modern society.

      Also, that’s investors, not consumers. While the rise of retail investing has made this kind of dubious, investors are generally assumed to be far less in need of protection than consumers by the law; it is assumed that they take care about their investment that a consumer couldn’t reasonably take around every single product that they buy.

    • maeil 2 days ago

      This was a lawsuit by shareholders, and the judge thought investors should know whatever Elon says is bullshit.

      Completely different from e.g. consumers, of whom less such understanding is expected.

    • mikeweiss 2 days ago

      I think you mean fortunately?

      • valval 2 days ago

        Unfortunately for them and their ideological allies, fortunately for people with common sense.

    • UltraSane 2 days ago

      Tesla's BS with FSD is as bad as Theranos was with their blood tests.

  • innocentoldguy a day ago

    It's called "Full Self-Driving (Supervised) Beta" and you agree that you understand that you have to pay attention and are responsible for the safety of the car before you turn it on.

    • kelnos a day ago

      So the name of it is a contradiction, and the fine print contradicts the name. "Full self driving" (the concept, not the Tesla product) does not need to be supervised.

    • rty32 20 hours ago

      Come on, you know it's an oxymoron. "Full" and "supervised" don't belong in the same sentence. Any 10-year-old, or a non-native English speaker who has only learned the language from textbooks for 5 years, can tell you that. Just... stop defending Tesla.

AlchemistCamp 2 days ago

The interesting question is how good self-driving has to be before people tolerate it.

It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable. How about a quarter? Or a tenth? Accidents caused by human drivers are one of the largest causes of injury and death, but they're not newsworthy the way an accident involving automated driving is. It's all too easy to see a potential future where many people die needlessly because technology that could save lives is regulated into a greatly reduced role.

  • Arainach 2 days ago

    This is about lying to the public and stoking false expectations for years.

    If it's "fully self driving" Tesla should be liable for when its vehicles kill people. If it's not fully self driving and Tesla keeps using that name in all its marketing, regardless of any fine print, then Tesla should be liable for people acting as though their cars could FULLY self drive and be sued accordingly.

    You don't get to lie just because you're allegedly safer than a human.

    • jeremyjh 2 days ago

      I think this is the answer: the company takes on full liability. If a Tesla is Fully Self Driving then Tesla is driving it. The insurance market will ensure that dodgy software/hardware developers exit the industry.

      • blagie 2 days ago

        This is very much what I would like to see.

        The price of insurance is baked into the price of a car. If the car is as safe as I am, I pay the same price in the end. If it's safer, I pay less.

        From my perspective:

        1) I would *much* rather have Honda kill someone than myself. If I killed someone, the psychological impact on myself would be horrible. In the city I live in, I dread ageing; as my reflexes get slower, I'm more and more likely to kill someone.

        2) As a pedestrian, most of the risk seems to come from outliers -- people who drive hyper-aggressively. Replacing all cars with a median driver would make me much safer (and traffic, much more predictable).

        If we want safer cars, we can simply raise insurance payouts, and vice-versa. The market works everything else out.

        But my stress levels go way down, whether in a car, on a bike, or on foot.

        • gambiting 2 days ago

          >> I would much rather have Honda kill someone than myself. If I killed someone, the psychological impact on myself would be horrible.

          Except that we know that it doesn't work like that. Train drivers are ridden with extreme guilt every time "their" train runs over someone, even though they know that logically there was absolutely nothing they could have done to prevent it. Don't see why it would be any different here.

          >>If we want safer cars, we can simply raise insurance payouts, and vice-versa

          In what way? In the EU the minimum covered amount for any car insurance is 5 million euro, and it has had no impact on the safety of cars. And of course the recent increase in payouts (due to the general increase in labour and parts costs) has led to a dramatic increase in insurance premiums, which in turn has led to a drastic increase in the number of people driving without insurance. So now that needs increased policing and enforcement, which we pay for through taxes. So no, the market doesn't "work everything out".

          • kelnos a day ago

            Being in a vehicle that collides with someone and kills them is going to be traumatic regardless of whether or not you're driving.

            But it's almost certainly going to be more traumatic and more guilt-inducing if you are driving.

            If I only had two choices, I would much rather my car kill someone than I kill someone with my car. I'm gonna feel bad about it either way, but one is much worse than the other.

          • blagie 2 days ago

            > Except that we know that it doesn't work like that. Train drivers are ridden with extreme guilt every time "their" train runs over someone, even though they know that logically there was absolutely nothing they could have done to prevent it. Don't see why it would be any different here.

            It's not binary. Someone dying -- even with no involvement -- can be traumatic. I've been in a position where I could have taken actions to prevent someone from being harmed. Rationally not my fault, but in retrospect, I can describe the exact set of steps needed to prevent it. I feel guilty about it, even though I know rationally it's not my fault (there's no way I could have known ahead of time).

            However, it's a manageable guilt. I don't think it would be if I knew rationally that it was my fault.

            > So no, market doesn't "work everything out".

            Whether or not a market works things out depends on issues like transparency and information. Parties will offload costs wherever possible. In the model you gave, there is no direct cost to a car maker making less safe cars or vice-versa. It assumes the car buyer will even look at insurance premiums, and a whole chain of events beyond that.

            That's different if it's the same party making cars, paying money, and doing so at scale.

            If Tesla pays for everyone damaged in any accident a Tesla car has, then Tesla has a very, very strong incentive to make cars as safe as whatever optimum is set by the damages. The scale is big enough -- millions of cars and billions of dollars -- that Tesla can afford to hire actuaries and a team of analysts to make sure they're at the optimum.

            As an individual car buyer, I have no chance of doing that.

            Ergo, in one case, the market will work it out. In the other, it won't.

      • KoolKat23 2 days ago

        That's just reducing the value of a life to a number. It can be gamed to a situation where it's just more profitable to mow down people.

        What's an acceptable number/financial cost is also just an indirect approximated way of implementing a more direct/scientific regulation. Not everything needs to be reduced to money.

        • jeremyjh 2 days ago

          There is no way to game it successfully; if your insurance costs are much higher than your competitors you will lose in the long run. That doesn’t mean there can’t be other penalties when there is gross negligence.

          • KoolKat23 a day ago

            Who said management and shareholders are in it for the long run? There are plenty of examples of businesses run purely for the short term. Bonuses and stock pumps.

      • stormfather 2 days ago

        That would be good because it would incentivize all FSD cars communicating with each other. Imagine how safe driving would be if they are all broadcasting their speed and position to each other. And each vehicle sending/receiving gets cheaper insurance.

        • Terr_ 2 days ago

          It goes kinda dystopian if access to the network becomes a monopolistic barrier.

          • tmtvl 18 hours ago

            Not to mention the possibility of requiring pedestrians and cyclists to also be connected to the same network. Anyone with access to the automotive network could track any pedestrian who passes by the vicinity of a road.

            • Terr_ 12 hours ago

              It's hard to think of a good blend of traffic safety, privacy guarantees, and resistance to bad-actors. Having/avoiding persistent identification is certainly a factor.

              Perhaps one approach would be to declare that automated systems are responsible for determining the position/speed of everything around them using regular sensors, but may elect to take hints from anonymous "notice me" marks or beacons.

      • tensor 2 days ago

        I’m for this as long as the company also takes on liability for human errors they could prevent. I’d want to see cars enforcing speed limits and similar things. Humans are too dangerous to drive.

    • awongh a day ago

      Also force other auto makers to be liable when their over-tall SUVs cause more deaths than sedan type cars.

    • mrpippy 2 days ago

      Tesla officially renamed it to “Full Self Driving (supervised)” a few months ago, previously it was “Full Self Driving (beta)”

      Both names are ridiculous, for different reasons. Nothing called a “beta” should be tested on public roads without a trained employee supervising it (i.e. being paid to pay attention). And of course it was not “full”, it always required supervision.

      And “Full Self Driving (supervised)” is an absurd oxymoron. Given the deaths and crashes that we’ve already seen, I’m skeptical of the entire concept of a system that works 98% of the time, but also needs to be closely supervised for the 2% of the time when it tries to kill you or others (with no alerts).

      It’s an abdication of duty that NHTSA has let this continue for so long. They’ve picked up the pace recently, and I wouldn’t be surprised if they come down hard on Tesla (unless Trump wins, in which case Elon will be put in charge of NHTSA, the SEC, and the FAA).

      • ilyagr a day ago

        I hope they soon rename it into "Fully Supervised Driving".

    • SoftTalker 2 days ago

      It’s your car, so ultimately the liability is yours. That’s why you have insurance. If Tesla retains ownership, and just lets you drive it, then they have (more) liability.

      • kelnos a day ago

        > It’s your car, so ultimately the liability is yours

        No, that's not how it works. The driver and the driver's insurer are on the hook when something bad happens. The owner is not, except when the owner is also the one driving, or if the owner has been negligent with maintenance, and the crash was caused by mechanical failure related to that negligence.

        If someone else is driving my car and I'm a passenger, and they hurt someone with it, the driver is liable, not me. If that "someone else" is a piece of software, and that piece of software has been licensed/certified/whatever to drive a car, why should I be liable for its failures? That piece of software needs to be insured, certainly. It doesn't matter if I'm required to insure it, or if the manufacturer is required to insure it.

        Tesla FSD doesn't fit into this scenario because it's not the driver. You are still the driver when you engage FSD, because despite its name, FSD is not capable of filling that role.

  • kelnos a day ago

    If Tesla's FSD was actually self-driving, maybe half the casualty rate of the median human driver would be fine.

    But it's not. It requires constant supervision, and drivers sometimes have to take control (without the system disengaging on its own) in order to correct it from doing something unsafe.

    If we had stats for what the casualty rate would be if every driver using it never took control back unless the car signaled it was going to disengage, I suspect that casualty rate would be much worse than the median human driver. But we don't have those stats, so we shouldn't trust it until we do.

    This is why Waymo is safe and tolerated and Tesla FSD is not. Waymo test drivers record every time they have to take over control of the car for safety reasons. That was a metric they had to track and improve, or it would have been impossible to offer people rides without someone in the driver's seat.

  • triyambakam 2 days ago

    Hesitation around self-driving technology is not just about the raw accident rate, but the nature of the accidents. Self-driving failures often involve highly visible, preventable mistakes that seem avoidable by a human (e.g., failing to stop for an obvious obstacle). Humans find such incidents harder to tolerate because they can seem fundamentally different from human error.

    • crazygringo 2 days ago

      Exactly -- it's not just the overall accident rate, but the rate per accident type.

      Imagine if self-driving is 10x safer on freeways, but on the other hand is 3x more likely to run over your dog in the driveway.

      Or it's 5x safer on city streets overall, but actually 2x worse in rain and ice.

      We're fundamentally wired for loss aversion. So I'd say it's less about what the total improvement rate is, and more about whether it has categorizable scenarios where it's still worse than a human.

  • gambiting 2 days ago

    >>. How about a quarter? Or a tenth?

    The answer is zero. An airplane autopilot has increased the overall safety of airplanes by several orders of magnitude compared to human pilots, but literally no errors in its operation are tolerated, whether they are deadly or not. The exact same standard has to apply to cars or any automated machine for that matter. If there is any issue discovered in any car with this tech then it should be disabled worldwide until the root cause is found and eliminated.

    >> It's all too easy to see a potential future where many people die needlessly because technology that could save lives is regulated into a greatly reduced role.

    I really don't like this argument, because we could already prevent literally all automotive deaths tomorrow through existing technology and legislation and yet we are choosing not to do this for economic and social reasons.

    • esaym 2 days ago

      You can't equate airplane safety with automotive safety. I worked at an aircraft repair facility doing government contracts for a number of years. In one instance, somebody lost the toilet paper holder for one of the aircraft. This holder was simply a piece of 10 gauge wire that was bent in a way to hold it and supported by wire clamps screwed to the wall. Making a new one was easy but since it was a new part going on the aircraft we had to send it to a lab to be certified to hold a roll of toilet paper to 9 g's. In case the airplane crashed you wouldn't want a roll of toilet paper flying around I guess. And that cost $1,200.

      • gambiting 2 days ago

        No, I'm pretty sure I can in this regard - any automotive "autopilot" has to be held to the same standard. It's either zero accidents or nothing.

        • murderfs a day ago

          This only works for aerospace because everything and everyone is held to that standard. It's stupid to hold automotive autopilots to the same standard as a plane's autopilot when a third of fatalities in cars are caused by the pilots being drunk.

          • kelnos a day ago

            I don't think that's a useful argument.

            I think we should start allowing autonomous driving when the "driver" is at least as safe as the median driver when the software is unsupervised. (Teslas may or may not be that safe when supervised, but they absolutely are not when unsupervised.)

            But once we get to that point, we should absolutely ratchet those standards so automobile safety over time becomes just as safe as airline safety. Safer, if possible.

            > It's stupid to hold automotive autopilots to the same standard as a plane's autopilot when a third of fatalities in cars are caused by the pilots being drunk.

            That's a weird argument, because both pilots and drivers get thrown in jail if they fly/drive drunk. The standard is the same.

    • travem 2 days ago

      > The answer is zero

      If autopilot is 10x safer then preventing its use would lead to more preventable deaths and injuries than allowing it.

      I agree that it should be regulated and incidents thoroughly investigated, however letting perfect be the enemy of good leads to stagnation and lack of practical improvement and greater injury to the population as a whole.

      • gambiting 2 days ago

        >>If autopilot is 10x safer then preventing its use would lead to more preventable deaths and injuries than allowing it.

        And yet whenever there is a problem with any plane autopilot it's preemptively disabled fleet wide and pilots have to fly manually even though we absolutely beyond a shadow of a doubt know that it's less safe.

        If an automated system makes a wrong decision and it contributes to harm/death then it cannot be allowed on public roads full stop, no matter how many lives it saves otherwise.

        • Aloisius a day ago

          Depends on what one considers a "problem." As long as the autopilot's failures conditions and mitigation procedures are documented, the burden is largely shifted to the operator.

          Autopilot didn't prevent slamming into a mountain? Not a problem as long as it wasn't designed to.

          Crashed on landing? No problem, the manual says not to operate it below 500 feet.

          Runaway pitch trim? The manual says you must constantly be monitoring the autopilot and disengage it when it's not operating as expected and to pull the autopilot and pitch trim circuit breakers. Clearly insufficient operator training is to blame.

        • exe34 2 days ago

          > And yet whenever there is a problem with any plane autopilot it's preemptively disabled fleet wide and pilots have to fly manually even though we absolutely beyond a shadow of a doubt know that it's less safe.

          just because we do something dumb in one scenario isn't a very persuasive reason to do the same in another.

          > then it cannot be allowed on public roads full stop, no matter how many lives it saves otherwise.

          ambulances sometimes get into accidents - we should ban all ambulances, no matter how many lives they save otherwise.

        • CrimsonRain 2 days ago

          So your only concern is, when something goes wrong, need someone to blame. Who cares about lives saved. Vaccines can cause adverse effects. Let's ban all of them.

          If people like you were in charge of anything, we'd still be hitting rocks for fire in caves.

          • gambiting 11 hours ago

            Ok, consider this for a second. You're a director of a hospital that owns a Therac radiotherapy machine for treating cancer. The machine is without any shadow of a doubt saving lives. People without access to it would die or have their prognosis worsen. Yet one day you get a report saying that the machine might sometimes, extremely rarely, accidentally deliver a lethal dose of radiation instead of the therapeutic one.

            Do you decide to keep using the machine, or do you order it turned off until that defect can be fixed? Why yes or why not? Why does the same argument apply/not apply in the discussion about self driving cars?

            (And in case you haven't heard about it - the Therac radiotherapy machine fault was a real thing. It's used as a cautionary tale in software development, but I sometimes wonder if it should be used in philosophy classes too.)

      • penjelly 2 days ago

        I'd challenge the legitimacy of the claim that it's 10x safer, or even safer at all. The safety data provided isn't compelling to me; it can be gamed or misrepresented in various ways, as pointed out by others.

        • yCombLinks 2 days ago

          That claim wasn't made. It was a hypothetical, what if it was 10x safer? Then would people tolerate it.

          • penjelly 18 hours ago

            Yes, people would, if we had a reliable metric for the safety of these systems besides engaged/disengaged. We don't, and 10x safer by the current metrics is not satisfactory.

    • V99 2 days ago

      Airplane autopilots follow a lateral & sometimes vertical path through the sky prescribed by the pilot(s). They are good at doing that. This does increase safety, because it frees up the pilot(s) from having to carefully maintain a straight 3d line through the sky for hours at a time.

      But they do not listen to ATC. They do not know where other planes are. They do not keep themselves away from other planes. Or the ground. Or a flock of birds. They do not handle emergencies. They make only the most basic control-loop decisions about the control-surface and power changes (if even equipped with autothrottle; otherwise power is still the meatbag's job) needed to follow the magenta line drawn by the pilot, given a very small set of input data (position, airspeed, current control positions, etc.).

      The next nearest airplane is typically at least 3 miles laterally and/or 500' vertically away, because the errors allowed with all these components are measured in hundreds of feet.

      None of this is even remotely comparable to a car using a dozen cameras (or lidar) to make real-time decisions to drive itself around imperfect public streets full of erratic drivers and other pedestrians a few feet away.

      What it is a lot like is what Tesla actually sells (despite the marketing name). Yes, it's "flying" the plane, but you're still responsible for making sure it's doing the right thing, the right way, and not going to hit anything or kill anybody.
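
      To make "basic control-loop decisions" concrete, here is a minimal sketch of the kind of proportional correction a roll channel might apply to stay on the prescribed track; this is not real avionics code, and the gain and limit values are invented:

        # Purely illustrative proportional track-hold loop.
        def bank_command(cross_track_error_m: float, kp: float = 0.05) -> float:
            """Return a bank-angle command (degrees) that steers back toward the line."""
            cmd = -kp * cross_track_error_m        # proportional correction
            return max(-25.0, min(25.0, cmd))      # clamp to a gentle bank limit

        for err_m in (0.0, 100.0, 1000.0):
            print(f"{err_m:6.0f} m off track -> bank {bank_command(err_m):+.1f} deg")

      Nothing in a loop like that knows about other aircraft, terrain, birds, or ATC; that awareness comes from separate warning systems and, ultimately, the pilots.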

      • kelnos a day ago

        Thank you for this. The number of people conflating Tesla's Autopilot with an airliner's autopilot, and expecting that use and policies and situations surrounding the two should be directly comparable, is staggering. You'd think people would be better at critical thinking with this, but... here we are.

        • Animats a day ago

          Ah. Few people realize how dumb aircraft autopilots really are. Even the fanciest ones just follow a series of waypoints.

          There is one exception - Garmin Safe Return. That's strictly an emergency system. If it activates, the plane squawks emergency to ATC and demands that airspace and a runway be cleared for it.[1] This has been available since 2019 and does not seem to have been activated in an emergency yet.

          [1] https://youtu.be/PiGkzgfR_c0?t=87

          • V99 15 hours ago

            It does do that and it's pretty neat, if you have one of the very few modern turboprops or small jets that have G3000s & auto throttle to support it.

            Airliners don't have this, but they have a 2nd pilot. A real-world activation needs a single-pilot operation where they're incapacitated, in one of the maybe few hundred nice-but-not-too-nice private planes it's equipped in, and a passenger is there to push it.

            But this is all still largely using the current magenta line AP system, and that's how it's verifiable and certifiable. There's still no cameras or vision or AI deciding things, there are a few new bits of relatively simple standalone steps combined to get a good result.

            - Pick a new magenta line to an airport (like pressing NRST Enter Enter if you have filtering set to only suitable fields)

            - Pick a vertical path that intersects with the runway (Load a straight-in visual approach from the database)

            - Ensure that line doesn't hit anything in the terrain/obstacle database. (Terrain warning system has all this info, not sure how it changes the plan if there is a conflict. This is probably the hardest part, with an actual decision to make).

            - Look up the tower frequency in DB and broadcast messages. As you said it's telling and not asking/listening.

            - Other humans know to get out of the way because this IS what's going to happen. This is normal, an emergency aircraft gets whatever it wants.

            - Standard AP and autothrottle flies the newly prescribed path.

            - The radio altimeter lets it know when to flare.

            - Wheel weight sensors let it know to apply the brakes.

            - The airport helps people out and tows the plane away, because it doesn't know how to taxi.

            There's also "auto glide" on the more accessible G3x suite for planes that aren't necessarily $3m+. That will do most of the same stuff and get you almost, but not all the way, to the ground in front of a runway automatically.

            • Animats 12 hours ago

              > and a passenger is there to push it.

              I think it will also activate if the pilot is unconscious, for solo flights. It has something like a driver alertness detection system that will alarm if the pilot does nothing for too long. The pilot can reset the alarm, but if they do nothing, the auto return system takes over and lands the plane someplace.

      • josephcsible 21 hours ago

        > They do not know where other planes are.

        Yes they do. It's called TCAS.

        > Or the ground.

        Yes they do. It's called Auto-GCAS.

        • V99 16 hours ago

          Yes those are optional systems that exist, but they are unrelated to the autopilot (in at least the vast majority of avionics).

          They are warning systems that humans respond to. For a TCAS RA the first thing you're doing is disengaging the autopilot.

          If you tell the autopilot to fly straight into the path of a mountain, it will happily comply and kill you while the ground proximity warnings blare.

          Humans make the decisions in planes. Autopilots are a useful but very basic tool, much more akin to cruise control in a 1998 Civic than a self-driving Tesla/Waymo/erc.

    • AlchemistCamp a day ago

      > ”The answer is zero…”

      > ”If there is any issue discovered in any car with this tech then it should be disabled worldwide until the root cause is found and eliminated.”

      This would literally cost millions of needless deaths in a situation where AI drivers had 1/10th the accident injury rate of human drivers.

    • peterdsharpe 2 days ago

      > literally no errors in its operation are tolerated

      Aircraft designer here, this is not true. We typically certify to <1 catastrophic failure per 1e9 flight hours. Not zero.

    • Aloisius 2 days ago

      Autopilots aren't held to a zero error standard let alone a zero accident standard.

  • akira2501 2 days ago

    > traveled of the median human driver isn't acceptable.

    It's completely acceptable. In fact, the casualty numbers are lower than they have ever been since we started driving.

    > Accidents caused by human drivers

    Are there any other types of drivers?

    > are one of the largest causes of injury and death

    More than half the fatalities on the road are actually caused by the use of drugs and alcohol. The statistics are very clear on this. Impaired people cannot drive well. Non impaired people drive orders of magnitude better.

    > technology that could save lives

    There is absolutely zero evidence this is true. Everyone is basing this off of a total misunderstanding of the source of fatalities and a willful misapprehension of the technology.

    • blargey 2 days ago

      > Non impaired people drive orders of magnitude better.

      That raises the question - how many impaired driver-miles are being baked into the collision statistics for "median human" driver-miles? Shouldn't we demand non-impaired driving as the standard for automation, rather than "averaged with drunk / phone-fiddling /senile" driving? We don't give people N-mile allowances for drunk driving based on the size of the drunk driver population, after all.
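
      A toy illustration of how much the baseline can move once impaired miles are excluded, with every number below invented purely for illustration:

        # Hypothetical inputs: impaired drivers log few miles but cause many fatalities.
        total_miles = 3.0e12            # annual vehicle-miles (made up)
        total_fatalities = 40_000       # annual fatalities (made up)
        impaired_mile_share = 0.02      # assume 2% of miles are driven impaired
        impaired_fatality_share = 0.30  # assume 30% of fatalities involve impairment

        overall_rate = total_fatalities / total_miles
        sober_rate = (total_fatalities * (1 - impaired_fatality_share)) / (
            total_miles * (1 - impaired_mile_share)
        )

        print(f"all drivers:   {overall_rate * 1e8:.2f} fatalities per 100M miles")
        print(f"sober drivers: {sober_rate * 1e8:.2f} fatalities per 100M miles")

      With those made-up shares, the "median human" benchmark drops by nearly 30% once impaired driving is excluded, which is arguably the comparison the automation ought to beat.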

      • akira2501 a day ago

        Motorcycles account for a further 15% of all fatalities in a typical year. Weather is often a factor. Road design is sometimes a factor; I remember several rollover crashes that ended in a body of water with no one in the vehicle surviving. Likewise, ejections due to lack of seatbelt use are a noticeable share of fatalities.

        Once you dig into the data you see that almost every crash, at this point in history, is really a mini-story detailing the confluence of several factors that turned a basic accident into something fatal.

        Also, and I only saw this once, but if you literally have a heart attack behind the wheel, you are technically a roadway fatality. The driver was 99. He just died while sitting in slow moving traffic.

        Which brings me to my final point which is the rear seats in automobiles are less safe than the front seats. This is true for almost every vehicle on the road. You see _a lot_ of accidents where two 40 to 50 year old passengers are up front and two 70 to 80 year old passengers are in back. The ones up front survive. One or both passengers in the back typically die.

      • kelnos a day ago

        No, that makes no sense, because we can't ensure that human drivers aren't impaired. We test and compare against the reality, not the ideal we'd prefer.

        • akira2501 17 hours ago

          We can sample the rate of impairment. We do this quite often, actually. It turns out the rate depends on the time of day.

    • kelnos a day ago

      > Are there any other types of drivers [than human drivers]?

      Waymo says yes, there are.

  • Terr_ 2 days ago

    > It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable.

    Even if we optimistically assume no "gotchas" in the statistics [0], distilling performance down to a casualty/injury/accident rate can still be dangerously reductive when the systems have a different distribution of failure modes which do or don't mesh with our other systems and defenses.

    A quick thought experiment to prove the point: Imagine a system which compared to human drivers had only half the rate of accidents... But many of those are because it unpredictably decides to jump the sidewalk curb and kill a targeted pedestrian.

    The raw numbers are encouraging, but it represents a risk profile that clashes horribly with our other systems of road design, car design, and what incidents humans are expecting and capable of preventing or recovering-from.

    [0] Ex: Automation is only being used on certain subsets of all travel which are the "easier" miles or circumstances than the whole gamut a human would handle.

    • kelnos a day ago

      Re: gotchas: an even easier one is that the Tesla FSD statistics don't include when the car does something unsafe and the driver intervenes and takes control, averting a crash.

      How often does that happen? We have no idea. Tesla can certainly tell when a driver intervenes, but they can't count every occurrence as safety-related, because a driver might take control for all sorts of reasons.

      This is why we can make stronger statements about the safety of Waymo. Their software was only tested by people trained and paid to test it, who were also recording every time they had to intervene because of safety, even if there was no crash. That's a metric they could track and improve.

  • croes 2 days ago

    > It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable.

    Were the Teslas driving under all weather conditions at any location like humans do or is it just cherry picked from the easy travelling conditions?

  • jakelazaroff 2 days ago

    I think we should not be satisfied with merely “better than a human”. Flying is so safe precisely because we treat any casualty as unacceptable. We should aspire to make automobiles at least that safe.

    • kelnos a day ago

      I don't think the question was what we should be satisfied with or what we should aspire to. I absolutely agree with you that we should strive to make autonomous driving as safe as airline travel.

      But the question was when should we allow autonomous driving on our public roads. And I think "when it's at least as safe as the median human driver" is a reasonable threshold.

      (The thing about Tesla FSD is that it -- unsupervised -- would probably fall far short of that metric. FSD needs to be supervised to be safer than the median human driver, assuming that's even currently the case, and not every driver is going to be equally good at supervising it.)

    • josephcsible 21 hours ago

      Aspire to, yes. But if we say "we're going to ban FSD until it's perfect, even though it already saves lives relative to the average human driver", you're making automobiles less safe.

    • cubefox a day ago

      > I think we should not be satisfied with merely “better than a human”.

      The question is whether you want to outlaw automatic driving just because the system is, say, "only" 50% safer than us.

    • aantix 2 days ago

      Before FSD is allowed on public roads?

      It’s a net positive, saving lives right now.

  • jillesvangurp a day ago

    The key here is insurers. Because they pick up the bill when things go wrong. As soon as self driving becomes clearly better than humans, they'll be insisting we stop risking their money by driving ourselves whenever that is feasible. And they'll do that with price incentives. They'll happily insure you if you want to drive yourself. But you'll pay a premium. And a discount if you are happy to let the car do the driving.

    Eventually, manual driving should come with a lot more scrutiny. Because once it becomes a choice rather than an economic necessity, other people on the road will want to be sure that you are not needlessly endangering them. So, stricter requirements for getting a drivers license with more training and fitness/health requirements. This too will be driven by insurers. They'll want to make sure you are fit to drive.

    And of course, when people who drive manually get into trouble, taking away their driving license is always a possibility. The main argument against doing that right now is that a lot of people depend economically on being able to drive. But if that argument goes away, there's no reason not to be a lot stricter about e.g. driving under the influence, or routinely breaking speeding and other traffic laws. Think higher fines and driving license suspensions.

  • smitty1110 2 days ago

    There are two things going on here with the average person that you need to overcome: that when Tesla dodges responsibility, all anyone sees is a liar, and that people amalgamate all the FSD crashes and treat the system like a dangerous local driver that nobody can get off the road.

    Tesla markets FSD like it’s a silver bullet, and the name is truly misleading. The fine print says you need to pay attention and all that. But again, people read “Full Self Driving” and all the marketing copy and think the system is assuming responsibility for the outcomes. Then a crash happens, Tesla throws the driver under the bus, and everyone gets a bit more skeptical of the system. Plus, doing that to a person rubs people the wrong way, and is in some respects a barrier to sales.

    Which leads to the other point: people are tallying up all the accidents and treating the system like a person, and wondering why this dangerous driver is still on the road. Most accidents with a dead pedestrian start with someone doing something stupid, which is when they assume all responsibility, legally speaking. Drunk, speeding, etc. Normal drivers in poor conditions slow down and drive carefully. People see this accident and treat FSD like a serial drunk driver. It’s to the point that I know people who openly say they treat Teslas on the road like erratic drivers just for existing.

    Until Elon figures out how to fix his perception problem, the calls for investigations and for keeping his robotaxis off the road will only grow.

  • becquerel 2 days ago

    My dream is of a future where humans are banned from driving without special licenses.

    • gambiting 2 days ago

      So.........like right now you mean? You need a special licence to drive on a public road right now.

      • nkrisc 2 days ago

        The problem is it’s obviously too easy to get one and keep one, based on some of the drivers I see on the road.

        • gambiting 2 days ago

          That sounds like a legislative problem where you live, sure it can be fixed by overbearing technology but we already have all the tools we need to fix it, we are just choosing not to for some reason.

      • kelnos a day ago

        No, you need an entirely common, unspecial license to drive on a public road right now.

    • FireBeyond 2 days ago

      And yet Tesla's FSD never passed a driving test.

      • grecy a day ago

        And it can’t legally drive a vehicle

  • danans 2 days ago

    > The interesting question is how good self-driving has to be before people tolerate it.

    It's pretty simple: as good as it can be given available technologies and techniques, without sacrificing safety for cost or style.

    With AVs, function and safety should obviate concerns of style, cost, and marketing. If that doesn't work with your business model, well tough luck.

    Airplanes are far safer than cars yet we subject their manufacturers to rigorous standards, or seemingly did until recently, as the 737 max saga has revealed. Even still the rigor is very high compared to road vehicles.

    And AVs do have to be way better than people at driving because they are machines that have no sense of human judgement, though they operate in a human physical context.

    Machines run by corporations are less accountable than human drivers, not at the least because of the wealth and legal armies of those corporations who may have interests other than making the safest possible AV.

    • mavhc 2 days ago

      Surely the number of cars that can do it, and the price, also matter, unless you're going to ban private cars.

      • danans 2 days ago

        > Surely the number of cars that can do it, and the price, also matter, unless you're going to ban private cars.

        Indeed, like this: the more cars sold that claim fully autonomous capability, and the more affordable they get, the higher the standards should be compared to their AV predecessors, even if they have long eclipsed human driver's safety record.

        If this is unpalatable, then let's assign 100% liability with steep monetary penalties to the AV manufacturer for any crash that happens under autonomous driving mode.

  • aithrowawaycomm 2 days ago

    Many people don't (and shouldn't) take the "half the casualty rate" at face value. My biggest concern is that Waymo and Tesla are juking the stats to make self-driving cars seem safer than they really are. I believe this is largely an unintentional consequence of bad actuary science coming from bad qualitative statistics; the worst kind of lying with numbers is lying to yourself.

    The biggest gap in these studies: I have yet to see a comparison with human drivers that filters out DUIs, reckless speeding, or mechanical failures. Without doing this it is simply not a fair comparison, because:

    1) Self-driving cars won't end drunk driving unless it's made mandatory by outlawing manual driving or ignition is tied to a breathalyzer. Many people will continue to make the dumb decision to drive themselves home because they are drunk and driving is fun. This needs regulation, not technology. And DUIs need to be filtered from the crash statistics when comparing with Waymo.

    2) A self-driving car which speeds and runs red lights might well be more dangerous than a similar human, but the data says nothing about this since Waymo is currently on their best behavior. Yet Tesla's own behavior and customers prove that there is demand for reckless self-driving cars, and manufacturers will meet the demand unless the law steps in. Imagine a Waymo competitor that promises Uber-level ETAs for people in a hurry. Technology could in theory solve this but in practice the market could make things worse for several decades until the next research breakthrough. Human accidents coming from distraction are a fair comparison to Waymo, but speeding or aggressiveness should be filtered out. The difficulty of doing so is one of the many reasons I am so skeptical of these stats.

    3) Mechanical failures are a hornets' nest of ML edge cases that might work in the lab but fail miserably on the road. Currently it's not a big deal because the cars are shiny and new. Eventually we'll have self-driving clunkers owned by drivers who don't want to pay for the maintenance.

    And that's not even mentioning that Waymos are not fully self-driving; they rely on close remote oversight to guide the AI through the many billions of common-sense problems that computers will not be able to solve for at least the next decade, probably much longer. True self-driving cars will continue to make inexplicably stupid decisions: these machines are still much dumber than lizards. Stories like "the Tesla slammed into an overturned tractor trailer because the AI wasn't trained on overturned trucks" are a huge problem, and society will not let Tesla try to launder it away with statistics.

    Self-driving cars might end up saving lives. But would they save more lives than adding mandatory breathalyzers and GPS-based speed limits? And if market competition overtakes business ethics, would they cost more lives than they save? The stats say very little about this.
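
    To make the filtering point concrete, here is a minimal sketch of the comparison I would want to see, in Python with pandas. Everything in it is an assumption for illustration: the file names, the column names (dui, speeding, mechanical, fatal) and the mileage figures are placeholders, not real data.

        import pandas as pd

        # Hypothetical inputs: one row per police-reported crash, with boolean
        # flags, plus total miles driven by each population. Names are made up.
        human = pd.read_csv("human_crashes.csv")  # columns: dui, speeding, mechanical, fatal
        av = pd.read_csv("av_crashes.csv")        # column: fatal

        HUMAN_MILES = 3.2e12  # rough annual US vehicle-miles traveled
        AV_MILES = 2.0e7      # placeholder fleet mileage

        # Drop the crash classes an AV comparison arguably shouldn't get credit for.
        comparable = human[~(human.dui | human.speeding | human.mechanical)]

        human_rate = comparable.fatal.sum() / HUMAN_MILES
        av_rate = av.fatal.sum() / AV_MILES
        print(f"human (filtered): {human_rate:.2e}/mile, AV: {av_rate:.2e}/mile")

    Whether the filtered human rate still looks worse than the AV rate is exactly the question the published comparisons don't answer.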

    • kelnos a day ago

      > My biggest concern is that Waymo and Tesla are juking the stats to make self-driving cars seem safer than they really are

      Even intentional juking aside, you can't really compare the two.

      Waymo cars drive completely autonomously, without a supervising driver in the car. If it does something unsafe, there's no one there to correct it, and it may get into a crash, in the same way a human driver doing that same unsafe thing might.

      With Tesla FSD, we have no idea how good it really is. We know that a human is supervising it, and despite all the reports we see of people doing super irresponsible things while "driving" a Tesla (like taking a nap), I imagine most Tesla FSD users are actually attentively supervising for the most part. If all FSD users stopped supervising and started taking naps, I suspect the crash rate and fatality rate would start looking like the rate for the worst drivers on the road... or even worse than that.

      So it's not that they're juking their stats (although they may be), it's that they don't actually have all the stats that matter. Waymo has and had those stats, because their trained human test drivers were reporting when the car did something unsafe and they had to take over. Tesla FSD users don't report when they have to do that. The data is just not there.
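
      A back-of-the-envelope sketch of why that missing takeover data matters; the numbers below are invented purely to show the shape of the calculation, not estimates of anything real.

          # Hypothetical: supervised FSD needs a safety-critical takeover every
          # N miles, and some fraction of those takeovers would have become
          # crashes with nobody supervising. Neither quantity is published.
          miles_per_takeover = 200.0  # invented
          crash_fraction = 0.05       # invented: 1 in 20 takeovers averts a crash

          implied_crashes_per_mile = crash_fraction / miles_per_takeover
          print(f"{implied_crashes_per_mile:.1e} crashes per mile")
          # 2.5e-04, i.e. roughly one crash every 4,000 miles under these assumptions

      Plug in different guesses and the implied rate swings by orders of magnitude, which is why the supervised crash statistics alone settle nothing.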

  • moogly a day ago

    > Accidents caused by human drivers are one of the largest causes of injury and death

    In some parts of the world. Perhaps some countries should look deeper into why and why self-driving cars might not be the No. 1 answer to reduce traffic accidents.

  • alkonaut 2 days ago

    > How about a quarter? Or a tenth?

    Probably closer to the latter. The "skin in the game" (physically) argument makes me more willing to accept drunk drivers than greedy manufacturers when it comes to making mistakes or being negligent.

  • fma a day ago

    Flying is safer than driving but Boeing isn't getting a free pass on quality issues. Why would Tesla?

  • __loam 2 days ago

    The problem is that Tesla is way behind the industry standards here and is misrepresenting how good its tech is.

  • mvdtnz 8 hours ago

    How about fewer accidents per distance of equivalent driving?

  • iovrthoughtthis 2 days ago

    at least 10x better than a human

    • becquerel 2 days ago

      I believe Waymo has already beaten this metric.

      • szundi 2 days ago

        Waymo is limited to cities that their engineers have mapped, and those maps have to be maintained.

        You cannot put a waymo in a new city before that. With Tesla, what you get is universal.

        • kelnos a day ago

          Waymo is safe where they've mapped and trained and tested, because they track when their test drivers have to take control.

          Tesla FSD is just everywhere, without any accountability or trained testing on all the roads people use them on. We have no idea how often Tesla FSD users have to take control from FSD due to a safety issue.

          Waymo is objectively safer, and their entire approach is objectively safer, and is actually measurable, whereas Tesla FSD's safety cannot actually be accurately measured.

        • RivieraKid 2 days ago

          Waymo is robust to removing the map / lidars / radars / cameras or adding inaccuracies to any of these 4 inputs.

          (Not sure if this is true for the production system or the one they're still working on.)

        • dageshi a day ago

          I think the Waymo approach is the one that will actually deliver some measure of self driving cars that people will be comfortable to use.

          It won't operate everywhere, but it will gradually expand to cover large areas and it will keep expanding till it's near ubiquitous.

          I'm dubious that the Tesla approach will actually ever work.

  • sebzim4500 2 days ago

    >It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable.

    Are you sure? Right now FSD is active with no one actually knowing its casualty rate, and for the most part the only people upset about it are terminally online people on Twitter or luddites on HN.

alexjplant 2 days ago

> The collision happened because the sun was in the Tesla driver's eyes, so the Tesla driver was not charged, said Raul Garcia, public information officer for the department.

Am I missing something or is this the gross miscarriage of justice that it sounds like? The driver could afford a $40k vehicle but not $20 polarized shades from Amazon? Negligence is negligence.

  • smdyc1 2 days ago

    Not to mention that when you can't see, you slow down? Does the self-driving system do that sufficiently in low visibility? Clearly not if it hit a pedestrian with enough force to kill them.

    The article mentions that Teslas only use cameras in their system and that Musk believes they are enough, because humans only use their eyes. Well, firstly, don't you want self-driving systems to be better than humans? Secondly, humans don't just respond to visual cues as a computer would. We also hear, and we respond to feelings, like the sudden surge of anxiety or fear as our visibility is suddenly reduced at high speed.

    • jsight 2 days ago

      Unfortunately there is also an AI training problem embedded in this. As Mobileye says, there are a lot of driver decisions that are common, but wrong. The famous example is rolling stops, but also failing to slow down for conditions is really common.

      It wouldn't shock me if they don't have nearly enough training samples of people slowing appropriately for visibility with eyes, much less slowing for the somewhat different limitations of cameras.

    • hshshshshsh 2 days ago

      I think one of the reasons they focus only on vision is that basically the entire transportation infrastructure is designed around human eyes as the primary channel for information.

      Useful information for driving is communicated through images, in the form of road signs, traffic signals, etc.

      • nkrisc 2 days ago

        I dunno, knowing the exact relative velocity of the car in front of you seems like it could be useful and is something humans can’t do very well.

        I’ve always wanted a car that shows my speed and the relative speed (+/-) of the car in front of me. My car’s cruise control can maintain a set distance so obviously it’s capable of it but it doesn’t show it.
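
        For what it's worth, the math for that display is trivial once you have the following-distance signal the adaptive cruise already measures; a toy sketch (the function and the numbers are mine, not any carmaker's API):

            def relative_speed_mps(prev_range_m: float, curr_range_m: float, dt_s: float) -> float:
                """Relative speed of the lead car; positive means it is pulling away."""
                return (curr_range_m - prev_range_m) / dt_s

            # e.g. the gap grew from 40.0 m to 40.6 m over 0.5 s
            delta = relative_speed_mps(40.0, 40.6, 0.5)  # +1.2 m/s
            print(f"{delta:+.1f} m/s ({delta * 2.237:+.1f} mph)")

        In practice you'd low-pass filter it, but the signal is clearly already there.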

        • dham 13 hours ago

          If your car is maintaining the speed of the car in front, then the car in front is going the speed showing on your speedometer.

          • nkrisc 3 hours ago

            Yes, that’s true, but I don’t see how that relates to my comment at all.

            I said the relative speed. If the car is going the same speed as me then the relative speed is 0mph. I want to see that when I’m not using cruise control.

      • SahAssar 2 days ago

        We are "designed" (via evolution) to perceive and understand the environment around us. The signage is designed to be easily readable for us.

        The models that drive these cars clearly either have some more evolution to do or for us to design the world more to their liking.

        • hshshshshsh 2 days ago

          Yes. I was talking about why Tesla chose to use vision, since they can't control the design of the transport infrastructure to their liking, at least for now.

    • plorg 2 days ago

      I would think one relevant factor is that human vision is different than and in some ways significantly better than cameras.

    • pmorici 2 days ago

      The Tesla knows when its cameras are blinded by the sun and acts accordingly or tells the human to take over.

      • kelnos 2 days ago

        Except when it doesn't actually do that, I guess? Like when this pedestrian was killed?

      • eptcyka 2 days ago

        If we were able to know when a neural net is failing to categorize something, wouldn’t we get AGI for free?

  • jabroni_salad 2 days ago

    Negligence is negligence but people tend to view vehicle collisions as "accidents", as in random occurrences dealt by the hand of fate completely outside of anyone's control. As such, there is a chronic failure to charge motorists with negligence, even when they have killed someone.

    If you end up in court, just ask for a jury and you'll be okay. I'm pretty sure this guy didn't even go to court; it sounds like it got prosecutor's discretion.

    • tsimionescu a day ago

      That sounds like the justice system living up to its ideals. If the 12 jurors know they would have done the same in your situation, as would their family and friends, then they can't in good conscience convict you for negligence.

      • pessimizer 18 hours ago

        It sounds like the kind of narcissism that perverts justice. People understand things they could see themselves doing, don't understand things that they can't see themselves doing, and disregard the law entirely. It makes non-doctors and non-engineers incapable of judging doctors and engineers, rich people incapable of judging poor people, and poor people incapable of judging rich people.

        It's just a variation of letting off the defendant that looks like your kid, or brutalizing someone whose victim looks like your kid; it's no ideal of justice.

        • tsimionescu an hour ago

          I can agree with you in many cases. But for convicting someone due to negligence, they have to, by definition, have conducted themselves in a way that competent people engaged in that activity wouldn't usually. If all drivers drive a certain way, even if it's dangerous, then you're not negligent by the standards of the law for driving that same way.

    • sokoloff a day ago

      Negligence is the failure to act with the level of care that a reasonable person would exercise in a similar situation; if a reasonable person likely would have done the things that led to that person’s death, they’re not guilty of negligence.

  • crazygringo 18 hours ago

    I'm genuinely not sure what the answer is.

    When you're driving directly in the direction of a setting sun, polarized sunglasses won't help you at all. That's what sun visors are for, but they won't always work if you're short, and can block too much of the environment if you're too tall.

    The only truly safe answer is really to pull to the side of the road and wait for the sun to set. But in my life I've never seen anybody do that ever, and it would absolutely wreck traffic with little jams all over the city that would cascade.

    • kjkjadksj 17 hours ago

      No, polarized sunglasses work fine. I drive into a setting sun probably once a week to no incident.

      • crazygringo 17 hours ago

        That doesn't make any sense to me.

        First of all, polarization is irrelevant when looking at the sun. It only affects light that is reflected off things like other cars' windows, or water on the street. In fact, it's often recommended not to use polarized sunglasses while driving because you can miss wet or icy patches on the road.

        Secondly, standard sunglasses don't let you look directly at the sun, even a setting one. The sun is still dangerously bright.
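
        Some idealized numbers, just to quantify the two separate claims here (an ideal polarizer is assumed, and the 80% polarized figure for road glare is an assumption, not a measurement):

            import math

            def transmitted_fraction(polarized_fraction: float, angle_deg: float) -> float:
                """Ideal polarizer: half of the unpolarized part gets through, plus
                cos^2(angle) of the polarized part (Malus's law)."""
                unpolarized = 1.0 - polarized_fraction
                return 0.5 * unpolarized + polarized_fraction * math.cos(math.radians(angle_deg)) ** 2

            # Direct sunlight is essentially unpolarized: only about half is blocked.
            print(transmitted_fraction(0.0, 90))  # 0.5
            # Glare off a horizontal surface near Brewster's angle is mostly horizontally
            # polarized, crossed with the vertical transmission axis of the glasses.
            print(transmitted_fraction(0.8, 90))  # ~0.1

        Which is consistent with both experiences: reflected glare drops a lot, while the sun itself barely dims.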

        • kjkjadksj 15 hours ago

          I’m not looking directly at the sun I am looking at the road. Either way it makes a big difference and you don’t get much black ice here in sunny southern california.

          • crazygringo 14 hours ago

            But the scenario we're talking about is when the sun is just a few degrees away from the road. It's still entering your eyeball directly. It's still literally blinding, so I just... don't understand how you can do that? Like, I certainly can't. Sunglasses -- polarized or otherwise -- don't make the slightest difference. It's why sun visors exist.

            Also, I'm assuming you get rain in SoCal at least sometimes, that then mostly dries up but not completely? Or leaking fire hydrants and so forth? It's the unexpected wet patches.

            • kjkjadksj 10 hours ago

              When we get rain in SoCal it's a deluge; the first couple months of 2024 we had more rain than Seattle. That being said, there is a big difference between wearing sunglasses and not. Actually, I was in a parking garage today and thought of this very thread, because the sun was shining on the cement ground through the side and was literally blinding. To my naked eye it was like a blown-out photograph, just pure light on the ground. I put on the sunglasses that were around my neck and what do you know: not only did the glare go down, it went down to the point that I could make out the little half-rainbow streaks the cement pavers added to the floor. The same thing happens on our cement highways, or when you get bad glare off someone's relatively freshly waxed car. I had a period of two weeks where I lost my polarized glasses and it was like I was disabled going outside in the day; I had to squint just to stand it because of how many white-painted or cement surfaces we have here in SoCal. I grew up in the Midwest where I did not really own sunglasses at all, fwiw. Here they're mandatory, given the amount of unclouded sunlight coupled with the usually white or light grey surface treatment on a lot of things.

  • theossuary 2 days ago

    You know what they say, if you want to kill someone in the US, do it in a car.

    • littlestymaar 2 days ago

      Crash Course: If You Want to Get Away With Murder Buy a Car, by Woodrow Phoenix

    • immibis 2 days ago

      In the US it seems you'd do it with a gun, but in Germany it's cars.

      There was this elderly driver who mowed down a family in a bike lane waiting to cross the road in Berlin, driving over the barriers between the bike lane and the car lane because the cars in the car lane were too slow. Released without conviction - it was an unforeseeable accident.

  • renewiltord a day ago

    Yeah, I have a couple of mirrors placed around my car that reflect light into my face so that I can get out of running into someone. Tbh I understand why they do this. Someone on HN explained it to me: Yield to gross tonnage. So I just drive where I want. If other people die, that’s on them: the graveyards are full of people with the right of way, as people say.

  • macintux 2 days ago

    I have no idea what the conditions were like for this incident, but I’ve blown through a 4-way stop sign when the sun was setting. There’s only so much sunglasses can do.

    • eptcyka 2 days ago

      If environmental factors incapacitate you, should you not slow down or stop?

    • kelnos a day ago

      Your license should be suspended. If conditions don't allow you to see things like that, you slow down until you can. If you still can't, then you need to pull over and wait until conditions make it safe to drive again.

      Gross.

    • singleshot_ 2 days ago

      > There’s only so much sunglasses can do.

      For everything else, you have brakes.

    • alexjplant 2 days ago

      ¯\_(ツ)_/¯ If I can't see because of rain, hail, intense sun reflections, frost re-forming on my windshield, etc. then I pull over and put my flashers on until the problem subsides. Should I have kept the 4700 lb vehicle in fifth gear at 55 mph without the ability to see in front of me in each of these instances? I submit that I should not have and that I did the right thing.

    • ablation 2 days ago

      Yet so much more YOU could have done, don’t you think?

    • Doctor_Fegg 2 days ago

      Yes, officer, this one right here.

    • vortegne 2 days ago

      You shouldn't be on the road then? If you can't see, you should slow down. If you can't handle driving in given conditions safely for everyone involved, you should slow down or stop. If everybody would drive like you, there'd be a whole lot more death on the roads.

    • IshKebab a day ago

      I know right? Once I got something in my eye so I couldn't see at all, but I decided that since I couldn't do anything about it the best thing was to keep driving. I killed a few pedestrians but... eh, what was I going to do?

InsomniacL a day ago

As I came over the top of a crest, there was suddenly a lot of sun glare and my Model Y violently swerved to the left. Fortunately I had just overtaken a car on a two-lane dual carriageway and hadn't moved back into the left-hand lane yet.

The driver I had just overtaken, although he wasn't very close anymore, slowed right down to get away from me, and I didn't blame him.

That manoeuvre in another car likely would have put it on two wheels.

They say FSD crashes less often than a human per mile driven, but I can only use FSD on roads like motorways, so I don't think it's a fair comparison.

I don't trust FSD, I still use it occasionally but never in less than ideal conditions. Typically when doing something like changing the music on a motorway.

It probably is safer than just me driving alone, when it's in good conditions on a straight road with light traffic with an alert driver.

  • mglz a day ago

    The underlying problem is that the current FSD architecture doesn't seem to have good guard rails for these outlier situations you describe (sunlight blinding the camera from just the right angle probably?) and it is probably not possible to add such rules without limiting the system enormously.

    Fundamentally driving consists of a set of fairly clear cut rules with a ridiculous amount of "it depends" cases.

    • iknowstuff 11 hours ago

      It actually does. He's in Europe, on what is basically a 5-year-old Autopilot stack.

      Current FSD uses a multi-exposure camera feed, so it's not really that susceptible to sun glare.

  • iknowstuff 11 hours ago

    What do you mean you can only use FSD on motorways? Just by the phrasing I assume you're referring to the old FSD, which is just beefed-up Autopilot based on heuristics; Europe is stuck on a gimped 5-year-old Autopilot stack due to overzealous regulation.

    American FSD is a completely different beast, usable on every road and street, so your anecdote is actually not relevant to the thread

rKarpinski 2 days ago

'Pedestrian' in this context seems pretty misleading

"Two vehicles collided on the freeway, blocking the left lane. A Toyota 4Runner stopped, and two people got out to help with traffic control. A red Tesla Model Y then hit the 4Runner and one of the people who exited from it. "

edit: Parent article was changed... I was referring to the title of the NPR article.

  • danans 2 days ago

    > Pedestrian' in this context seems pretty misleading

    What's misleading? The full quote:

    "A red Tesla Model Y then hit the 4Runner and one of the people who exited from it. A 71-year-old woman from Mesa, Arizona, was pronounced dead at the scene."

    If you exit a vehicle, and are on foot, you are a pedestrian.

    I wouldn't expect FSD's object recognition system to treat a human who has just exited a car differently than a human walking across a crosswalk. A human on foot is a human on foot.

    However, from the sound of it, the object recognition system didn't even see the 4Runner, much less a person, so perhaps there's a more fundamental problem with it?

    Perhaps this is something that lidar or radar, if the car had them, would have helped the OR system to see.

    • jfoster 2 days ago

      The description has me wondering if this was definitely a case where FSD was being used. There have been other cases in the past where drivers had an accident and claimed they were using autopilot when they actually were not.

      I don't know for sure, but I would think that the car could detect a collision. I also don't know for sure, but I would think that FSD would stop once a collision has been detected.

      • pell 2 days ago

        > There have been other cases in the past where drivers had an accident and claimed they were using autopilot when they actually were not.

        Wouldn't this be logged by the event data recorder?

      • danans 2 days ago

        > There have been other cases in the past where drivers had an accident and claimed they were using autopilot when they actually were not.

        If that were the case here, there wouldn't be a government probe, right? It would be a normal "multi car pileup with a fatality" and added to statistics.

        With the strong incentive on the part of both the driver and Tesla to lie about this, there should be strong regulations around event data recorders [1] for self-driving systems, and huge penalties for violating them. A search across that site doesn't return a hit for the word "retention", but it's gotta be expressed in some way there.

        1. https://www.ecfr.gov/current/title-49/subtitle-B/chapter-V/p...

      • FireBeyond 2 days ago

        > FSD would stop once a collision has been detected.

        Fun fact, at least until very recently, if not even to this moment, AEB (emergency braking) is not a part of FSD.

        • modeless a day ago

          I believe AEB can trigger even while FSD is active. Certainly I have seen the forward collision warning trigger during FSD.

      • bastawhiz 2 days ago

        Did the article say the Tesla didn't stop after the collision?

        • jfoster 2 days ago

          If it hit the vehicle and then hit one of the people who had exited the vehicle with enough force for it to result in a fatality, it sounds like it might not have applied any braking.

          Of course, that depends on the speed it was traveling at to begin with.

    • potato3732842 2 days ago

      Teslas were famously poor at detecting partial lane obstructions for a long time. I wonder if that's what happened here.

  • Retric 2 days ago

    More clarity may change people’s opinion of the accident, but IMO pedestrian meaningfully represents someone who is limited to human locomotion and lacks any sort of protection in a collision.

    Which seems like a reasonable description of the type of failure involved in the final few seconds before impact.

    • rKarpinski 2 days ago

      Omitting that the pedestrian was on a freeway meaningfully misrepresents the situation.

      • Retric 2 days ago

        People walking on freeways may be rare from the perspective of an individual driver but not a self driving system operating on millions of vehicles.

        • rKarpinski 2 days ago

          What does that have to do with the original article's misleading title?

          • Retric 2 days ago

            I don't think it's misleading. It's a title, not some hundred-word description of what exactly happened.

            Calling them motorists would definitely be misleading by comparison. Using the simple "fatal crash" of the linked title implies the other people might be responsible, which is misleading.

            Using accident but saying Tesla was at fault could open them up to liability and therefore isn’t an option.

            • rKarpinski 2 days ago

              > I don't think it's misleading. It's a title, not some hundred-word description of what exactly happened.

              "Pedestrian killed on freeway" instead of "pedestrian killed" doesn't take 100 words and doesn't give the impression Tesla's are mowing people down on crosswalks (although that's a feature to get clicks, not a bug).

              • Retric 2 days ago

                Without context that implies the pedestrians shouldn’t have been on the freeway.

                It’s not an issue for Tesla, but it does imply bad things about the victims.

                • rKarpinski 2 days ago

                  A title of "U.S. to probe Tesla's 'Full Self-Driving' system after pedestrian killed on freeway" would in no way imply bad things about the pedestrian who was killed.

                  • Retric a day ago

                    It was my first assumption when I read "pedestrian on freeway" in someone's comment without context. Possibly due to the Uber self-driving fatality.

                    Stranded motorists who exit their vehicle, construction workers, first responders, tow truck drivers, etc are the most common victims but that’s not the association I had.

      • nkrisc a day ago

        No, you’re not allowed to hit pedestrians on the freeway either.

        There are many reasons why a pedestrian might be on the freeway. It’s not common but I see it at least once a month and I drive extra carefully when I do, moving over if I can and slowing down.

      • Arn_Thor a day ago

        Why? I would hope we all expect pedestrian detection (and object detection in general) to be just as good on a freeway as on a city street? It seems the Tesla barreled full-speed into an accident ahead of it. I would call it insane but that would be anthropomorphizing it.

    • potato3732842 2 days ago

      This sort of framing you're engaging in is exactly what the person you're replying to is complaining about.

      Yeah, the person who got hit was technically a pedestrian, but just using that word with no other context doesn't convey that it was a pedestrian on a limited-access highway rather than somewhere pedestrians are allowed and expected. Without additional explanation, people assume normalcy and think the pedestrian was crossing a city street, or doing something else pedestrians do all the time and are expected to do, when that is very much not what happened here.

      • Retric 2 days ago

        Dealing with people on freeways is the kind of edge case humans aren't good at, but self-driving cars have zero excuses. Someone exiting a vehicle after a collision is common enough to make it a very predictable edge case.

        Remember all of the bad press Uber got when a pedestrian was struck and killed walking their bike across the middle of a street at night? People are going to be on limited access freeways and these systems need to be able to deal with it. https://www.bbc.com/news/technology-54175359

        • potato3732842 a day ago

          I'd make the argument that people are very good at dealing with random things that shouldn't be on freeways as long as they don't coincide with blinding sun or other visual impairment.

          Tesla had a long standing issue detecting partial lane obstructions. I wonder if the logic around that has anything to do with this.

          • Retric a day ago

            17 percent of pedestrian fatalities occur on freeways. Considering how rarely pedestrians are on freeways that suggests to me people aren’t very good at noticing them in time to stop / avoid them.

            https://usa.streetsblog.org/2022/06/09/why-20-of-pedestrians...

            • Arn_Thor a day ago

              That, and/or freeway speeds make the situation inherently more dangerous. When the traffic flows freeway speeds are fine but if a freeway-speed car has to handle a stationary object…problem.

  • neom 2 days ago

    That is the correct use of pedestrian as a noun.

    • echoangle 2 days ago

      Sometimes using a word correctly is still confusing because it’s used in a different context 90% of the time.

    • szundi 2 days ago

      I think parent commenter emphasized the context.

      Leaving out context that would otherwise change the interpretation for most people (or for a targeted audience) is the main way to mislead people without technically lying.

    • varenc 2 days ago

      By a stricter definition, a pedestrian is one who travels by foot. Of course, they are walking, but they’re traveling via their car, so by some interpretations you wouldn’t call them a pedestrian. You could call them a “motorist” or a “stranded vehicle occupant”.

      For understanding the accident it does seem meaningful that they were motorists that got out of their car on a highway and not pedestrians at a street crossing. (Still inexcusable of course, but changes the context)

      • bastawhiz 2 days ago

        Cars and drivers ideally shouldn't hit people who exited their vehicles after an accident on a highway. Identifying and avoiding hazards is part of driving.

      • neom 2 days ago

        As far as I am aware, "pes" doesn't carry an inherent meaning of travel. Pedestrian just means "on foot"; they don't need to be moving, they're just not in a carriage. As an aside, distinguishing a person's mode of presence is precisely what reports aim to capture.

        (I also do tend to avoid this level of pedantry, the points here are all well taken to be clear. I do think the original poster was fine in their comment, I was just sayin' - but this isn't a cross I would die on :))

    • sebzim4500 2 days ago

      That's why he said misleading rather than an outright lie. He is not disputing that it is technically correct to refer to the deceased as a pedestrian, but this scenario (someone out of their car on a freeway) is not what is going to spring to the mind of someone just reading the headline.

UltraSane 2 days ago

I'm astonished at how long Musk has been able to keep his autonomous driving con going. He has been lying about it to inflate Tesla shares for 10 years now.

  • ryandrake 2 days ago

    Without consequences, there is no reason to stop.

    • UltraSane 2 days ago

      When is the market going to realize Tesla is NEVER going to have real level 4 autonomy where Tesla takes legal liability for crashes the way Waymo has?

      • tstrimple 2 days ago

        Market cares far more about money than lives. Until the lives lost cost more than their profit, they give less than zero fucks. Capitalism. Yay!

  • heisenbit a day ago

    "I'm a technologist, I know a lot about computers," Musk told the crowd during the event. "And I'm like, the last thing I would do is trust a computer program, because it's just too easy to hack."

  • porphyra 2 days ago

    Just because it has taken 10 years longer than promised doesn't mean that it will never happen. FSD has made huge improvements this year and is on track to keep up the current pace so it actually does seem closer than ever.

    • UltraSane 2 days ago

      The current vision-only system is a clear technological dead-end that can't go much more than 10 miles between "disengagements". To be clear, "disengagements" would be crashes if a human wasn't ready to take over. And not needing a human driver is THE ENTIRE POINT! I will admit Musk isn't a liar when Tesla has FSD at least as good as Waymo's system and Tesla accepts legal liability for any crashes.

      • valval 2 days ago

        You’re wrong. Nothing about this is clear, and you’d be silly to claim otherwise.

        You should explore your bias and where it’s coming from.

        • UltraSane 2 days ago

          No Tesla vehicle has legally driven even a single mile with no driver in the driver's seat. They aren't even trying to play Waymo's game. The latest FSD software's failure rate is at least 100 times higher than it needs to be.

          • fallingknife 2 days ago

            That's a stupid point. I've been in a Tesla that's driven a mile by itself. It makes no difference if a person is in the seat.

            • UltraSane 2 days ago

              "It makes no difference if a person is in the seat." It does when Musk is claiming that Tesla is going to sell a car with no steering wheel!

              The current Tesla FSD fails so often that a human HAS to be in the driver seat ready to take over at any moment.

              You really don't understand the enormous difference between the current crappy level 2 Tesla FSD and Waymo's level 4 system?

              • fallingknife 15 hours ago

                Who said anything about Waymo? Waymo is building a very high cost commercial grade system intended for use on revenue generating vehicles. Tesla is building a low cost system intended for personal vehicles where Waymo's system would be cost prohibitive. Obviously Waymo's system is massively more capable. But that is about as surprising as the fact that a Ferrari is faster than a Ford Ranger.

                But this is all irrelevant to my point. You said a Tesla is not capable of driving itself for a mile. I have personally seen one do it. Whether a person is sitting in the driver's seat, or the regulators will allow it, has nothing to do with the fact that the vehicle does, in fact, have that capability.

              • valval 2 days ago

                The difference is that Tesla has a general algorithm, while Waymo is hard coding scenarios.

                I never really got why people bring Waymo up every time Tesla’s FSD is mentioned. Waymo isn’t competing with Tesla’s vision.

                • porphyra 2 days ago

                  Waymo uses a learned planner and is far from "hardcoded". In any case, imo both of these can be true:

                  * Tesla FSD works surprisingly well and improving capabilities to hands free actual autonomy isn't as far fetched as one might think.

                  * Waymo beat them to robotaxi deployment and scaling up to multiple cities may not be as hard as people say.

                  It seems that self driving car fans are way too tribal and seem to be convinced that the "other side" sucks and is guaranteed to fail. In reality, it is very unclear as both strategies have their merits and only time will tell in the long run.

                  • UltraSane 2 days ago

                    " Tesla FSD works surprisingly well and improving capabilities to hands free actual autonomy isn't as far fetched as one might think"

                    Except FSD doesn't work surprisingly well and there is no way it will get as good as Waymo using vision-only.

                    "It seems that self driving car fans are way too tribal and seem to be convinced that the "other side" sucks and is guaranteed to fail."

                    I'm not being tribal, I'm being realistic based on the very public performance of both systems.

                    If Musk was serious about his Robotaxi claims then Tesla would be operating very differently. Instead it is pretty obvious it all a con to inflate Tesla shares beyond all reason.

                • UltraSane 2 days ago

                  The difference is that Waymo has a very well engineered system using vision, LIDAR, and millimeter wave RADAR that works well enough in limited areas to provide tens of thousands of actual driver-less rides. Tesla has a vision only system that sucks so bad a human has to be ready to take over for it at any time like a parent monitoring a toddler near stairs.

                  • valval 3 hours ago

                    Wait until you hear Waymo has people ready to step in with remote controls at all times.

    • gitaarik 2 days ago

      Just like AGI and the year of the Linux desktop ;P

      • porphyra 2 days ago

        Honestly LLMs were a big step towards AGI, and gaming on Linux is practically flawless now. Just played through Black Myth Wukong with no issues out of the box.

        • UltraSane 2 days ago

          LLMs are to AGI

          as

          A ladder is to getting to orbit.

          I can see LLMs serving as a kind of memory for an AGI, but something fundamentally different will be needed for true reasoning and continuous self-improvement.

  • jjmarr 2 days ago

    [flagged]

    • UltraSane 2 days ago

      It "works" if you mean often does incredibly stupid and dangerous things and requires a person to be ready to take over for it at any moment to prevent a crash. So far no Tesla car has ever legally driven even a single mile without a person in the driver's seat.

      • jjmarr 2 days ago

        And? How does that make Elon Musk a con artist?

        It's possible to physically get in a Tesla and have it drive you from point A to point B. That's a self-driving car. You're saying it's unreliable, makes mistakes, and can be used illegally. That doesn't mean the car can't drive itself, just that it doesn't do a very good job at "self-driving"

        • UltraSane a day ago

          "How does that make Elon Musk a con artist?"

          Because since 2014 he has made wildly unrealistic claims that he is smart enough to know were BS.

          December 2015: “We’re going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years.”

          January 2016 In ~2 years, summon should work anywhere connected by land & not blocked by borders, eg you're in LA and the car is in NY

          June 2016: “I really consider autonomous driving a solved problem, I think we are less than two years away from complete autonomy, safer than humans.”

          October 2016 By the end of next year, said Musk, Tesla would demonstrate a fully autonomous drive from, say, a home in L.A., to Times Square ... without the need for a single touch, including the charging.

          "A 2016 video that Tesla used to promote its self-driving technology was staged to show capabilities like stopping at a red light and accelerating at a green light that the system did not have, according to testimony by a senior engineer."

          January 2017 The sensor hardware and compute power required for at least level 4 to level 5 autonomy has been in every Tesla produced since October of last year.

          March 2017: “I think that [you will be able to fall asleep in a Tesla] in about two years.”

          May 2017 Update on the coast to coast autopilot demo? - Still on for end of year. Just software limited. Any Tesla car with HW2 (all cars built since Oct last year) will be able to do this.

          March 2018 I think probably by end of next year [end of 2019] self-driving will encompass essentially all modes of driving and be at least 100% to 200% safer than a person.

          February 2019 We will be feature complete full self driving this year. The car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year. I'm certain of that. That is not a question mark. It will be essentially safe to fall asleep and wake up at their destination towards the end of next year

          April 2019 We expect to be feature complete in self driving this year, and we expect to be confident enough from our standpoint to say that we think people do not need to touch the wheel and can look out the window sometime probably around the second quarter of next year.

          May 2019 We could have gamed an LA/NY Autopilot journey last year, but when we do it this year, everyone with Tesla Full Self-Driving will be able to do it too

          December 2020 I'm extremely confident that Tesla will have level five next year, extremely confident, 100%

          January 2021 FSD will be capable of Level 5 autonomy by the end of 2021

    • two_handfuls 2 days ago

      I tried it. It drives worse than a teenager.

      There is absolutely no way this can safely drive a car without supervision.

      • valval 2 days ago

        It’s been safer than a human driver for years. It’s also not meant to be unsupervised.

        • two_handfuls a day ago

          "safer than a human driver for years" can be misleading, since the system is supervised - it assists the human driver. So what we're comparing is human+FSD vs avg car (with whatever driver assist it has).

          The claim that FSD+human is safer than an average car is old and has since been debunked: if, instead of comparing against all cars (old and new, with and without driver assistance), you compare like for like, i.e. other cars of similar price that also have cruise control and lane-keeping assistance, then the Teslas are about as safe as the others.

          And to be clear, none of those are autonomous. There is a certification process for autonomous cars, followed by Waymo, Mercedes, and others. Tesla has not even started this process.

        • gitaarik 2 days ago

          Something about these two statements seem to be in conflict with each other, but maybe that is just kinda Tesla PR talk.

          • UltraSane a day ago

            It is cultish doublespeak.

          • valval 2 days ago

            It’s quite easy to be safer than a human driver, since humans are just human. Supervision is required because the system can face edge cases.

            • UltraSane a day ago

              Edge cases like intersections?

            • gitaarik 2 days ago

              Ah ok so if humans would be supervised for their edge cases then humans would actually be safer!

        • lawn 2 days ago

          Safer than a human driver...

          According to Tesla.

testfrequency 2 days ago

I was in a Model 3 Uber yesterday and my driver had to swerve onto and up a curb to avoid an idiot who was trying to turn into traffic going in the other direction.

The Model 3 had every opportunity in the world to brake and it didn’t, we were probably only going 25mph. I know this is about FSD here, but that moment 100% made me realize Tesla has awful obstacle avoidance.

I just happened to be looking forward, and it was a very plain and clear T-bone avoidance situation, yet at no point did the car react or trigger anything.

Thankfully everyone was ok, but the front lip got pretty beat up from driving up the curb. Of course the driver at fault that caused the whole incident drove off.

  • averageRoyalty a day ago

    Was the Uber driver using FSD or autopilot?

    Obstacle avoidance and automatic braking can easily be switched on or off by the driver.

kvgr an hour ago

I don't understand how it is possible to be used on public roads...

daghamm 3 days ago

While at it, please also investigate why it is sometimes impossible to leave a damaged vehicle. This has resulted in people dying more than once:

https://apnews.com/article/car-crash-tesla-france-fire-be8ec...

  • MadnessASAP 3 days ago

    The why is pretty well understood, no investigation needed. I don't like the design but it's because the doors are electronic and people don't know where the manual release is.

    In a panic people go on muscle memory, which is push the useless button. They don't remember to pull the unmarked unobtrusive handle that they may not even know exists.

    If it was up to me, sure have your electronic release, but make the manual release a big handle that looks like the ejection handle on a jet (yellow with black stripes, can't miss it).

    * Or even better, have the standard door handle mechanically connected to the latch through a spring loaded solenoid that disengages the mechanism. Thus when used under normal conditions it does the thing electronically but the moment power fails the door handle connects to the manual release.

    • Clamchop 3 days ago

      Or just use normal handles, inside and outside, like other cars. What they've done is made things worse by any objective metric in exchange for a "huh, nifty" that wears off after a few weeks.

      • nomel 3 days ago

        I think this is the way. Light pull does the electronic thing; hard pull does the mechanical thing. They could have done this with the mechanical handle that's already there (which I have pulled almost every time I've used a Tesla, earning the owner's anger and a weather-stripping inspection).

    • daghamm 3 days ago

      There are situations where manual release has not worked

      https://www.businessinsider.com/how-to-manually-open-tesla-d...

      • willy_k 2 days ago

        The article you provided does not say that. The only failure related to the manual release it mentions is that using it breaks the window.

        > Exton said he followed the instructions for the manual release to open the door, but that this "somehow broke the driver's window."

    • carimura 3 days ago

      It's worse than that: at least in ours, the backseat latches are under some mat, literally hidden. I had no idea they were there for the first 6 months.

      • Schiendelman a day ago

        That's at the behest of the federal government. Child lock rules require they aren't accessible.

    • amluto 2 days ago

      I’ve seen an innovative car with a single door release. As you pull it, it first triggers the electronic mechanism (which lowers the window a bit, which is useful in a door with no frame above the window) and then, as you pull it farther, it mechanically unlatches the door.

      Tesla should build their doors like this. Oh, wait, the car I’m talking about is an older Tesla. Maybe Tesla should remember how to build doors like this.

      • crooked-v 2 days ago

        It's not very 'innovative' these days. My 2012 Mini Cooper has it.

    • Zigurd 3 days ago

      The inside trunk release on most cars has a glow-in-the-dark fluorescent color handle

    • leoh a day ago

      Crazy that with all of NHTSA's regulations, this is still legal.

gitaarik 2 days ago

It concerns me that these Teslas can suddenly start acting differently after a software update. Seems like a great target for a cyber attack. Or just a failure by the company: a little bug accidentally spread to millions of cars all over the world.

And how is this regulated? Say the software gets to a point that we deem it safe for full self driving, then it gets approved on the road, and then Tesla adds a new fancy feature to their software and rolls out an update. How are we to be confident that it's safe?

  • boshalfoshal a day ago

    > how are we to be confident that its safe?

    I hope you realize that these companies don't just push updates to your car the way VS Code does.

    Every change has to be unit tested, integration tested, tested in simulation, driven on a multiple cars on an internal fleet (in multiple countries) for multiple days/weeks, then is sent out in waves, then finally, once a bunch of metrics/feedback comes back, they start sending it out wider.

    Admittedly you pretty much have to just trust that the above catches most egregious issues, but there will always be unknown unknowns that will be hard to account for, even with all that. Either that or legitimately willful negligence, in which case, yes they should be held accountable.

    These aren't scrappy startups pushing fast and breaking things, there is an actual process to this.
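
    A rough sketch of the staged-rollout idea described above, with invented metric names and thresholds; it's only meant to show what "sent out in waves, gated on metrics" can look like, not how any particular OEM actually does it.

        # Each wave ships to a larger slice of the fleet only if the previous
        # wave's safety metrics don't regress against the current release.
        WAVES = [0.01, 0.05, 0.25, 1.00]  # fraction of the fleet per wave (made up)

        def ok_to_promote(wave_metrics: dict, baseline_metrics: dict) -> bool:
            new_rate = wave_metrics["disengagements"] / wave_metrics["miles"]
            old_rate = baseline_metrics["disengagements"] / baseline_metrics["miles"]
            return new_rate <= old_rate  # hold the rollout if the new build regresses

        # e.g. ok_to_promote({"disengagements": 12, "miles": 1.0e6},
        #                    {"disengagements": 15, "miles": 1.0e6})  -> True

    The open question is what goes into those metrics and who gets to audit them, which is where the trust argument above comes in.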

  • rightbyte 2 days ago

    Imagine all Teslas doing a full left right now. And full right in left steer countries.

    OTA updates, and auto-updates in general, are just things that should not be in vehicles. The ECUs should have to be air-gapped from the internet to be considered roadworthy.

botanical 2 days ago

Only the US government can allow corporations to beta test unproven technology on the public.

Governments should carry out comprehensive tests on a self-driving car's claimed capabilities. This is the same way that cars without proven passenger safety (Euro NCAP) aren't allowed to be on roads carrying passengers.

  • krasin 2 days ago

    > Only the US government can allow corporations to beta test unproven technology on the public.

    China and Russia do it too. It's not an excuse, but definitely not just the US.

  • akira2501 2 days ago

    > Only the US government

    Any Legislative body can do so. There's no reason to limit this strictly to the federal government. States and municipalities should have a say in this as well. The _citizens_ are the only entity that _decide_ if beta technology can be used or not.

    > comprehensive tests on a self-driving car's claimed capabilities.

    This presupposes that the government is naturally capable of performing an adequate job at this task, or that the automakers won't sue the government to interfere with the testing regime and the efficacy of its standards.

    > aren't allowed to be on roads carrying passengers.

    According to Wikipedia Euro NCAP is a _voluntary_ organization and describes the situation thusly "legislation sets a minimum compulsory standard whilst Euro NCAP is concerned with best possible current practice." Which effectively highlights the above problems perfectly.

  • dham 13 hours ago

    Uhh, have you heard of the FDA? It's approved hundreds of chemicals that are put into all kinds of food. And we're not talking about a few deaths; we're talking hundreds of thousands, if not millions.

  • CTDOCodebases 2 days ago

    Meh. Happens all around the world. Even if the product works there is no guarantee that it will be safe.

    Asbestos products are a good example of this. A more recent one is Teflon made with PFOAs or engineered stone like Caesarstone.

  • dzhiurgis 2 days ago

    If it takes 3 months to approve where a steel rocket falls, you might as well give up on iterating something as complex as FSD.

    • AlotOfReading 2 days ago

      There are industry standards for this stuff. ISO 21448, UL-4600, UNECE R157 for example, and even commercial certification programs like the one run by TÜV Süd for European homologation. It's a deliberate series of decisions on Tesla's part to make their regulatory life as difficult as possible.

    • bckr 2 days ago

      Drive it in larger and larger closed courses. Expand to neighboring areas with consent of the communities involved. Agree on limited conditions until enough data has been gathered to expand those conditions.

      • romon 2 days ago

        While controlled conditions promote safety, they do not yield effective training data.

        • AlotOfReading 2 days ago

          That's how all autonomous testing programs currently work around the world. That is, every driverless vehicle system on roads today was developed this way. You're going to have to be more specific when you say that it doesn't work.

dietsche 3 days ago

I would like more details. There are definitely situations where neither a car nor a human could respond quickly enough to a situation on the road.

For example, I recently hit a deer. The dashcam shows that, due to terrain, I had less than 100 feet from when the deer became visible to impact, while driving at 60 mph. Keep in mind that stopping a car in 100 feet at 60 mph is impossible; most vehicles need more than triple that, without accounting for human reaction time.
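
For anyone who wants to sanity-check that: a quick back-of-the-envelope in Python, where the deceleration and reaction-time figures are typical textbook assumptions, not measurements from my car.

    v = 88.0       # 60 mph in ft/s
    g = 32.2       # ft/s^2
    a = 0.8 * g    # strong braking on dry pavement
    braking_ft = v ** 2 / (2 * a)  # distance just to scrub off the speed
    reaction_ft = 1.5 * v          # distance covered during a 1.5 s reaction
    print(round(braking_ft), round(braking_ft + reaction_ft))  # ~150, ~282

So roughly 280 feet from "deer becomes visible" to a full stop, against 100 feet of available sight distance.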

  • arcanemachiner 3 days ago

    This is called "overdriving your vision", and it's so common that it boggles my mind. (This opinion might have something to do with the deer I hit when I first started driving...)

    Drive according to the conditions, folks.

    • Zigurd 3 days ago

      We will inevitably see "AVs are too cautious! Let me go faster!" complaints as AVs drive in more places. But, really humans just suck at risk assessment. And at driving. Driving like a human is comforting in some contexts, but that should not be a goal when it trades away too much safety.

    • thebruce87m 2 days ago

      There is a difference between driving too fast around a corner to stop for something stationary on the road and driving through countryside where something might jump out.

      I live in a country with deer, but the number of incidents of them interacting with road users is so low that it does not factor into my risk tolerance.

      • Zigurd 2 days ago

        The risks vary with speed. At 30mph a deer will be injured and damage your car, and you might have to call animal control to find the deer if it was able to get away. At 45mph there is a good chance the deer will impact your windshield. If it breaks through, that's how people die in animal collisions. They get kicked to death by a frantic, panicked, injured animal.

    • Kirby64 3 days ago

      On many roads if a deer jumps across the road at the wrong time there’s literally nothing you can do. You can’t always drive at 30mph on back country roads just because a deer might hop out at you.

      • seadan83 2 days ago

        World of difference between 30, 40, 50 and 60. Feels like something I have noticed between west and east coast drivers: the latter really send it on country turns and just trust the road. West coast, particularly montana, when vision is reduced, speed slows down. Just too many animals or road obstacles (e.g. rocks, planks of wood) to just trust the road.

        • dragonwriter 2 days ago

          > West coast, particularly montana

          Montana is not "West coast".

          • seadan83 2 days ago

            Yeah, I was a bit glib. My impression is more specifically of the greater Northwest vs. the rest. Perhaps just "the west" vs "the east".

            Indiana drivers for example really do send it (in my experience). Which is not east coast of course.

            There is a good bit of nuance... I would perhaps say more simply east of the Mississippi vs. west, but Texas varies by region and SoCal drivers vary a lot as well, particularly compared to NorCal and central/eastern California. (I don't have an impression of Nevada and New Mexico drivers; I don't have any experience on country roads in those states.)

        • Kirby64 2 days ago

          Road obstacles are static and can be seen by not “out driving your headlights”. Animals flinging themselves into the road cannot, in many instances.

          • amenhotep 2 days ago

            You are responding in a thread about a person saying they were driving at 60 when the deer only became visible "due to terrain" at 100 feet away, and therefore hitting it is no reflection on their skill or choices as a driver.

            I suppose we're meant to interpret charitably here, but it really seems to me like there is a big difference between the scenario described and the one you're talking about, where the deer really does fling itself out in front of you.

            • dietsche a day ago

              OP here. You nailed it on the head. Also, the car started braking before I could!

              Incidentally, I've also had the Tesla dodge a deer successfully!

              Autopilot has improved in BIG ways over the past 2 years. I went 700 miles in one day on Autopilot through the mountains, no issues at all.

              That said, expecting perfection from a machine or a human is a fool's errand.

  • nomel 3 days ago

    I've had a person, high on drugs, walk out from between bushes that were along the road. I screeched to a halt in front of them, but 1 second later and physics would have made it impossible, regardless of reaction time (or non-negligible speed).

  • freejazz 3 days ago

    The article explains that the investigation is based on visibility issues... what is your point? I don't think any reasonable person doubts there are circumstances where nothing could respond adequately enough to prevent a crash. It seems a rather odd assumption that these crashes fall into one of those scenarios, all the more so when the report on its face explains that this is not the case.

  • Log_out_ 3 days ago

    Just have a drone fly ahead and put the lidar point cloud on the HUD. These are very bio-logic excuses :)

drodio 2 days ago

I drive a 2024 Tesla Model Y and another person in my family drives a 2021 Model Y. Both cars are substantially similar (the 2021 actually has more sensors than the 2024, which is strictly cameras-only).

Both cars are running 12.5 -- and I agree that it's dramatically improved over 12.3.

I really enjoy driving. I've got a #vanlife Sprinter that I'll do 14 hour roadtrips in with my kids. For me, the Tesla's self-driving capability is a "nice to have" -- it sometimes drives like a 16 year old who just got their license (especially around braking. Somehow it's really hard to nail the "soft brake at a stop sign", which seems like it should be easy. I find that passengers in the car are most uncomfortable when the car brakes like this -- and I'm the most embarrassed because they all look at me like I completely forgot how to do a smooth stop at a stop sign).

Other times, the Tesla's self-driving is magical and nearly flawless -- especially on long highway road trips, like up to Tahoe. Even someone like me who loves doing road trips really appreciates the ability to relax and not have to be driving.

But here's one observation I've had that I don't see quite sufficiently represented in the comments:

The other person in my family with the 2021 Model Y does not like to drive like I do, and they really appreciate that the Tesla is a better driver than they feel themselves to be. And as a passenger in their car, I also really appreciate that when the Tesla is driving, I generally feel much more comfortable in the car. Not always, but often.

There's so much variance in us as humans around driving skills and enjoyment. It's easy to lump us together and say "the car isn't as good as the human." And I know there's conflicting data from Tesla and NHTSA about whether in aggregate, Teslas are safer than human drivers or not.

But what I definitely know from my experience is that the Tesla is already a better driver than many humans are -- especially those that don't enjoy driving. And as @modeless points out, the rate of improvement is now vastly accelerating.

  • magnetowasright 6 hours ago

    Has this relative considered that they may not be capable of driving safely at all if they (and others) really do believe that their tesla (whose driving software drives like a freshly licensed 16 year old per your comment) is a better driver? Isn't intervening when the tesla does something stupid/dangerous more difficult than just driving?

    > Even someone like me who loves doing road trips really appreciates the ability to relax and not have to be driving.

    Pardon the nitpick (and please excuse me if I'm interpreting your comment wrong here) but if someone is using whatever maximum capability self driving functionality is available, they are in fact still driving and should not be 'relaxed' as if they're a passenger.

    I would posit that your observation about your relative and the variance in driving skills is not commonly discussed because there's no self driving cars that can actually replace a driver yet, and an unsafe driver relying on 'self driving' software is still an unsafe driver who should not drive.

    I realise that there are many understandable reasons people can't just give up their cars and carry on like normal. Helping people to stay mobile, connected, and independent is important, but an unsafe driver is, well, unsafe. It kinda terrifies me that people might be encouraging the elderly people (for example) in their lives to get Teslas to keep them driving when they aren't capable of driving safely any more, because the result is still an unsafe driver on the road.

  • lowbloodsugar 17 hours ago

    You are a living example of survivorship bias. One day your car will kill you or someone else, and then maybe you’ll be able to come back here and tell us how wrong you were. How, with your new experience, you can see how the car only “seemed” competent, how it was that very seeming competence that got someone killed, because you trusted it.

Aeolun 2 days ago

I love how the image in the article has a caption that says it tells you to pay attention to the road, but I had to zoom in all the way to figure out where that message actually was.

I’d expect something big and red with a warning triangle or something, but it’s a tiny white message in the center of the screen.

  • valine 2 days ago

    It gets progressively bigger and louder the longer you ignore it. After 30ish seconds it sounds an alarm and kicks you out.

    • FireBeyond 2 days ago

      > After 30ish seconds it sounds an alarm and kicks you out.

      That's much better. When AP functionality was introduced, the alarm was fifteen MINUTES.

  • taspeotis 2 days ago

    Ah yes, red with a warning like “WARNING: ERROR: THE SITUATION IS NORMAL!”

    Some cars that have cruise control but an analog gauge cluster that can’t display WARNING ERRORs even hide stuff like “you still have to drive the car” in a manual you have to read, yet nobody cares about that.

    Honestly driving a car should require some sort of license for a bare minimum of competence.

jqpabc123 3 days ago

By now, most people have probably heard that Tesla's attempt at "Full Self Driving" is really anything but --- after a decade of promises. The vehicle owners manual spells this out.

As I understand it, the contentious issue is the fact that unlike most others, their attempt works mostly from visual feedback.

In low visibility situations, their FSD has limited feedback and is essentially driving blind.

It appears that Musk may be seeking a political solution to this technical problem.

  • whamlastxmas 3 days ago

    It’s really weird how much you comment about FSD being fake. My Tesla drives me 10+ miles daily and the only time I touch any controls is pulling in and out of my garage. Literally daily. I maybe disengage once every couple days just to be on the safe side in uncertain situations, but I’m sure it’d likely do fine there too.

    FSD works. It drives itself fine 99.99% of the time. It is better than most human drivers. I don’t know how you keep claiming it doesn’t or doesn’t exist.

    • sottol 3 days ago

      The claim was about _full_ self-driving being anything but, i.e. not _fully_ self-driving, not about it being completely fake. Disengaging every 10-110 miles is just not "full", it's partial.

      And then the GP went into detail about which specific situations FSD is especially problematic in.

    • dham 13 hours ago

      It's similar to when DHH said they were not bundling code in production and all the Javascript bros said "No you can't do that it won't work". DHH was like "yes but I'm doing it"

      That's how it feels in FSD land right now. Everyone's saying FSD doesn't work and it'll never be here, but I'm literally using it every day lol.

    • peutetre 3 days ago

      The problem is Tesla and Musk have been lying about full self-driving for years. They have made specific claims of full autonomy with specific timelines and it's been a lie every time: https://motherfrunker.ca/fsd/

      In 2016 a video purporting to show full self-driving with the driver there purely "for legal reasons" was staged and faked: https://www.reuters.com/technology/tesla-video-promoting-sel...

      In 2016 Tesla said that "as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver." That was a lie: https://electrek.co/2024/08/24/tesla-deletes-its-blog-post-s...

      Musk claimed there would be 1 million Tesla robotaxis on the road in 2020. That was a lie: https://www.thedrive.com/news/38129/elon-musk-promised-1-mil...

      Tesla claimed Hardware 3 would be capable of full self-driving. When asked about Hardware 3 at Tesla's recent robotaxi event, Musk didn't want to "get nuanced". That's starting to look like fraud: https://electrek.co/2024/10/15/tesla-needs-to-come-clean-abo...

      Had Tesla simply called it "driver assistance" that wouldn't be a lie. But they didn't do that. They doubled, tripled, quadrupled down on the claim that it is "full self-driving" making the car "an appreciating asset" that it would be "financially insane" not to buy:

      https://www.cnbc.com/2019/04/23/elon-musk-any-other-car-than...

      https://edition.cnn.com/2024/03/03/cars/musk-tesla-cars-valu...

      It's not even bullshit artistry. It's just bullshit.

      Lying is part of the company culture at Tesla. Musk keeps lying because the lies keep working.

      • whamlastxmas 3 days ago

        Most of this is extreme hyperbole and it’s really hard to believe this is a genuine good faith attempt at conversation instead of weird astroturfing, because these tired, inaccurate talking points come up in literally every single thread even remotely associated with Elon. It’s like there’s a dossier of talking points everyone is sharing.

        The car drives itself. This is literally undeniable. You can test it today for free. Yeah it doesn’t have the last 0.01% done yet and yeah that’s probably a lot of work. But commenting like the GP is exhausting and just not reflective of reality

        • peutetre 3 days ago

          > because these tired, inaccurate talking points come up in literally every single thread even remotely associated with Elon

          You understand that the false claims, the inaccuracies, and the lies come from Elon, right? They're associated with him because he is the source of them.

          They're only tired because he's been telling the same lie year after year.

        • jqpabc123 3 days ago

          ... not reflective of reality

          Kinda like repeated claims of "Full Self Driving" for over a decade.

  • enslavedrobot 3 days ago

    Here's a video of FSD driving the same route as a Waymo 42% faster with zero interventions. 23 min vs 33. This is my everyday. Enjoy.

    https://youtu.be/Kswp1DwUAAI?si=rX4L5FhMrPXpGx4V

    • ck2 3 days ago

      There are also endless videos of teslas driving into pedestrians, plowing full speed into emergency vehicles parked with flashing lights, veering wildly from strange markings on the road, etc. etc.

      "works for me" is a very strange response for someone on Hacker News if you have any coding background - you should realize you are a beta tester unwittingly if not a full blown alpha tester in some cases

      All it will take is a non-standard event happening on your daily drive. Most certainly not wishing it on you, quite the opposite, trying to get you to accept that a perfect drive 99 times out of 100 is not enough.

      • enslavedrobot 3 days ago

        Those are Autopilot videos; this discussion is about FSD. FSD has driven ~2 billion miles at this point and had potentially 2 fatal accidents.

        The US average is 1.33 deaths/100 million miles. Tesla on FSD is easily 10x safer.

        Every day it gets safer.

        • hilux 2 days ago

          Considering HN is mostly technologists, the extent of Tesla-hate in here surprises me. My best guess is that it is sublimated Elon-hate. (Not a fan of my former neighbor myself, but let's separate the man from his creations.)

          People seem to be comparing Tesla FSD to perfection, when the more fair and relevant comparison is to real-world American drivers. Who are, on average, pretty bad.

          Sure, I wouldn't trust data coming from Tesla. But we have government data.

          • lowbloodsugar 17 hours ago

            That seems an odd take. This is a technologist website, and a good number of technologists believe in building robust systems that don’t fail in production. We don’t stand for demos, and we have to fight off consultants peddling crapware that demos well but dies in production. I own a Tesla, despite my dislike of Musk, because it is an insanely fun car. I will never enable FSD; I did not even do so when it was free. I see even the best teams have production outages. Until Tesla accepts legal responsibility (and the law allows them to), and until it’s good enough that it never disengages, I’m not using it and nobody else should either.

            • hilux 7 hours ago

              > ... systems that don’t fail in production.

              I'll say it again: "compared to what?"

        • diggernet 3 days ago

          How many miles does it have on the latest software? Because any miles driven on previous software are no longer relevant. Especially with that big change in v12.

          • enslavedrobot 3 days ago

            The miles driven are rising exponentially as the versions improve, according to company filings. If the miles driven on previous versions are no longer relevant, how can the NHTSA investigation of previous versions impact FSD regulation today?

            Given that the performance has improved dramatically over the last 6 months, it is very reasonable to assume that the miles-driven-to-fatality ratio is also improving.

            Using the value of 1.33 deaths per 100 million miles driven vs 2 deaths in 2 billion miles driven, FSD has saved approximately 24 lives so far.
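
            For what it's worth, the arithmetic in that last sentence does check out if you take the quoted inputs at face value (assumed, not verified: 1.33 deaths per 100M miles as the US average, ~2 billion FSD miles, 2 fatalities); a quick sketch:

                us_rate = 1.33 / 100_000_000      # deaths per mile, US average (figure quoted above)
                fsd_miles = 2_000_000_000         # claimed FSD miles
                fsd_deaths = 2                    # claimed FSD fatalities

                expected = us_rate * fsd_miles    # ~26.6 deaths expected at the US average rate
                print(expected - fsd_deaths)      # ~24.6, close to the "approximately 24" above

            The subtraction holds; the dispute elsewhere in the thread is over the inputs (which miles count and which crashes get reported), not the arithmetic.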

    • jqpabc123 3 days ago

      Can it drive the same route without a human behind the wheel?

      Not legally and not according to Tesla either --- because Tesla's FSD is not "Fully Self Driving" --- unlike Waymo.

23B1 2 days ago

"Move fast and kill people"

Look, I don't know who needs to hear this, but just stop supporting this asshole's companies. You don't need internet when you're camping, you don't need a robot to do your laundry, you don't need twitter, you can find more profitable and reliable places to invest.

  • CrimsonRain a day ago

    Nobody needs to hear your nonsense rants. A 50k model 3 makes almost all offerings up to 80k (including electrics) from legacy automakers look like garbage.

    • leoh a day ago

      I read their "nonsense rant" and I appreciated it.

      >A 50k model 3 makes almost all offerings up to 80k (including electrics) from legacy automakers look like garbage.

      This is a nonsense rant in my opinion.

    • 23B1 a day ago

      Found the guy who bought $TSLA at its ATH

  • hnburnsy 2 days ago

    Move slow and kill peo

frabjoused 2 days ago

I don't understand why this debate/probing is not just data driven. Driving is all big data.

https://www.tesla.com/VehicleSafetyReport

This report does not include fatalities, which seems to be the key point in question. Unless the above report has some bias or is false, Teslas on Autopilot appear 10 times safer than the US average.

Is there public data on deaths reported by Tesla?

And otherwise, if the stats say it is safer, why is there any debate at all?

  • jsight 2 days ago

    The report from Tesla is very biased. It doesn't normalize for the difficulty of the conditions involved, and is basically for marketing purposes.

    IMO, the challenge for NHTSA is that they can get tremendous detail from Tesla but not from other makes. This will make it very difficult for them to get a solid baseline for collisions due to glare in non-FSD equipped vehicles.

  • JTatters 2 days ago

    Those statistics are incredibly misleading.

    - It is safe to assume that the vast majority of autopilot miles are on highways (although Tesla don't release this information).

    - By far the safest roads per mile driven are highways.

    - Autopilot will engage least during the most dangerous conditions (heavy rain, snow, fog, nighttime).

  • notshift 2 days ago

    Without opening the link, the problem with every piece of data I’ve seen from Tesla is they’re comparing apples to oranges. FSD won’t activate in adverse driving conditions, aka when accidents are much more likely to occur. And/or drivers are choosing not to use it in those conditions.

  • bastawhiz 2 days ago

    Autopilot is not FSD.

    • frabjoused 2 days ago

      That's a good point. Are there no published numbers on FSD?

  • FireBeyond 2 days ago

    > Unless the above report has some bias or is false

    Welcome to Tesla.

    The report measures accidents in FSD mode. Qualifiers to FSD mode: the conditions, weather, road, location, traffic all have to meet a certain quality threshold before the system will be enabled (or not disable itself). Compare Sunnyvale on a clear spring day to Pittsburgh December nights.

    There's no qualifier to the "comparison": all drivers, all conditions, all weather, all roads, all location, all traffic.

    It's not remotely comparable, and Tesla's data people are not that stupid, so it's willfully misleading.

    > This report does not include fatalities

    It also doesn't consider any incident without airbag deployment to be an accident. Sounds potentially reasonable until you consider:

    - first gen airbag systems were primitive: collision exceeds threshold, deploy. Currently, vehicle safety systems consider duration of impact, speeds, G-forces, amount of intrusion, angle of collision, and a multitude of other factors before deciding what, if any, systems to fire (seatbelt tensioners, airbags, etc.). So hit something at 30mph with the right variables? Tesla: "this is not an accident".

    - Tesla also does not consider "incident was so catastrophic that airbags COULD NOT deploy" to be an accident, because "airbags didn't deploy". This umbrella also covers the egregious case of "systems failed to deploy for any reason, up to and including poor assembly line quality control" as not an accident and therefore "not counted".

    > Is there public data on deaths reported by Tesla?

    No, there is not. Tesla does not report them.

    They also refuse to give the public much of any data beyond these carefully curated numbers. Hell, NHTSA/NTSB also mostly have to drag heavily redacted data kicking and screaming out of Tesla's hands.

metabagel 17 hours ago

Cruise control with automatic following distance and lane-keeping are such game changers that autonomous driving isn’t necessary for able drivers.

OK, the lane-keeping isn’t quite there, but I feel like that’s solvable.

siliconc0w 20 hours ago

Traffic jams and long monotonous roads are really where these features shine; getting to Level 3 on those should be the focus, over trying to maintain a fiction of Level 5 everywhere. (And like other comments have said, anything above Level 2 should automatically mean liability.)

xvector 2 days ago

My Tesla routinely tries to kill me on absolutely normal California roads in normal sunny conditions, especially when there are cars parked on the side of the road (it often brakes thinking I'm about to crash into them, or even swerves into them thinking that's the "real" lane).

Elon's Unsupervised FSD dreams are a good bit off. I do hope they happen though.

  • jrflowers 2 days ago

    > My Tesla routinely tries to kill me

    > Elon's Unsupervised FSD dreams are a good bit off. I do hope they happen though.

    It is very generous that you would selflessly sacrifice your own life so that others might one day enjoy Elon’s dream of robot taxis without steering wheels

    • massysett 2 days ago

      Even more generous to selflessly sacrifice the lives and property of others that the vehicle "self-drives" itself into.

    • judge2020 2 days ago

      If the data sharing checkboxes are clicked, OP can still help send in training data while driving on his own.

  • left-struck 2 days ago

    That’s hilariously ironic, because I have a pretty standard newish Japanese petrol car (I’m not mentioning the brand because my point isn’t that brand x is better than brand y), and it has no AI self-driving functions, just pretty basic radar adaptive cruise control and emergency brake assist that will stop the car if someone brakes hard in front of you. And it does a remarkable job at rejecting cars which are slowing down or stopped in other lanes, even when you’re going around a corner and the car is pointing straight towards the other cars but not actually heading towards them since it’s turning. I assume they are using the steering input to help reject other vehicles and Doppler effects to detect differences in speed, but it’s remarkable how accurate it is at matching the speed of the car in front of you, and only the car in front of you, even when that car is over 15 seconds ahead. If Teslas can’t beat that, it’s sad.

  • gitaarik 2 days ago

    I wonder, how are you "driving"? Are you sitting behind the wheel doing nothing except watching very closely everything the car does, so you can take over when needed? Isn't that a stressful experience? Wouldn't it be more comfortable to just do everything yourself, so you know nothing weird can happen?

    Also, if the car does something crazy, how much time do you have to react? I can imagine in some situations you might have too little time to prevent the accident the car is creating.

    • xvector a day ago

      > Isn't that a stressful experience?

      It's actually really easy and kind of relaxing. For long drives, it dramatically reduces cognitive load leading to less fatigue and more alertness on the road.

      My hand is always on the wheel so I can react as soon as I feel the car doing something weird.

  • bogantech 2 days ago

    > My Tesla routinely tries to kill me

    Why on earth would you continue to use it? If it does succeed someday that's on you

    • newdee 2 days ago

      > that’s on you

      They’d be dead, doubt it’s a concern at that point.

  • delichon 2 days ago

    Why do you drive a car that routinely tries to kill you? That would put me right off. Can't you just turn off the autopilot?

    • ddingus 2 days ago

      My guess is the driver tests it regularly.

      How does it do X, Y, ooh Z works, etc...

    • xvector 2 days ago

      It's a pretty nice car when it's not trying to kill me

  • Renaud 2 days ago

    And what if the car swerves, and you aren't able to correct in time and end up killing someone?

    Is that your fault or the car's?

    I would bet that since it's your car, and you're using a knowingly unproven technology, it would be your fault?

    • ra7 2 days ago

      The driver’s fault. Tesla never accepts liability.

      • LunicLynx 2 days ago

        And they have been very clear about that

  • dzhiurgis 2 days ago

    [flagged]

    • dyauspitr 2 days ago

      Emergency braking will sure as hell kill you, or at least the person behind you, when you’re going 75 or 80 in the first lane.

      • BobaFloutist 2 days ago

        I'm not a fan of Elon, Tesla, or "FSD", but for what it's worth, that's absolutely the fault of the person behind you for not maintaining appropriate stopping distance.

        • dyauspitr a day ago

          I would say less than 5% of people maintain the required 30-50ft of recommended distance needed to stop safely at those speeds.

          • Schiendelman a day ago

            The stopping distance required at 75mph is at least 250-300 feet, in optimal conditions.
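
            For reference, a rough kinematics sketch of where a figure like that comes from (assumed values: 0.7-0.8 g of braking on dry pavement and a 1.5 s reaction time; neither number is from this thread):

                MPH_TO_FPS = 5280 / 3600               # 1 mph = ~1.467 ft/s
                G = 32.2                               # gravitational acceleration, ft/s^2

                v = 75 * MPH_TO_FPS                    # ~110 ft/s
                for decel_g in (0.7, 0.8):             # assumed braking deceleration, in g
                    braking = v**2 / (2 * decel_g * G)     # kinematics: d = v^2 / (2a)
                    total = braking + 1.5 * v              # plus distance covered during reaction
                    print(f"{decel_g} g: ~{braking:.0f} ft braking, ~{total:.0f} ft in total")

            That works out to roughly 235-270 ft of braking alone and 400+ ft once reaction time is included, so 250-300 ft is if anything a generous lower bound.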

            • dyauspitr 16 hours ago

              Yeah that’s definitely not happening.

          • BobaFloutist 19 hours ago

            Then I guess they're not skilled enough to be granted the privilege of driving at those speeds ¯\_(ツ)_/¯

            • Schiendelman 17 hours ago

              You can work backwards from this - the vast majority of people in the US don't agree with that statement. You might be able to get them there, if Americans didn't have to be as car dependent due to their built environment. But because so many people are already car dependent, it's hard to make those changes.

              The one theory of change I think is approachable suggests allowing dramatic increases in density in places that are not car dependent - the people who live there are much more likely to agree with us, so letting the number of people who live there 10x or even 100x could lead to this kind of change you propose.

graeme 2 days ago

Will the review assess overall mortality of the vehicles compared to similar cars, and overall mortality while FSD is in use?

  • bbor 2 days ago

    I get where you’re coming from and would also be interested to see, but based on the clips I’ve seen that wouldn’t be enough in this case. Of course the bias is inherent in what people choose to post (not normal and not terrible/litigable), but I think there’s enough at this point to perceive a stable pattern.

    Long story short, my argument is this: it doesn’t matter if you reduce serious crashes from 100PPM to 50PPM if 25PPM of those are new crash sources, speaking from a psychological and sociological perspective. Everyone should know that driving drunk, driving distracted, driving in bad weather, and in rural areas at dawn or dusk is dangerous, and takes appropriate precautions. But what do you do if your car might crash because someone ahead flashed their high beams, or because the sun was reflecting off another car in an unusual way? Could you really load up your kids and take your hands off the wheel knowing that at any moment you might hit an unexpected edge condition?

    Self driving cars are (presumably!) hard enough to trust already, since you’re giving away so much control. There’s a reason planes have to be way more than “better, statistically speaking” — we expect them to be nearly flawless, safety-wise.

    • dragonwriter 2 days ago

      > But what do you do if your car might crash because someone ahead flashed their high beams, or because the sun was reflecting off another car in an unusual way?

      These are -- like drunk driving, driving distracted, and driving in bad weather -- things that actually do cause accidents with human drivers.

      • hunter-gatherer 2 days ago

        The point is the choice to take precautions, the part that you left out of the quote. The other day I was taking my kid to school, and when we turned east the sun was in my eyes and I couldn't see anything, so I pulled over as fast as I could and changed my route. Had I chosen to press forward and been in an accident, it would have been explainable (albeit still unfortunate and often unnecessary!). However, if I'm under the impression that my robot car can handle such circumstances because it does most of the time, and then it glitches, that is harder to explain.

      • paulryanrogers 2 days ago

        Indeed, yet humans can anticipate such things and rely on their experience to reason about what's happening and how to react. Like slowing down, shifting lanes, or just moving one's head for a different perspective. A Tesla with only two cameras ("because that's all humans need") is unlikely to provably match that performance for a long time.

        Tesla could also change its software without telling the driver at any point.

      • dfxm12 2 days ago

        This language is a bit of a sticking point for me. If you're drunk driving or driving distracted, there's no "accident". You're intentionally doing something wrong and committing a crime.

  • dekhn 2 days ago

    No, that is not part of a review. They may use some reference aggregated industry data, but it's out of scope to answer the question I think you're trying to imply.

  • akira2501 2 days ago

    Fatalities per passenger mile driven is the only statistic that would matter. I actually doubt this figure differs much, either way, from the overall fleet of vehicles.

    This is because "inattentive driving" is _rarely_ the cause of fatalities on the road. The winner there is, and probably always will be, Alcohol.

  • FireBeyond 2 days ago

    If you're trying to hint at Tesla's own stats, then at this point those are hopelessly, and knowingly, misleading.

    All they compare is "On the subsets of driving on only the roads where FSD is available, active, and has not or did not turn itself off because of weather, road, traffic or any other conditions" versus "all drivers, all vehicles, all roads, all weather, all traffic, all conditions".

    There's a reason Tesla doesn't release the raw data.

    • rblatz 2 days ago

      I have to disengage FSD multiple times a day and I’m only driving 16 miles round trip. And routinely have to stop it from doing dumb things like stopping at green traffic lights, attempting to do a u turn from the wrong turn lane, or switching to the wrong lane right before a turn.

      • rad_gruchalski 2 days ago

        Why would you even turn it on at this point…

  • infamouscow 2 days ago

    Lawyers are not known for their prowess in mathematics, let alone statistics.

    Making these arguments from the standpoint of an engineer is counterproductive.

    • fallingknife 2 days ago

      Which is why they are the wrong people to run the country

      • paulryanrogers 2 days ago

        Whom? Because math is important and so is law, among a variety of other things.

        s/ Thankfully the US presidential choices are at least rational, of sound mind, and well rounded people. Certainly no spoiled man children among them. /s

  • johnthebaptist 2 days ago

    Yes, if tesla complies and provides that data

TeslaCoils 18 hours ago

Works most of the time, Fails at the worst time - Supervision absolutely necessary...

aanet 3 days ago

About damn time NHTSA opened this full scale investigation. Tesla's "autonowashing" has gone on for far too long.

Per Reuters [1] "The probe covers 2016-2024 Model S and X vehicles with the optional system as well as 2017-2024 Model 3, 2020-2024 Model Y, and 2023-2024 Cybertruck vehicles. The preliminary evaluation is the first step before the agency could seek to demand a recall of the vehicles if it believes they pose an unreasonable risk to safety."

Roughly 2.4 million Teslas with "Full Self Driving" software are in question, after 4 reported collisions and one fatality.

NHTSA is reviewing the ability of FSD’s engineering controls to "detect and respond appropriately to reduced roadway visibility conditions."

Tesla has, of course, rather two-facedly called its FSD SAE Level-2 for regulatory purposes, while selling it as "full self driving" but also requiring supervision. ¯\_(ツ)_/¯ ¯\_(ツ)_/¯

No other company has been so irresponsible to its users, and without a care for any negative externalities imposed on non-consenting road users.

I treat every Tesla driver as a drunk driver, steering away whenever I see them on highways.

[FWIW, yes, I work in automated driving and know a thing or two about automotive safety.]

[1] https://archive.is/20241018151106/https://www.reuters.com/bu...

  • buzzert a day ago

    > I treat every Tesla driver as a drunk driver, steering away whenever I see them on highways.

    Would you rather drive near a drunk driver using Tesla's FSD, or one without FSD?

  • ivewonyoung 3 days ago

    > Roughly 2.4 million Teslas in question, with "Full Self Driving" software after 4 reported collisions and one fatality.

    45,000 people die yearly just in the US in auto accidents. The numbers and timeline you quoted seem insignificant at first glance, but are magnified by people with an axe to grind, like that guy running anti-Tesla Super Bowl ads, who makes self-driving software like you do.

lrvick a day ago

All these self driving car companies are competing to see whose proprietary firmware and sensors kill the fewest people. This is insane.

I will -never- own a self driving car unless the firmware is open source, reproducible, remotely attestable, and built/audited by several security research firms and any interested security researchers from the public before all new updates ship.

It is the only way to stop greedy execs from cutting corners to pad profit margins, like VW did with faking emissions tests.

Proprietary safety tech is evil, and must be made illegal. Compete with nicer looking, more comfortable cars with better miles per charge, not people's lives.

  • boshalfoshal a day ago

    You are conflating two separate problems (security vs functionality).

    "Firmware" can be open source and secure, but how does this translate to driving performance at all? Why does it matter if the firmware is validated by security researchers, who presumably don't know anything about motion planning, perception, etc.? And this is even assuming that the code can be reasonably verified statically. You probably need to run that code on a car for millions of miles (maybe in simulation), in an uncountable number of scenarios, to cover every edge case.

    The other main problem with what you're asking is that most of the "alpha" of these self driving companies is in proprietary _models_, not software. No one is giving up their models. That is a business edge.

    As someone who has been at multiple AV companies, no one is cutting corners on "firmware" or "sensors" (apart from making it reasonably cost effective so normal people can buy their cars). Its just that AV is a really really really difficult problem with no closed form solution.

    Your normal car has all the same pitfalls of "unverified software running on a safety critical system," except that it's easier to verify that straightforward device firmware works than a very complex engine whose job is to ingest sensor data and output a trajectory.

wg0 2 days ago

In all the hype around AI, if you think about it, the foundational problem is that even computer vision is not a solved problem at human levels of accuracy, and that's at the heart of the issue with both Tesla and that Amazon checkout.

Otherwise, as a thought experiment, imagine a tiny 1-inch-tall person glued to the grocery trolley and another sitting on each shelf - just these two alone are all you need for "automated checkout".

  • vineyardmike 2 days ago

    > Otherwise, as a thought experiment, imagine a tiny 1-inch-tall person glued to the grocery trolley and another sitting on each shelf - just these two alone are all you need for "automated checkout".

    I don’t think this would actually work, as silly a thought experiment as it is.

    The problem isn’t the vision, it’s state management and cost. It was very easy (but expensive) to see and classify via CV whether a person picked something up; it just requires hundreds of concurrent high resolution streams and a way to stitch a global state together from all the videos.

    A little 1-inch person on each shelf needs a good way to communicate to every other tiny person what they saw, and come to consensus. If 5 people/cameras detect person A picking something up, you need to differentiate between 5 discrete actions and 1 action seen 5 times.

    In case you didn’t know, Amazon actually hired hundreds of people in India to review the footage and correct mistakes (for training the models). They literally had a human on each shelf. And they still had issues with the state management. With people.

    • wg0 2 days ago

      Yeah - that's exactly my point: humans were required to do the recognition, and computer vision is NOT a solved problem, regardless of tech bros' misleading techno-optimism.

      Distributed communication and state management, on the other hand, is a mostly solved problem with known parameters. How else do you think thousands and thousands of Kubernetes clusters work in the wild?

      • Schiendelman a day ago

        I think you're missing the point GP made: humans couldn't do it. They tried to get humans to do it, and humans had an unacceptable error rate.

        This is important. The autonomous driving problem and the grocery store problem are both about trade-offs, one isn't clearly better than the other.

JTbane 19 hours ago

How can you possibly have a reliable self-driving car without LIDAR?

  • dham 13 hours ago

    Returning to this post in 5 years when FSD has been solved with just vision.

DoesntMatter22 2 days ago

Each version has improved. FSD is realistically the hardest thing humanity has ever tried to do. It involves an enormous amount of manpower, compute power and human discoveries, and has to work right in billions of scenarios.

Building a self flying plane is comically easy by comparison. Building Starship is easier by comparison.

  • gitaarik 2 days ago

    Ah ok, first it is possible within 2 years, and now it is humanity's hardest problem? If it's really that hard, I think we'd better put our resources into something more useful, like new energy solutions; it seems we have an energy crisis.

    • DoesntMatter22 12 hours ago

      It's the hardest thing humans have ever tried to do yes. It took less time to go to the moon.

      There are tons of companies and governments working on energy solutions, there is ample time for Tesla to work on self driving.

      Also, do we really have an energy crisis? Are you experiencing rolling blackouts?

gnuser 2 days ago

I worked at an 18-wheeler automation unicorn.

Never once rode in one, for a reason.

  • akira2501 2 days ago

    Automate the transfer yards, shipping docks, and trucking terminals. Make movement of cargo across these limited use areas entirely automated and as smooth as butter. Queue drivers up and have their loads automatically placed up front so they can drop and hook in a few minutes and get back on the road.

    I honestly think that's the _easier_ problem to solve by at least two orders of magnitude.

    • porphyra 2 days ago

      There are a bunch of companies working on that. So far off the top of my head I know of:

      * Outrider: https://www.outrider.ai/

      * Cyngn: https://www.cyngn.com/

      * Fernride: https://www.fernride.com/

      Any ideas what other ones are out there?

      • akira2501 2 days ago

        Promising. I'm actually more familiar with the actual transportation and logistics side of the operation and strictly within the USA. I haven't seen anything new put into serious operation out here yet but I'll definitely be watching for them.

    • dylan604 2 days ago

      Did you miss the news about the recent strike by the very people you are suggesting to eliminate? This automation was one of the points of contention.

      Solving the problem might not be as easy as you suggest as long as there are powerful unions involved.

      • akira2501 2 days ago

        This automation is inevitable. The ports are a choke point created by an unnatural monopoly, and a labor union is the incorrect solution, particularly because their labor actions cause massive collateral damage to other labor interests.

        I believe that if trucking were properly unionized, the port unions would be crushed. They're not that powerful; they've just outlived this particular modernization the longest out of their former contemporaries.

        • dylan604 2 days ago

          So a union is okay for the trucking industry, but not for the dock workers?

          And what exactly will the truckers be trucking if the ports are crushed?

ivewonyoung 2 days ago

> NHTSA said it was opening the inquiry after four reports of crashes where FSD was engaged during reduced roadway visibility like sun glare, fog, or airborne dust. A pedestrian was killed in Rimrock, Arizona, in November 2023 after being struck by a 2021 Tesla Model Y, NHTSA said. Another crash under investigation involved a reported injury

> The probe covers 2016-2024 Model S and X vehicles with the optional system as well as 2017-2024 Model 3, 2020-2024 Model Y, and 2023-2024 Cybertruck vehicles.

This is good, but also for context: 45 thousand people are killed in auto accidents in just the US every year, making 4 reported crashes and 1 reported fatality for 2.4 million vehicles over 8 years look minuscule by comparison, or even better than many human drivers.

  • enragedcacti 2 days ago

    > making 4 report crashes and 1 reported fatality for 2.4 million vehicles over 8 years look miniscule by comparison, or even better than many human drivers.

    This is exactly what people were saying about the NHTSA Autopilot investigation when it started back in 2021 with 11 reported incidents. When that investigation wrapped earlier this year it had identified 956 Autopilot related crashes between early 2018 and August 2023, 467 of which were confirmed the fault of autopilot and an inattentive driver.

    • fallingknife 2 days ago

      So what? How many miles were driven and what is the record vs human drivers? Also Autopilot is a standard feature that is much less sophisticated than and has nothing to do with FSD.

  • dekhn 2 days ago

    Those numbers aren't all the fatalities associated with Tesla cars; i.e., you can't compare the 45K/year (roughly 1 per 100M miles driven) to the limited number of reports.

    What they are looking for is whether there are systematic issues with the design and implementation that make it unsafe.

    • moduspol 2 days ago

      Unsafe relative to what?

      Certainly not to normal human drivers in normal cars. Those are killing people left and right.

      • AlexandrB 2 days ago

        No they're not. And if you do look at human drivers you're likely to see a Pareto distribution where 20% of drivers cause most of the accidents. This is completely unlike something like FSD where accidents would be more evenly distributed. It's entirely possible that FSD would make 20% of the drivers safer and ~80% less safe even if the overall accident rate was lower.
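
        A toy illustration of that point, with purely hypothetical numbers: a uniform automated crash rate can beat the fleet average while still being worse than what most individual drivers achieve on their own.

            risky_share, risky_rate = 0.2, 8.0        # hypothetical: 20% of drivers, 8 crashes per million miles
            careful_share, careful_rate = 0.8, 0.5    # hypothetical: 80% of drivers, 0.5 crashes per million miles
            fsd_rate = 1.5                            # hypothetical uniform rate, "better than average"

            human_avg = risky_share * risky_rate + careful_share * careful_rate
            print(human_avg)                          # 2.0 -> the fleet average is worse than the uniform rate...
            print(fsd_rate / careful_rate)            # ...yet that rate is 3x worse for the careful 80%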

      • dekhn 2 days ago

        I don't think the intent is to compare it to normal human drivers, although having some level of estimate of accident/injury/death rates (to both the driver, passenger, and people outside the car) with FSD enabled/disabled would be very interesting.

        • moduspol 2 days ago

          > I don't think the intent is to compare it to normal human drivers

          I think our intent should be focused on where the fatalities are happening. To keep things comparable, we could maybe do 40,000 studies on distracted driving in normal cars for every one or two caused by Autopilot / FSD.

          Alas, that's not where our priorities are.

      • llamaimperative 2 days ago

        Those are good questions. We should investigate to find out. (It'd be different from this one but it raises a good question. What is FSD safe compared to?)

      • Veserv 2 days ago

        What? Humans are excellent drivers. Humans go ~70 years between injury-causing accidents and ~5,000 years between fatal accidents, even if we count the drunk drivers. If you started driving when the Pyramids were still new, you would still have half a millennium to go before reaching the expected interval between fatalities.

        The only people pumping the line that human drivers are bad are the people trying to sell a dream that they can make a self-driving car in a weekend, or "next year", if you just give them a pile of money and ignore all the red flags and warning signs that they are clueless. The problem is shockingly hard and underestimating it is the first step to failure. Reckless development will not get you there safely with known technology.
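
        The ~5,000-year figure is easy to sanity-check from widely cited US averages (assumed inputs: ~1.33 fatalities per 100M vehicle miles and ~13,500 miles driven per driver per year; the ~70-year injury figure depends on which injury-crash rate you assume):

            deaths_per_mile = 1.33 / 100_000_000    # rough US average fatality rate
            miles_per_year = 13_500                 # rough US average miles per driver per year

            miles_per_fatality = 1 / deaths_per_mile        # ~75 million miles
            print(miles_per_fatality / miles_per_year)      # ~5,600 driver-years per fatality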

  • tapoxi 2 days ago

    I don't agree with this comparison. The drivers are licensed, they have met a specific set of criteria to drive on public roads. The software is not.

    We are not sure when FSD is engaged with all of these miles driven, and if FSD is making mistakes a licensed human driver would not. I would at the very least expect radical transparency.

    • fallingknife 2 days ago

      I too care more about bureaucratic compliance than what the actual chances of something killing me are. When I am on that ambulance I will be thinking "at least that guy met the specific set of criteria to be licensed to drive on public roads."

      • tapoxi 2 days ago

        Are we really relegating drivers licenses to "bureaucratic compliance"?

        If FSD is being used in a public road, it impacts everyone on that road, not just the person who opted-in to using FSD. I absolutely want an independent agency to ensure it's safe and armed with the data that proves it.

        • fallingknife 2 days ago

          What else are they? You jump through hoops to get a piece of plastic from the government that declares you "safe." And then holders of those licenses go out and kill 40,000 people every year just in the US.

          • tapoxi a day ago

            And you're comparing that against what? That's 40,000 with regulation in place. Imagine if we let anyone drive without training.

            • fallingknife 16 hours ago

              We do. Nobody crazy enough to drive without knowing how is going to not be crazy enough to drive without a piece of plastic from the government.

  • throwup238 2 days ago

    > The agency is asking if other similar FSD crashes have occurred in reduced roadway visibility conditions, and if Tesla has updated or modified the FSD system in a way that may affect it in such conditions.

    Those four crashes are just the ones that sparked the investigation.

  • insane_dreamer 2 days ago

    > making 4 report crashes and 1 reported fatality for 2.4 million vehicles over 8 years look miniscule by comparison,

    that's the wrong comparison

    the correct comparison is the number of report crashes and fatalities for __unsupervised FSD__ miles driven (not counting Tesla pilot tests, but actual customers)

    • jandrese 2 days ago

      That seems like a bit of a chicken and egg problem where the software is not allowed to go unsupervised until it racks up a few million miles of successful unsupervised driving.

      • AlotOfReading 2 days ago

        There's a number of state programs to solve this problem with testing permits. The manufacturer puts up a bond and does testing in a limited area, sending reports on any incidents to the state regulator. The largest of these, California's, has several dozen companies with testing permits.

        Tesla currently does not participate in any of these programs.

      • insane_dreamer 2 days ago

        Similar to a Phase 3 clinical trial (and for similar reasons).

  • whiplash451 2 days ago

    Did you scale your numbers in proportion of miles driven autonomously vs manually?

    • josephg 2 days ago

      Yeah, that’d be the interesting figure: How many deaths per million miles driven? How does Tesla’s full self driving stack up against human drivers?

      • gostsamo 2 days ago

        Even that is not good enough, because the "autopilot" usually is not engaged in challenging conditions, making any direct comparison not really reliable. You need similar roads, in similar weather, at a similar time of day to approximate a good comparison.

        • ivewonyoung 2 days ago

          How many of the 45,000 deaths on US roads( and an order of magnitude more injuries) occur due to 'challenging conditions' ?

dzhiurgis 2 days ago

What is the FSD uptake rate? I bet it’s less than 1%, since in most countries it’s not even available…

FergusArgyll 20 hours ago

Are the insurance prices different if you own a Tesla with FSD? If not, why not?

quitit 2 days ago

"Full Self-Driving" but it's not "full" self-driving, as it requires active supervision.

So it's marketed with a nod and wink, as if the supervision requirement is just a peel away disclaimer to satisfy old and stuffy laws that are out of step with the latest technology. When in reality it really does need active supervision.

But the nature of the technology is that this approach invites the driver to distraction, because what's the use of "full self driving" if one needs to have their hands on the wheel and feet near the pedals, ready to take control at a moment's notice? Worsening this problem, Teslas have shown themselves to drive erratically at unexpected times, such as phantom braking or misidentifying natural phenomena as traffic lights.

One day people will look back on letting FSD exist in the market and roll their eyes in disbelief of the recklessness.

kjkjadksj 17 hours ago

One thing that's a little weird with the constant Tesla framing of FSD being better than the average driver is the assumption that a Tesla owner might be an average driver. The "average" driver includes people who total their cars, who kill pedestrians, who drive drunk, who go 40 over. Meanwhile I've never been in an accident. For me, and probably for many other drivers, their own individual average performance is much better than the average of all drivers. Given that, it's possible that relying on FSD is much worse for you than not, in terms of rate of risk.

Animats 2 days ago

If Trump is elected, this probe will be stopped.

  • leoh a day ago

    Almost certainly would happen and very depressing.

lowbloodsugar 13 hours ago

I'm not turning FSD on until it is a genuine autonomous vehicle that requires no input from me and never disengages. Until Tesla is, under the law, the legal driver of the vehicle, and suffers all the legal impact, you'd have to be mental to let it drive for you. It's like asking, "Hey, here's a chauffeur who has killed several people so far, all over the world. You want him to drive?" Or "Hey, here's a chauffeur. You're fine, you can read a book. But at some point, right when something super dangerous is about to happen, he's going to just panic and stop driving, and then you have to stop whatever you're doing and take over." That's fucking mental.

knob 3 days ago

Didn't Uber have something similar happen? Ran over a woman in Phoenix?

  • BugsJustFindMe 3 days ago

    Yes. And Uber immediately shut down the program in the entire state of Arizona, halted all road testing for months, and then soon later eliminated their self driving unit entirely.

    • leoh a day ago

      Elon is taking the SBF approach of double or nothing -- he hopes Trump will win, in which case Elon can continue to do as much harm as he likes.

whiplash451 2 days ago

Asking genuinely: is FSD enabled/accessible in EU?

  • AlotOfReading 2 days ago

    FSD is currently neither legal nor enabled in the EU. That may change in the future.

masto 19 hours ago

I have such a love-hate relationship with this thing. I don't think Tesla's approach will ever be truly autonomous, and they do a lot of things to push it into unsafe territory (thanks to you know who at the helm). I am a tech enthusiast and part of the reason I bought this car (before you know who revealed himself to be you know what) is that they were the furthest ahead and I wanted to experience it. If they had continued on the path I'd hoped, they'd have put in more sensors, not taken them out for cost-cutting and then tried to gaslight people about it. And all this hype about turning your car into a robotaxi while you're not using it is just stupid.

On the other hand, I'd hate for the result of all this to be to throw the ADAS out with the bathwater. The first thing I noticed even with the early "autopilot" is that it made long road trips much more bearable. I would arrive at my destination without feeling exhausted, and I attribute a lot of that to not having to spend hours actively making micro adjustments to speed and steering. I know everyone thinks they're a better driver than they are, and it's those other people who can't be trusted, but I do feel that when I have autopilot/FSD engaged, I am paying attention, less fatigued, and actually have more cognitive capacity freed up to watch for dangerous situations.

I had to pick someone up at LaGuardia Airport yesterday, a long annoying drive in heavy NYC-area traffic. I engaged autosteer for most of the trip both ways (and disengaged it when I didn't feel it was appropriate), and it made it much more bearable.

I'm neither fanboying nor apologizing for Tesla's despicable behavior. But I would be sad if, in the process of regulating this tech, it got pushed back too far.

nemo44x 21 hours ago

I’m a Tesla fan but I have to say anecdotally that it seems like Teslas represent an outsize number of bad drivers in my observations. Is it the FSD that’s a bit too aggressive and sporadic? Lots of lane changing, etc?

They’re up there with Dodge Ram drivers.

bastloing 2 days ago

It was way safer to ride a horse and buggy

JumpinJack_Cash 2 days ago

Unpopular take: Even with perfect FSD which is much better than the average human driver (say having the robotic equivalent of a Lewis Hamilton in every car) the productivity and health gains won't be as great as people anticipate.

Sure, way fewer traffic deaths, but the spike in depression, especially among males, would be something very big. Life events are largely outside of our control; having a 5,000 lb thing that can get to 150 mph if needed and responds exactly to the accelerator, brake and steering wheel inputs... well, that makes people feel in control and very powerful while behind the aforementioned steering wheel.

Also productivity... I don't know... people think a whole lot and do a whole lot of self reflection while they are driving, and when they arrive at their destination they just implement the thoughts they had while driving. The ability to talk on the phone has been there for quite some time now too, so thinking and communicating can be done while driving already; what would FSD add?

  • HaZeust 2 days ago

    As a sports car owner, I see where you're coming from -- but MANY do not. We are the 10%, the other 90% see their vehicle as an A-B tool, and you can clearly see that displayed with the average, utilitarian car models that the vast majority of the public buy. There will be no "spike" in depression; simply put, there's not enough people that care about their car, how it gets from point A to point B, or what contribution they give, if any, into that.

    • JumpinJack_Cash 2 days ago

      Maybe they don't care about their car being a sports car, but they surely get some pleasure out of being at the helm of something powerful like a car (even if it's not a sports car).

      Also, even people in small cars already think a lot while driving, and they can already communicate; how much more productive could they be with FSD?

      • HaZeust 2 days ago

        I really don't think you're right about the average person, or even a notable size of people, believing in the idea of their car being their "frontier of freedom" as was popular in the 70-80's media. I don't think that many people care about driving nowadays.

Rebuff5007 2 days ago

Tesla testing and developing FSD with normal consumer drivers frankly seems criminal. Test drivers for AV companies get advanced driver training, need to file detailed reports about the car's response to various driving scenarios, and generally are paid to be as attentive as possible. The fact that any old tech-bro or unassuming old lady can buy this thing and be on their phone when the car could potentially turn into oncoming traffic is mind boggling.

  • trompetenaccoun 19 hours ago

    >Test drivers for AV companies get advanced driver training, need to file detailed reports about the car's response to various driving scenarios, and generally are paid to be as attentive as possible.

    Like this one, who ran over and killed a woman?

    https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg

    While not fully the driver's fault, the investigation found that they were watching TV on their smartphone at the time of the accident, and the driver-facing camera clearly showed them not even looking at the road! Such attentiveness.

  • buzzert a day ago

    > can buy this thing and be on their phone when the car could potentially turn into oncoming traffic is mind boggling

    This is incorrect. Teslas have driver monitoring software, and if the driver is detected using a phone while driving, the car will almost immediately give a loud warning and disable FSD.

jgalt212 2 days ago

The SEC is clearly afraid of Musk. I wonder what the intimidation factor is at NHTSA.

  • leoh a day ago

    Not meaningful enough.

fortran77 2 days ago

I have FSD in my Plaid. I don't use it. Too scary.

sanp a day ago

This will go away once Trump wins

  • greenie_beans 21 hours ago

    Why, as a consumer, would you want that? It sounds extremely against your interest. I doubt you're a billionaire who would benefit from less regulation.

xqcgrek2 2 days ago

[flagged]

  • jsight 2 days ago

    In this case, I do not think so. NHTSA generally does an excellent job of looking at the big picture without bias.

    Although I must admit that their last investigation felt like an exception. The changes that they enforced seemed to be fairly dubious.

wnevets 3 days ago

[flagged]

  • smt88 3 days ago

    If no one from Boeing is going to jail, Musk certainly isn't either (at least not for anything related to vehicle unsafety).

    • wnevets 3 days ago

      > (at least not for anything related to vehicle unsafety).

      is that also true if the investigation determines that elon saying teslas have full self-driving capabilities is fraud?

xqcgrek2 2 days ago

[flagged]

  • leoh a day ago

    This is your second comment saying the exact same thing.

yieldcrv 2 days ago

Come on US, regulate interstate commerce and tell them to delete these cameras

Lidar is goated, and if Tesla didn't want that, they could pursue a different perception solution, allowing for innovation

But just visual cameras aiming to replicate us, ban that

soerxpso 2 days ago

For whatever it's worth, Teslas with Autopilot enabled crash about once every 4.5M miles driven, whereas the overall rate in the US is roughly one crash every 70K miles driven. Of course, the selection effects around that stat can be debated (people probably enable autopilot in situations that are safer than average, the average tesla owner might be driving more carefully or in safer areas than the average driver, etc), but it is a pretty significant difference. (Those numbers are what I could find at a glance; DYOR if you'd like more rigor).
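
A rough back-of-the-envelope using the numbers above (purely illustrative; as noted, both figures are contested and subject to heavy selection effects):

    # Crash-rate ratio sketch (Python), using the figures quoted in this comment.
    # Both numbers are rough assumptions from a quick search, not adjusted for
    # road type, driver mix, or reporting gaps.
    miles_per_crash_autopilot = 4_500_000  # Autopilot-engaged figure cited above
    miles_per_crash_us_avg = 70_000        # overall US figure cited above
    ratio = miles_per_crash_autopilot / miles_per_crash_us_avg
    print(f"Autopilot covers ~{ratio:.0f}x as many miles per crash as the quoted US average")
    # ~64x -- which is exactly why the selection effects deserve scrutiny.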

We have a lot of traffic fatalities in the US (in some states, an entire order of magnitude worse than in some EU countries), but it's generally not considered an issue. Nobody asks, "These agents are crashing a lot; are they really competent to drive?" when the agent is human, but when the agent is digital it becomes a popular question even with a much lower crash rate.

  • deely3 2 days ago

    > Gaps in Tesla's telematic data create uncertainty regarding the actual rate at which vehicles operating with Autopilot engaged are involved in crashes. Tesla is not aware of every crash involving Autopilot even for severe crashes because of gaps in telematic reporting. Tesla receives telematic data from its vehicles, when appropriate cellular connectivity exists and the antenna is not damaged during a crash, that support both crash notification and aggregation of fleet vehicle mileage. Tesla largely receives data for crashes only with pyrotechnic deployment, which are a minority of police reported crashes.[3] A review of NHTSA's 2021 FARS and Crash Report Sampling System (CRSS) finds that only 18 percent of police-reported crashes include airbag deployments.
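
    A crude illustration of why that matters, assuming (purely for the sake of the sketch) that Tesla's count only captures the roughly 18 percent of crashes with airbag deployment:

        # Underreporting adjustment sketch (Python). The 0.18 fraction comes from
        # the NHTSA text quoted above; applying it to Tesla's own figure is an
        # assumption made only to show the possible size of the bias.
        reported_miles_per_crash = 4_500_000  # headline figure from the parent comment
        reporting_fraction = 0.18             # share of police-reported crashes with airbag deployment
        adjusted_miles_per_crash = reported_miles_per_crash * reporting_fraction
        print(f"Adjusted: one crash every ~{adjusted_miles_per_crash:,.0f} miles")
        # ~810,000 miles -- still better than the quoted US average, but far less dramatic.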

lopkeny12ko 18 hours ago

> NHTSA said it was opening the inquiry after four reports of crashes where FSD was engaged during reduced roadway visibility like sun glare, fog, or airborne dust. A pedestrian was killed in Rimrock, Arizona, in November 2023 after being struck by a 2021 Tesla Model Y, NHTSA said.

This is going to be another extremely biased investigation.

1. A 2021 Model Y is not on HW4.

2. FSD in November 2023 is not FSD 12.5, the current version. Any assessment of FSD on such outdated software is not going to be representative of the current experience.

  • sashank_1509 17 hours ago

    HW4 is a ridiculous requirement; it only exists post-2023, and even then, except for the Model Y, there's no HW4.

    FSD in Nov 2023 is not the latest, but it's not that old. Granted, it's not in the 12 series, which is much better, but that's no reason not to investigate this.

    • lopkeny12ko 17 hours ago

      That is literally the entire point. The whole investigation is moot because both the hardware and software are out of date, and no longer used for any current Model Ys off the production line.

  • ra7 17 hours ago

    The perfect setup. By the time an incident in the “current” software is investigated, it will be outdated. All Tesla has to do is rev a software version and ignore all incidents that occurred prior to it.

    • lopkeny12ko 17 hours ago

      You are welcome to conjure whatever conspiracy theories you like but the reality is FSD 12.5 is exponentially better than previous versions. Don't just take it from me, this is what all Tesla owners are saying too.

Teknomancer 18 hours ago

This is just an opinion. The only way forward with automated and autonomous vehicles is through industry cooperation and standardization. The Tesla approach to the problem is inadequate, lacking means for interoperability and relying on inferior detection mechanisms. Whoever solves these problems by offering interoperability and standards applied across all automakers wins.

Sold my Tesla investments. The company is on an unprofitable downward spiral. The CEO is a total clown. Reinvested, on advice, in Daimler, after Mercedes-Benz and Daimler Trucks North America demonstrated their research and work on creating true autonomous technology and safe global industry standardization.