General Motors’ Cruise says it’s suspending its driverless operations nationwide as the robotaxi service works to rebuild public trust.

  • LibertyLizard@slrpnk.net · 8 months ago

    Cruise has also previously maintained that its record of driverless miles has outperformed comparable human drivers in terms of safety, notably crash rates.

    But is this actually true? I hate that they just printed this without any attempt to verify it. Surely some independent body has looked into this by now.

    • Dr. Dabbles@lemmy.world · 8 months ago

      It’s not true, this is not the first time Cruise has been caught lying, and at some point an adult needs to step up and tell them to stop putting people in danger.

      Even Waymo has commented in the past about Cruise playing fast and loose with the definitions of things that needed to be reported.

      • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one · 8 months ago

        Waymo cars seem to operate much more sensibly than Cruise ones from what I’ve watched and read… although IMO that is mainly down to the car calling it quits much sooner and asking for an operator to take control, and driving in a different environment in general.

        Cruise on the other hand seems to just carry on anyway, unless its lidar is blocked 😳

        • Dr. Dabbles@lemmy.world · 8 months ago

          Yeah, I mean, some food for thought here: Waymo started out as a research project, has been doing this since 2009, and is ultra-conservative in its behavior. Even before 2009, the core of the team was recruited from DARPA Grand Challenge participants. And even they have major mishaps.

          Cruise, on the other hand, started out trying to sell retrofit hardware right away. Then tried convincing people they could do city driving right away. Now GM has revenue targets for them, like any adult business would, and they have no hope of ever accomplishing them. So, they’re back to their old tricks, cutting down the number of miles driven for training models, rushing vehicles into service with no monitoring operators in them, deceiving investors and regulators about remote operations.

          One is a slow, methodical money furnace that attempts to solve the larger problem set. The other is a fast moving money furnace that tries to get people to pay them for half measures.

          • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one · 8 months ago

            Damn, Waymo has been around for that long? TIL

            Waymo’s progress is probably a good indicator as to how far along we are with self driving cars IMO. Given that Waymo has their cars pretty thoroughly trained on set routes (well, even us humans need to learn or try various routes before we’re fully confident on them sometimes), Cruise cheaping out on the whole training process is only going to accelerate their demise… especially when it’s at the expense of pedestrians’ safety

            • Dr. Dabbles@lemmy.world · 8 months ago

              If you really want your mind blown: the first (nearly) autonomous coast-to-coast drive in the US happened back in 1995, in a vehicle from Carnegie Mellon University's Navlab program. The Navlab vehicles used lidar, cameras, radar, and ultrasonics. Literally the same stuff we're using today.

    • MotoAsh@lemmy.world · 8 months ago

      Pretty sure it IS factually true, but the real question is, "why?" Is it because everyone is wary around a car with a huge-ass camera and sensor system on top and no driver? Or because the system is good?

      • LibertyLizard@slrpnk.net · edited · 8 months ago

        I am sure it is true in at least some sense, because they would be called out on an outright lie, but there are many ways to deceive with true numbers. And I don't trust them to be fully honest.

        But even if it is accurate, I'd like to see an independent analysis rather than the company's spin on it.
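        A minimal sketch of the exposure-adjusted comparison an independent analysis would run (all crash counts and mileages below are invented for illustration, not Cruise's or NHTSA's figures):

```python
# Illustrative only: made-up crash counts and mileages for a small AV fleet
# versus a large human-driven baseline, compared per million miles.
import math

def crash_rate_ci(crashes: int, miles: float, z: float = 1.96):
    """Crashes per million miles with an approximate 95% CI.

    Uses the normal approximation to the Poisson: the standard error
    of a count of `crashes` events is sqrt(crashes).
    """
    rate = crashes / miles * 1_000_000
    half_width = z * math.sqrt(crashes) / miles * 1_000_000
    return rate, max(rate - half_width, 0.0), rate + half_width

# Hypothetical exposure data (NOT real figures)
av = crash_rate_ci(crashes=18, miles=5_000_000)
human = crash_rate_ci(crashes=4_100, miles=1_000_000_000)

print(f"AV:    {av[0]:.2f} per M miles (95% CI {av[1]:.2f}-{av[2]:.2f})")
print(f"Human: {human[0]:.2f} per M miles (95% CI {human[1]:.2f}-{human[2]:.2f})")
# With these invented numbers the AV point estimate is lower, but its
# confidence interval is wide enough to overlap the human baseline.
```

        The point: with few miles driven, a lower headline crash rate can still be statistically inconclusive, and that is before accounting for the fact that geofenced low-speed urban miles are not comparable to the average human-driven mile. That is exactly the nuance a company press release can bury.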

        • MotoAsh@lemmy.world · edited · 8 months ago

          Oh definitely. It could have nothing to do with the quality of the cars themselves if their driving record comes from everyone else avoiding them on the road.

        • marietta_man@yall.theatl.social · edited · 8 months ago

          From their privacy policy:

          No Sale. We do not sell your personal data for money.

          Our AVs are able to operate safely because, like human drivers, they are able to assess the driving environment with their senses - or rather, sensors - that continuously gather data about things happening all around the vehicle. We treat this sensor data with care, restricting access to and disclosure of this data to the purposes of providing and improving Cruise Services and user experiences, or for security, safety, development, research, and legal reasons.

          https://getcruise.com/legal/us/privacy-policy/

    • Nougat@kbin.social · 8 months ago

      Driverless cars are certainly less error-prone overall than human operated ones. Distraction, sleepiness, intoxication, hubris, and other common “human error” causes of accidents are eliminated. Now we’re seeing, though, that human beings - even pretty average ones - are still able to make better judgments in unique situations.

      Because the recent incidents have been so laughably stupid from a human perspective, the instinct is to doubt the accuracy of driverless cars in all situations. The robots are able to do the comparatively simple things extremely well. It’s just the more complex things they still have trouble with - so far. They’re still safer than human operators, and will only continue to get better.

      • snooggums@kbin.social · 8 months ago

        Humans make the same mistakes though. Backup cameras were added to cars because humans kept running over people, especially kids. People block emergency vehicles all the time.

        Yes, the automation will always have room for improvement, but the current ‘newsworthy’ incidents are rarely in the news when humans do the exact same thing.

        • DoomBot5@lemmy.world · 8 months ago

          I would be pretty confused as well if someone ran up to my car and stuck a traffic cone on it.

    • Hypx@kbin.social (OP) · 8 months ago

      I suspect there is something more to this than just that. After all, the car in question did this:

      Earlier this month, a Cruise robotaxi notably ran over a pedestrian who had been hit by another vehicle driven by a human. The pedestrian became pinned under a tire of the Cruise vehicle after it came to a stop — and then was pulled for about 20 feet (six meters) as the car attempted to move off the road.

      It seems like there are unsolvable safety problems going on.

      • snooggums@kbin.social · 8 months ago

        Yes, the car does not appear to have safety features that let it know a body is caught underneath, but it did try to get out of traffic after the collision.

        Since this never happens to human drivers that means autonomous cars are unfeasible.

        Or it is an opportunity to add some additional sensors underneath that will make it miles better than human drivers.

        Really the main problem with autonomous cars at this point is a combination of the companies hiding issues and the public expecting perfection. More transparency and third-party comparisons to human drivers would be the best way to both improve the automation and build public trust, once people actually see how bad human drivers can be.

        • MotoAsh@lemmy.world · 8 months ago

          Also, charge corporations for beta-testing on the fucking public… they're using taxpayer-funded roads and putting our lives at risk for their profits. They should share those profits far, far more than they do.

      • DeathsEmbrace@lemmy.ml · edited · 8 months ago

        There is no logic coded in for this situation, and I swear we'll see this kind of problem recur in almost every self-driving car, for every scenario that just hasn't been accounted for.

      • dan1101@lemm.ee · 8 months ago

        You would think a self driving car could have 360 degrees of vision and not run into things, whether it’s a firetruck or a cardboard box or a person. That should be job 1 for self driving.

    • APassenger@lemmy.world · 8 months ago

      It would be nice if we had a Ralph Nader for AI driving.

      He did a lot for safety decades ago. Feels like we need similar now.

  • odbol@lemmy.world · 8 months ago

    Ironically, the accident that caused them to decide Cruise is unsafe started with a hit-and-run by a human driver. So humans are still less safe, but they're punishing the robots for it.

  • AutoTL;DR@lemmings.world (bot) · 8 months ago

    This is the best summary I could come up with:


    (tldr: 14 sentences skipped)

    In a Tuesday statement, Cruise said it is cooperating with regulators investigating the Oct. 2 accident, and that its engineers are working on a way for its robotaxis to improve their response “to this kind of extremely rare event.”

    (tldr: 1 sentences skipped)

    Bryant Walker Smith, a University of South Carolina law professor who studies automated vehicles, wants to know “who knew what when?” at Cruise, and maybe GM, following the accident.

    (tldr: 3 sentences skipped)

    In December of last year, the NHTSA opened a separate probe into reports of Cruise robotaxis stopping too quickly or unexpectedly quitting moving, potentially stranding passengers.

    (tldr: 1 sentences skipped)

    According to an Oct. 20 letter made public Thursday, since beginning this probe the NHTSA has received five other reports of Cruise AVs unexpectedly braking with no obstacles ahead.

    (tldr: 3 sentences skipped)

    Cruise has also previously maintained that its record of driverless miles has outperformed comparable human drivers in terms of safety, notably crash rates.

    (tldr: 1 sentences skipped)

    Walker Smith notes that there are several possibilities — including distinguishing Cruise’s prospects from its competitors, particularly those who haven’t expanded as aggressively, or a “Tesla scenario” where initial outrage may not amount to prompt, significant changes.

    (tldr: 7 sentences skipped)


    The original article contains 885 words, the summary contains 213 words. Saved 76%. I’m a bot and I’m open source!