• ShadowRam@kbin.social
    1 year ago

    challenges in self driving are not with data acquisition.

    What?!?! Of course it is.

    We can already run all this shit through a simulator and it works great, but that’s because the computer knows the exact position, orientation, and velocity of every object in the scene.

    In the real world, the underlying problem is that the computer doesn’t know what’s around it, or what those things around it are doing or going to do.

    It’s 100% a data acquisition problem.

    Source? I do autonomous vehicle control for a living, in environments far more complicated than a paved road with an accepted set of rules.

    • Eager Eagle@lemmy.world
      1 year ago

      You’re confusing data acquisition with interpretation. A LIDAR won’t label the data for your AD system and won’t add much to an existing array of visible spectrum cameras.

      You say the underlying problem is that the computer doesn’t know what’s around it. But its surroundings are reliably captured by functional sensors. Therefore it’s not a matter of acquisition, but processing of the data.

      • ShadowRam@kbin.social
        1 year ago

        won’t add much to an existing array of visible spectrum cameras.

        You do realize LIDAR is essentially a camera that also gives you an accurate distance per pixel, right?

        It absolutely adds everything.
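        A rough sketch of that point (hypothetical geometry, illustrative only): each LIDAR return is a “pixel” with a known beam direction plus a directly measured range, so it converts straight into a 3D point, with no disparity matching or depth inference needed.

```python
import math

# Illustrative sketch with assumed geometry: a LIDAR return behaves like a
# pixel whose beam direction (azimuth/elevation) is known and whose range
# is measured directly, so it converts straight into a 3D point.

def lidar_pixel_to_point(range_m, azimuth_deg, elevation_deg):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return (x, y, z)

# A return 20 m straight ahead at sensor height:
print(lidar_pixel_to_point(20.0, 0.0, 0.0))
```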

        But its surroundings are reliably captured by functional sensors

        No it’s not. That’s the point. LIDAR is the functional sensor required.

        You can not rely on stereoscopic cameras.
        The depth resolution is not there.
        It’s not there for humans.
        It’s not there for the simple reason of physics.

        Unless you spread those cameras out to a width that’s impractical, and even then it STILL wouldn’t be as accurate as LIDAR.
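        The physics argument can be sketched with the standard stereo depth-error model (all numbers below are illustrative assumptions, not measurements): error grows with the square of distance, so any practical baseline degrades fast.

```python
# Standard stereo depth-error model, with assumed illustrative parameters:
#   z = f * b / d   =>   dz ~ z**2 * dd / (f * b)
# i.e. depth error grows with the square of distance.

def stereo_depth_error(z_m, baseline_m=0.3, focal_px=1000.0, disparity_err_px=0.25):
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

for z in (10, 30, 60):
    print(f"at {z:2d} m: stereo depth error ~ {stereo_depth_error(z):.2f} m")
```

        With these assumed numbers the error at 60 m is already on the order of meters, while a LIDAR range measurement stays roughly constant in accuracy over the same span.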

        You are more than welcome to try it yourself.
        You can even be as stupid as Elon and dump money and reputation into thinking it’s easier or cheaper without LIDAR.

        It doesn’t work, and it’ll never work as well as a LIDAR system.
        Stereoscopic cameras will always be more expensive than LIDAR from a computational standpoint.

        AI will do a hell of a lot better recognizing things via a LIDAR camera than a stereoscopic camera.

        • Eager Eagle@lemmy.world
          1 year ago

          This assumes depth information is required for self driving, I think this is where we disagree. Tesla is able to reconstruct its surroundings from visual data only. In biology, most animals don’t have explicit depth information and are still able to navigate in their environments. Requiring LIDAR is a crutch.

          • Geek_King@lemmy.world
            1 year ago

            I disagree with you; I don’t think visual cameras alone are up to the task. There was an instance of a Tesla in Autopilot mode driving at night with a drunk driver, on a highway in Texas. The car’s camera footage was released, and it showed Autopilot failing to identify the police car in its lane, red/blue lights flashing, as a stationary obstacle. Instead, it only realized there was a car in the way about 1 second before the 55 mph impact, and it turned off Autopilot in that final second.

            Having multiple layers of sensors, some good at actually sensing a stationary obstacle, plus accurate range finding, plus visual analysis to pick out people and animals: that’s the way to go.

            Visual-spectrum-only cameras were also just reported to have a harder time recognizing people of color and children.

            • Eager Eagle@lemmy.world
              1 year ago

              the car’s camera footage was released and it showed Autopilot failing to identify the police car in its lane, red/blue lights flashing

              If the obstacle was visible in the footage, the incident could have been avoided with visible spectrum cameras alone. Once again, a problem with the data processing, not acquisition.

              • Geek_King@lemmy.world
                1 year ago

                If we’re talking about the safety of the driver and people around them, why not both types of sensors? LIDAR has things it excels at, and visual spectrum cameras have things they do well too. That way the data processing side has more things to rely on, instead of all the eggs in one basket.

                • Eager Eagle@lemmy.world
                  1 year ago

                  why not both types of sensors

                  Cost seems to be a pretty good reason. Admittedly, until I looked it up 5 minutes ago I thought it was just 100-200% more expensive than cameras, but it seems to be much more than that.

                  On top of that, there are the problems of weather and high energy usage. This is more of a problem than just “not working in rain”: if the autonomous driving system is designed to rely on data from a sensor that stops working when it rains, that can be worse than not having the sensor in the first place. This is what I mean when I say LIDAR is a crutch.

                  • Geek_King@lemmy.world
                    1 year ago

                    That’s a pretty good point: if it’s raining or snowing and LIDAR can’t be used, that could leave the system in a much worse spot. It’s getting to the point where I’m beginning to think fully self-driving cars just won’t be possible in all conditions in all locations.

                    For instance, where I live, we can have some bad winters: snow, ice, slippery conditions. People have a tough time with these conditions, and I’d imagine it’d be even harder for a self-driving car, especially given how the sensor suites work. My car has that intelligent cruise control where it’ll slow down when it senses a car ahead of me, then match its speed. That feature stops working if too much snow accumulates on the sensors.

                  • degrix@lemmy.hqueue.dev
                    1 year ago

                    Optical cameras alone have issues as well that can’t be handled, though. It’s the combination of the two, along with other things like ultrasonic sensors, that makes them safe. More sensors in general are better because they reduce the computational burden and provide redundancy, even if that redundancy is only to safely stop.
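                    A minimal sketch of that redundancy idea (sensor names and the braking threshold are invented for illustration): fuse per-sensor obstacle distances, trust the most pessimistic healthy sensor, and fall back to a safe stop when too few sensors are working.

```python
# Hypothetical sketch of the redundancy argument; sensor names and the
# 30 m braking threshold are invented for illustration.

def plan_action(readings, min_healthy=2):
    """readings maps sensor name -> distance to nearest obstacle (m),
    or None if that sensor has failed."""
    healthy = {k: v for k, v in readings.items() if v is not None}
    if len(healthy) < min_healthy:
        return "safe_stop"           # redundancy exhausted: stop safely
    nearest = min(healthy.values())  # trust the most pessimistic sensor
    return "brake" if nearest < 30.0 else "cruise"

# Camera sees the obstacle far away, LIDAR sees it close, ultrasonic failed:
print(plan_action({"camera": 80.0, "lidar": 25.0, "ultrasonic": None}))  # brake
```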

                    Cost is certainly an issue, but on $40k+ vehicles it’s cheap enough for other EV makers to include it in the price. Volvo, for instance, is using Luminar’s version at a cost of about $500 (https://www.wired.com/story/sleeker-lidar-moves-volvo-closer-selling-self-driving-car/).

                    Image processing is expensive even with dedicated hardware, and LiDAR provides enough extra information to avoid needing to make certain calculations from images alone (like deltas between an image series to calculate distance). Those calculations get even harder in conditions where images alone don’t provide enough information, similar to how there are conditions where the LiDAR data alone wouldn’t be sufficient.