Making sense of software systems that drive autonomous cars

Internet of Things | November 15, 2018

On May 11, another Tesla Model S rear-ended a fire truck at 60 mph while the driver was looking at her phone.

Although the driver sustained relatively minor injuries, the incident reopens the discussion on the system architecture of autonomous cars. Tesla's previous crash happened at night, but the May 11 crash occurred in broad daylight, conditions in which the ensemble of camera systems should have corrected for any potential malfunction in the rest of the sensor suite.

However, when we peeled the onion and looked at the data, the collective Tesla fleet is estimated to have driven over 200 million autonomous miles (as of 2016), with less than a handful of crashes and only a few fatal ones. That accident rate is far lower than that of human drivers, and hence a key value proposition of autonomous driving technology.
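To put that claim in perspective, here is a back-of-the-envelope comparison. The mileage and crash counts are the rough figures quoted above, and the US baseline of roughly 1.18 fatalities per 100 million vehicle miles is the approximate 2016 NHTSA average; treat all of it as illustrative rather than a precise Kognetics estimate.

```python
# Back-of-the-envelope comparison of fatality rates per mile.
# All figures below are illustrative assumptions, not measured data.

autopilot_miles = 200e6          # assumed Autopilot-engaged miles (circa 2016)
autopilot_fatalities = 1         # assumed fatal crashes over those miles

us_fatalities_per_100m_miles = 1.18   # approximate 2016 US average (NHTSA)

autopilot_rate = autopilot_fatalities / (autopilot_miles / 100e6)

print(f"Autopilot (assumed): {autopilot_rate:.2f} fatalities per 100M miles")
print(f"US average:          {us_fatalities_per_100m_miles:.2f} fatalities per 100M miles")
```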

Furthermore, disengagements, i.e. the number of times the driver had to take over control from the self-driving system, appear to be decreasing over time, suggesting that these systems are rapidly improving their self-driving capabilities.
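Disengagement performance is commonly reported as miles driven per disengagement, so an improving system shows a rising number. A minimal sketch of how the metric is computed, using made-up report figures:

```python
# Miles per disengagement from hypothetical yearly test reports.
# A rising number means the car needed human takeover less often.
reports = {
    2015: {"miles": 424_000, "disengagements": 341},   # illustrative figures
    2016: {"miles": 636_000, "disengagements": 124},
    2017: {"miles": 352_000, "disengagements": 63},
}

for year, r in sorted(reports.items()):
    print(f"{year}: {r['miles'] / r['disengagements']:,.0f} miles per disengagement")
```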

In addition to the sensors that we discussed in our earlier post, the enabling software systems used in a semi- or fully autonomous vehicle also play a big role in enhancing system accuracy, reducing the chances of crashes, and decreasing the need for the driver to take over from the self-driving system. In this second post we discuss the key software enablers deployed in autonomous cars.

While the sensing hardware ensures accurate sensing of the vehicle's surroundings, it is the job of the self-driving software to make accurate decisions and perform the heavy lifting for a range of supplementary functions. These may include, but are not limited to, object detection, lane-keep assistance, collision detection and mitigation, multi-actor route planning and lane-change steering. These software systems rely on a combination of real-time and pre-collected spatial data to drive their decision making. Furthermore, they need to keep updating and communicating with their home servers, as many of these cars operate as swarms, learning from each other's mistakes.
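As a simplified illustration of how live perception output and a pre-collected map prior feed a single driving decision, here is a toy planning step. The data structures, thresholds and module boundaries are assumptions for the sketch, not any vendor's actual architecture.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:
    distance_m: float          # distance ahead of the ego vehicle
    closing_speed_mps: float   # positive if the gap is shrinking

@dataclass
class MapPrior:
    speed_limit_mps: float
    lane_change_allowed: bool  # e.g. false along a divided stretch of road

def plan_step(obstacles: List[Obstacle], prior: MapPrior, ego_speed_mps: float) -> str:
    """Toy decision step: collision mitigation first, then map-informed behaviour."""
    for ob in obstacles:
        # Simple time-to-collision check for collision detection and mitigation.
        if ob.closing_speed_mps > 0 and ob.distance_m / ob.closing_speed_mps < 2.0:
            return "EMERGENCY_BRAKE"
    if ego_speed_mps > prior.speed_limit_mps:
        return "REDUCE_SPEED"
    return "KEEP_LANE"

# One driving cycle: live (perceived) obstacles plus a pre-collected map prior.
decision = plan_step(
    obstacles=[Obstacle(distance_m=40.0, closing_speed_mps=5.0)],
    prior=MapPrior(speed_limit_mps=27.0, lane_change_allowed=False),
    ego_speed_mps=26.0,
)
print(decision)   # KEEP_LANE (40 m / 5 m/s = 8 s to collision, above the threshold)
```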

Kognetics segments the Connected Car software market into three components: Autonomous Driving Solutions, High Resolution Maps and Real-Time High Resolution Maps.

Below we discuss each of these components and the changes and improvements happening in the space. We also look at some of the notable companies, acquisitions and technology iterations, and at what the investment landscape looks like for each of these technologies.

1. Autonomous Driving Solutions comprise the core software: image processing, machine learning models and any other software that enables autonomous driving. These solutions primarily build on machine learning methods, both supervised and unsupervised. A big part of building these models involves real-world driving data, vehicle environment data and, in some cases, virtual driving data. Companies like Tesla leverage their large on-road fleets to actively train their self-driving algorithms using what is known as "Shadow Mode", mimicking what the driver does in real-world driving situations and feeding that as an input into their machine learning models (a sketch of this idea follows below). Furthermore, Tesla's Autopilot also behaves as a swarm: a given car can learn situational responses based on what a collection of cars did in similar situations and road locations. Waymo/Google, on the other hand, relies on the algorithm to figure out the safest driving behaviour, relying less on driver input, as its cars are designed from the ground up to be self-driving and in many cases have no driver controls at all. To make such systems increasingly robust, Waymo also lets its algorithms spend a large amount of time driving in virtual worlds, an approach that massively reduces real-world crashes while providing the system with exponentially more training data. The approaches to these autonomous driving solutions are thus extremely varied, depending on a company's capabilities, end customers and available computing power. The only common denominator is the race to increase the number of self-driven miles without accidents. The leading companies in the space are a combination of automotive manufacturers, component suppliers and standalone autonomous software developers/internet giants, including Tesla, GM/Cruise, Google/Waymo, Pilot Automotive Labs, Comma.ai, Baidu, Nutonomy, Aimotive, Ottomatika (now Aptiv), Drive.AI and Oxbotica. Consolidation of promising vendors is high in the space, with many high-profile acquisitions and JVs in the last five years, including Delphi's acquisition of Nutonomy, General Motors acquiring Cruise Automation and Ford acquiring SAIPS. The rapid investments and acquisitions clearly suggest that whoever cracks the perfect self-driving algorithm will leapfrog the industry growth curve, even though the technology already has a statistically much lower accident rate than human drivers. We at Kognetics believe that the company that masters the ability to handle "untrained-for incidents" while statistically increasing the miles driven without human intervention will be the winner in this race.
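To make the "Shadow Mode" idea concrete, here is a minimal sketch of passive fleet learning: the model's proposed action is compared with what the human driver actually did, and the moments of disagreement are kept as candidate training data. The data structure and threshold are hypothetical, for illustration only.

```python
from dataclasses import dataclass

@dataclass
class DrivingSample:
    sensor_snapshot: dict   # camera / radar features at this instant
    driver_steering: float  # what the human actually did (radians)
    model_steering: float   # what the model would have done (radians)

def shadow_mode_filter(samples, disagreement_threshold=0.1):
    """Keep only the moments where the model disagrees with the human driver.

    These disagreements are the most informative examples to upload and
    fold back into the next training run of the fleet-wide model.
    """
    return [
        s for s in samples
        if abs(s.model_steering - s.driver_steering) > disagreement_threshold
    ]

samples = [
    DrivingSample({"frame": 101}, driver_steering=0.02, model_steering=0.03),
    DrivingSample({"frame": 102}, driver_steering=-0.20, model_steering=0.15),  # disagreement
]
print(len(shadow_mode_filter(samples)))   # 1 -> upload this moment for retraining
```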

2. High Resolution Maps are one of the core enablers for self-driving vehicles. These solutions map, in three dimensions and to centimetre-level accuracy, all the objects on the road and roadside that a vehicle making driving decisions needs to be aware of. They are used in combination with on-board sensors for a much better spatial understanding of the surroundings. The technology saw initial use by Google/Waymo but was rapidly disseminated and taken up by other vendors. It acts as an assistive data input to the self-driving algorithms and leverages the largely static nature of transport infrastructure. For example, it is useful for a car to know that a certain stretch of road has lane dividers along a certain section, even if they are too small to be picked up by on-board sensors. This additional information drives the critical decision not to switch lanes in such conditions, even though the on-board sensors may detect a flat stretch of road with lane markers. Given the massive technology infrastructure required to build such centimetre-accurate views of the world, there are relatively few companies in the space; Kognetics lists ~15 such companies. Some of the leading companies include TomTom, Sanborn, Here, DeepMap and Navmii. The industry maturity stands at "Emerging – Early Adoption", given that the technology has just begun to see uptake by companies trying to build self-driving vehicles. Average funding stands at $4 million with "High" funding momentum. No major acquisitions have been announced in the space, and most manufacturers rely on subscriptions or API access from these vendors, similar to how Google Maps provides mapping services as an API for the Android ecosystem.
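The lane-divider example can be made concrete with a small sketch of how a map prior can veto a manoeuvre even when the on-board sensors see a clear lane. The map schema and lookup below are hypothetical, not any vendor's actual format.

```python
# Hypothetical HD-map lookup keyed by (road_id, 10 m segment index).
# Each entry records whether a physical lane divider is present.
HD_MAP = {
    ("A7", 412): {"lane_divider": True,  "lanes": 3},
    ("A7", 413): {"lane_divider": False, "lanes": 3},
}

def lane_change_permitted(road_id: str, segment: int, sensors_see_clear_lane: bool) -> bool:
    """Combine the on-board sensor view with the centimetre-accurate map prior."""
    prior = HD_MAP.get((road_id, segment), {"lane_divider": False})
    # The map prior wins: a divider too small for the sensors still blocks the manoeuvre.
    return sensors_see_clear_lane and not prior["lane_divider"]

print(lane_change_permitted("A7", 412, sensors_see_clear_lane=True))   # False (divider in map)
print(lane_change_permitted("A7", 413, sensors_see_clear_lane=True))   # True
```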

3. Real-Time High Resolution Maps are solutions that use a combination of 3D mapping with low-cost hardware and crowdsourced data sharing to develop and update high-resolution 3D maps in near or actual real time. An iterative improvement on High Resolution Maps, they essentially crowdsource the entire process of developing high-resolution maps of the driving infrastructure, while also accounting for the minor and major changes that keep happening on roads, such as new roadblocks, potholes and dividers. As High Resolution Maps are still not the go-to solution for many autonomous vehicle manufacturers, Real-Time High Resolution Maps are yet to gain traction in the industry. Hence, funding momentum for the segment is "Low"; however, it is expected to pick up in the future given the wide range of use cases the technology enables beyond autonomous driving. Companies like Mapper and Carmera have been developing aftermarket solutions that fit on existing personal and commercial fleet vehicles to build near real-time three-dimensional maps. While primarily developed for the autonomous driving market, these vendors also sell their solutions into adjacent markets such as city planning and architectural design. The primary selling point of the technology remains its extremely low-cost sensing architecture, which couples regular stereoscopic vision devices mounted on vehicles with image processing and real-time map generation in the cloud, significantly reducing the deployment cost per installation.
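A simplified sketch of the crowdsourcing loop these vendors rely on: many vehicles report candidate road changes, and the cloud service only commits an update to the shared map once enough independent vehicles have confirmed it. The aggregation rule and threshold are assumptions for illustration.

```python
from collections import defaultdict

# Change reports: (road segment, change type) -> set of reporting vehicle ids.
_reports = defaultdict(set)

CONFIRMATIONS_REQUIRED = 3   # assumed threshold before the shared map is updated

def report_change(segment: str, change: str, vehicle_id: str) -> bool:
    """Register one vehicle's observation; return True once the change is confirmed."""
    _reports[(segment, change)].add(vehicle_id)
    return len(_reports[(segment, change)]) >= CONFIRMATIONS_REQUIRED

# Three different vehicles spot the same new pothole -> the map update is triggered.
for vid in ("car-001", "car-017", "car-042"):
    confirmed = report_change("elm-street-120m", "pothole", vid)
print(confirmed)   # True
```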

Here are the companies in the three industry segments covered by Kognetics, sized and coloured on the basis of their funding.

Kognetics is an Intelligence Augmentation (IA) platform for enterprise decision making. Our platform and solutions are designed to dramatically transform M&A and investment decision making. If you would like us to showcase the platform, or would like to know more about Kognetics, please reach out to us here.
