Self-driving car startup Zoox is raising $500 million at a $3.2 billion valuation

Zoox, a once-secretive self-driving car startup, is closing a $500 million round at a $3.2 billion post-money valuation, Bloomberg Businessweek reports. Prior to the deal, Zoox was valued at $2.7 billion, the company confirmed to TechCrunch. The round, led by Mike Cannon-Brookes of Grok Ventures, brings its total funding to $800 million.

Zoox’s plan, according to Bloomberg, is to publicly deploy autonomous vehicles by 2020 in the form of its own ride-hailing service. The cars themselves will be all-electric and fully autonomous. Meanwhile, ride-hailing companies like Uber and Lyft, along with a number of other large players in the space, are also working on autonomous vehicles.

Zoox, which turned four years old this month, is a 500-person company founded by Tim Kentley-Klay and Jesse Levinson. In the meantime, head over to Bloomberg for the full rundown.

Audi taps Israeli startup Cognata to accelerate AV ambitions

Audi is turning to Israeli startup Cognata to help the automaker validate its autonomous vehicles in the virtual world before they head out on the road for testing.

Autonomous Intelligent Driving, Audi’s self-driving unit led by a team of former Microsoft, Tesla and internal Audi veterans, says it will use Cognata’s autonomous vehicle simulation platform to test and develop its technology.

AID says the multi-year partnership will help it bring its self-driving vehicles to market faster. The partnership illustrates the demand for advanced simulation technology as companies race to safely develop and deploy autonomous vehicles.

“At AID, we are convinced that simulation is a key tool to increase our development speed and a necessary one for the validation of our product and for proving it is safe,” said AID CTO Alex Haag, who had a brief stint at secretive self-driving startup Zoox and served as a senior manager on Tesla’s semi-autonomous Autopilot team.

The deal also highlights the growing ecosystem of Israeli startups, many of which developed technology initially designed for military use, such as drones and other defense applications, only to find a hungry customer base within the autonomous vehicle industry.

Cognata, which raised $5 million last year from Airbus Ventures, Emerge and Maniv Mobility, recreates cities in its 3D simulation platform to give customers a variety of testing scenarios. The platform pulls in layers of data to help build these virtual environments. It starts with recreating real cities, then adds AI-based traffic models to simulate real-world conditions, as well as data from the vehicle’s sensors.

Automakers are burning through billions in EV, AV race

The race to develop autonomous and electric vehicles could be a race to the bottom for the automotive industry — at least in the near-term.

A new global study by consulting firm AlixPartners paints an ominous forecast for automakers in the next few years, a toxic cocktail of big spending and lots of competition mixed with softening sales in some markets and an unwillingness of consumers to fork over money for the tech. AlixPartners’ Global Automotive Outlook is based on an analysis of data from public and proprietary sources and two online consumer surveys of Americans age 18 and older possessing driver’s licenses.

Last year, automakers spent $226 billion — a 47 percent increase from 2012 — on electrification and autonomous vehicle technology. And AlixPartners predicts companies will spend $255 billion globally on electric vehicle R&D and capital expenditures by 2023. Some 207 electric models are set to hit the market by 2022.

That’s good news for prospective EV consumers. But AlixPartners predicts many of those impending EVs will be unprofitable due to currently high systems costs, low volumes and intense competition. For instance, automakers will likely offer high incentives to buy EVs, which would depress used-vehicle residual values and allow the spiral of lower new-vehicle sales to continue.

The situation is just as tenuous on the AV front. Some 55 percent of the 175 merger and acquisition deals in the past two years have been related to electric and autonomous vehicles, with another pileup in AVs coming.

“A pile-up of epic proportions awaits this industry as hundreds of players are spending hundreds of billions of dollars on electric and autonomous technologies as they rush to stake a claim on the biggest change to hit this industry in a hundred years,” said John Hoffecker, global vice chairman at AlixPartners, adding that “billions will be lost by many.”

An additional $61 billion has been earmarked for autonomous-vehicle technologies, according to the study. However, a separate survey conducted by AlixPartners found consumers are only willing to pay $2,300 for autonomy, compared with the industry costs of around $22,900 to provide that technology in a vehicle. That’s a 10-fold misalignment on cost.

Of course, there is evidence that some consumers, such as Tesla fans, are willing to pay more than $2,300 for a system that falls far short of fully autonomous. An enhanced version of Tesla’s Autopilot feature, which promises to match traffic speeds, keep within a lane, change lanes and exit the freeway, costs $5,000. For another $3,000, Tesla sells a “full self-driving” package promising even more automation sometime in the future. This feature doesn’t yet exist for public use, although customers can — and do — purchase it in advance.

Meanwhile, the AlixPartners study predicts weakening vehicle sales. The study forecasts that the global auto market will grow at an annual rate of 2.4 percent through 2025, lagging expected worldwide GDP growth of 3.3 percent. The U.S. market will continue its cyclical downturn this year, absorbing 16.8 million units, down from 17.2 million in 2017, the study predicts. U.S. sales will hit a trough of around 15.1 million in 2020, in part because autonomous ride-hailing fleets will cannibalize traditional sales.

Amid the chaos and lost billions, there are some rosier outlooks in the AlixPartners study. Full battery-electric vehicles will reach about 20 percent of the U.S. market, about 30 percent of the European market and about 35 percent of the Chinese market by 2030, the study predicts. A separate AlixPartners consumer survey found 22.5 percent of Americans are “likely” to purchase a plug-in electric vehicle as their next car. And autonomous vehicles will account for 3 million sales in the U.S. by 2030.

The well-funded startups driven to own the autonomous vehicle stack

At some point in the future, while riding along in a car, a kid may ask their parent about a distant time in the past when people used steering wheels and pedals to control an automobile. Of course, the full realization of the “auto” part of the word — in the form of fully autonomous automobiles — is a long way off, but there are nonetheless companies trying to build that future today.

However, changing the face of transportation is a costly business, one that typically requires corporate backing or a lot of venture funding to realize such an ambitious goal. A recent deal, some $128 million raised in a Series A by Shenzhen-based Roadstar.ai, got us at Crunchbase News asking a question: Just how many independent, well-funded autonomous vehicle startups are out there?

In short, not as many as you’d think. To investigate further, we took a look at the set of independent companies in Crunchbase’s “autonomous vehicle” category that have raised $50 million or more in venture funding. After a little bit of hand filtering, we found that the companies mostly shook out into two broad categories: those working on sensor technologies, which are integral to any self-driving system, and more “full-stack” hardware and software companies, which incorporate sensors, machine-learned software models and control mechanics into more integrated autonomous systems.

Full-stack self-driving vehicle companies

Let’s start with the full-stack companies. The table below shows the set of independent full-stack autonomous vehicle companies operating in the market today, as well as their focus areas, headquarters locations and the total amount of venture funding raised:

Note the breakdown in focus area among the companies listed above. In general, some are focused on building more generalized technology platforms — perhaps to sell or license to major automakers in the future — whereas others intend not just to own the autonomous car technology, but to deploy it in a fleet of on-demand taxi and other transportation services.

Making the eyes and ears of autonomous vehicles

On the sensor side, there is also a trend, one that’s decidedly concentrated on a single area of focus, as you’ll be able to discern from the table below:

Some of the most well-funded startups in the sensing field are developing light detection and ranging (LiDAR) technologies, which basically serve as the depth-perceiving “eyes” of autonomous vehicle systems. CYNGN integrates a number of different sensors, LiDAR included, into its hardware arrays and software tools, which is one heck of a pivot for the mobile phone OS-maker formerly known as Cyanogen.

But there are other problem spaces for these sensor companies, including Nauto’s smart dashcam, which gathers location data and detects distracted driving, and Autotalks’ DSRC technology for vehicle-to-vehicle communication. (Back in April, Crunchbase News covered the $5 million Series A round closed by Comma, which released an open-source dashcam app.)

And unlike some of the full-stack providers mentioned earlier, many of these sensor companies have established vendor relationships with the automotive industry. Quanergy Systems, for example, counts components giant Delphi, luxury carmakers Jaguar and Mercedes-Benz and automakers like Hyundai and Renault-Nissan as partners and investors. Innoviz supplies its solid-state LiDAR technology to the BMW Group, according to its website.

Although radar and even LiDAR are old hat by now, there continues to be innovation in sensors. According to a profile of Oryx Vision’s technology in IEEE Spectrum, its “coherent optical radar” system is kind of like a hybrid of radar and LiDAR technology in that “it uses a laser to illuminate the road ahead [with infrared light], but like a radar it treats the reflected signal as a wave rather than a particle.” Its technology is able to deliver higher-resolution sensing over a longer distance than traditional radar or newer LiDAR technologies.

Can startups stack up against big corporate competitors?

There are plenty of autonomous vehicle initiatives backed by deep corporate pockets. There’s Waymo, a subsidiary of Alphabet, which is subsidized by the huge amount of search profit flung off by Google. Uber has an autonomous vehicles initiative too, although it has encountered a whole host of legal and safety issues, including holding the unfortunate distinction of being the first to kill a pedestrian earlier this year.

Tesla, too, has invested considerable resources into developing assistive technologies for its vehicles, but it has encountered roadblocks of its own: the head of Autopilot (its in-house autonomy solution) left in April, and the company is dealing with a rash of safety concerns. And although Apple’s self-driving car program has been less publicized than others, it continues to roll on in the background. Chinese companies like Baidu and Didi Chuxing have also launched full-stack R&D facilities in Silicon Valley.

Traditional automakers have also jumped into the fray. Back in 2016, for the price of a cool $1 billion, General Motors folded Cruise Automation into its R&D efforts in a widely publicized buyout. And, not to be left behind, Ford acquired a majority stake in Argo AI, also for $1 billion.

That leaves us with a question: Do even the well-funded startups mentioned earlier stand a chance of either usurping market dominance from corporate incumbents or at least joining their ranks? Perhaps.

The reason so much investor cash is going to these companies is that the market opportunity presented by autonomous vehicle technology is almost comically enormous. It’s not just a matter of the car market itself — projected to be over 80 million car sales globally in 2018 alone — but of how we’ll spend all the time and mental bandwidth freed up by letting computers take the wheel. It’s no wonder that so many companies, and their backers, want even a tiny piece of that pie.

The AI in your non-autonomous car

Sorry. Your next car probably won’t be autonomous. But it will still have artificial intelligence (AI).

While most of the attention has been on advanced driver assistance systems (ADAS) and autonomous driving, AI will penetrate far deeper into the car. These overlooked areas offer fertile ground for incumbents and startups alike. Where are those opportunities, and which are open to startups?

Inside the cabin

Inward-facing AI cameras can be used to prevent accidents before they occur. They are already widely deployed in commercial vehicles and trucks, monitoring drivers for inebriation, distraction, drowsiness and fatigue, and alerting them when needed. ADAS, inward-facing cameras and coaching have been shown to drastically decrease insurance costs for commercial vehicle fleets.

The same technology is beginning to penetrate personal vehicles to monitor driver behavior for safety purposes. AI-powered cameras also can identify when children and pets are left in the vehicle to prevent heat-related deaths (on average, 37 children die in hot vehicles in the U.S. each year).

Autonomous ridesharing services will need to detect passenger occupancy and seat belt engagement so that an autonomous vehicle can ensure passengers are safely on board before driving off. They’ll also need to identify that items such as purses or cellphones are not left in the vehicle upon departure.

AI also can help reduce crash severity in the event of an accident. Computer vision and sensor fusion will detect whether seat belts are fastened and estimate body size to calibrate airbag deployment. Real-time passenger tracking and calibration of airbags and other safety features will become a critical design consideration for the cabin of the future.

Beyond safety, AI also will improve the user experience. Vehicles as a consumer product have lagged far behind laptops, tablets, TVs and mobile phones. Gesture recognition and natural language processing make perfect sense in the vehicle, and will make it easier for drivers and passengers to adjust driving settings, control the stereo and navigate.

Under the hood

AI also can be used to help diagnose and even predict maintenance events. Currently, vehicle sensors produce a huge amount of data, but only spit out simple codes that a mechanic can use for diagnosis. Machine learning may be able to make sense of widely disparate signals from all the various sensors for predictive maintenance and to prevent mechanical issues. This type of technology will be increasingly valuable for autonomous vehicles, which will not have access to hands-on interaction and interpretation.
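
To make that concrete, here is a minimal, hypothetical sketch of the idea using scikit-learn’s IsolationForest: fit an unsupervised model on healthy telemetry, then flag snapshots that don’t fit the joint distribution of the signals. The sensor names and numbers below are invented for illustration; a real system would work from actual vehicle-bus data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented telemetry: each row is one snapshot of
# [coolant_temp_C, oil_pressure_kPa, vibration_rms].
rng = np.random.default_rng(0)
healthy = rng.normal([90.0, 300.0, 0.5], [3.0, 10.0, 0.05], size=(5000, 3))
failing = rng.normal([104.0, 240.0, 1.2], [3.0, 10.0, 0.10], size=(10, 3))

# Fit on healthy data only, then score a stream that also contains the
# kind of anomalous snapshots a failing component might produce.
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)
flags = model.predict(np.vstack([healthy, failing]))  # +1 healthy, -1 anomalous

print(f"{int((flags == -1).sum())} snapshots flagged for inspection")
```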

AI also can be used to detect software anomalies and cybersecurity attacks. Whether the anomaly is malicious or just buggy code, it may have the same effect. Vehicles will need to identify problems quickly before they can propagate on the network.

Cars as mobile probes

In addition to providing ADAS and self-driving features, AI can be deployed on vision systems (e.g. cameras, radar, lidar) to turn the vehicle into a mobile probe. AI can be used to create high-definition maps that can be used for vehicle localization, identifying road locations and facades of addresses to supplement in-dash navigation systems, monitoring traffic and pedestrian movements and monitoring crime, as well as a variety of new emerging use cases.

Efficient AI will win

Automakers and suppliers are experimenting to see which features are technologically possible and commercially feasible. Many startups are tackling niche problems, and some of these solutions will prove their value. In the longer term, there will be so many possible features (some cataloged here and some yet unknown) that they will compete for space on cost-constrained hardware.

Making a car is not cheap, and consumers are price-sensitive. Hardware tends to be the cost driver, so these piecewise AI solutions will need to be deployed simultaneously on the same hardware. The power requirements will add up quickly, and even contribute significantly to the total energy consumption of the vehicle.

It has been shown that for some computations, algorithmic advances have outpaced Moore’s Law for hardware. Several companies have started building processors designed for AI, but these won’t be cheap. Algorithmic development in AI will go a long way toward enabling the intelligent car of the future. Fast, accurate, low-memory, low-power algorithms, like XNOR.ai*, will be required to “stack” these features on low-cost, automotive-grade hardware.
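
XNOR.ai’s exact methods aren’t detailed here, but the general trick behind the binarized networks it is named for is easy to sketch: quantize weights and activations to ±1, and a dot product collapses into an XNOR followed by a bit count, both cheap operations even on modest hardware. Below is a toy numpy illustration of that equivalence (real systems add per-layer scaling factors to recover accuracy):

```python
import numpy as np

def binarize(x):
    """Map real-valued weights/activations to {-1, +1} by sign."""
    return np.where(x >= 0, 1, -1)

def binary_dot(a_bits, b_bits):
    """Dot product of two {-1, +1} vectors via XNOR + popcount.

    Encode -1 as 0 and +1 as 1. XNOR counts matching positions, and
    dot = matches - mismatches = 2 * matches - n.
    """
    a = (a_bits > 0).astype(np.uint8)
    b = (b_bits > 0).astype(np.uint8)
    matches = int(np.sum(~(a ^ b) & 1))  # XNOR, then count the 1s
    return 2 * matches - a.size

rng = np.random.default_rng(0)
wb = binarize(rng.standard_normal(256))    # binarized weights
xb = binarize(rng.standard_normal(256))    # binarized activations
assert binary_dot(wb, xb) == int(wb @ xb)  # same result, bitwise ops only
```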

Your next car will likely have several embedded AI features, even if it doesn’t drive itself.

* Full disclosure: XNOR.ai is an Autotech Ventures portfolio company.

Waymo van involved in serious collision in Arizona

A Waymo self-driving vehicle was involved in a serious accident in Chandler, Arizona earlier this afternoon. Local police said there were minor injuries from the incident after a sedan swerved into the Waymo van to avoid another collision.

Although Waymo has said it will be testing vehicles without safety drivers in Arizona, this was not one of them. An operator was in the driver’s seat at the time of the crash, though the car was in autonomous mode, police said.

Aerial footage and images posted online by onlookers show that this was no fender-bender. The sedan’s front crumple zone is wrecked and glass is broken; the van is in better shape, though its front right tire is crushed in. Both vehicles have since been towed.

Reportedly the sedan was traveling eastbound and swerved to avoid another car at an intersection, straying into the westbound lanes and hitting the Waymo van. What actions, if any, the latter took to avoid the collision are unknown at this time, though an analysis by the company would of course provide that info. I’ve asked the company for comment and will update if I hear back.

Update: The police provided me with the following statement, which doesn’t add much but confirms the above:

We are currently investigating a minor injury collision involving two vehicles, one of which is a Waymo autonomous vehicle. This afternoon around noon a vehicle (Honda sedan) traveling eastbound on Chandler Blvd. had to swerve to avoid striking a vehicle traveling northbound on Los Feliz Dr. As the Honda swerved, the vehicle continued eastbound into the westbound lanes of Chandler Blvd. & struck the Waymo vehicle, which was traveling at a slow speed and in autonomous mode. There was an occupant in the Waymo vehicle sitting in the driver’s seat, who sustained minor injuries. Both the Waymo vehicle & the Honda were towed from the scene. This incident is still under investigation

Waymo has also posted video of the accident from the van’s point of view:

Waymo reportedly applies to put autonomous cars on California roads with no safety drivers

Waymo has become the second company to apply for the newly available permit to deploy autonomous vehicles without safety drivers on some California roads, the San Francisco Chronicle reports. It would be putting its cars — well, minivans — on streets around Mountain View, where it already has an abundance of data.

The company already has truly driverless cars in play over in Phoenix, as it showed in a few promotional videos last month. So this isn’t the first public demonstration of its confidence.

California only just made it possible to grant permits allowing autonomous vehicles without safety drivers on April 2; one other company has applied for it in addition to Waymo, but it’s unclear which. The new permit type also allows for vehicles lacking any kind of traditional manual controls, but for now the company is sticking with its modified Chrysler Pacificas. Hey, they’re practical.

The recent fatal collision of an Uber self-driving car with a pedestrian, plus another fatality in a Tesla operating in semi-autonomous mode, make this something of an awkward time to introduce vehicles to the road minus safety drivers. Of course, it must be said that both of those cars had people behind the wheel at the time of their crashes.

Assuming the permit is granted, Waymo’s vehicles will be limited to the Mountain View area, which makes sense — the company has been operating there essentially since its genesis as a research project within Google. So there should be no shortage of detail in the data, and local authorities will already know whom to contact at the company for handling any issues like accidents, permit problems and so on.

No details yet on what exactly the cars will be doing, or whether you’ll be able to ride in one. Be patient.

Mobileye chastises Uber by detecting struck pedestrian in footage well before impact

A self-driving vehicle fatally striking a pedestrian is a tasteless venue for self-promotion, but it’s also an important time to discuss the problems that created the situation. Mobileye CEO and CTO Amnon Shashua seems to do a little of both in this post at parent company Intel’s blog, running the company’s computer vision software on Uber’s in-car footage and detecting the person a full second before impact.

It first must be said that this shouldn’t be taken to demonstrate the superiority of Mobileye’s systems or anything like that. This type of grainy footage isn’t what self-driving — or even “advanced driver assistance” — systems are meant to operate on. It’s largely an academic demonstration.

But the application of a competent computer vision system to the footage and its immediate success at detecting both Elaine Herzberg and her bike show how completely the Uber system must have failed.

Even if this Mobileye object detection algorithm had been the only thing running in that situation, it detected Herzberg a second before impact (on highly degraded data at that). If the brakes had been applied immediately, the car might have slowed enough that the impact would not have been fatal; even a 5 mph difference might matter. Remember, the Uber car reportedly didn’t even touch the brakes until afterwards. It’s exactly these types of situations in which we are supposed to be able to rely on the superior sensing and reaction time of an AI.
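
As a rough sanity check on that claim: emergency braking on dry pavement decelerates a car at something like 0.7 g (a ballpark assumption, not a figure from the incident), so even a single second of braking sheds far more than 5 mph:

```python
# Back-of-envelope: speed shed by one second of hard braking.
# 0.7 g is an assumed, typical emergency deceleration on dry pavement;
# the true figure depends on tires, surface and speed.
G = 9.81                 # m/s^2
DECEL = 0.7 * G          # assumed braking deceleration, m/s^2
MPS_TO_MPH = 2.23694

shed = DECEL * 1.0 * MPS_TO_MPH   # braking for 1 second
print(f"~{shed:.0f} mph shed in one second of hard braking")  # ~15 mph
```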

We’re still waiting to hear exactly how the Uber car, equipped with radar, lidar, multiple optical cameras and a safety driver, any one of which should have detected Herzberg, completely failed to do so, or, if it did detect her, failed to take action.

This little exhibition by Mobileye, while it should be taken with a grain of salt, at least gives a hint at what should have been happening inside that car’s brain.

Here’s how Uber’s self-driving cars are supposed to detect pedestrians

A self-driving vehicle made by Uber has struck and killed a pedestrian. It’s the first such incident and will certainly be scrutinized like no other autonomous vehicle interaction in the past. But on the face of it, it’s hard to understand how, short of a total system failure, this could happen, when the entire car has essentially been designed around preventing exactly this situation from occurring.

Something unexpectedly entering the vehicle’s path is pretty much the first emergency event that autonomous car engineers look at. The situation could be many things — a stopped car, a deer, a pedestrian — and the systems are one and all designed to detect them as early as possible, identify them and take appropriate action. That could be slowing, stopping, swerving, anything.

Uber’s vehicles are equipped with several different imaging systems that handle both ordinary duty (monitoring nearby cars, signs and lane markings) and extraordinary duty like that just described. No fewer than four different ones should have picked up the victim in this case.

Top-mounted lidar. The bucket-shaped item on top of these cars is a lidar, or light detection and ranging, system that produces a 3D image of the car’s surroundings multiple times per second. Using infrared laser pulses that bounce off objects and return to the sensor, lidar can detect static and moving objects in considerable detail, day or night.

This is an example of lidar-created imagery, though not specifically what the Uber vehicle would have seen.

Heavy snow and fog can obscure a lidar’s lasers, and its accuracy decreases with range, but for anything from a few feet to a few hundred feet, it’s an invaluable imaging tool and one that is found on practically every self-driving car.

The lidar unit, if operating correctly, should have been able to make out the person in question, if they were not totally obscured, while they were still more than a hundred feet away, and to pass on their presence to the “brain” that collates the imagery.
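
The ranging principle itself is simple physics: the unit times each pulse’s round trip, and since the pulse covers the sensor-to-target gap twice at the speed of light, halving the round-trip distance gives the range. A minimal illustration:

```python
# Time-of-flight ranging, the principle behind lidar.
C = 299_792_458.0  # speed of light, m/s

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to target: the pulse covers the gap twice."""
    return C * round_trip_seconds / 2.0

# An object ~30 m away returns an echo in roughly 200 nanoseconds.
print(f"{range_from_echo(200e-9):.1f} m")  # ~30.0 m
```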

Front-mounted radar. Radar, like lidar, sends out a signal and waits for it to bounce back, but it uses radio waves instead of light. This makes it more resistant to interference, since radio can pass through snow and fog, but also lowers its resolution and changes its range profile.

Tesla’s Autopilot relies mostly on radar.

Depending on the radar unit Uber employed — likely multiple in both front and back to provide 360 degrees of coverage — the range could differ considerably. If it’s meant to complement the lidar, chances are it overlaps considerably, but is built more to identify other cars and larger obstacles.

The radar signature of a person is not nearly so recognizable, but it’s very likely they would have at least shown up, confirming what the lidar detected.

Short and long-range optical cameras. Lidar and radar are great for locating shapes, but they’re no good for reading signs, figuring out what color something is and so on. That’s a job for visible-light cameras with sophisticated computer vision algorithms running in real time on their imagery.

The cameras on the Uber vehicle watch for telltale patterns that indicate braking vehicles (sudden red lights), traffic lights, crossing pedestrians and so on. Especially on the front end of the car, multiple angles and types of camera would be used, so as to get a complete picture of the scene into which the car is driving.

Detecting people is one of the most commonly attempted computer vision problems, and the algorithms that do it have gotten quite good. “Segmenting” an image, as it’s often called, generally also involves identifying things like signs, trees, sidewalks and more.
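
As a sense of how accessible basic pedestrian detection has become, here is a minimal sketch using OpenCV’s classic HOG-plus-SVM people detector. It is far cruder than anything in a production self-driving stack, and the image path is a placeholder, but it shows the core step: scanning a frame for person-shaped regions.

```python
import cv2

# OpenCV's built-in HOG + linear SVM pedestrian detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street_scene.jpg")  # placeholder camera frame
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

# Draw a box around each detected person.
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"{len(boxes)} pedestrian candidates found")
```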

That said, it can be hard at night. But that’s an obvious problem, the answer to which is the previous two systems, which work night and day. Even in pitch darkness, a person wearing all black would show up on lidar and radar, warning the car that it should perhaps slow and be ready to see that person in the headlights. That’s probably why a night-vision system isn’t commonly found in self-driving vehicles (I can’t be sure there isn’t one on the Uber car, but it seems unlikely).

Safety driver. It may sound cynical to refer to a person as a system, but the safety drivers in these cars are very much acting in the capacity of an all-purpose failsafe. People are very good at detecting things, even though we don’t have lasers coming out of our eyes. And our reaction times aren’t the best, but if it’s clear that the car isn’t going to respond, or has responded wrongly, a trained safety driver will react correctly.

Worth mentioning is that there is also a central computing unit that takes the input from these sources and creates its own more complete representation of the world around the car. A person may disappear behind a car in front of the system’s sensors, for instance, and no longer be visible for a second or two, but that doesn’t mean they ceased existing. This goes beyond simple object recognition and begins to bring in broader concepts of intelligence such as object permanence, predicting actions and the like.

It’s also arguably the most advanced and closely guarded part of any self-driving car system and so is kept well under wraps.
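
Here is a heavily simplified sketch of that object-permanence idea: rather than dropping an object the moment detections stop, the tracker coasts it forward on its estimated velocity for a short window. Real systems use data association and Kalman-style filters; the structure and thresholds below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """One object the car believes exists, whether or not it is currently seen."""
    track_id: int
    x: float               # last known position (simplified to one dimension)
    vx: float              # estimated velocity, m/s
    missed_frames: int = 0

MAX_MISSED = 20  # keep "believing" for ~2 s at 10 Hz before giving up

def update(tracks, measurements, dt=0.1):
    """measurements: {track_id: measured_x} for objects seen this frame."""
    survivors = []
    for t in tracks:
        if t.track_id in measurements:
            t.x = measurements[t.track_id]  # seen again: trust the sensors
            t.missed_frames = 0
        else:
            t.x += t.vx * dt                # occluded: coast on the prediction
            t.missed_frames += 1
        if t.missed_frames <= MAX_MISSED:   # object-permanence window
            survivors.append(t)
    return survivors
```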

It isn’t clear what the circumstances were under which this tragedy played out, but the car was certainly equipped with technology intended to detect exactly this kind of hazard, and it should have detected the person and caused the car to react appropriately. Furthermore, if one system didn’t work, another should have sufficed — multiple fallbacks are only prudent in high-stakes matters like driving on public roads.

We’ll know more as Uber, local law enforcement, federal authorities and others investigate the accident.