Self Driving Car Weekly Highlights | Week 08/10/17 - 15/10/17 [DE]


So, as you can see, it has been a long time since I last wrote anything on this blog. Despite my intention to write short tutorials, personal opinions, and other technical pieces, this blog always ended up at the bottom of my ever-growing to-do stack. Why, then, am I committing to a weekly update on the state of self-driving car development and research? Well, as always there is both a practical argument and a leisure reason: I love talking about this field, which evolves with exciting changes every day, and I am currently learning German, so I want to practice my writing skills daily (which, at the moment, are really bad. But fear not: the posts will be published in English too.) This series of posts will be based mostly on news published on other websites and forums, so don’t expect the latest rumors; expect instead a review of the most important events of the week.

General Motors buys Strobe, solid-state lidar company

This is definitely the biggest news of the week. We know quite well that many car manufacturers have high expectations for solid-state lidars: rotating laser scanners work fine for research and prototypes (high performance and accuracy, but high cost), yet they are unthinkable for mass-produced consumer cars. Spinning an array of lasers around with high accuracy is expensive to manufacture and, most importantly, components rotating continuously at high frequency do not last long when that accuracy has to be maintained. You don’t want customers bringing their car back every year to replace a faulty lidar.

Solid-state lidars, on the other hand, look really interesting: no moving parts, everything fits on a single chip, lower cost, and comparable accuracy and range. The only problem: we are not quite there yet. Despite being known for a few years, solid-state lidars are quite hard to implement. Of course, now that money is being poured into them, companies are improving their lidars year after year, and off-the-shelf consumer solutions look near. You may remember the same happening with ToF cameras: their cost dropped from thousands of dollars to a couple of hundred in a few years (think of the Xbox One Kinect today).

There are only a few companies like Strobe around. They are still in their start-up phase, so it makes sense to buy them now. We can agree that the decision of GM’s board looks quite sound, considering that other companies will probably do something similar in the future: Ford invested in Velodyne last year, Continental bought ASC, Tesla…

…Well, Tesla says “Fuck, we don’t use lidars, we have cameras”.

And they are not alone:

Comma.ai shares cool and shiny deep learning videos

George Hotz’s company loves cameras. They are cheap, you can find one on any mobile phone and, most importantly, they are the closest thing to a human eye. And if a human can drive a car with their eyes, why shouldn’t a computer? This is the same approach as Tesla’s, except that Comma.ai seems to invest most of its resources in deep learning. We know that openpilot’s lane keeping assist is based on an end-to-end deep neural network, and lately the company appears to be exploring DNNs for other purposes as well, as the videos below show.
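To make “end-to-end” a bit more concrete: the idea is that a single network maps a raw camera frame directly to a driving output such as a steering angle, with no hand-written lane detection in between. Below is a minimal PyTorch sketch of that kind of model; the architecture and layer sizes are made-up placeholders for illustration, and this is not comma.ai’s actual network.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Toy end-to-end model: one camera frame in, one steering angle out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse the spatial dimensions
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 50), nn.ReLU(),
            nn.Linear(50, 1),          # predicted steering angle
        )

    def forward(self, frame):
        return self.head(self.features(frame))

# One RGB dashcam-style frame, e.g. 160x320 pixels (batch of 1).
frame = torch.randn(1, 3, 160, 320)
angle = SteeringNet()(frame)
print(angle.shape)  # torch.Size([1, 1])
```

Training such a model is conceptually simple too: collect (frame, steering angle) pairs from human driving and regress the angle, which is exactly why large amounts of real driving data are so valuable for this approach.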

One such purpose is image segmentation:


… or, you know, replacing the previously mentioned lidars as a source of depth data:



It is interesting to note how the training data was generated: manually annotated images for segmentation (thanks to the adult coloring books project) and simulation for depth.
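If camera-based depth estimation sounds abstract, the rough shape of such a model is an encoder-decoder that takes an RGB frame and outputs a per-pixel depth map. The sketch below only illustrates that input/output contract: it is untrained, the architecture is invented for the example, and it has nothing to do with comma.ai’s models.

```python
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Toy encoder-decoder: an RGB frame in, a per-pixel depth map out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            nn.Softplus(),   # depth values must be positive
        )

    def forward(self, frame):
        return self.decoder(self.encoder(frame))

frame = torch.randn(1, 3, 128, 256)   # one RGB dashcam frame
depth = TinyDepthNet()(frame)
print(depth.shape)                    # torch.Size([1, 1, 128, 256])
```

In practice, networks of this kind are trained on (image, depth) pairs whose ground truth comes from lidar, stereo rigs or, as in the video above, simulation.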
