Friday, July 1, 2016

First Fatality of Autonomous Driving

In a Tesla blog post published on June 30, 2016, we learned about the first fatal crash of a Model S while in Autopilot mode.
What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S.
There are still a lot of unknowns, but here are some thoughts:
  • Sensing. Tesla uses cameras, not lidar, as its primary sensing. Cameras are known to have issues with white objects against a bright sky, among other scenarios. Elon Musk replied on Twitter that the radar ignored the trailer to avoid false alarms for overhead objects. This should have been discovered in the FMEA. What was Tesla's mitigation in the FMEA? Did they test driving in such scenarios? Is disabling Autopilot in such scenarios an option? With Tesla's current sensing, is it possible to detect such obstacles through algorithm improvements?
  • Testing. This accident provides a real case for other autonomous driving programs to test against, probably in simulation and also on controlled roads. In simulation or in a controlled environment, we can recreate the scenario leading to the accident and add variations to the testing. The miles logged in testing are important, but testing against known weaknesses is more important. As the Tesla blog notes, these were "rare circumstances", but testable rare circumstances. We need to test such rare circumstances in simulation and in controlled environments, and we can discover them through FMEA, simulation, and logged near misses.
  • Communication. I was disappointed to see Tesla's blog post still emphasizing that the fatality rate per mile is better than the US average, and boasting that the safety features could have saved a life had the situation been different. No. A life was lost. Let's focus on what we can improve first.
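The "add variations" idea can be sketched as a simple parameter sweep over the known failure case. Everything below is illustrative: the parameter names, ranges, and even the choice of parameters are my assumptions, not anyone's actual test matrix.

```python
from itertools import product

# Start from the known failure case (a white trailer against a bright sky)
# and sweep variations of it. All parameters and ranges are made up purely
# to illustrate the approach.
trailer_heights_m = [1.0, 1.2, 1.4]    # underride clearance above the road
sun_elevations_deg = [20, 45, 70]      # glare / contrast conditions
ego_speeds_mph = [45, 55, 65, 75]

# Cartesian product of the parameter ranges gives the test matrix.
test_cases = [
    {"trailer_height_m": h, "sun_elevation_deg": s, "speed_mph": v}
    for h, s, v in product(trailer_heights_m, sun_elevations_deg, ego_speeds_mph)
]
print(len(test_cases), "variations of the known failure scenario")  # 36
```

Each case would then be run in simulation, and any failures fed back into the FMEA as newly discovered weaknesses.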

Tuesday, April 12, 2016

Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?

One proposal to assess safety is to test-drive autonomous vehicles in real traffic, observe their performance, and make statistical comparisons to human driver performance. This approach is logical, but is it practical? In this report, we calculate the number of miles that would need to be driven to provide clear statistical evidence of autonomous vehicle safety. Given that current traffic fatalities and injuries are rare events compared with vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their safety in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate performance prior to releasing them for consumer use. Our findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability. And yet, it may still not be possible to establish with certainty the safety of autonomous vehicles. Therefore, it is imperative that autonomous vehicle regulations are adaptive — designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies.

However, the myth of i.i.d. (independent and identically distributed) trials is clearly there.
We can answer this question by reframing failure rates as reliability rates and using success run statistics based on the binomial distribution (O’Connor and Kleyner, 2012).
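For concreteness, here is a minimal Python sketch of that success-run calculation. The input rate of roughly 1.09 fatalities per 100 million vehicle miles is my assumption of the U.S. figure the report works from; with it, the formula lands near the 275 million miles RAND cites.

```python
import math

# Success-run statistics (binomial): after n failure-free miles we can
# claim per-mile reliability R with confidence C = 1 - R**n, so
# n = ln(1 - C) / ln(R).
def miles_required(confidence, fatality_rate_per_mile):
    reliability = 1.0 - fatality_rate_per_mile
    return math.log(1.0 - confidence) / math.log(reliability)

# Assumed input: ~1.09 fatalities per 100 million vehicle miles.
n = miles_required(0.95, 1.09e-8)
print(f"{n / 1e6:.0f} million failure-free miles")  # ~275 million
```

Note how sensitive the answer is to the target: demonstrating a rate ten times better than human would multiply the required miles by ten.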
The easiest way for Google to pass the test would be to build a miles-long circular track in no-man's land, line up dozens of AVs bumper to bumper, and then let them run and log the miles. Of course, RAND mentioned in the report:
Perhaps the most logical way to assess safety is to test-drive autonomous vehicles in real traffic and observe their performance.
However, what is "real" traffic? Is an empty highway at 4:00 a.m. real? Is an icy road in the Rocky Mountains real?
A good Texas driver is almost certainly a road hazard near a ski resort. (Just speaking for myself :) )

It frustrates me that the RAND report does not even bother to list i.i.d. as its fundamental assumption.

Will write another longer post to discuss this topic. Some previous thoughts:

Monday, March 14, 2016

How to replace a pie chart?

Pie charts have a bad reputation (among statisticians), yet they are widely used. But how can we oppose a pie chart on Pi Day?

David Robinson at Variance Explained asked:

Which communicates more to you? And can you think of a plot that communicates this data even more clearly?
I would think that the questionnaire was not designed right to begin with. Such data would be clearly presented by six box plots, making for an easy comparison across tasks. However, percentage points are not exactly the metric we want to compare, no matter whether they are presented as a pie or a bar.

2016 Pi Day Art Posters

Martin Krzywinski celebrates Pi day:

the detection of gravitational waves at the LIGO lab and simulate the effect of gravity on masses created from the digits of π.

Wednesday, March 9, 2016

Frank Chen on Twitter: "1/ Why are people so fired up about a computer winning yet-another-board-game?"

1/ Why are people so fired up about a computer winning yet-another-board-game?

2/ We saw similar media mania around Deep Blue beating Kasparov in 1996-97, but that didn't lead to general #AI breakthroughs

3/ But here are some reasons AI researchers are fired up about AlphaGo beating Lee Se-dol and how it might presage much more general

4/ #AI breakthroughs with applications far beyond board games

5/ First, Go is much more complicated than tic-tac-toe or checkers or chess. GOOG points out Go has a googol more possible positions

6/ compared with chess. Because you can’t brute force the search space, a whole new set of learning algorithms was needed

7/ Second, it shows off how an ensemble of learning techniques (deep learning + reinforcement + Monte Carlo tree search) beats any single

8/ technique in isolation. Monte Carlo tree search is unsung hero (since deep learning gets all the glory), as it’s great way of exploring

9/ very large search space without brute force exploring every node

10/ Third, it perfectly illustrates the pendulum switch in AI away from expert systems (capturing human expertise in algorithms) to

11/ algorithms which learn themselves through experience. A delightful example of this class of algorithms called genetic algorithms

12/ figured out how to play and beat Super Mario World …

13/ Finally, as Google writes:

14/ So the oh-so-tantalizing prospect is that we can program computers to mimic human intuition and flow and imagination

15/ Rather than just solve problems with brute force computational power and reliable, large memories

16/ So is this #AlphaGo win the beginning of real progress towards generalized intelligence compared to the false start of winning chess?

Thursday, March 3, 2016

More on replication crisis

By Andrew Gelman:

If these researchers knew ahead of time that their data would be analyzed correctly, and that outside teams would be preparing replications, they might be less willing to stake their reputations on shaky findings.
Words worth bearing in mind, even for those outside the psychology studies.

Bad Policy Could Cripple Energy Innovation

Via Scientific American:

The best science and engineering are no match for the whims of lawmakers, so policy and innovation need to work hand in hand, Obama administration officials said.
Not Even News.
Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders.

Tuesday, March 1, 2016

Google Self-Driving Car May Have Caused an Accident

Via IEEE Spectrum. First ever!

As Urmson noted at the time, none of those near-misses ever threatened to cause serious damage, and the rate at which they occurred had fallen, with only 5 near-misses taking place over a total of about 600,000 kilometers of testing done during the first 11 months of 2011. It really does suggest that Google cars drive more safely than the average person does.

Saturday, February 27, 2016

Why You Shouldn’t Be Allowed to Drive | TIME

Wow, this is earlier than I thought. My inner contrarian is yelling.


Spreadsheets: The Original Analytics Dashboard · Simply Statistics

Steven Levy wrote the following about the original granddaddy of spreadsheets, VisiCalc. 

Already, the spreadsheet has redefined the nature of some jobs; to be an accountant (statistician) in the age of spreadsheet (big data) program is — well, almost sexy. And the spreadsheet (big data) has begun to be a forceful agent of decentralization, breaking down hierarchies in large companies and diminishing the power of data processing.

There has been much talk in recent years about an “entrepreneurial renaissance” and a new breed of risk-taker who creates businesses where none previously existed. Entrepreneurs and their venture-capitalist backers are emerging as new culture heroes, settlers of another American frontier. Less well known is that most of these new entrepreneurs depend on their economic spreadsheets as much as movie cowboys depend on their horses.
Simply Statistics suggested that you replace "accountant" with "statistician" and "spreadsheet" with "big data" and "you are magically teleported into 2016."

Of course, the combining of presentation with computation comes at a cost of reproducibility and perhaps quality control. Seeing the description of how spreadsheets were originally used, it seems totally natural to me. It is not unlike today's analytic dashboards that give you a window into your business and allow you to "model" various scenarios by tweaking a few numbers or formulas. Over time, people took spreadsheets to all sorts of extremes, using them for purposes for which they were not originally designed, and problems naturally arose.
So now, we are trying to separate out the computation and presentation bits a little. Tools like knitr and R and shiny allow us to do this and to bring them together with a proper toolchain. The loss in interactivity is only slight because of the power of the toolchain and the speed of computers nowadays. Essentially, we've brought back the Data Processing department, but have staffed it with robots and high speed multi-core computers.
Other tools include IPython Notebook and GitHub. More importantly, we may need an alternative to the ubiquitous Excel, which gains more computational power with each update. What we need is a stripped-down spreadsheet editor with basic data validation functionality but no built-in data analysis or visualization. Such a package would force us to distinguish between data input and processing/analysis/presentation.
VisiCalc running on Apple IIc, 1983. Photo by Mark Mathosian.

Job opening - for a data graphics editor!

Agree with Andrew Gelman that there should be more jobs of this sort. More elegant visualizations; kill the junk charts.

Quote from the job posting:

We have a job opening for a new position we’re calling “data graphics editor.” I’ve been having trouble attracting the right kind of candidate. In my search, I may have been leaning too heavily toward graphic design skill sets when perhaps I should be looking in your world of statistical graphics skills. My search led me to your door. Would you have any advice for me on where I might go to get this job posting in front of the right audience of students, grads, or practitioners?
Some of their recent graphs.

Friday, February 26, 2016

Autonomous cars in the snow... and more

“The maps we create contain useful information about the 3D environment around the car, allowing it to localize even with a blanket of snow covering the ground.”

The autonomous vehicles create the maps while driving the test environment in favorable weather. Technologies automatically annotate features like traffic signs, trees, and buildings later. Then, when the vehicles cannot see the ground, they detect above-ground landmarks to pinpoint themselves on the map, which they then use to drive successfully.

via Michigan Today

An average Homo Sapiens driver from Texas would perform horribly on a snowy mountain road in Colorado. Yet there is no law to keep him or her off the road. How about a robot that has passed its safety test in sunny California? (My adviser, Dr. Mosleh, loved to tell us how he was saved by an 18-wheeler on a snowy road in Maryland when he first took the job at UMD. I secretly think that is why he moved back to UCLA.)

I support safety regulations that hold a high bar for AVs, especially in the early years. True unknown risks are introduced by new technology (I hate to say this, but still: unknown unknowns). Also, the public's perception of the risk will be high, if only because they are not familiar with the technology. As discussed in a previous post, the regulation and testing should address the risky scenarios, not just the miles traveled. The methodology discussed in my Ph.D. dissertation might be expanded to generate such risky scenarios.

Thursday, February 25, 2016

Boston Dynamics’ Marc Raibert on Next-Gen ATLAS: “A Huge Amount of Work” - IEEE Spectrum

Yes. And poking our future overlord?

via IEEE Spectrum

I sincerely hope the future self-aware robots won't have access to these videos.

Saturday, February 20, 2016

Robot Art Raises Questions about Human Creativity

via MIT Technology Review

I am more concerned about AI vs. Homo Sapiens in the survival game. It has nothing to do with a sense of self, creativity, or the Turing Test. To survive is the only thing that counts.

Friday, February 19, 2016

Tufte in R!

Thanks to Lukasz Piwek, we can create Tufte-like graphs in R.

Tuesday, February 16, 2016

Deep Learning Makes Driverless Cars Better at Spotting Pedestrians - IEEE Spectrum

No previous algorithms have been capable of optimizing the trade-off between detection accuracy and speed for cascades with stages of such different complexities. In fact, these are the first cascades to include stages of deep learning. The results we're obtaining with this new algorithm are substantially better for real-time, accurate pedestrian detection.

via IEEE Spectrum

Sunday, February 14, 2016

Andrew Ng: Driverless Shuttle Bus - IEEE Spectrum

We believe the approach of creating a car that can autonomously drive everywhere and be safe everywhere is beyond today’s technology. Instead, we are looking initially at shuttle routes and bus routes, routes that are, perhaps, a modest 20 miles, driven in a big circle, or back and forth.
We think if all you are doing is driving a 20-mile route, the technology is indeed within striking distance of making that safe. We plan to commercialize this in three years and will be moving aggressively to get this to market.

Checking in with Andrew Ng at Baidu’s Blooming Silicon Valley Research Lab - IEEE Spectrum:

It is an (un)interesting approach to use shuttle buses as a starting point. The environment is more predictable, and cost savings are fairly easy to materialize. Just think about the mass transit that happens daily in giant manufacturing facilities, such as the Foxconn campus.

Several such tests have already started. The first self-driving bus to operate on fully public roads debuted in the Netherlands in February 2016. A pilot project by the French company EasyMile is scheduled to bring two driverless shuttles to a Bay Area office park in summer 2016.

I took a lot of courses from Coursera, which Andrew Ng co-founded, including the famous Machine Learning course he taught. Best wishes to him. No matter who wins the race to the first truly autonomous driving application, it is a win for the technology and a major advancement on the journey of Homo Sapiens.

Friday, February 12, 2016

Gravitational Waves Discovered from Colliding Black Holes - Scientific American

The LIGO experiment has confirmed Albert Einstein’s prediction of ripples in spacetime and promises to open a new era of astrophysics

Tuesday, February 9, 2016

With Driverless Cars, How Safe Is Safe Enough? - Thoughts with Bayesian flavor [Draft]

Well, that's the news from Lake Wobegon, where all the women are strong, all the men are good looking, and all the drivers are above average, including the robots driving the driverless cars.

The Myth of i.i.d

And yet, it could be impossible to accurately gauge safety until many, many autonomous vehicles hit the roads. In the U.S., approximately one fatality occurs for every 100 million miles driven. To prove with 95% confidence that a driverless car achieves, at least, this rate of reliability by driving them around to see, it would require they be driven 275 million miles without a fatality. With a fleet of 100 autonomous vehicles (larger than any known existing fleet) driving 24/7, it would take more than 12 years to drive these miles. But with 10,000 such vehicles, it would take just six weeks. Regulators will have to find other ways of estimating safety, but widespread deployment will be the true test. If safety standards are too strict, this might never happen
The assumption behind the above calculation is that every mile every test vehicle travels has an identical, independent probability of an accident. In Chapter 4 of the report Kalra co-authored, many different cases are discussed: AVs outperform humans in some, human drivers outperform in others, and some are challenging for both human and robotic drivers.
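As a quick sanity check on the fleet-time arithmetic in the quoted passage, here is a sketch assuming round-the-clock driving at an average of 25 mph (the speed is my assumption, not a figure from the article):

```python
# Fleet time = total miles / (fleet size * average speed * hours per year).
def years_to_drive(total_miles, fleet_size, avg_mph=25.0):
    miles_per_year = fleet_size * avg_mph * 24 * 365
    return total_miles / miles_per_year

print(f"{years_to_drive(275e6, 100):.1f} years with a fleet of 100")
print(f"{years_to_drive(275e6, 10_000) * 52:.1f} weeks with a fleet of 10,000")
```

The numbers come out close to the quoted "more than 12 years" and "just six weeks", so the figures are internally consistent under that speed assumption.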

Conditional Probability

There is one thing I learned from the excellent E. T. Jaynes: every probability is a conditional probability. So, P(A|AV, H0) = Σ_i P(A|AV, Ci, H0) × P(Ci|AV, H0).
Human drivers' accident rate P(A|Human, H0) is available, and there are models to predict its improvement even without AVs.
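A toy version of this decomposition, where every scenario rate and weight is made up purely to show the mixture structure:

```python
# Overall accident probability as a mixture over driving conditions Ci.
# All numbers are illustrative, not estimated from any data.
scenarios = {
    # Ci: (P(A | AV, Ci, H0), P(Ci | AV, H0))
    "clear highway": (1e-9, 0.70),
    "urban traffic": (5e-9, 0.25),
    "snow and ice":  (1e-7, 0.05),
}

# The scenario weights must sum to one for the mixture to be a probability.
assert abs(sum(p_c for _, p_c in scenarios.values()) - 1.0) < 1e-9

p_accident = sum(p_a * p_c for p_a, p_c in scenarios.values())
print(f"P(A | AV, H0) = {p_accident:.2e} per mile")
```

Even in this toy example, the rare "snow and ice" condition dominates the overall rate, which is exactly why miles logged on easy roads say little about the hard scenarios.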

For regulation to set up the acceptance criteria, we need:

  • A baseline of human reliability
  • A comprehensive list of test cases (Ci), and a passing rate for each Ci
  • The modeling of the likelihood of each Ci. The government can also influence these likelihoods, making both AVs and human drivers much safer.
  • ???


  • If we can train a pilot using simulator, can we train a robot with simulator? 
  • Can we test/validate a robot with simulator?
  • Open database for all autonomous vehicles developers?

Can the Robot Get a Driver's License

When I first went to driving school, the instructor told me that everyone could get a driver's license except the legally blind. A robotic driver could easily beat me in the road test and get a driver's license.


  • Ci's are not independent. (Need to pull out my math book...)
  • Scalability: the outcome of each Ci, and also P(Ci|H0), can vary depending on how many AVs are on the road.
  • X-ware: hardware (vehicle, sensors, etc.), software, driver/rider, environment
  • Interactions: AV2AV, AV2HD, AV2I, AV2Pedestrian
  • Software upgrade
  • AI learning: general learning, and adaptive learning toward the specific environment
  • ...

Nidhi Kalra, "With Driverless Cars, How Safe Is Safe Enough?"; Retrieved from
James M. Anderson, Nidhi Kalra, Karlyn D. Stanley, Paul Sorensen, Constantine Samaras, Oluwatobi A. Oluwatola, "Autonomous Vehicle Technology: A Guide for Policymakers"; Retrieved from