Tuesday, March 21, 2017

survminer 0.3.0 - Easy Guides - Wiki - STHDA

Will try it out.




Friday, July 1, 2016

First Fatality of Autonomous Driving

In a Tesla blog post published on June 30, 2016, we learned about the first fatal crash of a Model S while in Autopilot mode.
What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S.
There are still a lot of unknowns, but here are some thoughts:
  • Sensing. Tesla uses cameras, not lidar, as its primary sensor. Cameras are known to have trouble with white objects against a bright sky, among other scenarios. Elon Musk replied on Twitter that the radar ignored the trailer to avoid false alarms for overhead objects. This failure mode should have been discovered in the FMEA. What was Tesla's mitigation in the FMEA? Did they test driving in such scenarios? Is disabling Autopilot in such scenarios an option? With Tesla's current sensors, could this case be detected with algorithm improvements alone?
  • Testing. This accident provides a real case for other autonomous driving programs to test against, probably in simulation and also on controlled roads. There we can recreate the scenario that led to the accident and add variations to it (see the sketch after this list). The miles logged in testing matter, but testing against known weaknesses matters more. The Tesla blog calls these "extremely rare circumstances," but they are testable rare circumstances. We need to exercise such rare circumstances in simulation and in controlled environments, and we can discover them through FMEA, simulation, and logged near misses.
  • Communication. I was disappointed to see Tesla's blog post still emphasizing that its fatalities per mile are lower than the US average, and boasting that its safety features could have saved a life had the situation been different. No. A life was lost. Let's focus on what we can improve first.
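
As a concrete illustration of the testing point above, here is a minimal sketch of how one might enumerate variations of the base crash scenario for a simulation campaign. It is purely my own illustration: the parameter names and ranges are made up, and real values would come from the accident reconstruction.

from itertools import product

# Made-up parameter ranges for varying the crash scenario in simulation;
# the real ranges would come from the accident reconstruction.
sun_elevation_deg = [10, 30, 60]        # glare conditions
trailer_color     = ["white", "gray", "dark"]
trailer_height_m  = [1.2, 1.4, 1.6]     # ride height of the trailer
ego_speed_mph     = [45, 55, 65, 74]

def scenario_grid():
    # Enumerate every combination of the varied parameters.
    for sun, color, height, speed in product(
            sun_elevation_deg, trailer_color, trailer_height_m, ego_speed_mph):
        yield {"sun_elevation_deg": sun, "trailer_color": color,
               "trailer_height_m": height, "ego_speed_mph": speed}

# Each scenario dict would be fed to a simulator; here we just count them.
scenarios = list(scenario_grid())
print(f"{len(scenarios)} variations of the base scenario")  # 3*3*3*4 = 108

The point is the systematic sweep, not the specific parameters: every discovered weakness becomes a grid of regression scenarios.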

Tuesday, April 12, 2016

Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?

From RAND
One proposal to assess safety is to test-drive autonomous vehicles in real traffic, observe their performance, and make statistical comparisons to human driver performance. This approach is logical, but is it practical? In this report, we calculate the number of miles that would need to be driven to provide clear statistical evidence of autonomous vehicle safety. Given that current traffic fatalities and injuries are rare events compared with vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their safety in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate performance prior to releasing them for consumer use. Our findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability. And yet, it may still not be possible to establish with certainty the safety of autonomous vehicles. Therefore, it is imperative that autonomous vehicle regulations are adaptive — designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies.

TL;DR.
However, the myth of i.i.d. (independent and identically distributed) miles is clearly there.
We can answer this question by reframing failure rates as reliability rates and using success run statistics based on the binomial distribution (O’Connor and Kleyner, 2012).
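As a back-of-the-envelope illustration of that success-run bound (my own sketch, not code from the report): if every mile is an independent Bernoulli trial with failure probability p, then n failure-free miles demonstrate, at confidence C, that the true rate is below p once (1 - p)^n <= 1 - C, i.e. n >= ln(1 - C) / ln(1 - p).

import math

def miles_to_demonstrate(max_failure_rate_per_mile, confidence=0.95):
    # Success-run bound: smallest n with (1 - p)^n <= 1 - C, assuming
    # every mile is an i.i.d. Bernoulli trial with failure probability p.
    reliability = 1.0 - max_failure_rate_per_mile
    return math.log(1.0 - confidence) / math.log(reliability)

# Human benchmark cited by RAND: about 1.09 fatalities per
# 100 million vehicle miles in the U.S.
print(f"{miles_to_demonstrate(1.09e-8):,.0f} failure-free miles")
# -> roughly 275 million miles at 95% confidence
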
The easiest way for Google to pass such a test would be to build a miles-long loop in no-man's-land, line up dozens of AVs bumper to bumper, and let them run to log the miles. Of course, the RAND report mentions:
Perhaps the most logical way to assess safety is to test-drive autonomous vehicles in real traffic and observe their performance.
However, what is "real" traffic? Is an empty highway at 4:00 a.m. real? Is an icy road in the Rocky Mountains real?
A good Texas driver is almost certainly a road hazard near a ski resort. (Just speaking for myself :) )

...
It frustrates me that the RAND report does not even bother to list i.i.d. as its fundamental assumption.
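To make the objection concrete, here is a toy mixture calculation, with made-up numbers that are purely my own illustration: if the logged test miles over-sample benign conditions, the per-mile failure rate estimated from them can understate the deployed rate by orders of magnitude, and the success-run bound above silently inherits that bias.

# Per-mile failure probabilities -- made-up numbers for illustration only.
p_benign, p_hard = 1e-8, 1e-4

def blended_rate(hard_fraction):
    # Average per-mile failure rate for a given mix of driving conditions.
    return (1.0 - hard_fraction) * p_benign + hard_fraction * p_hard

test_rate = blended_rate(0.00)    # fleet logs only benign miles
field_rate = blended_rate(0.05)   # deployment sees 5% hard miles (ice, glare)

print(f"test rate:  {test_rate:.2e} per mile")
print(f"field rate: {field_rate:.2e} per mile")
print(f"test miles understate risk by a factor of {field_rate / test_rate:,.0f}")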

Will write another longer post to discuss this topic. Some previous thoughts:
http://blogs.riskpredictions.com/2016/02/with-driverless-cars-how-safe-is-safe.html
http://blogs.riskpredictions.com/2016/02/autonomous-cars-in-snow.html

Monday, March 14, 2016

How to replace a pie chart?

Pie charts have a bad reputation (among statisticians), yet they are widely used. But ... how can we oppose a pie chart on Pi Day?



David Robinson at Variance Explained asked:

Which communicates more to you? And can you think of a plot that communicates this data even more clearly?
I would think that the questionnaire was not designed right to begin with. Such data would be clearly presented by six box plots, one per task, making comparison across tasks easy (see the sketch below). However, a single percentage per task is not exactly the metric we want to compare, no matter whether it is presented as a pie or as bars.
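For illustration, here is a minimal sketch of the six-box-plot idea (matplotlib, with made-up responses; the task names and distributions are placeholders, not the actual survey data):

import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(314)

# Made-up per-respondent answers: share of time each of 50 respondents
# reports spending on six tasks (placeholders, not the survey data).
tasks = ["Task A", "Task B", "Task C", "Task D", "Task E", "Task F"]
data = [rng.beta(a, 5.0, size=50) for a in (2.0, 3.0, 1.5, 4.0, 2.5, 1.0)]

fig, ax = plt.subplots(figsize=(7, 4))
ax.boxplot(data)                                   # one box per task
ax.set_xticks(range(1, len(tasks) + 1), labels=tasks)
ax.set_ylabel("Reported share of time")
plt.tight_layout()
plt.show()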

2016 Pi Day Art Posters

Martin Krzywinski celebrates Pi Day:

the detection of gravitational waves at the LIGO lab and simulate the effect of gravity on masses created from the digits of π.

Wednesday, March 9, 2016

Frank Chen on Twitter: "1/ Why are people so fired up about a computer winning yet-another-board-game?"

1/ Why are people so fired up about a computer winning yet-another-board-game?

2/ We saw similar media mania around Deep Blue beating Kasparov in 1996-97, but that didn't lead to general #AI breakthroughs

3/ But here are some reasons AI researchers are fired up about AlphaGo beating Lee Se-dol and how it might presage much more general

4/ #AI breakthroughs with applications far beyond board games

5/ First, Go is much more complicated than tic-tac-toe or checkers or chess. GOOG points out Go has a googol more possible positions

6/ compared with chess. Because you can't brute-force the search space, a whole new set of learning algorithms was needed

7/ Second, it shows off how an ensemble of learning techniques (deep learning + reinforcement + Monte Carlo tree search) beats any single

8/ technique in isolation. Monte Carlo tree search is the unsung hero (since deep learning gets all the glory), as it's a great way of exploring

9/ very large search space without brute force exploring every node

10/ Third, it perfectly illustrates the pendulum swing in AI away from expert systems (capturing human expertise in algorithms) to

11/ algorithms which learn themselves through experience. A delightful example of this class of algorithms, called genetic algorithms,

12/ figured out how to play and beat Super Mario World https://www.youtube.com/watch?v=qv6UVOQ0F44 …

13/ Finally, as Google writes:



14/ So the oh-so-tantalizing prospect is that we can program computers to mimic human intuition and flow and imagination

15/ Rather than just solve problems with brute force computational power and reliable, large memories

16/ So is this #AlphaGo win the beginning of real progress towards generalized intelligence compared to the false start of winning chess?

https://twitter.com/withfries2/status/707625167288029184
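
Since tweets 8 and 9 credit Monte Carlo tree search as the unsung hero, here is a minimal UCT (Upper Confidence bounds applied to Trees) sketch on tic-tac-toe. It is my own toy illustration of the generic algorithm, not AlphaGo's implementation; AlphaGo couples the tree search with policy and value networks rather than the uniform random playouts used here.

import math
import random

# Tic-tac-toe: board is a tuple of 9 cells, each 'X', 'O', or None.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, v in enumerate(board) if v is None]

def apply_move(board, move, player):
    b = list(board)
    b[move] = player
    return tuple(b)

class Node:
    def __init__(self, board, just_moved, parent=None, move=None):
        self.board, self.just_moved = board, just_moved
        self.parent, self.move = parent, move
        self.children = []
        self.untried = [] if winner(board) else legal_moves(board)
        self.visits, self.wins = 0, 0.0  # wins from just_moved's perspective

def uct_best_move(root_board, to_move, iterations=5000, c=1.4):
    root = Node(root_board, just_moved='O' if to_move == 'X' else 'X')
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully expanded nodes by UCB1.
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one previously untried child.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            player = 'O' if node.just_moved == 'X' else 'X'
            child = Node(apply_move(node.board, move, player),
                         just_moved=player, parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from this node to the end of the game.
        board = node.board
        player = 'O' if node.just_moved == 'X' else 'X'
        while not winner(board) and legal_moves(board):
            board = apply_move(board, random.choice(legal_moves(board)), player)
            player = 'O' if player == 'X' else 'X'
        result = winner(board)
        # 4. Backpropagation: credit each node for the player who moved into it.
        while node:
            node.visits += 1
            node.wins += 1.0 if result == node.just_moved else (
                0.5 if result is None else 0.0)
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

# X to move on an empty board; the search typically settles on the center (4).
print(uct_best_move((None,) * 9, 'X'))

Even on a toy game the four phases are visible: selection balances exploitation and exploration via UCB1, expansion grows the tree one node at a time, random playouts stand in for an evaluation function, and backpropagation pushes results up the tree. AlphaGo's advance, roughly speaking, was to replace the random playouts and uniform priors with learned networks.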