Tuesday, May 22, 2018

Simmer - DES for R

It is exciting to learn about simmer, a new open-source Discrete Event Simulation (DES) implementation for R.

I tried it out, and it is impressive.

The code is available on GitHub.

It is similar to SimPy, with some nice monitoring features. I am working to implement some basic Reliability Block Diagram simulation functionality with it, inspired by Seymour Morris.
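To give a flavor of the API, here is a minimal sketch (my own toy example with made-up failure and repair rates, not taken from the simmer documentation): one repair technician serving machines that break down at random, with simmer's built-in monitoring queried at the end.

library(simmer)

# Hypothetical toy model: machines fail at random and queue for one technician
env <- simmer("repair_shop")

failed_machine <- trajectory("failed machine") %>%
  seize("technician", 1) %>%
  timeout(function() rexp(1, rate = 1/4)) %>%   # assumed repair time, mean 4 hours
  release("technician", 1)

env %>%
  add_resource("technician", capacity = 1) %>%
  add_generator("machine", failed_machine,
                function() rexp(1, rate = 1/10)) %>%  # assumed failures every ~10 hours
  run(until = 1000)

# The monitoring features: per-arrival and per-resource logs as data frames
head(get_mon_arrivals(env))
head(get_mon_resources(env))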

Will report back ...

Wednesday, March 21, 2018

世说新语

Years ago I followed 世说新语 (A New Account of the Tales of the World) on Tianya. I was unhappy with Tianya's formatting, and since I happened to be just learning Markdown at the time, I did a scissors-and-paste job and re-typeset the whole thing. Recently the boss wanted to hear some history stories, so I dug it out again and realized it has indeed been many years ...
I tidied it up with bookdown, which I have been playing with; the readability is still not great, so it needs more work ...
https://huyunwei.github.io/reading/

Wednesday, February 28, 2018

Filenames to .txt

Should have known this a long time ago ...
dir > filename.txt

https://support.microsoft.com/en-us/help/196158/how-to-create-a-text-file-list-of-the-contents-of-a-folder

Another good explanation of the parameters to use:

https://answers.microsoft.com/en-us/windows/forum/windows_7-files/how-do-i-copy-all-file-names-in-a-folder-to/5a4b8da9-123a-4c98-b4aa-1260b38409a2?auth=1

There is a way to sort the output in alphabetical order; for that, the /o switch is needed. I only mention this because, invariably, whenever people ask for this kind of thing they intend to do something with the filenames, and it is much easier to deal with if the list is sorted. Another switch that I use is /b. The /b switch removes the heading information and summary so that you get just the file names. What I normally do is:
dir "C:\some folder" > output.txt /b /o
Notice that the path for the folder name is wrapped in quotes.  You use quotes whenever the folder name or file name has a space in it.  You will also notice that there is just a single (>) as opposed to the two (>>) in Steve's answer.  The difference is that the former will overwrite the contents and the latter will append to the existing file named "output.txt".
Finally, I use the 'Open command window here' shortcut so that I don't have to navigate to the folder/directory that I want to work in.

To list files in subfolders too:
dir /b /s > filenames.txt

Or we can do it in Python.
https://docs.python.org/3.6/library/os.html


os.listdir(path='.')
Return a list containing the names of the entries in the directory given by path. The list is in arbitrary order, and does not include the special entries '.' and '..' even if they are present in the directory.
path may be a path-like object. If path is of type bytes (directly or indirectly through the PathLike interface), the filenames returned will also be of type bytes; in all other circumstances, they will be of type str.

Friday, July 1, 2016

First Fatality of Autonomous Driving

In a Tesla blog post published on June 30, 2016, we learned about the first fatal crash of a Model S while in Autopilot mode.
What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S.
Still a lot of unknowns, but here are some thoughts:
  • Tesla uses cameras, not lidar, as its primary sensing. Cameras are known to have issues with white objects against a bright sky, among other scenarios. Elon Musk replied on Twitter that the radar ignored the trailer to avoid false alarms for overhead objects. This failure mode should have been discovered in the FMEA. What was Tesla's mitigation in the FMEA? Did they test driving in such scenarios? Is disabling Autopilot in such a scenario an option? With Tesla's current sensing, is it possible to detect this case through algorithm improvements?
  • Testing. This accident provides a real case for other automated driving programs to test against, probably in simulation and also on controlled roads. In simulation or in a controlled environment, we can reproduce the scenario leading to the accident and add variations to the testing. The miles logged in testing are important, but testing against known weaknesses is more important. The Tesla blog calls these "rare circumstances", but they are testable rare circumstances. We need to test such circumstances in simulation and in controlled environments, and we can discover them through FMEA, simulation, and logged near misses.
  • Communication. It is disappointing to see Tesla's blog post still emphasizing that its fatalities per mile are better than the US average, and boasting that the safety features could have saved a life had the situation been different. No. A life was lost. Let's focus on what we can improve first.

Tuesday, April 12, 2016

Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?

From RAND
One proposal to assess safety is to test-drive autonomous vehicles in real traffic, observe their performance, and make statistical comparisons to human driver performance. This approach is logical, but is it practical? In this report, we calculate the number of miles that would need to be driven to provide clear statistical evidence of autonomous vehicle safety. Given that current traffic fatalities and injuries are rare events compared with vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their safety in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate performance prior to releasing them for consumer use. Our findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability. And yet, it may still not be possible to establish with certainty the safety of autonomous vehicles. Therefore, it is imperative that autonomous vehicle regulations are adaptive — designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies.

TL;DR.
However, the myth of i.i.d. (independent and identically distributed) observations is clearly there.
We can answer this question by reframing failure rates as reliability rates and using success run statistics based on the binomial distribution (O’Connor and Kleyner, 2012).
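As a back-of-the-envelope illustration of that success-run idea, here is a short sketch in R (my own numbers: an assumed fatality rate of about 1.09 per 100 million vehicle miles, roughly the US figure RAND works from, and a 95% confidence target):

# Zero-failure (success-run) demonstration from the binomial distribution:
# to claim with confidence C that the per-mile fatality probability is below p,
# the fatality-free mileage n must satisfy (1 - p)^n <= 1 - C,
# i.e. n >= log(1 - C) / log(1 - p).
p <- 1.09e-8                 # assumed fatality rate per vehicle mile
C <- 0.95                    # desired confidence level
n <- log(1 - C) / log(1 - p)
n                            # ~2.75e8: hundreds of millions of fatality-free miles
# Assuming a 100-vehicle fleet averaging 25 mph around the clock:
n / (100 * 25 * 24 * 365)    # roughly 12.5 years of continuous driving

Note that this sketch treats every mile as an independent, identically distributed trial, which is exactly the i.i.d. assumption flagged above.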
The easiest way for Google to pass the test is to build a miles-long loop in no-man's land and line up dozens of AVs bumper to bumper, then let them run and log the miles. Of course, in the report RAND mentioned:
Perhaps the most logical way to assess safety is to test-drive autonomous vehicles in real traffic and observe their performance.
However, what is "real" traffic? Is an empty highway at 4:00 am real? Is an icy road in the Rocky Mountains real?
A good Texas driver is almost certainly a road hazard near a ski resort. (Just speaking for myself :) )

...
It frustrates me that the RAND report does not even bother to list i.i.d. as its fundamental assumption.

I will write another, longer post to discuss this topic. Some previous thoughts:
http://blogs.riskpredictions.com/2016/02/with-driverless-cars-how-safe-is-safe.html
http://blogs.riskpredictions.com/2016/02/autonomous-cars-in-snow.html