Failure Mode Effects – What I learned from a failed car engine:

The 2010 Dodge Charger HEMI is the epitome of a modern muscle car, powered by an absolute monster of an engine: a 6.1L V8 that produces 317 kW of power and 569 Nm of torque. The engine is manufactured at the Saltillo engine plant and has a great track record as a low-maintenance, high-displacement, naturally aspirated North American engine. In fact, there is nothing to complain about in the engineering or quality of the engine. The same is true of almost all modern cars and car engines. The cars on the market right now are very well developed, mature products.

Unlike yesteryear’s car models, like the 90’s Mercedes with moody electronics, Lancias that rot away faster than unrefrigerated cheese, Triumphs that leak oil like a rain-soaked umbrella and Jaguars that hibernate whenever there is a light shower, cars these days are reliable, reliable like an appliance. That reliability masks another interesting problem; the problem of ‘what if something goes wrong’.

I have known folks who obsess about tire pressure. The same individuals pop open the engine bay and check the dipstick to see if there is enough engine oil. These are the folks who had the reluctant and unpleasant experience of living with one of yesteryear’s unreliable vehicle engineering samples. They know very well that things can go wrong and that they will go wrong. Their life’s journey has made them curious and cautious about every single little mechanical clatter.

As for me, I am a nineties kid. My first vehicular experience was on a universal Japanese motorcycle, commonly abbreviated as a UJM. It was a bright red Yamaha. It was a reliable, fun piece of engineering that has made me a Yamaha fan for life. My first car was a Hyundai. Again, a reliable piece of Korean engineering, and a sensible one too. Some might say soul-less, but I had great fun with it. Even though the car I had was stereotypically reliable, recent history suggests Hyundai has some serious issues with their engines. Hyundai recently issued a recall of 1.2 million vehicles in North America, covering both of its brands, Hyundai and Kia. So, I am fortunate that at no point in my life did I ever experience the horrible vehicular gremlins of electrical issues, oil leaks or anything of the sort.

The story is similar for almost everyone I grew up with, and for anyone who spent their early adult driving years with a well engineered car. This brings to attention the problem of what we do when things go wrong. The reason I talked about the HEMI is that one of my friends owns one. He got the car from his dad, so there is a family connection and major emotional value attached to it. The problem was a rogue oil pressure sensor. It happens even to the best of us. When we see a rogue oil pressure sensor, we are optimistic that we won’t have any oil issues. We procrastinate on fixing the wonky sensor.

But the problem is that even though optimism is a brilliant quality in life, it is not so helpful when dealing with engineered systems. The oil pressure sensor is there for a reason. It is part of a two-layer protection: the first layer is to engineer something that works great in the first place; the second layer is to prepare for the malfunctions. Most of the time even I ignore the oil pressure warnings. I know it is a bad thing, but at the same time, I also know the odds of it being something really bad are very low. All the advertisements about how reliable modern cars are add to our sense of highly optimistic thinking.

There are occasions when these non-zero odds turn out to be something really bad. Something like the oil line developing a leak and the engine running with no oil. This is exactly what happened to my friend. The one time he decided to ignore a warning light that had been going off on his dashboard for months, it turned out to be something really, really bad. While he was cruising along, the HEMI engine shut down, since it had no oil to lubricate all the moving parts. The optimistic among us, whose life experience has been with reliably engineered devices, suddenly become flummoxed. It is not a pretty sight when an engine runs out of oil. Here is a video of a mechanic describing the horrors of dealing with an engine that ran out of oil.

I really admire the folks who are meticulous with their cars, those who take the warning lights seriously every single time. It is an approach that will prevent disasters like an engine running out of oil. But then I realized this story has more to it. When we are dealing with any system, warning signals need to be taken seriously, especially when it comes to engineered systems. These warning lights are there for a purpose. Our optimistic selves, and a modern life surrounded by highly reliable devices, have subconsciously taught us to ignore these warning signs. But one day, one of the warning signs flashing in front of us can turn into a really messy problem.

So, how can a regular person like you and me avoid a situation similar to the HEMI engine running out of oil? My idea to solve this important issue is to create better reminders and warning signals for cars. Since we are all addicted and tethered to our smartphones, a simple way would be to connect the sensor readings from the engine to an app. The app warns us through a notification, maybe periodically, perhaps every time one starts the car. A simple LED flashing on the dash isn’t enough to motivate me, or anyone I know, to see a mechanic and figure out what is wrong with the car engine. Since most of my decision making revolves around the phone, the computer and everything linked to the internet, I am firmly in support of a better car: a better internet-connected car. It will make the warning lights feel more serious, and it is well prepared for the #applife.

I am confident that my next car will be a connected car that sends notifications to my phone.

The Dodge HEMI Charger has been very important for our continued work on cancer research as part of our startup project nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.


Fine tuning the cloud – Making machines learn faster.

In a previous post I discussed how easy it is to set up machine learning libraries in python using virtual machines (VMs) hosted in the cloud. Today, I am going to add more details to it. This post covers how to make machine learning code run faster: it will help any user compile a tensorflow installer optimized for the cloud. The speed advantages come from building the installer to take advantage of the processor instruction set extensions that power the cloud VM.

The steps described below will help us compile a tensorflow installer with accelerated floating point and integer operations, using instruction set extensions for the AMD64 architecture. The CPU we will be dealing with today is an Intel Xeon processor running at 2.3GHz. It is a standard server CPU that supports AVX, AVX2, SSE floating point math and SSE4.2.

In the previous post, we used a simple pip installer for tensorflow. The pip installer is a quick and easy way to set up tensorflow. But this installation method is not optimized for the extended instruction sets present in the advanced CPUs powering the VMs in Google Cloud Platform.

If we compile an installer optimized for these instruction set extensions, we can speed up many computation tasks. When tensorflow is used as a backend in keras for machine learning problems, the console constantly reminds you to optimize your tensorflow installation. The instructions below will also help you get rid of those warning messages in your console.

The first step in compiling an optimized tensorflow installer is to install all the linux package dependencies. Run the lines below to install the missing packages needed to compile the tensorflow installer.
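The original command block is not shown in this copy of the post; a typical dependency line for this step looks something like the following (package names are representative, not the post’s exact list):

```bash
# Build tools and python headers commonly needed before compiling tensorflow
sudo apt-get update
sudo apt-get install -y build-essential curl git python3-dev python3-pip python3-wheel
```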

To compile tensorflow, we need a build tool called Bazel. Bazel is an open-source tool developed by Google, used to automate software building and testing. Since tensorflow is at the leading edge of the machine learning world, features are added, bugs are fixed and progress is made at a dizzying speed compared to the relatively pedestrian pace of traditional software development. In this rapid development, testing and deployment environment, Bazel helps users manage the process more efficiently. Here is the set of commands to install Bazel.
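A plausible reconstruction of that step, using the apt repository method Bazel documented at the time (repository details may have changed since):

```bash
# Add Bazel's apt repository and its signing key, then install Bazel
echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install -y bazel
```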

Once Bazel is installed on your computer, the next step is to clone the tensorflow source files from GitHub.
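Cloning the sources is straightforward, something like:

```bash
# Fetch the tensorflow sources and move into the source tree
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
```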

After the source files are copied from the GitHub repository to your local machine, we have to do some housekeeping. We need to ensure the python environment that will run tensorflow has all the necessary libraries installed. To fulfill the library dependencies in python, we install numpy, the python development headers, pip and wheel using the code below:
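A minimal version of that dependency step, assuming the system python3 is the target environment:

```bash
sudo apt-get install -y python3-numpy python3-dev python3-pip python3-wheel
```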

The next step is to configure the build process for the installer. Before running the actual configuration, it helps to know what the configuration dialog is going to ask, so that we can complete the process successfully.

We need four important pieces of information before we go ahead and configure the build process: 1) the location of the python installation, 2) the python library path, 3) the g++ location and 4) the gcc location. The last two are optional and only needed to enable OpenCL. If your cloud VM supports OpenCL and CUDA, the instructions to compile tensorflow are slightly different, and I will not cover them in this post. Identifying the python installation location and library paths can be done using the following commands. I have also included the steps for finding the locations of the gcc and g++ compilers:
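A quick way to collect those four pieces of information (these are standard shell commands, not necessarily the exact ones from the original post):

```bash
which python3                                             # 1) python installation location
python3 -c "import site; print(site.getsitepackages())"   # 2) python library path
which g++                                                 # 3) g++ location (optional, OpenCL only)
which gcc                                                 # 4) gcc location (optional, OpenCL only)
```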

We now have all the information needed to configure the build process. We can proceed using the following line:
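The configuration script lives at the root of the tensorflow source tree:

```bash
./configure    # answer the prompts with the paths collected above
```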

If you encounter a Java (openjdk) related error at this point:

Purge openjdk-9 and reinstall the jdk-8 version, using the instructions below:
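A sketch of that swap (the exact openjdk-9 package name on your system may differ):

```bash
sudo apt-get purge openjdk-9-jdk
sudo apt-get install -y openjdk-8-jdk
```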

Now, try ./configure again. Once the build process is configured properly, we can go ahead and build the installer using the following commands:
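The build commands would have looked roughly like this; the --copt flags match the instruction set extensions listed earlier, and the wheel file name in the last line is only an example (yours will differ):

```bash
# Build an optimized pip package for the CPU features listed earlier
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-msse4.2 //tensorflow/tools/pip_package:build_pip_package

# Generate the wheel in /tmp/tensorflow_pkg and install it
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
sudo pip3 install /tmp/tensorflow_pkg/tensorflow-1.0.1-cp35-cp35m-linux_x86_64.whl   # example file name
```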

Everything should proceed smoothly, but the build process is going to take some serious time. An octa-core 2.3GHz Intel Xeon powered virtual machine needs around 30 minutes to complete this process. So, plan ahead. A short-notice deployment is impossible if one is looking to build the installer from scratch.

If the last step above threw a file not found error, it can be resolved by peeking into the build directory for the correct installer name.
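For example:

```bash
ls /tmp/tensorflow_pkg/    # shows the exact .whl file name that was built
```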

Once you have the correct installer name, update the last line of code above with that name and the installation process should finish without any error messages. If we manage to finish all the steps above, it means we have successfully built and installed an optimized tensorflow package, compiled to take advantage of the processor instruction set extensions.

Finally, to check whether tensorflow can be imported into python, use the next few lines of code. The steps are: first, exit the tensorflow source directory, then invoke python and import tensorflow.
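A minimal check along those lines:

```bash
cd ~                                                       # leave the tensorflow source directory
python3 -c "import tensorflow as tf; print(tf.__version__)"
```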

The import tensorflow line in the python environment should proceed with no errors. Hurrah, we have a working tensorflow library that can be imported in python. We will need to run some tests to ensure that everything is in order, but no errors up to this point means smooth sailing ahead.

I have described a step-by-step process of building and installing tensorflow, in the order that has made sense in my own work. These steps have worked extremely well for me so far. The optimized tensorflow installation has cut the run-time of some tasks by a factor of ten. As an added advantage, I no longer have messages in the python console asking me to optimize my tensorflow installation.

Happy machine learning in the cloud everyone.

This work is done as part of our startup project nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.


In the cloud – Geography agnostic enterprise using Google Cloud Platform.

The concept of a geography/location agnostic enterprise is very simple. Whether I am in Beijing or Boston, Kent or Kentucky, New Delhi or New York, Silicon Valley or Silicon Wadi, Qatar or Quebec, I should have access to a standard set of tools to run the company. Moving between geographic locations is a hard challenge for any enterprise. One of the first problems we wanted to solve is how quickly we can deploy some of our tools to countries around the world. Part of the reason I envision my mission with nanøveda as a geography/location agnostic enterprise is that the problem we are trying to solve is universal. We want people around the world to have uniform access to our technology. It will also help us become better at business.

I went looking for answers to this problem, and found a brilliant solution a few days back. Recently, I got an invite for a trial of Google Cloud Platform (GCP). Google was kind enough to give me a $300 credit towards creating applications on their cloud platform. I was very excited to try this cloud computing platform from one of the leaders in computing. Finally, last Friday, I decided to explore GCP. For me, cloud computing brings two advantages: 1) zero time and cost overhead of maintaining a set of in-house linux servers; 2) creating a truly geography agnostic business. I am running an Ubuntu 16.10 workstation for machine learning experiments. Most of the tasks handled by this workstation have already started to overwhelm its intended design purpose. Therefore, I was actively looking for solutions to expand the total compute infrastructure available to me. It was right at this moment that I turned to Google to help solve our compute infrastructure problem.

I had never used GCP before. Therefore, I had to go through a phase of experimentation and learning, which took approximately a day. Once I learned how to create a virtual machine, everything started to fall into place. To check that the VM's resources were seen properly by the guest OS, I ran some basic diagnostic tests.
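The original commands are not shown here; some standard checks along those lines are:

```bash
uname -a    # kernel and architecture reported by the guest OS
lscpu       # number of virtual CPUs and supported instruction set extensions
free -h     # memory visible to the vm
df -h       # attached disk space
```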

GCP has an interesting flat-namespace object storage feature called Buckets. Buckets allow the virtual machine to share data with a remote computer very conveniently, even over a web interface. Google has a command-line tool called gsutil to help its users streamline the management of their cloud environment. One of the first commands I learned was to transfer files from my local computer to the object storage space. Here is an example:
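A sketch of such a transfer (the bucket and file names below are placeholders, not the ones from the post):

```bash
# Copy a local file into a Cloud Storage bucket
gsutil cp ./local-data.csv gs://my-example-bucket/
```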

Once I learned file transfer, the next step was to learn how to host a virtual machine. After I set up an Ubuntu 16.10 virtual machine in the cloud, I needed to configure it properly. Installing the necessary linux packages and python libraries was easy and fast.
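For illustration, a typical first round of setup might look like this (the actual package list from the post is not shown here):

```bash
sudo apt-get update
sudo apt-get install -y python3-pip
pip3 install numpy scipy pandas scikit-learn
```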

After the VM was configured to run the code I wrote, the next step was to test file transfer to the VM itself. Since the VM and the object storage are beautifully integrated, file transfer was super quick and convenient.
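From inside the VM, the same gsutil tool pulls files out of the bucket; again, the names below are placeholders:

```bash
gsutil cp gs://my-example-bucket/test.py .
gsutil cp gs://my-example-bucket/input-data.csv .
```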

After the code and all the dependent files were inside the cloud VM, the next step was to test the code in python. The shell command below executes the code in the file test.py.
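Most likely simply:

```bash
python3 test.py
```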

Since some of the code takes hours to execute, I needed a way to create a persistent ssh session. Google offers a web-browser based ssh client. The browser ssh client is a simple, basic way of accessing a virtual machine. But for longer sessions, creating a persistent session is ideal. Since my goal is to make most of my computing tasks as geography agnostic as possible, I found a great tool for linux called screen. Installing screen was very straightforward.
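On Ubuntu this is a one-liner:

```bash
sudo apt-get install -y screen
```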

Once screen was installed, I created a screen session by typing screen in the terminal. The screen session works just like the terminal. But if you are using ssh, it lets the commands being executed keep running even after the ssh connection is disconnected. To detach from a screen session while leaving it running, use the keyboard shortcut ctrl+a followed by d.

To resume a screen session, just type screen -r in the VM terminal. If there are multiple screen sessions running, you will have to specify which session needs to be restored.
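Putting the screen workflow together (the session id in the last line is only an example):

```bash
screen                  # start a new session and run the long job inside it
# detach with ctrl+a followed by d; the job keeps running
screen -ls              # list the sessions that are still running
screen -r               # reattach when there is only one session
screen -r 12345         # or reattach to a specific session by its id
```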

The ssh + screen combination is a life saver for tasks that require routine management and need a lot of time to execute. It allows a VM administrator to turn any network connection into a highly persistent ssh session.

The combination of Google Cloud object storage, easy networking with the VM, ssh and screen has allowed me to move some of my complex computing tasks to the cloud in less than a day. The simplicity and lack of cognitive friction of GCP has taken me by surprise. The platform is extremely powerful and sophisticated, and yet very easy to use. I have future updates planned on the progress and evolution of my usage of GCP for our start-up's computing needs.

I am still amazed by how easy it was to take one of the most important steps in creating a truly geography/location agnostic enterprise with the help of Google Cloud Platform. I have to thank the amazing engineering team at Google for this brilliant and intuitive cloud computing solution.

This work is done as part of our startup project nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.


Pi day – Calculate 2017 digits of pi using Python3

Here is a short and elegant piece of code to calculate 2017 digits of pi, implemented in python3. In three lines of very simple python code, we are going to calculate 2017 digits of pi.
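The original three lines are not reproduced in this copy of the post; here is a three-line sketch that does the same job using the mpmath library (my choice of library, not necessarily the one used in the post):

```python
from mpmath import mp
mp.dps = 2017    # work with 2017 decimal digits of precision
print(mp.pi)     # pi evaluated and printed to 2017 digits
```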

The output is pi to 2017 digits, beginning 3.14159265358979…

Happy 2017 pi day.

The code is available on GitHub.

This work is done as part of our startup project nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.


Installation notes – OpenCV in Python 3 and Ubuntu 17.04.

These are my installation notes for OpenCV with python3 on Ubuntu 17.04 linux. These notes will help you start a computer vision project from scratch. OpenCV is one of the most widely used libraries for image recognition tasks. The first step is to fire up the linux terminal and type in the following commands. This first set of commands will fulfill the OS dependencies required for installing OpenCV on Ubuntu 17.04.

Step 1:

Update the available packages by running:
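For example:

```bash
sudo apt-get update
sudo apt-get upgrade -y    # optional, brings installed packages up to date
```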

Step 2:

Install developer tools for compiling OpenCV 3.0:
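Something along these lines (the exact list in the original post may differ slightly):

```bash
sudo apt-get install -y build-essential cmake git pkg-config
```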

Step 3:

Install packages for handling image and video formats:
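Typically:

```bash
sudo apt-get install -y libjpeg8-dev libtiff5-dev
```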

Add the installer repository from an earlier version of Ubuntu to install libjasper and libpng, and install these two packages:
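One common way to do this on Ubuntu 17.04 is to pull the two packages from the 16.04 (xenial) archives; treat this as a sketch rather than the post's exact commands:

```bash
# libjasper and libpng12 were dropped from 17.04; fetch them from the xenial archives
sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu xenial main universe"
sudo apt-get update
sudo apt-get install -y libjasper-dev libpng12-dev
```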

Install libraries for handling video files:
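Typically something like:

```bash
sudo apt-get install -y libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev
```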

Step 4:

Install GTK for OpenCV GUI and other package dependencies for OpenCV:
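A representative command (GTK 2 vs GTK 3 is a choice; the original post's pick is not shown here):

```bash
sudo apt-get install -y libgtk-3-dev libatlas-base-dev gfortran
```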

Step 5:

The next step is to check that our python3 installation is properly configured. To check the python3 configuration, we type the following command in the terminal:
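It is not clear from this copy which exact command the post used; one plausible sanity check (an assumption on my part) is to print the include paths reported by the python3 config helper and verify that both point at the same directory:

```bash
python3-config --includes
# illustrative output with matching locations:
# -I/usr/include/python3.5m -I/usr/include/python3.5m
```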

The output of the command above indicates two location pointers: the target directory and the current directory of the python configuration file. These two locations must match before we proceed with the installation of OpenCV.

Example of a working python configuration location without any modification will look something like this:

The output from the terminal has a specific order, with the first pointer indicating the target location and the second pointer indicating the current location of the config file. If the two location pointers don’t match, use the cp shell command to copy the configuration file to the target location. We use the sudo prefix to run the copy with administrative rights. If your linux environment is password protected, you will very likely need the sudo prefix; otherwise, you can skip it and execute the rest of the line in the terminal. I am using sudo here because my linux installation requires it.

Step 6 (Optional):

Set up a virtual environment for python:
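Assuming the standard virtualenv/virtualenvwrapper tooling:

```bash
sudo pip3 install virtualenv virtualenvwrapper
```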

The next step is to update the ~/.bashrc file. Open the file in a text editor; here we are using nano.
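For example:

```bash
nano ~/.bashrc
```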

At the end of the file, paste the following text to update the virtual environment parameters:
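The usual virtualenvwrapper block looks like this (paths may differ on your system):

```bash
# virtualenv and virtualenvwrapper settings
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
```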

Now, either open a new terminal window or apply the changes made to the bashrc file by running:
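That is:

```bash
source ~/.bashrc
```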

Create a virtual environment for python named OpenCV:
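With virtualenvwrapper this is:

```bash
mkvirtualenv OpenCV -p python3
```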

Step 7:

Add the python developer tools and numpy to the python environment in which we want to run OpenCV, by running the code below:
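A sketch of that step (assuming the OpenCV virtual environment from the optional step above):

```bash
sudo apt-get install -y python3-dev
workon OpenCV     # skip this line if you are not using a virtual environment
pip install numpy
```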

Step 8:

Now we have to create a directory for the OpenCV source files and download them from GitHub to our local directory. This can be achieved using the mkdir and cd commands along with git. Here is an example:
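For instance (the directory name is my own placeholder):

```bash
mkdir -p ~/opencv_source && cd ~/opencv_source
git clone https://github.com/opencv/opencv.git
```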

We also need the OpenCV contrib repo for access to standard keypoint detectors and local invariant descriptors (such as SIFT, SURF, etc.) and newer OpenCV 3.0 features like text detection in images.
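It comes from the companion repository:

```bash
cd ~/opencv_source
git clone https://github.com/opencv/opencv_contrib.git
```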

Step 9:

The next step is to build, compile and install the packages from the files downloaded from GitHub. One thing to keep in mind here is that we are now working in the newly created folder in the terminal, not in the home directory. An example of linux terminal code to prepare the OpenCV build is given below. Again, I use the sudo prefix wherever elevated privileges are needed while executing the commands. It may not be necessary on all systems, depending on the nature of the linux installation and how system privileges are configured.
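A representative cmake configuration (flags beyond these basics are a matter of preference and may differ from the original post):

```bash
cd ~/opencv_source/opencv
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D OPENCV_EXTRA_MODULES_PATH=~/opencv_source/opencv_contrib/modules \
      -D BUILD_EXAMPLES=ON ..
```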

Step 10:

The final step of compiling and installing from source is a very time consuming process. I have tried to speed this process up by using all the available processors to compile. This is achieved by passing the output of nproc --all to the -j argument of the make command.

Here are the command line instructions to compile and install OpenCV from the build we just configured:
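Roughly:

```bash
make -j$(nproc --all)    # compile using all available processor cores
sudo make install
sudo ldconfig
```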

For OpenCV to work in python, we need to update the binding files. Go to the install directory and get the file name of the OpenCV binding that was installed. It is located in either dist-packages or site-packages.

The terminal command lines are the following:
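For a default python 3.5 install the listing would be (adjust the version in the path to match your python):

```bash
ls -l /usr/local/lib/python3.5/site-packages/
ls -l /usr/local/lib/python3.5/dist-packages/
```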

Now we need to update the bindings in the python environment we are going to use. We also need to rename the symbolic link to cv2 to ensure we can import OpenCV in python as cv2. The terminal commands are as follows:
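A sketch, assuming the OpenCV virtual environment and python 3.5; the exact .so file name comes from the listing in the previous step:

```bash
cd ~/.virtualenvs/OpenCV/lib/python3.5/site-packages/
ln -s /usr/local/lib/python3.5/site-packages/cv2.cpython-35m-x86_64-linux-gnu.so cv2.so
```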

Once OpenCV is installed, type cd ~ to return to the home directory in the terminal. Then type python3 to launch the python environment in your OS. Once you have launched python3 in your terminal, try importing OpenCV to verify the installation.

Let us first deactivate the current environment by running:
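That is:

```bash
deactivate
```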

We need to ensure we are in the correct environment. In this case, we should activate the virtual environment called OpenCV and then launch the python interpreter. Here are the terminal commands:
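With virtualenvwrapper:

```bash
workon OpenCV
python3
```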

Let us try to import OpenCV and get the version number of the installed OpenCV build. Run these commands in the python interpreter:
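Inside the python3 interpreter:

```python
import cv2
print(cv2.__version__)   # e.g. '3.2.0', depending on the checkout you built
```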

If the import command in the python3 console returned no errors, then python3 can successfully import OpenCV. We will still have to do some basic testing to verify that OpenCV has been installed correctly in your Ubuntu 17.04 linux environment. But if one manages to reach this point, it is a great start. Installing OpenCV is one of the first steps in preparing a linux environment to solve machine vision and machine learning problems.

A more concise version of the instructions is also available on GitHub. The Moad Ada dev-ops server for deep learning comes with a linux environment pre-installed with OpenCV. This makes it easier for artificial intelligence & deep learning application developers to develop machine vision applications. The dev-ops server can be ordered here: Moad online store.


Fair trade – How hard can it be?

This year’s United States Presidential address to Congress featured an impassioned plea by Pres. Donald J. Trump for focusing more on fair trade. His reasoning was clear: fairer global trade will have a less disruptive effect on societies. His concerns were directed toward ordinary American tax payers and his interest in protecting their livelihood. But his call for incorporating fair trade into globalization has another beneficiary: the entire world itself. This is an unintended consequence for a US president who has admitted to eliminating any pretension of acting as “the leader of the free world”. American policies will be a powerful force in dictating the direction of the global economy for decades to come. It may not be a deliberate attempt to change the world, but a mere result of being the most powerful and richest country on earth.

To tackle fair trade, a key issue the president raised was creating a better system of taxation across nation states. The goal of his proposed revised taxation structure is to make trade more equitable between nation states. The example he used to reiterate his logic on fair trade and taxation focused on Milwaukee, Wisconsin based Harley Davidson. Harley Davidson has been part of the American journey for the past 114 years. Yet it has difficulty competing in some of the world’s largest motorcycle markets, including India.

The North American motorcycle market accounts for only 2% of global motorcycle sales volume. It is tiny compared to the rest of the global motorcycle marketplace. The largest motorcycle manufacturer in the world is Minato, Tokyo, Japan based Honda, and India is one of Honda’s largest volume markets. In India, Honda has established large local manufacturing facilities to produce popular, mass market, low displacement motorcycles. The sheer volume of monthly sales of Honda motorcycles overshadows the annual sales of Harley Davidson, not just in India, but around the world.

The reason Harley Davidson is overshadowed by motorcycle manufacturers from the rest of the world is partly their strategy of catering to an exclusive set of customers. They position themselves as a lifestyle brand more than a motorcycle brand. Most of the sales volume in Asia is for commodity, commuter, low displacement motorcycles, and Harley Davidson has no products to compete in this segment. For European markets, Harley Davidson again has little to offer in the sports bike segment. Harley Davidson’s struggles in global markets are not just due to taxation and duties on motorcycles.

If the interest in making global trade fairer is genuine, one has to consider another key component: the human factor. When even the world’s most powerful democracy starts crying foul over global trade, it makes one wonder about the real consequences of global trade in this world.

Recently, the privately held food manufacturer Cargill came under the microscope for its questionable environmental practices in bringing food to the table for millions of Americans. Cargill and its local suppliers circumventing Brazilian laws meant to prevent deforestation is another great example of the desperate need to incorporate fair trade into globalization. The Brazilian suppliers of Cargill are empowered by the capital and resources of the North American market, which even local governments can’t fight against.

The Brazilian soybeans could have been replaced by produce sourced from North American farmers, who adhere to more stringent environmental standards than Cargill’s Brazilian counterparts. Instead, Cargill’s decision to cut upfront costs for livestock feed explicitly demonstrates the flaws in global trade. A call for fair free trade also means placing restrictions on companies like Cargill. Current trade practices allow unhinged environmental damage in the process of bringing fast-food burgers to the American market. Therefore, a call for a fairer traded world also means better protection of Brazilian rain-forests.

The global trading of coffee beans is another great example illustrating the difficulty of implementing fair trade. Coffee is one of the largest volume traded commodities in the world. Yet only 30% of the coffee beans produced worldwide meet the current definition of fair trade. Even the millennial favorite coffee chain, Starbucks, has managed to source only about 8% of its coffee beans through fair trade. The current mechanisms to create fair traded coffee beans are riddled with massive economic and social problems. An important issue that comes to my mind: despite the coffee chains marketing fair traded coffee at a premium price, only a fraction of the added price paid by the customer reaches the producer.

The discussion on fair global free trade is a great running start to create a more equitable world. Creating uniform taxation rules across nation states is the first logical step towards this goal. But the concept of fair trade extends way beyond mere taxation. I am excited that the US president has initiated a conversation on fair trade. It is an important topic with more substance to it than just the mettle of selling motorcycles made in Milwaukee to the milkmen in Mumbai.

Descriptions for photographs featured in this post, from top to bottom: 1) Photograph of the White House, 1600 Pennsylvania Ave NW, Washington, DC 20500, United States, at dusk, via Wikipedia. 2) A vintage style Harley Davidson advertisement poster featuring model Marisa Miller, obtained from the public domain via Pinterest and reproduced under fair usage rights. 3) Jaguar (Panthera onca) photographed at the Toronto zoo, obtained via Wikipedia. The jaguar is a near threatened apex predator. Its natural habitat includes the Brazilian rain-forests, which are currently under threat of massive deforestation due to unfair trade practices followed by food suppliers like Cargill. 4) Commercial coffee farm in Jinotega, Nicaragua. Source: Lillo Caffe, obtained through a Dartmouth college blog-post on subsistence agriculture in Nicaragua.


Formula One 2017 – Exciting changes for team Mercedes.

Today is the first day of pre-season testing for the 2017 formula one season. I am very excited about the 2017 season. The new regulations are expected to make the cars go faster. One of the criticisms of the hybrid era formula one races has been the lack of excitement. Faster cars are definitely going to make this sport very exciting indeed.

Along with the car changes there is also a change in the driver lineup. Last year’s world champion, Nico Rosberg, retired from the sport. Filling his spot is Finnish driver Valtteri Bottas. Bottas moved from Williams-Martini Racing, a Mercedes powered team, to the factory team. Lewis Hamilton is the veteran driver for the team. I am expecting Hamilton to be the faster of the two. Unlike last year, I also hope Hamilton will have better luck with reliability. Mercedes AMG Petronas have made some significant changes to the 2017 car.

The video below explains all the regulation changes in place for the 2017 season.

Here are my thoughts on some of the interesting details I found out about the Mercedes AMG Petronas W08 2017 car.

One of the important changes to the 2017 season W08 EQ+ powered car is the delta shaped front wing with a 12.5 degree back sweep angle. It features cascade winglets similar to the 2016 season. The delta front wing is a regulation change for the 2017 season.

The W08 has a slender nose, shorter camera pods and a new S duct design with deeper grooves and an embedded antenna in the middle. The four element turning vanes attached to the new nose design feature their own individual footplates, which are in turn divided into a total of seven individual aerofoils. There is also a boomerang shaped fin sitting above the barge-boards.

The car also features a taller front suspension, achieved by placing the upper wishbone higher. A taller wishbone frees up space between suspension elements and creates cleaner, increased airflow into the aerodynamic components behind it.

The primary barge-board occupies almost all of the box area set out by the FIA regulations. It also features a number of perforations along the base to help optimize the airflow and seal off the splitter behind. The perforated body work is designed to create elongated vortices and optimize the surrounding free-flow of air. There is also a more detailed out-sweep floor board design to optimize airflow underneath the car. Another interesting addition to the floor board is a simpler approach to displacing turbulent airflow, with nine perforations ahead of the rear tires, a relatively simple design compared to previous years. These floor-board perforations allow cleaner airflow over the rear tires by preventing the formation of vortices.

The overall length of the car has increased by 15 cm. It also features very big, complexly sculpted vane towers on either side of the barge-boards. The side pods are extended for better engine air intake. Since the 2017 design increases the airflow to the engine, I am expecting an increase in power output for the otherwise largely unchanged internal combustion design. The side-pods also feature highly detailed three element flow conditioners for maximizing the deflection of the wake from the side-pod undercuts.


The rear of the car features an FIA regulation mandated slanted, bow shaped rear wing, which is shallower at the tips than at the center. There is also an open leading edge, slotted end-plate design for the rear wing. A narrow ‘T-shaped’ mini-wing is also placed ahead of the rear tires. The new wing design and the mini-wing are aimed at making the car more aerodynamically balanced. I am expecting slight changes to the wing design depending on the nature of the race circuit.

Even though the initial reveal at Silverstone showed a car with a subtle ‘shark-fin’ element over the rear engine cover, the race car that debuted at Barcelona has a more prominent ‘shark-fin’ aero-structure. The more prominent ‘shark-fin’ engine cover element also features a clever engine intake point, possibly to assist in cooling some of the kinetic energy recovery (MGU-K) components.

Another area of important change is the hybrid power train. Here is a video from the Mercedes team explaining all the significant engine changes for the 2017 season.

According to Andy Cowell, the chief of Mercedes High Performance Powertrains, the new engine features updated high-power switches for better efficiency. The new engine design is aimed at taking advantage of the added down-force and grip from the new aerodynamic design. Since the full-throttle time is projected to increase for the 2017 races, both the engine and the motor generator units (MGUs) for the energy recovery systems needed an upgrade. The drive cycle change expected for the 2017 races has led to the development of a more efficient MGU-H and an updated MGU-K system. I am expecting increased reliability from the power train. The removal of the token system for engine development means there is room for significant improvement in performance as the season evolves.

An interesting addition to the new wing mirrors is the integration of infrared cameras for tire temperature monitoring. The Mercedes team has been partnering with mobile communication giant Qualcomm, and this partnership has produced some exciting and significant improvements in telemetry and data acquisition from the new car.

The tires have changed for the 2017 season. The new Pirellis on the W08 are wider than last year’s, with 305 mm front and 405 mm rear tires. These tires will improve overall grip. They also allow better transfer of power to the road, which means increased cornering speeds.

With the wider tires, improved aerodynamics for better stability and down-force, and an updated power-train, I am expecting to see a 3 to 3.5 second improvement in lap-times with the new car over the 2016 season.

Here is an awesome video from the Mercedes AMG Petronas team featuring their drivers: Lewis Hamilton and Valtteri Bottas.

From the looks of it, team Mercedes has another winning package. Hamilton and Bottas are two very talented drivers currently in formula one. The 2017 race car has significant improvements over last year’s in areas that matter. The engineers at Mercedes have managed to retain all the key elements of the W07 car that worked very well for the team last year. The new W08 features extensive updates, but remains an evolutionary design over last year’s car. Considering the dominance of the W07 over its rivals last season, an evolutionary design should work very well for the Mercedes team.

If the early track testing is any indication, the 2017 formula one season is going to be hugely exciting. My bet is on a close Mercedes-Ferrari battle, with Mercedes having a slight upper hand.

All photographs featured in this blog post were taken at the Silverstone race circuit 2017 Silver Arrows ‘Collateral Day’ session, Towcester NN12 8TN, UK, and photographed by Steve Etherington. The first picture at the beginning of this post features, from left to right: Lewis Hamilton, Toto Wolff and Valtteri Bottas, behind a Mercedes AMG Petronas Silver Arrows W08 formula one race car.


New kid in the block – Homomorphic encryption.

Healthcare data presents an important challenge from a cryptography standpoint. It has to be both private and useful. At first glance, these look like two completely contradictory requirements. Data encrypted using traditional encryption techniques loses the usability factor. In a traditional encryption scheme, unless the end user of the encrypted data has a decryption key, the data is completely useless.

But what if there is a new way of encrypting data? A technique where the end user can perform certain relevant functions on the encrypted data without ever decrypting it. As it turns out, there is a mechanism for accomplishing this. It is called a homomorphic encryption scheme.

This scheme was first proposed by Ronald L. Rivest, Leonard Adleman and Michael Dertouzos. The general expression for a fully homomorphic encryption scheme is:
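The formula itself did not survive in this copy of the post; the standard way to state the fully homomorphic property (my reconstruction, for an encryption function Enc, a decryption function Dec and an evaluation function Eval) is:

\[ \mathrm{Dec}\big(\mathrm{Eval}(f,\ \mathrm{Enc}(m_1), \ldots, \mathrm{Enc}(m_n))\big) = f(m_1, \ldots, m_n) \]

for any efficiently computable function f and messages m_1, …, m_n.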

There are currently a few cryptographic libraries that can be classified as fully homomorphic encryption schemes.

The key advantage of a fully homomorphic encryption scheme is the ability to perform mathematical calculations on the cipher text. For healthcare data to be useful, one needs to perform these calculations on the data. Using a fully homomorphic encryption scheme, these computations can be performed without ever decrypting the data.
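As a toy illustration of computing on ciphertexts, here is a sketch using the additively homomorphic Paillier scheme via the python-paillier (phe) library. This is my own example and a weaker, partially homomorphic scheme, not one of the fully homomorphic schemes discussed above, but it shows the core idea of operating on encrypted values:

```python
# pip install phe  -- python-paillier, an additively homomorphic scheme
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Two encrypted lab values, e.g. contributed by different clinics
a = public_key.encrypt(120)
b = public_key.encrypt(98)

# A third party can add the ciphertexts without ever seeing the raw values
encrypted_sum = a + b

# Only the key holder can decrypt the aggregate result
print(private_key.decrypt(encrypted_sum))   # 218
```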

Homomorphic encryption is the next big step in big data and artificial intelligence. As more and more healthcare organizations look to reduce the cost of their IT infrastructure by adopting cloud computing, a truly homomorphic encryption scheme will not only protect the data, but also provide useful insights from these massive data-sets without ever compromising privacy.

Some of this work is done as part of our startup project nanoveda. For continuing nanoveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanoveda.

(The picture shows rows of protein data creating a colorful sequence display at the Genomic Research Center in Gaithersburg, Maryland. This image was created by © Hank Morgan – Rainbow/Science Faction/Corbis and obtained from the public domain via nationalgeographic.com)


Research and development – When is a good time to invest?

Businesses have limited resources, and managing them efficiently is an art. A key controversial area of spending is always research and development (R&D). As a start-up, we are even more constrained than a regular, well established business. Therefore the question I often encounter is: is it a good idea to invest in research and development?

I have given that question some thought. My answer is a resounding yes. I am pitching this idea on top of Amar G. Bose’s vision of the role of research and development in business. A trendy approach for most corporations is to keep research and development efforts to a bare minimum, mostly in the name of shareholder or investor value. Most businesses don’t view themselves as flag bearers of innovation. They orient themselves to protect the status quo of their commercial enterprise. When the going gets tough, these enterprises cut spending on R&D to shift the blame away from having a poor product portfolio in the first place.

This is a counter-intuitive, yet widely adopted practice in the world of business. Amar Bose had a very different take. According to Bose, when the economy is going through a recession, or when a company is struggling to find a better place in the market, that is the best time to invest in research and development. His reasoning: cutting money from R&D takes away the oxygen needed to come up with the newer products and innovations the company needs in the first place. By the time the recession is over, or when customers realize there is a gap between their expectations and what the product delivers, the company will no longer be in a position to meet the increased expectations of the customers or the business environment. New competitors will fill in the gap.

My suggestion is: always invest in R&D and be bullish about those investments. Even if the business is just a mom and pop store in a highly popular tourist neighborhood, R&D will work. Especially in an era when social media and data science have become the lifeblood of businesses, all types of businesses, whether small or large, need to invest in R&D. By R&D I don’t mean running a lab with a bunch of scientists in white coats. Research and development includes how to improve supply chain efficiency, how to improve communication and PR, how to improve cash inflow, how to better develop targeted marketing, and so on.

Science and business go hand in hand. Science has an empirical view of the world. Businesses need to have an empirical view of financial performance. When the two merge, it is a recipe for growing into a great business. An approach of heavy R&D investment will help businesses spot the emerging blind-spots in the market place and solve them as quickly as possible.

A great example is Exxon-Mobil. Despite being heavily invested in fossil fuels, the company put billions of dollars into scientific research on climate science. When the results of the research started coming out, it was a completely unexpected outcome for the executives. But it still provided a valuable tool to foresee the evolving energy market. How Exxon-Mobil dealt with the unexpected results is highly controversial, but I admire the ability of an organization to fund scientific research that had far-reaching consequences for its traditional business model.

My view of R&D is that it is the stethoscope of the market place. It allows us to listen for small shifts in rhythm well before those shifts turn into a disastrous event. This listening tool will help enterprises avoid being blind-sided by large scale disruptive changes in the market place.

This work is done as part of our startup project nanoveda. For continuing nanoveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanoveda.

(The picture of the International Space Station (ISS) taken on 19 Feb. 2010, backdropped by Earth’s horizon and the blackness of space. This image was photographed by a Space Transportation System (STS) -130 crew member on space shuttle Endeavour after the station and shuttle began their post-undocking relative separation. Undocking of the two spacecraft occurred at 7:54 p.m. (EST) on Feb. 19, 2010. The picture was obtained from public domain via nasa.gov)


Understanding brain imaging data – 65,000 shades of gray.

I wanted to explain how the structural and functional MRI image format NIFTI works. NIFTI stands for Neuroimaging Informatics Technology Initiative. It is a data storage format currently supporting 8 to 128 bit signed or unsigned integer, floating point and complex data. The most common implementation of NIFTI is 16 bit signed integer storage. NIFTI images usually have the file extensions *.nii or (*.hdr and *.img) pairs. This image format is important for two reasons: 1) the format stores the spatio-temporal imaging details, and 2) it supports compression to allow better space management.

Anyone who has undergone a brain scan knows that the picture of the brain from an MRI or CT scanner is usually a grayscale picture with 65,536 shades of gray. The raw files from the scanner are usually in the DICOM (Digital Imaging and Communications in Medicine) format, with a *.dcm extension. The DICOM format is similar to the RAW image format for cameras. Instead of the pixel read-out stored in RAW images, DICOM images store scanner read-outs.

Each scan of a subject usually contains several DICOM files. This is both an advantage and a disadvantage. For sharing specific image slices, DICOM is extremely useful. But most interpretation purposes require full image sets, and a few slices from the scanner become less useful. This is where the NIFTI format comes to the rescue.

Since the format stores the entire sequence in a single file, the issues of managing a large number of files are eliminated. Interpreting a specific image based on the images preceding and succeeding it also becomes easier, thanks to the ordered arrangement of images.

There is another important advantage of NIFTI. From an analytical point of view, brain imaging data is most useful when treated as a 3D data structure. Even though the individual components of a NIFTI file are 2D images, interpretation becomes more reproducible if we treat them as a 3D volume. For this purpose, the NIFTI format is the best format to work with.
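For a concrete feel of that 3D structure, here is a small sketch that reads a NIFTI file with the nibabel library (my choice of tool; the file name is a placeholder):

```python
import nibabel as nib

img = nib.load('subject01_T1.nii')        # also handles .nii.gz
volume = img.get_fdata()                  # the whole scan as a numpy array

print(volume.shape)                       # e.g. (256, 256, 170) for a 3D structural scan
print(img.header.get_data_dtype())        # e.g. int16, the common 16 bit signed storage
```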

An example is the use of a machine learning tool called 3D convolutional neural networks (3D CNNs). A 3D CNN provides the 3D spatial context of a voxel. For image sequences like brain scans, identifying various structures or any abnormalities requires the 3D spatial context of a voxel. The 3D CNN approach is very similar to looking at a video and trying to identify what the scene is about. Instead of using it for video scene recognition, a 3D CNN can be trained to detect specific features in a brain scan.
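A minimal sketch of such a network, written with keras on the tensorflow backend (the input shape, layer sizes and binary output are illustrative assumptions, not a model from the post):

```python
from keras.models import Sequential
from keras.layers import Conv3D, MaxPooling3D, Flatten, Dense

# Input: a 64x64x64 voxel volume with a single intensity channel
model = Sequential([
    Conv3D(16, kernel_size=(3, 3, 3), activation='relu', input_shape=(64, 64, 64, 1)),
    MaxPooling3D(pool_size=(2, 2, 2)),
    Conv3D(32, kernel_size=(3, 3, 3), activation='relu'),
    MaxPooling3D(pool_size=(2, 2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(1, activation='sigmoid'),        # e.g. abnormality present vs. absent
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
```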

This work is done as part of our startup project nanoveda. For continuing nanoveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanoveda.