Hardware for artificial intelligence – My wishlist.

We recently developed an incredible machine learning workstation. It was born out of necessity, while we were developing image recognition algorithms for cancer detection. The idea was so powerful that we decided to market it as a product to help developers implement artificial intelligence tools. During this launch process, I came across an interesting article from Wired on Google’s foray into hardware optimization for AI, the Tensor Processing Unit (TPU), and the release of its second-generation TPU.
I was a little confused by the benchmark published by Wired. The TPU is an integer-ops co-processor, yet Wired cites teraflops as a metric. The article never makes clear which specific operations those numbers refer to: tensor operations or FP64 compute. If the figures really are FP64 compute, then the TPU is faster than any GPU currently available, which I suspect isn’t the case, given its low power consumption. If they refer to tensor ops, then the TPU has only about a third of the performance of the latest generation of GPUs, like the recently announced Nvidia Volta.
Then I came across a very interesting and more detailed post from Nvidia’s CEO, Jensen Huang, that directly compared Nvidia’s latest Volta processor with Google’s Tensor Processing Unit. Because it never specifies what its teraflops figures stand for, the Wired piece reads like public relations, even though it comes from a fairly well-established publication. Nvidia’s blog post puts the performance metrics into better context than the technology journalist’s article does. I wish there were a little more specificity and context to the benchmarks Wired cites, instead of a copy-paste of the marketing material Google passed along. From a market standpoint, I still think TPUs are a bargaining chip that Google can wave at GPU vendors like AMD and Nvidia to bring down the prices of their workstation GPUs. I also think Google isn’t really serious about building the processor. Like most Google projects, the goal is to invest the least amount of money for the maximum profit. Therefore, the chip is unlikely to contain any IP related to really fast FP64 compute.
Low-power, many-core processors have been under development for many years. The Sunway processor from China is built on a similar philosophy, but optimized for FP64 compute. Outside of one supercomputer center, I don’t know of any developer group working on Sunway. Another recent example in the US is Intel, which tried the same with its Knights Corner and Knights Landing range of products and fell flat on its face. I firmly believe Google is on the wrong side of history. It will be really hard to gain developer traction, especially for custom-built hardware.
Let us see how this evolves, whether it is just the usual Valley hype or something real. I like the Facebook engineer’s quote in the Wired article. I am firmly on the consumer GPU side of things. If history is a teacher, custom co-processors that are hard to program have never really gained market or customer traction. A great example is Silicon Graphics (SGI). They were once at the bleeding edge of high-performance computing and then lost their market to commodity hardware that became faster and cheaper than SGI’s custom-built machines.
More interest in making artificial intelligence run faster is always good news, and this is very exciting for AI applications in the enterprise. But I have another problem: Google has no official plans to market the TPU. A company like ours, moad, relies on companies like Nvidia developing cutting-edge hardware and letting us integrate it into a coherent, marketable product. In Google’s case, the most likely scenario is that Google will deploy the TPU only on its own cloud platform. In a couple of years, hardware evolution will make it as fast as or faster than any other product on the market, making their cloud servers a default monopoly. I have a problem with this model. Not only will it keep independent developers from leveraging the benefits of AI, it will also shrink the marketplace significantly.
I only wish Google published clear-cut plans to market their TPU devices to third-party system integrators and data center operators like us, so that the AI revolution becomes more accessible and democratized, just like what Microsoft did with the PC revolution in the 90s.

Automation – A completely different thinking.

I have been spending some time thinking about leadership. Recently, I had the opportunity to sit down and listen to an angel investor who specializes in life-science companies, particularly in New York. I was excited, because I had heard glowing puff pieces about this individual. But the moment he started to speak, I realized something: he has no real interest in this area of work. It showed in his presentation, where, in front of a very young and eager group of aspiring entrepreneurs, he showed up late for the talk and started going through slides created nearly two decades ago.

This disappointment reminded me of a recent article I came across in Harvard Business Review, which painted a dismal picture of an enterprise world where executives live in a bubble. I realized it is not just corporate executives; investors are living in a bubble too. These individuals speak words that would make sense only to a raving madman. The layers and filters they have built between themselves and the outside world have created an alternate reality for them. Judging from recent news cycles, incompetence seems more common among CEOs than in any other job known to humankind. This marks an interesting point in our social evolution, where the CEO has become one of the rare jobs in which incompetence is tolerated and yet highly rewarded.

This brings me to another interesting article on what exactly a job is and how to value it. This piece, published by BBC Capital, exposes an interesting conundrum of a capitalist economy. The jobs that are absolutely essential for human survival are the lowest paid, while the jobs we consider completely pointless are extremely highly paid. So the question of what is valuable to society arises here. Do we prioritize nonsense over our own survival? Our current social structure seems to indicate exactly that. According to Peter Fleming, Professor of Business and Society at City, University of London, who is quoted in the article: “The more pointless a task, the higher the pay.”

So almost all of our low-wage employment clusters around activities that are absolutely essential for the survival of a society. If we apply these two sets of ideas to another interesting question, the future of jobs, then I believe I already have the answer. By implementing mass automation across all the jobs that are mission critical for the survival of a human society, we eliminate all the low-wage jobs, and the individuals who previously held them are moved into positions where they are given a pointless job.

For the sake of clarity, imagine a situation where automation has been implemented in these highly important jobs. A great example is the restaurant worker. Imagine a future where all the restaurant workers who prepare food are replaced by an automated supply chain and a set of robots. The role of the individual is simply to manage that automated supply chain and those robots. Imagine our teenage daughters or sons applying for a job at one of these restaurants. Instead of being hired as cheap human labor, they will be hired as managers of the automated supply chain. Instead of needing years of school and college education that end up being pointless to society, even a high school graduate will be able to do this job, because society is run reliably and efficiently by a robust and resilient automated supply chain. The rest of the management hierarchy will remain the same, for corporate and regulatory reasons.

It seems to me that a highly automated society will create more value than one that still relies on human labor for its mission-critical jobs. This may seem counter-intuitive, but it is not. The value of human labor is a more abstract idea than one based on absolutes. In our current labor-dependent economies, only a handful of individuals are given free rein for incompetence and risk-taking. In an automated economy, the underlying resilience will let more individuals take more risks and be unfazed by the outcomes, because none of these activities are absolutely essential for the survival of a human society.

This trend of automation will not only massively simplify our daily activities; we will also have more time on our hands to do what we wanted to do in the first place, without worrying about thirst, hunger, hygiene or any of the other mission-critical activities of our lives. The friction of human existence is reduced by machines, which helps us create more value for society.

More individuals will have the opportunity to work at an executive level than today, even with minimal training. Automation will bring the democratization of executive jobs, not the loss of jobs. More people can have the freedom to be employed as CEOs and still be highly paid, because society runs on a highly reliable automated schedule. It will be a very different society, just like the massive shift from an agrarian economy to an industrial economy during the industrial revolution, but one that is much better paid and with far fewer unemployed, for sure.

This research on the current dismal state of executives and managers in the corporate sector is part of our mission to understand the overarching impact of automation and machine learning in healthcare. It is done as part of our cancer therapeutics company: nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

Image captions from top to bottom: 1) A screenshot from the television programme The Thick Of It, with Malcolm Tucker (played by Peter Capaldi), photographed by Des Willie, a great artistic portrayal of human dysfunction at the core of some of the most pointless, yet highly coveted jobs. (C) BBC, reproduced under fair usage rights. 2) A satirical portrayal of the recent troubles of United Airlines, published by Bloomberg, retrieved from the public domain through Google search and reproduced under fair usage rights. 3) A quote posted inside a London tube train car, with the hashtag bullshitjobs, that reads: “It’s as if someone were out there making up pointless jobs just for the sake of keeping us all working.” 4) Image of a burger-flipping robot in action, obtained from the public domain through Google image search and reproduced under fair usage rights.

Forced errors – Lessons from an accounting scandal.

In 2015, Toshiba Corporation, based in Minato, Tokyo, Japan, disclosed to its investors a major corporate accounting malpractice. The scandal dated back to the 2008 financial collapse. When market forces became unfavorable, Toshiba resorted to the terrible art of creative accounting, a.k.a. cooking the books.

Toshiba created a very interesting mechanism to cook the books. Instead of issuing direct orders to restate revenue and profits, the firm’s top executives created a strategy called “Challenges”. The “Challenges” were quarterly financial performance targets handed to the managers of various divisions, just a few days ahead of the submission deadlines for the quarterly financial reports. The system was designed specifically to pressure division heads into embracing the incredibly stupid act of creative accounting, instead of aiming at real improvements in corporate performance. The top executives were extremely confident that the performance pressure created by their “Challenges” system would eventually lead the company’s mid-level managers to resort to these despicable accounting practices.

I am amazed by the level of executive creativity that went into engineering a pressure-cooker situation for doing all the wrong things. The fiscal situation would have leveled off if Toshiba had started making money after the financial crisis. The problem was that the ingeniously devious top executives also started to believe in the cooked books. Armed with brilliant, yet fake, profit results, the company unleashed its ambitions upon the world.

Toshiba went ahead with its ambitious, capital-intensive plan to expand nuclear power generation in North America. Toshiba had already purchased the North American nuclear power behemoth Westinghouse. In 2006, two years before the impending global economic disaster that originated in North America, Toshiba bought Westinghouse Electric Company, a builder of nuclear power plants, from the British taxpayer-funded company British Nuclear Fuels Limited (BNFL). At the time, economists and analysts questioned the wisdom of BNFL selling its supposedly profitable nuclear power generation business to Toshiba. Those questions stemmed from the then widely held belief that the market for nuclear power generation was poised to grow rapidly in the next decade, a projection based on growing global demand for electricity, mostly from rapidly growing Asia, especially China and India.

The reasons for the sale were multi-factorial. Being a UK taxpayer-funded operation, BNFL had very little leverage in Asian markets like China and India. The British government of the day was unwilling to take on the extreme financial, marketing and operational risks of continuing BNFL’s operations. A few years later, the biggest secret was out: BNFL was in a huge financial mess and its operations were in turmoil, albeit invisible to the public eye back in 2006.

Behind the sale of Westinghouse, another key market force was in play. Even in 2006, the emerging pattern in power generation was a slow shift away from complex solutions like nuclear power toward simpler and more reliable ones like gas-fired power plants and solar energy. The business of building nuclear power plants has also always been riddled with extremely high risks, including large-scale cost overruns and unanticipated delays.

Even before the 2008 North American financial crisis, the appetite for large-scale, capital-intensive risks like funding nuclear power plants was grinding to a halt. The companies that operate these plants return a profit only after about forty years of commercial operation, while the operational life of a plant is approximately sixty years. Therefore, for almost two thirds of a nuclear power plant’s life cycle, it operates at a loss. Very few investment firms have the resources and the expertise to handle such a complex, long-term operation. In such a tough economic climate, a UK taxpayer-funded operation like BNFL had limited options: either sell its nuclear power business, then widely considered lucrative, or significantly scale down operations by limiting its interests to the UK and lose a huge amount of market valuation in the process.

In conclusion: market realities, the complexities of operating a nuclear power plant, and a risk-averse UK government at the time led to the sale of Westinghouse to Toshiba. In hindsight, BNFL exiting the nuclear power generation business was one of the best business decisions ever: BNFL made over five times what it had paid to buy Westinghouse Electric Company in 1999. Sadly, the sale of the Westinghouse division and the fire sale of its other assets that followed didn’t save the company. BNFL became defunct in 2010.


At a time when nuclear power had slowly fallen out of favor around the world, Toshiba energetically and optimistically forged new deals in North America. After a flurry of new orders, it seemed like Toshiba had acquired a winner in Westinghouse Electric Company. Then, in 2011, bad news hit the nuclear power industry in the form of the Fukushima disaster. Westinghouse’s most advanced reactor design, the AP1000, shared design similarities with the GE-developed reactors at Fukushima. The US regulatory scrutiny that followed revealed flaws in its core shielding system, particularly in the strength of the building structure that holds the reactor core.

Concurrent with these setbacks, the accounting wizards at Toshiba were creating an alternate reality in corporate finance. It became clear to the executives early on that Toshiba had ruinously overpaid for Westinghouse Electric Company. To mitigate the financial blow of having unknowingly bought a lemon while also dealing with a global financial meltdown, Toshiba from 2008 onward began misstating its revenue and deferring expenses.

Some accrued, real expenses were booked as assets instead of liabilities, which led to a constant misstatement of profits across all of its divisions. Because the accounting malpractice was so sophisticated, so well engineered, and spread across sprawling business interests ranging from semiconductors to healthcare to social infrastructure, it took nearly a decade to reveal its ugly face. It makes me wonder: had Westinghouse Electric Company turned a profit and avoided all the cost overruns and delays, we might never have heard about this scandal at all.

The ingenuity behind this large-scale corporate malpractice rests on human psychology. The C-suite at Toshiba, instead of handing down direct orders to misstate profits, created a new system: a system of impossible expectations, in which deceitful behavior was the only way to stay employed, run the business and climb the corporate ladder inside Toshiba. The bet these executives made was a cynical one: human ethics will fail to intervene if the entire management system surrounding people forces them to behave unethically.

It reminds me of Stanley Milgram’s brilliant work on obedience and authority. In Toshiba’s case, the same psychology was used to create an alternate corporate financial reality. The problem with this behavior is that unforeseen business risks will sometimes expose the quicksand upon which a false empire is built. The movie “Experimenter” is based on Milgram’s groundbreaking work on human behavior under authority, and I recommend it to anyone interested in understanding the complex dynamics of obedience and authority.

By late 2016, it became exceedingly clear that both Westinghouse Electric Company and Toshiba were in a deep financial mess. All the creative, magical thinking and accounting could not solve the very real mess of regulatory issues, construction problems, constant delays, cost escalations, and increasingly frustrated operators and suppliers.

The profit-making healthcare and semiconductor businesses couldn’t carry the entire financial burden of supporting a clearly failing corporate parent. The profitable healthcare division was sold to Canon in a hurry, to prevent a rapid loss in its value in the event of the parent company’s bankruptcy. It is very likely that even this sudden yet large infusion of cash from the healthcare sale came too late to prevent an imminent, catastrophic collapse of Toshiba. Next in line for the fire sale appears to be Toshiba’s semiconductor division.

I have learned three lessons from studying the imminent collapse of Toshiba due to its terrible accounting practices. Here they are:

  1. Always question the corporate culture and create an environment where employees, partners, suppliers and anyone directly or indirectly involved with the business is free to ask questions. In other words, embrace nanøveda’s philosophy of radical openness.
  2. Remember Murphy’s law: anything that can go wrong will go wrong. And remember its scarier cousin, Finagle’s law: anything that can go wrong will go wrong at the worst possible moment.
  3. Businesses are a human enterprise, which means they have a built-in optimism bias. Always be aware of this bias. When things go wrong, human psychology will push us to hide it rather than share it. Therefore, cultivate a culture of sharing mistakes, such as a monthly reporting session on all the SNAFUs, or a satirical evening of profanity-riddled tirades against the management overlords. Comedy is the best way to reveal the ugliest of our secrets.

This is honest and heartfelt research into a corporation I once admired and how it all fell apart. It is part of my journey to create a better healthcare and cancer therapeutics company, here at nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

(Image captions from top to bottom: 1) A Toshiba TC4013BP chip on a printed circuit board, obtained through Wikipedia. 2) A painting that aptly depicts the fraud and deceit discussed in this blog post: William Hogarth‘s The Inspection (The Visit to the Quack Doctor), the third canvas in his Marriage à-la-mode series, obtained through Wikipedia. 3) A double-sided Westinghouse sign that was once located at the intersection of Borden Avenue and 31st Street on the north side of the Long Island Expressway in New York City, dated 1972, from the collection of Richard Huppertz, obtained from Wikipedia. 4) A cutaway section of the boiling water reactor type used at Fukushima Daiichi, obtained from Wikipedia. 5) Image of a piece of toast with Murphy’s law written in the jelly, obtained from the public domain and reused with permission through Flickr.)

Failure Mode Effects – What I learned from a failed car engine:

The 2010 Dodge Charger HEMI is the epitome of a modern muscle car. It is powered by an absolute monster of an engine: a 6.1L V8 that produces 317 kW of power and 569 Nm of torque. The engine is manufactured at the Saltillo engine plant and has a great track record as a low-maintenance, high-displacement, naturally aspirated North American engine. In fact, there is nothing to complain about in the engineering or quality of the engine. The same is true of almost all modern cars and car engines; everything on the market right now is a well-developed, mature product.

Unlike yesteryear’s models, like the 90s Mercedes with moody electronics, or Lancias that rot away faster than unrefrigerated cheese, or Triumphs that leak oil like a rain-soaked umbrella, or Jaguars that hibernate whenever there is a light shower, cars these days are reliable, reliable like an appliance. That reliability masks another interesting problem: the problem of what happens when something goes wrong.

I have known folks who obsess about tire pressure, the same individuals who pop open the engine bay and check the dipstick to see if there is enough engine oil. These are the folks who had the unpleasant experience of living with one of yesteryear’s unreliable engineering samples. They know very well that things can go wrong and that they will go wrong. Their life’s journey has made them curious and cautious about every little mechanical clatter.

As for me, I am a nineties kid. My first vehicular experience was on a universal Japanese motorcycle, commonly abbreviated as a UJM. It was a bright red Yamaha: a reliable, fun piece of engineering that has made me a Yamaha fan for life. My first car was a Hyundai. Again, a reliable piece of Korean engineering, and a sensible one too. Some might call it soulless, but I had great fun with it. Even though my car was stereotypically reliable, recent history suggests Hyundai has some serious issues with its engines: Hyundai recently recalled 1.2 million vehicles in North America across both of its brands, Hyundai and Kia. So I am fortunate that at no point in my life have I experienced the horrible vehicular gremlins of electrical issues, oil leaks or anything of the sort.

The story is similar for almost everyone I grew up with, and for anyone who spent their early adult driving years with a well-engineered car. This brings up the problem of what we do when things go wrong. The reason I mentioned the HEMI is that one of my friends owns one. He got the car from his dad, so there is a family connection and major emotional value attached to it. The problem was a rogue oil pressure sensor. It happens even to the best of us. When we see a rogue oil pressure sensor, we are optimistic that we won’t have any actual oil issues, and we procrastinate on fixing the wonky sensor.

The problem is that even though optimism is a brilliant quality in life, it is not so brilliant when it comes to dealing with engineered systems. The oil pressure sensor is there for a reason. It is part of a two-layer protection scheme: the first layer is to engineer something that works great in the first place; the second layer is to prepare for malfunctions. Most of the time even I ignore oil pressure warnings. I know it is a bad habit, but I also know the odds of it being something really bad are very low. And all the advertisements about how reliable modern cars are only add to this highly optimistic thinking.

There are occasions when those non-zero odds turn out to be something really bad, something like an oil line springing a leak and the engine running dry. This is exactly what happened to my friend. The one time he decided to keep ignoring a warning light that had been going off on his dashboard for months, it turned out to be something really, really bad. While he was cruising along, the HEMI engine shut down, since it had no oil left to lubricate the moving parts. The optimists among us, whose life experience has been with reliably engineered devices, suddenly become flummoxed. It is not a pretty sight when an engine runs out of oil. Here is a video of a mechanic describing the horrors of dealing with an engine that ran dry.

I really admire the folks who are meticulous with their cars, those who take the warning lights seriously every single time. It is an approach that prevents disasters like an engine running out of oil. But then I realized this story has more to it. When we are dealing with any system, warning signals need to be taken seriously, especially when it comes to engineered systems. These warning lights are there for a purpose. Our optimistic selves, and the highly reliable devices that fill modern life, have subconsciously taught us to ignore these warning signs. But one day, one of those warning signs flashing in front of us can turn into a really messy problem.

So how can a regular person like you and me avoid a situation like the HEMI engine running out of oil? My idea for solving this is to create better reminders and warning signals for cars. Since we are all addicted and tethered to our smartphones, a simple approach would be to connect the sensor readings from the engine to an app. The app would warn us through a notification, maybe periodically, perhaps every time the car is started. A simple LED flashing on the dash isn’t enough to motivate me, or anyone I know, to see a mechanic and figure out what is wrong with the engine. Since most of my decision-making revolves around the phone, the computer and everything linked to the internet, I am firmly in support of a better car: a better internet-connected car. It would make the warning lights feel more serious and be well prepared for the #applife.
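
As a toy illustration of the idea, here is a minimal sketch in Python. Everything in it is hypothetical: read_oil_pressure() and send_phone_notification() are placeholders standing in for a real OBD-II/CAN-bus sensor reader and a real push-notification service.

    # Hypothetical sketch: poll an engine oil pressure reading and push a phone alert.
    import random
    import time

    LOW_OIL_PRESSURE_PSI = 10  # assumed warning threshold

    def read_oil_pressure():
        # Stand-in for a real OBD-II / CAN-bus sensor read; returns a simulated value in psi.
        return random.uniform(5, 60)

    def send_phone_notification(message):
        # Stand-in for a real push-notification API call.
        print("NOTIFY:", message)

    for _ in range(3):  # in a real app this would run on every engine start
        pressure = read_oil_pressure()
        if pressure < LOW_OIL_PRESSURE_PSI:
            send_phone_notification(
                "Oil pressure low (%.1f psi). See a mechanic before driving further." % pressure)
        time.sleep(1)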

I am confident that my next car will be a connected car that sends notifications to my phone.

The Dodge Charger HEMI has been very important for our continued work on cancer research as part of our startup project nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

Fine tuning the cloud – Making machines learn faster.

In a previous post I discussed how easy it was to set up machine learning libraries in Python using virtual machines (vm) hosted in the cloud. Today, I am going to add more detail: this post covers how to make machine learning code run faster. It will help any user compile a tensorflow installer that runs faster in the cloud. The speed advantages come from build optimizations that take advantage of the processor instruction set extensions available on the cloud vm.

The steps described below will help us compile a tensorflow installer with accelerated floating point and integer operations using instruction set extensions for the AMD64 architecture. The CPU we will be dealing with today is an Intel Xeon processor running at 2.3GHz. It is a standard server CPU that supports AVX, AVX2, SSE floating-point math and SSE4.2.

In the previous post, we used the simple pip installer for tensorflow. The pip installer is a quick and easy way to set up tensorflow, but it is not optimized for the extended instruction sets present in the advanced CPUs powering the vm in Google Cloud Platform.

If we compile an installer optimized for these instruction set extensions, we can speed up many computation tasks. When tensorflow is used as a back-end in keras, the console constantly reminds you to optimize your tensorflow installation; the instructions below will also get rid of those warning messages in your console.

The first step in compiling an optimized tensorflow installer is to satisfy all the Linux package dependencies. Run the commands below to install the missing packages needed to compile the tensorflow installer.
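
The original package list is not preserved here, so the following is a minimal sketch of the kind of dependencies a tensorflow source build on Ubuntu typically needs (package names are assumptions; adjust for your Python version):

    # Assumed dependency list for building tensorflow from source on Ubuntu.
    sudo apt-get update
    sudo apt-get install -y build-essential curl git openjdk-8-jdk \
        python3-dev python3-pip python3-numpy python3-wheel zip unzip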

To compile tensorflow, we need a build tool called Bazel. Bazel is an open-source tool developed by Google and used to automate software building and testing. Since tensorflow is at the leading edge of the machine learning world, features are added, bugs are fixed and progress is made at a dizzying speed compared to the relatively pedestrian pace of traditional software development. In this rapid development, testing and deployment environment, Bazel helps users manage the process more efficiently. Here is the set of commands to install Bazel.
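
The original command listing is missing here; the sketch below follows the Bazel apt-repository install route that was standard around 2017 (the repository URL and the jdk1.8 channel name are assumptions based on the instructions of that era):

    # Add Bazel's apt repository and its signing key, then install Bazel.
    echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | \
        sudo tee /etc/apt/sources.list.d/bazel.list
    curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
    sudo apt-get update && sudo apt-get install -y bazel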

Once Bazel is installed on your computer, the next step is to clone the tensorflow source files from GitHub.
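
A sketch of the clone step (the release branch is an assumption; pick whichever stable branch is current):

    # Clone the tensorflow sources and switch to a release branch.
    git clone https://github.com/tensorflow/tensorflow.git
    cd tensorflow
    git checkout r1.2   # assumed release branch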

After the source files are copied from the GitHub repository to your local machine, we have to do some housekeeping: the Python environment that will run tensorflow needs all the necessary libraries installed. To fulfill the Python library dependencies, we install numpy, the Python development headers, pip and wheel using the commands below:
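
Something along these lines (assuming Python 3 packages from the Ubuntu archive):

    # Python-side build dependencies: numpy, dev headers, pip and wheel.
    sudo apt-get install -y python3-numpy python3-dev python3-pip python3-wheel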

The next step is to configure the build process for the installer. Before doing the actual configuration, we will preview the configuration dialog. This helps us understand which parameters we should know beforehand to complete the configuration successfully. The configuration dialog for building the installer looks like this:
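
The original post reproduced the dialog itself; what follows is only an approximate illustration, from memory, of the kind of prompts the TensorFlow 1.x configure script asked, not exact output:

    # Abbreviated, approximate ./configure prompts (TensorFlow 1.x era):
    # Please specify the location of python. [Default is /usr/bin/python]:
    # Please input the desired Python library path to use.
    # Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
    # Do you wish to build TensorFlow with OpenCL support? [y/N]
    # Do you wish to build TensorFlow with CUDA support? [y/N]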

We need four important pieces of information before configuring the build process: 1) the location of the Python installation, 2) the Python library path, 3) the g++ location and 4) the gcc location. The last two are optional and only needed to enable OpenCL. If your cloud vm supports OpenCL and CUDA, the instructions to compile tensorflow are slightly different, and I will not cover them in this post. Identifying the Python installation location and library paths can be done with the following commands; I have also included the steps for finding the gcc and g++ compiler locations:
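
A minimal sketch for collecting those four pieces of information (the site-packages query is one common approach; your paths will differ):

    # Location of the python3 interpreter.
    which python3
    # Python library (site-packages) path.
    python3 -c "import site; print(site.getsitepackages())"
    # Locations of the gcc and g++ compilers (only needed for OpenCL builds).
    which gcc
    which g++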

Now we have all the information needed to configure the build process, and we can proceed with the following line:
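
Assuming the shell is still inside the cloned tensorflow directory, the configuration step is simply:

    # Launch the interactive build configuration from the tensorflow source root.
    cd ~/tensorflow   # assumed clone location
    ./configure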

Once the build process is configured properly, we can go ahead and build the installer using the following commands:
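
The exact commands are not preserved; a sketch of a CPU-optimized build targeting the instruction set extensions mentioned above would look roughly like this (the optimization flags and the wheel file name are assumptions):

    # Build the pip package builder with CPU instruction set optimizations.
    bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma \
        --copt=-msse4.2 //tensorflow/tools/pip_package:build_pip_package

    # Generate the .whl installer in /tmp/tensorflow_pkg.
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

    # Install the freshly built wheel (file name is an example; see the note below).
    sudo pip3 install /tmp/tensorflow_pkg/tensorflow-1.2.0-cp35-cp35m-linux_x86_64.whl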

Everything should proceed smoothly, but the build is going to take some serious time: an octa-core 2.3GHz Intel Xeon powered virtual machine needs around 30 minutes to complete it. So plan ahead; a short-notice deployment is impossible if one is looking to build the installer from scratch.

If the last step above threw a file-not-found error, it can be resolved by peeking into the build output directory for the correct installer name.
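
For example:

    # Inspect the output directory to find the exact wheel file name.
    ls /tmp/tensorflow_pkg/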

Once you have the correct installer name, append it to the last line of code above and the installation should finish without any error messages. If we manage to finish all the steps above, we have successfully built and installed an optimized tensorflow package, compiled to take advantage of the processor’s instruction set extensions.

Finally, to check whether tensorflow can be imported into Python, use the next few lines. The steps are: exit the tensorflow source directory, invoke Python, and import tensorflow.
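
A sketch of that verification step:

    # Leave the source tree, then try importing tensorflow from python3.
    cd ~
    python3 -c "import tensorflow as tf; print(tf.__version__)"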

The import tensorflow line should run with no errors. Hurrah, we have a working tensorflow library that can be imported in Python. We will still need to run some tests to ensure that everything is in order, but no errors up to this point means smooth sailing ahead.

I have described a step-by-step process for building and installing tensorflow, based on my own reasoning about how the pieces fit together. These steps have worked extremely well for me so far. The optimized tensorflow installation has cut the run-time of some tasks by a factor of ten. As an added advantage, I no longer get the console warnings asking me to optimize my tensorflow installation.

Happy machine learning in the cloud everyone.

This work is done as part of our startup project nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

In the cloud – Geography agnostic enterprise using Google Cloud Platform.

The concept of a geography/location agnostic enterprise is very simple. Whether I am in Beijing or Boston, Kent or Kentucky, New Delhi or New York, Silicon Valley or Silicon Wadi, Qatar or Quebec, I should have access to a standard set of tools to run the company. Moving between geographic locations is a hard challenge for any enterprise. One of the first problems we wanted to solve is how quickly we can deploy some of our tools to countries around the world. Part of the reason I envision nanøveda as a geography/location agnostic enterprise is that the problem we are trying to solve is universal: we want people around the world to have uniform access to our technology. It will also make us better at business.

I had been searching for a solution to this problem, and I found a brilliant one a few days back. Recently, I received an invitation to try Google Cloud Platform (GCP), and Google was kind enough to give me a $300 credit towards creating applications on their cloud platform. I was very excited to try this cloud computing platform from one of the leaders in computing, and last Friday I finally decided to explore GCP. For me, cloud computing brings two advantages: 1) zero time and cost overhead of maintaining a set of in-house Linux servers; 2) creating a truly geography-agnostic business. I run an Ubuntu 16.10 workstation for machine learning experiments, and most of the tasks handled by this workstation have already started to overwhelm its intended design purpose. I was therefore actively looking for ways to expand the total compute infrastructure available to me, and it was right at this moment that I turned to Google to help solve our compute infrastructure problem.

I had never used GCP before, so I had to go through a phase of experimentation and learning, which took approximately a day. Once I learned how to create a virtual machine, everything started to fall into place. To check that the vm was seen properly by the guest OS, I ran some basic diagnostic tests.
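
The actual diagnostics are not listed in the post; a few generic checks of the kind I mean (CPU, memory and disk as seen by the guest OS) would be:

    # Basic guest-OS diagnostics on the new vm.
    lscpu        # CPU model, core count and supported instruction sets
    free -h      # available memory
    df -h        # attached disk space
    uname -a     # kernel and architecture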

GCP has an interesting flat-namespace object storage feature called Buckets. Buckets let a virtual machine share data with a remote computer very conveniently, even over a web interface. Google also has a command-line tool called gsutil to help users streamline the management of their cloud environment. One of the first commands I learned was to transfer files from my local computer to the object storage space. Here is an example:
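
A sketch of such a transfer (the bucket name and file name are placeholders):

    # Copy a local file into a Cloud Storage bucket.
    gsutil cp ./training_data.csv gs://my-example-bucket/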

Once I learned file transfer, the next step was to learn how to host a virtual machine. After I set up an Ubuntu 16.10 virtual machine in the cloud, I needed to configure it properly. Installing the necessary Linux packages and Python libraries was easy and fast.
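
The exact packages are not listed here; purely as an illustration, a setup for the kind of machine learning work described in this post might look like this (the library list is an assumption):

    # Illustrative package setup on the Ubuntu vm; the actual list will differ.
    sudo apt-get update
    sudo apt-get install -y python3-pip python3-dev
    pip3 install numpy scipy pandas scikit-learn keras tensorflow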

After the vm was configured to run my code, the next step was to test file transfer to the vm itself. Since the vm and the object storage are beautifully integrated, file transfer was quick and convenient.
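
From inside the vm, pulling files down from the bucket is the same command in the opposite direction (again with placeholder names):

    # Copy code and data from the bucket into the vm's home directory.
    gsutil cp gs://my-example-bucket/test.py ~/
    gsutil cp gs://my-example-bucket/training_data.csv ~/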

With the code and all its dependent files inside the cloud vm, the next step was to run the code in Python. The shell command below executes the code in the file test.py.
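
Presumably something like:

    # Run the script with python3.
    python3 test.py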

Since some of the code takes hours to complete, I needed a way to create a persistent ssh connection. Google offers a web-browser based ssh client, which is a simple, basic way of accessing a virtual machine, but for longer sessions a persistent session is ideal. Since my goal is to make most computing tasks as geography-agnostic as possible, I found a great Linux tool called screen. Installing screen was very straightforward.
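
On Ubuntu it is a one-liner:

    # Install GNU screen on the Ubuntu vm.
    sudo apt-get install -y screen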

Once screen was installed, I created a screen session by typing screen in the terminal. A screen session works just like the terminal, but when you are using ssh, it lets the commands being executed keep running even after the ssh connection is disconnected. To detach from a screen session, use the keyboard shortcut Ctrl+a followed by Ctrl+d.

To resume a screen session, just type screen -r in the vm terminal. If there are multiple screen sessions running, you will have to specify which session to restore.
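
For example (the session identifier is a placeholder):

    # List running screen sessions, then reattach to a specific one.
    screen -ls
    screen -r 1234.pts-0.my-vm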

The ssh + screen combination is a life saver for tasks that require routine management and need a lot of time to execute. It lets a vm administrator turn any network connection into a highly persistent ssh session.

The combination of Google Cloud object storage, easy networking with the vm, ssh and screen has allowed me to move some of our complex computing tasks to the cloud in less than a day. The simplicity and lack of cognitive friction of GCP took me by surprise. The platform is extremely powerful and sophisticated, and yet very easy to use. I have future updates planned on the progress and evolution of my GCP usage for our start-up’s computing needs.

I am still amazed by how easy it was for me to take one of the most important steps in creating a truly geography/location agnostic enterprise with the help of Google Cloud Platform. I have to thank the amazing engineering team at Google for this brilliant and intuitive cloud computing solution.

This work is done as part of our startup project nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

Pi day – Calculate 2017 digits of pi using Python3

Here is a short and elegant piece of code to calculate 2017 digits of pi, implemented in Python 3. In three lines of very simple Python, we are going to calculate 2017 digits of pi.
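
The original listing is not reproduced here; one way to do it in three lines of Python 3 is with the mpmath arbitrary-precision library, which is an assumption on my part rather than necessarily the method used in the original post:

    # Compute 2017 significant digits of pi using mpmath (pip3 install mpmath).
    from mpmath import mp
    mp.dps = 2017   # set the working precision to 2017 decimal digits
    print(mp.pi)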

The output is:
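
Truncated here for space; the full printout runs to 2017 significant digits and begins:

    3.1415926535897932384626433832795028841971693993751...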

Happy 2017 pi day.

The code is available on GitHub.

This work is done as part of our startup project nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

Installation notes – OpenCV in Python 3 and Ubuntu 16.10.

These are my installation notes for OpenCV on Ubuntu 16.10 Linux with Python 3. They will help you start a computer vision project from scratch; OpenCV is one of the most widely used libraries for image recognition tasks. The first step is to fire up the Linux terminal and type in the following commands, which fulfill the OS dependencies required for installing OpenCV on Ubuntu 16.10.
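
The original command listing is missing; a sketch of the usual OpenCV build dependencies on Ubuntu (package names assumed from the standard OpenCV build guides, not necessarily the original list) is:

    # Build tools and libraries commonly required to compile OpenCV on Ubuntu.
    sudo apt-get update
    sudo apt-get install -y build-essential cmake git pkg-config \
        libjpeg-dev libpng-dev libtiff-dev \
        libavcodec-dev libavformat-dev libswscale-dev libv4l-dev \
        libgtk-3-dev libatlas-base-dev gfortran \
        python3-dev python3-numpy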

The next step is to check that our Python 3 installation is properly configured. To check the Python 3 configuration, we type the following command in the terminal:

The output of the command above shows two location pointers for the Python configuration file: the target directory and the current directory. These two locations must match before we proceed with the installation of OpenCV.

An example of a working Python configuration location, without any modification, will look something like this:

The output from the terminal has a specific order, with the first pointer indicating the target location and the second pointer indicating the current location of the config file. If the two location pointers don’t match, use the cp shell command to copy the configuration file to the target location. We use the sudo prefix to give the terminal administrative rights to perform the copy. If your Linux environment is password protected, you will very likely need the sudo prefix; otherwise, you can simply skip it and execute the rest of the line in the terminal. I am using sudo here because my Linux installation requires it.

Now we create a directory for the OpenCV source files and download them from GitHub into that local directory. This can be achieved using the mkdir and cd commands along with git. Here is an example:
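
A sketch of those steps (the directory name is arbitrary):

    # Create a working directory and clone the OpenCV sources from GitHub.
    mkdir ~/opencv_build && cd ~/opencv_build
    git clone https://github.com/opencv/opencv.git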

The final step is to build, compile and install the packages from the files downloaded from GitHub. One thing to keep in mind here is that we are now working in the newly created folder, not in the home directory. An example of the terminal commands to perform the OpenCV installation is given below. Again, I am using the sudo prefix to give the terminal elevated privileges while executing the commands; it may not be necessary on all systems, depending on how the Linux installation and system privileges are configured.
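
A sketch of that sequence, using the standard flags from OpenCV’s own build guide (not necessarily the exact flags from the original post):

    # Configure, compile and install OpenCV from the cloned sources.
    cd ~/opencv_build/opencv
    mkdir build && cd build
    cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
    make -j$(nproc --all)   # use every available core to speed up compilation
    sudo make install
    sudo ldconfig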

This final compile-and-install step is very time consuming. I have tried to speed it up by using all available processors to compile, which is achieved by passing the output of nproc --all as the job count to the make command.

Once OpenCV is installed, type cd ~ to return to the home directory in the terminal, then type python3 to launch the Python environment. Once you have launched python3 in your terminal, try importing OpenCV to verify the installation.
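
The verification can look like this:

    # Return to the home directory and check that python3 can import OpenCV.
    cd ~
    python3 -c "import cv2; print(cv2.__version__)"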

If the import command in the python3 console returned no errors, it means python3 can successfully import OpenCV. We will still have to do some basic testing to verify that OpenCV has been installed correctly in your Ubuntu 16.10 environment, but if you have made it this far, it is a great start. Installing OpenCV is one of the first steps in preparing a Linux environment to solve machine vision and machine learning problems.

This work is an important component of our machine vision effort for biomedical data-sets and is done as part of our startup project nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

Fair trade – How hard can it be?

This year’s United States presidential address to Congress featured an impassioned plea by Pres. Donald J. Trump for a greater focus on fair trade. His reasoning was clear: fairer global trade will have a less disruptive effect on societies. His concerns were directed toward ordinary American taxpayers and protecting their livelihoods. But his call to make fair trade part of globalization has another beneficiary: the entire world. That is an unintended consequence for a US president who has openly dropped any pretension of acting as “the leader of the free world”. American policies will be a powerful force in shaping the direction of the global economy for decades to come, not necessarily through a deliberate attempt to change the world, but simply as a result of being the most powerful and richest country on earth.

A key issue the president raised for tackling fair trade was creating a better system of taxation across nation states, with the goal of making trade more equitable between them. The example he used to make his point about fair trade and taxation was Milwaukee, Wisconsin based Harley-Davidson. Harley-Davidson has been part of the American journey for the past 114 years, yet it has difficulty competing in some of the world’s largest motorcycle markets, including India.

The North American motorcycle market accounts for only 2% of global motorcycle sales volume, tiny compared to the rest of the global marketplace. The largest motorcycle manufacturer in the world is Honda, based in Minato, Tokyo, Japan, and India is one of Honda’s largest volume markets. In India, Honda has established large local manufacturing facilities to produce popular, mass-market, low-displacement motorcycles. The sheer volume of Honda’s monthly sales overshadows Harley-Davidson’s annual sales, not just in India, but around the world.

Harley-Davidson is overshadowed by motorcycle manufacturers from the rest of the world partly because of its strategy of catering to an exclusive set of customers: it positions itself as a lifestyle brand more than a motorcycle brand. Most of the sales volume in Asia is for commodity, commuter, low-displacement motorcycles, and Harley-Davidson has no products to compete in this segment. In European markets, Harley-Davidson again fails to cater to the sports bike segment. Harley-Davidson’s struggles in global markets are not just a matter of taxation and duties on motorcycles.

If the interest in making global trade fairer is genuine, one has to consider another key component: the human factor. That even the world’s most powerful democracy cries foul over global trade makes one wonder about the real consequences of global trade in this world.

Recently, the privately held food manufacturer Cargill came under the microscope for questionable environmental practices in bringing food to the table for millions of Americans. Cargill and its local suppliers circumventing Brazilian laws meant to prevent deforestation is another great example of the desperate need to incorporate fair trade into globalization. Cargill’s Brazilian suppliers are empowered by the capital and resources of the North American market, which even local governments can’t fight against.

The Brazilian soybeans could have been replaced by produce sourced from North American farmers, who adhere to more stringent environmental standards than Cargill’s Brazilian counterparts. Instead, Cargill’s decision to cut upfront costs for livestock feed demonstrates the flaws in global trade. A call for fair free trade also means placing restrictions on companies like Cargill; current trade practices allow unchecked environmental damage in the process of bringing fast-food burgers to the American market. A fairer traded world therefore also means better protection of the Brazilian rain-forests.

The global trade in coffee beans is another great example of the difficulty of implementing fair trade. Coffee is one of the most heavily traded commodities in the world by volume, yet only about 30% of the coffee beans produced worldwide meet the current definition of fair trade. Even the millennial-favorite coffee chain Starbucks has managed to source only about 8% of its coffee beans through fair trade. The current mechanisms for producing fair-trade coffee are riddled with massive economic and social problems. An important issue that comes to my mind is that although coffee chains market fair-trade coffee at a premium price, only a fraction of the added price paid by the customer ever reaches the producer.

The discussion on fair global free trade is a great running start toward creating a more equitable world. Creating uniform taxation rules across nation states is the first logical step toward this goal, but the concept of fair trade extends well beyond taxation. I am excited that the US president has initiated a conversation on fair trade. It is an important topic with more substance to it than the mettle of selling motorcycles made in Milwaukee to the milkmen of Mumbai.

Descriptions of the photographs featured in this post, from top to bottom: 1) Photograph of the White House, 1600 Pennsylvania Ave NW, Washington, DC 20500, United States, at dusk, via Wikipedia. 2) A vintage-style Harley-Davidson advertisement poster featuring model Marisa Miller, obtained from the public domain via Pinterest and reproduced under fair usage rights. 3) A jaguar (Panthera onca) photographed at the Toronto Zoo, obtained via Wikipedia. The jaguar is a near-threatened apex predator whose natural habitat includes the Brazilian rain-forests, currently under threat of massive deforestation due to unfair trade practices followed by food suppliers like Cargill. 4) A commercial coffee farm in Jinotega, Nicaragua. Source: Lillo Caffe, obtained through a Dartmouth College blog post on subsistence agriculture in Nicaragua.

Formula One 2017 – Exciting changes for team Mercedes.

Today is the first day of pre-season testing for the 2017 Formula One season, and I am very excited about it. The new regulations are expected to make the cars go faster. One of the criticisms of hybrid-era Formula One has been the lack of excitement; faster cars should make the sport very exciting indeed.

Along with the car changes, there is also a change in the driver lineup. Last year’s world champion, Nico Rosberg, retired from the sport. Filling his spot is Finnish driver Valtteri Bottas, who moved from Williams Martini Racing, a Mercedes-powered team, to the factory team. Lewis Hamilton is the team’s veteran driver. I expect Hamilton to be the faster of the two, and unlike last year, I also hope he has better luck with reliability. Mercedes AMG Petronas have made some significant changes to the 2017 car.

The video below explains all the regulation changes in place for the 2017 season.

Here are my thoughts on some of the interesting details I found out about the Mercedes AMG Petronas W08 2017 car.

One of the important changes on the 2017 W08 EQ Power+ car is the delta-shaped front wing with a 12.5-degree back-sweep angle. It features cascade winglets similar to those of the 2016 season. The delta front wing is a regulation change for 2017.

The W08 has a slender nose, shorter camera pods and a new S-duct design with deeper grooves and an embedded antenna in the middle. The four-element turning vanes attached to the new nose feature their own individual footplates, which are in turn divided into a total of seven individual aerofoils. There is also a boomerang-shaped fin sitting above the barge-boards.

The car also features a taller front suspension, achieved by placing the upper wishbone higher. A taller wishbone frees up space between the suspension elements and creates cleaner, increased airflow into the aerodynamic components behind it.

The primary barge-board occupies almost all of the box area set out by the FIA regulations. It also features a number of perforations along its base to help optimize the airflow and seal off the splitter behind. The perforated bodywork is designed to create elongated vortices and optimize the surrounding free flow of air. There is also a more detailed out-swept floor-board design to optimize airflow underneath the car. Another interesting addition to the floor board is a simpler design for displacing turbulent airflow: nine perforations ahead of the rear tires, a relatively simple approach compared to previous years. These floor-board perforations allow cleaner airflow over the rear tires by preventing the formation of vortices.

The overall length of the car has increased by 15 cm. It also features very large, complexly sculpted vane towers on either side of the barge-boards. The side pods are extended for better engine air intake; since the 2017 design increases airflow to the engine, I am expecting an increase in power output from the otherwise largely unchanged internal combustion design. The side pods also feature highly detailed three-element flow conditioners to maximize the deflection of the wake from the side-pod undercuts.


The rear of the car features the FIA-mandated slanted, bow-shaped rear wing, which is shallower at the tips than at the center, along with an open-leading-edge slotted end-plate design. A narrow ‘T-shaped’ mini-wing is also placed ahead of the rear tires. The new wing design and the mini-wing are aimed at making the car more aerodynamically balanced. I am expecting slight changes to the wing design depending on the nature of each race circuit.

Even though the initial reveal at Silverstone showed a car with a subtle ‘shark-fin’ element over the rear engine cover, the race car that debuted at Barcelona has a more prominent ‘shark-fin’ aero structure. The more prominent engine-cover element also features a clever intake point, possibly to assist in cooling some of the kinetic energy recovery (MGU-K) components.

Another area of important change is the hybrid power train. Here is a video from the Mercedes team explaining all the significant engine changes for the 2017 season.

According to Andy Cowell, the chief of Mercedes High Performance Powertrains, the new engine features updated high-power switches for better efficiency. The new engine design is aimed at taking advantage of the added down-force and grip from the new aerodynamic package. Since full-throttle time is projected to increase in the 2017 races, both the engine and the motor generator units (MGUs) of the energy recovery system needed the upgrade. The drive-cycle change expected for 2017 has led to the development of a more efficient MGU-H and an updated MGU-K system. I am expecting increased reliability from the power train, and the removal of the token system for engine development means there is room for significant performance improvement as the season evolves.

An interesting addition to the new wing mirrors is the integration of infrared cameras for tire temperature monitoring. The Mercedes team has been partnering with the mobile communication giant Qualcomm, and this partnership has brought some exciting and significant improvements in telemetry and data acquisition on the new car.

The tires have also changed for the 2017 season. The new Pirellis on the W08 are wider than last year’s, with 305 mm front and 405 mm rear tires. The wider tires improve overall grip and allow better transfer of power to the road, which means higher cornering speeds.

With the wider tires, improved aerodynamics for better stability and down-force, and an updated power train, I am expecting a 3 to 3.5 second improvement in lap times with the new car over the 2016 season.

Here is an awesome video from the Mercedes AMG Petronas team featuring their drivers: Lewis Hamilton and Valtteri Bottas.

From the looks of it, team Mercedes has another winning package. Hamilton and Bottas are two very talented drivers. The 2017 race car improves significantly on last year’s in the areas that matter, while the engineers at Mercedes have managed to retain all the key elements of the W07 that worked so well for the team. The new W08 features extensive updates but remains an evolutionary design over last year’s car, and considering the W07’s dominance over its rivals last season, an evolutionary design should serve the Mercedes team very well.

If the early track testing is any indication, the 2017 Formula One season is going to be hugely exciting. My bet is on a close Mercedes-Ferrari battle, with Mercedes having a slight upper hand.

All photographs featured in this blog post were taken at the Silverstone circuit during the 2017 Silver Arrows ‘Collateral Day’ session, Towcester NN12 8TN, UK, and were photographed by Steve Etherington. The first picture at the top of this post features, from left to right: Lewis Hamilton, Toto Wolff and Valtteri Bottas, behind the Mercedes AMG Petronas Silver Arrows W08 Formula One race car.