Install notes — TensorFlow on Ubuntu 18.04 LTS with Nvidia CUDA

In this install note, I will discuss how to compile and install from source a GPU-accelerated build of TensorFlow on Ubuntu 18.04. TensorFlow is a deep-learning framework developed by Google. It has become an industry-standard tool for both deep-learning research and production-grade application development.

Step 0 — Basic house-keeping:

Before starting the actual process of compiling and installing TensorFlow, it is always good to update the already installed packages.
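The original command listing did not survive here; on Ubuntu this is typically done with:

```
sudo apt-get update
sudo apt-get -y upgrade
```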

Next step is to check for Nvidia CUDA support. This is done using a package called pciutils.

In this particular example deployment, the GPU we will be using is an Nvidia Tesla P4. The output from the console should look similar to the example below:
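Reconstructing the missing listing: assuming pciutils is not yet installed, the check looks like this (the device line shown is illustrative for a Tesla P4):

```
sudo apt-get install pciutils
lspci | grep -i nvidia

# Illustrative output:
# 00:04.0 3D controller: NVIDIA Corporation GP104GL [Tesla P4] (rev a1)
```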

This helps us understand whether the GPU attached to the Linux instance is properly visible to the system.

Now, we need to verify the Linux version support. Run the following command in the terminal:

The output from the console will look similar to the example below:
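The command and a typical output (the release details shown are illustrative for Ubuntu 18.04):

```
uname -m && cat /etc/*release

# Illustrative output:
# x86_64
# DISTRIB_ID=Ubuntu
# DISTRIB_RELEASE=18.04
# DISTRIB_CODENAME=bionic
# DISTRIB_DESCRIPTION="Ubuntu 18.04.1 LTS"
```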

Step 1 — Install dependencies:

The first step in compiling an optimized TensorFlow installer is to fulfill all the installation dependencies. They are:

  1. build-essential
  2. cmake
  3. git
  4. unzip
  5. zip
  6. python3-dev
  7. pylint

In addition to the packages above, we will also need to install the Linux kernel headers.

The header files define an interface: they specify how the functions in the source files are declared. These files are required for a compiler to check that the usage of a function is correct, since the function signature (return value and parameters) is present in the header file. For this task the actual implementation of the function is not necessary. A user could do the same with the complete kernel sources, but that process would install a lot of unnecessary files.

Example: if a user wants to use the function in a program:

the program does not need to know how foo is implemented. It just needs to know that foo accepts a single parameter (a double) and returns an integer.

To fulfill these dependencies, run the following commands in the terminal.
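The commands themselves were lost in formatting; a straightforward reconstruction on Ubuntu is:

```
sudo apt-get install -y build-essential cmake git unzip zip python3-dev pylint
sudo apt-get install -y linux-headers-$(uname -r)
```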

Step 2 — Install Nvidia CUDA 9.2:

Nvidia CUDA is a parallel computing platform and programming model for general computing on graphical processing units (GPUs) from Nvidia. CUDA handles the GPU acceleration of deep-learning tasks using tensorflow.

Before we install CUDA, we need to remove all the existing Nvidia drivers that come pre-installed with the Ubuntu 18.04 distribution.

Now, let us fetch the necessary keys, installer and install all the necessary Nvidia drivers and CUDA.
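The original command listing is missing; below is a reconstruction based on Nvidia's usual apt repository layout for CUDA 9.2. The exact key and .deb URLs are assumptions, so verify them against the CUDA downloads page before running:

```
# Remove the pre-installed Nvidia drivers
sudo apt-get purge -y nvidia*

# Fetch the repository key and the CUDA 9.2 repo package (URLs are assumptions)
sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1710/x86_64/7fa2af80.pub
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1710/x86_64/cuda-repo-ubuntu1710_9.2.148-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1710_9.2.148-1_amd64.deb
sudo apt-get update
sudo apt-get install -y cuda

sudo reboot
```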

Once this step is done, the system needs to reboot.

After the system has been rebooted, let us verify if the Nvidia drivers and CUDA 9.2 are installed properly:

The console output will be similar to the example below:
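A sketch of the verification step (the PATH exports assume the default CUDA 9.2 install location):

```
echo 'export PATH=/usr/local/cuda-9.2/bin${PATH:+:${PATH}}' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-9.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc
source ~/.bashrc

nvidia-smi
nvcc --version

# nvidia-smi should list the Tesla P4 and the installed driver version;
# nvcc should report: Cuda compilation tools, release 9.2
```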

Step 3 — Install Nvidia CuDNN 7.2.1:

Nvidia CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. cuDNN is part of the Nvidia Deep Learning SDK.

This is the next component of CUDA needed for installing GPU-accelerated TensorFlow. Even though CuDNN is part of CUDA, the installation of CUDA alone doesn't install CuDNN. To install CuDNN, we first need an account on Nvidia's developer website. Once signed in, download the CuDNN installer from:

In this example, it will look similar to the example below:

Once the download is finished, you will have a file: cudnn-9.2-linux-x64-v7.2.1.38.tgz in your working directory.

The installation steps for CuDNN are very straightforward: just uncompress the tarball and copy the necessary CuDNN files to the correct locations.
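A sketch of those steps, assuming CUDA was installed to the default /usr/local/cuda prefix:

```
tar -xzvf cudnn-9.2-linux-x64-v7.2.1.38.tgz
sudo cp -P cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
```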

Step 4 — Install Nvidia NCCL:

The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs. NCCL provides routines such as all-gather, all-reduce, broadcast, reduce, reduce-scatter, that are optimized to achieve high bandwidth over PCIe and NVLink high-speed interconnect.

Developers of deep learning frameworks and HPC applications can rely on NCCL’s highly optimized, MPI compatible and topology aware routines, to take full advantage of all available GPUs within and across multiple nodes. This allows them to focus on developing new algorithms and software capabilities, rather than performance tuning low-level communication collectives.

TensorFlow uses NCCL to deliver near-linear scaling of deep learning training on multi-GPU systems.

To install NCCL 2.2.13, download the OS-agnostic version of NCCL from the Nvidia developer website.

This process will look something similar to the example below:

Once the download is finished, the working directory will have a file: nccl_2.2.13-1+cuda9.2_x86_64.txz

The steps for installing NCCL are similar to those for CuDNN: uncompress the tarball, copy all the files to the correct directories, and then update the configuration. Follow the steps below to install NCCL:
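A sketch of those steps; the /usr/local/nccl-2.2 target directory is an assumption, as is the layout of the extracted archive:

```
tar -xf nccl_2.2.13-1+cuda9.2_x86_64.txz
sudo mkdir -p /usr/local/nccl-2.2
sudo cp -R nccl_2.2.13-1+cuda9.2_x86_64/* /usr/local/nccl-2.2/
sudo ln -s /usr/local/nccl-2.2/include/nccl.h /usr/include/nccl.h

# Make the shared library visible to the dynamic linker
echo '/usr/local/nccl-2.2/lib' | sudo tee /etc/ld.so.conf.d/nccl.conf
sudo ldconfig
```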

Step 5 — Install Nvidia CUDA profiling tool:

One last step, before we start compiling tensorflow is to install the CUDA profiling tool: CUPTI. Nvidia CUDA Profiling Tools Interface (CUPTI) provides performance analysis tools with detailed information about how applications are using the GPUs in a system.

CUPTI provides two simple yet powerful mechanisms that allow performance analysis tools such as the NVIDIA Visual Profiler, TAU and Vampir Trace to understand the inner workings of an application and deliver valuable insights to developers.
The first mechanism is a callback API that allows tools to inject analysis code into the entry and exit point of each CUDA C Runtime (CUDART) and CUDA Driver API function.

Using this callback API, tools can monitor an application’s interactions with the CUDA Runtime and driver. The second mechanism allows performance analysis tools to query and configure hardware event counters designed into the GPU and software event counters in the CUDA driver. These event counters record activity such as instruction counts, memory transactions, cache hits/misses, divergent branches, and more.

Run the following commands on the terminal to install CUPTI:
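A reconstruction of the missing commands (the package name follows the CUDA 9.2 apt naming convention, and the CUPTI library path assumes the default install prefix):

```
sudo apt-get install -y cuda-command-line-tools-9-2
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64' >> ~/.bashrc
source ~/.bashrc
```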

Step 6 — Install TensorFlow dependencies:

TensorFlow and the related Keras API installation require:

  1. numpy
  2. python3-dev
  3. pip
  4. python3-wheel
  5. keras_applications and keras_preprocessing without any associated package dependencies
  6. h5py
  7. scipy
  8. matplotlib

These dependencies can be fulfilled by running the following commands in the terminal.
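A sketch of those commands; note the --no-deps flag on the Keras helper packages, matching item 5 in the list above:

```
sudo apt-get install -y python3-dev python3-pip python3-wheel
pip3 install -U --user numpy h5py scipy matplotlib
pip3 install -U --user keras_applications keras_preprocessing --no-deps
```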

We also need to install the dependencies for Bazel, the build tool TensorFlow uses.

The dependencies for Bazel can be fulfilled by running the commands below in the Linux terminal.
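A reconstruction of the Bazel dependency install (this package list follows the Bazel documentation of the time):

```
sudo apt-get install -y pkg-config zip g++ zlib1g-dev unzip
```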

Step 7 — Install the TensorFlow build tool, Bazel:

Bazel is a scalable and highly extensible multi-language build tool created by Google that promises to speed up builds and tests. It only rebuilds what is necessary. With advanced local and distributed caching, optimized dependency analysis and parallel execution, Bazel achieves fast, incremental builds.

Bazel can be used to build and test Java, C++, Android, iOS, Go and a wide variety of other language platforms. Bazel is officially supported for Linux. The promise of Bazel as a build tool is that it helps you scale your organization, code base and Continuous Integration system. It handles code bases of any size, in multiple repositories or a huge mono-repo.

Using Bazel, it is easy to add support for new languages and platforms with Bazel's familiar extension language, and the growing Bazel community has written language rules that can be shared and re-used. Google has incorporated Bazel as the build tool for TensorFlow due to their belief that it is a better fit for the project than common tools such as CMake. The inclusion of Bazel as the build tool adds one extra step of complexity to how GPU-optimized TensorFlow is deployed from source on Linux.

To keep things neat and tidy, we will fetch the latest Bazel binary from GitHub, set the correct permissions to run the file as an executable, run the installer and update the configuration files. To install Bazel using these steps, run the following commands in the Linux terminal.
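A sketch of those steps; the 0.17.2 release number is illustrative, so substitute the latest release from the Bazel GitHub releases page:

```
wget https://github.com/bazelbuild/bazel/releases/download/0.17.2/bazel-0.17.2-installer-linux-x86_64.sh
chmod +x bazel-0.17.2-installer-linux-x86_64.sh
./bazel-0.17.2-installer-linux-x86_64.sh --user

# The --user installer places bazel in ~/bin
echo 'export PATH="$PATH:$HOME/bin"' >> ~/.bashrc
source ~/.bashrc
```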

Step 8 — Fetch the latest TensorFlow version from GitHub and configure the build:

One of the key advantages of compiling from source is that one can leverage all the latest features and updates released directly on GitHub. Typically, an updated installer can take a few hours or days to show up in the usual distribution channels. Therefore, in this example, we will build the latest TensorFlow version by getting the source files directly from GitHub.

To fetch the latest source files from GitHub and configure the build process, run the following commands in the terminal.
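A reconstruction of those commands:

```
cd ~
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure
```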

Here is an example configuration.
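The original configure transcript is missing; an illustrative session for this CUDA 9.2 / cuDNN 7.2.1 / NCCL 2.2.13 / Tesla P4 setup might look like the following (the prompts vary by TensorFlow version, and all answers here are assumptions; the Tesla P4 has compute capability 6.1):

```
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3
Do you wish to build TensorFlow with CUDA support? [y/N]: y
Please specify the CUDA SDK version you want to use: 9.2
Please specify the cuDNN version you want to use: 7.2.1
Please specify the NCCL version you want to use: 2.2.13
Please specify a list of comma-separated Cuda compute capabilities you want to build with: 6.1
```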

Step 9 — Build the TensorFlow installer using Bazel:

Since we are targeting the TensorFlow build process for Python 3, this is a two-step process:

  1. Build, from the TensorFlow source files, the files necessary for creating a Python 3 pip package
  2. Build the wheel installer from the pip package files and run the installer

The build process will take a very long time to complete; how long depends on the compute resources available.

Once the build process has completed, create the wheel installer using Bazel and then run the installer file with the following commands in the terminal.
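A sketch of the two steps, following the standard TensorFlow-from-source recipe (the ~/tensorflow_pkg output directory is an assumption):

```
# Step 1: build the pip-package builder (run from the tensorflow source tree)
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

# Step 2: build the wheel and install it
bazel-bin/tensorflow/tools/pip_package/build_pip_package ~/tensorflow_pkg
pip3 install --user ~/tensorflow_pkg/tensorflow-*.whl
```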

Step 10 — Test the TensorFlow installation and install Keras:

Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. I am a huge fan of the Keras API due to its seamless integration with TensorFlow. It also allows application developers to implement one of the foundational concepts of software engineering, Don't Repeat Yourself (DRY), when building and testing deep-learning applications. The Keras APIs also help build a scalable, maintainable and readable code base for deep-learning applications.

To test the TensorFlow installation:
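A quick smoke test, assuming the wheel installed cleanly (tf.test.is_gpu_available was the TensorFlow 1.x API for checking GPU visibility):

```
python3 -c "import tensorflow as tf; print(tf.__version__)"
python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
```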

To install Keras from source and test this installation, run the following commands in the Linux terminal.
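A reconstruction of those commands:

```
git clone https://github.com/keras-team/keras.git
cd keras
sudo python3 setup.py install
python3 -c "import keras; print(keras.__version__)"
```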

This is a long install note. Compared to my install note from last year on the same topic, the steps involved have increased dramatically. Most of this is due to the added features incorporated in TensorFlow, such as its ability to create and process distributed compute graphs.

As a concluding note, as part of our mission to empower developers of deep-learning and AI at Moad Computer, we have a cloud environment, Jomiraki, and a server platform, Ada. Both leverage Nvidia CUDA-accelerated TensorFlow to achieve faster deep-learning application performance. We have also released a Raspberry Pi deep-learning module, which, instead of CUDA acceleration, uses a slightly different technology called the Intel Neural Compute Stick. Check out all of these cool tools in our store.

If you have any questions or comments, feel free to post them below or reach out to our support team at Moad Computer:

Why Python Is Great: Subprocess — Running Linux Shell From Python Code.

Python is a brilliant object-oriented programming language. In artificial intelligence and deep-learning circles, Python is often referred to as the default language (lingua franca) of artificial intelligence. But the charm of Python extends way beyond running highly complicated deep-learning code. Python is first and foremost a general-purpose programming language.

One of the key features of Python is its close integration with Linux. In this post I am going to explore one particular feature of Python called 'subprocess'. Subprocess, just like the name suggests, launches a Linux shell command as a sub-process of Python. This makes a lot of things easier inside Python, including, let us say, creating and manipulating file-system entries.

Here is a Python 3 function that utilizes subprocess to run a list of commands in the Linux terminal.
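The original snippet did not survive the formatting; here is a minimal sketch of such a function (run_commands is an assumed name, matching the Output/Error dictionary described below):

```python
import subprocess

def run_commands(cmd_list):
    """Run a list of shell commands; collect console output and errors."""
    output, error = [], []
    for cmd in cmd_list:
        proc = subprocess.Popen(
            cmd,
            shell=True,  # interpret each string through the shell
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        out, err = proc.communicate()
        output.append(out.decode().strip())
        error.append(err.decode().strip())
    return {"Output": output, "Error": error}
```

For example, run_commands(["echo hello"]) returns {"Output": ["hello"], "Error": [""]}.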

This function returns a dictionary with two keys: Output and Error. Output records the console output, and Error records the console error messages.

Next, to run the function above, I will create a Python command:

This command will:

  1. Create a folder in Desktop called ‘New_photos’.
  2.  Randomly copy 10 .jpg files from  ~/Desktop/My_folder/Photos/ to ~/Desktop/New_photos/
  3.  Count the total number of files in the folder: ~/Desktop/New_photos/
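The original command listing is gone; here is a self-contained sketch of the same three steps. It is adapted to use a temporary directory with dummy files instead of ~/Desktop, so the paths and file names are assumptions made purely so the example runs anywhere:

```python
import os
import random
import shutil
import subprocess
import tempfile

# Stand-ins for ~/Desktop/My_folder/Photos/ and ~/Desktop/New_photos/
base = tempfile.mkdtemp()
src = os.path.join(base, "Photos")
dst = os.path.join(base, "New_photos")
os.makedirs(src)

# Create some dummy .jpg files to copy from.
for i in range(25):
    open(os.path.join(src, f"photo_{i}.jpg"), "w").close()

# 1. Create the destination folder.
subprocess.run(["mkdir", "-p", dst], check=True)

# 2. Randomly copy 10 .jpg files from src to dst.
for name in random.sample(os.listdir(src), 10):
    shutil.copy(os.path.join(src, name), dst)

# 3. Count the total number of files in the destination folder.
count = subprocess.run(
    f"ls {dst} | wc -l", shell=True, capture_output=True, text=True
)
print(count.stdout.strip())  # → 10
```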


That’s it. Running Linux commands inside Python is as straightforward as passing a list of commands to a Python function.

Formula One 2018 — Analysis on my favorites for this season.

Formula One is back for an all-new 2018 season. Teams unveiled their cars in late February. Mercedes announced the W-09, their 2018 competitor, on February 22nd, 2018. Details about the new car and how the season will evolve are slowly trickling out. After two weeks of winter testing in Barcelona, once again team Mercedes has emerged as a strong favorite to win the 2018 Formula One season.

The driver line-up for this year is unchanged from the last: British driver Lewis Hamilton in car 44 and Finnish colleague Valtteri Bottas in car 77. The engineering, manufacturing and management portions of the team have also remained unchanged. Toto Wolff, the executive director of the Mercedes AMG Petronas Formula One Team, still leads it. James Allison is the technical director. Andy Cowell, the managing director of Mercedes AMG High Performance Powertrains, oversees the engine development. Aldo Costa is still the engineering director for the team.

The lack of significant changes in the team is reflected in the new car. From the outside, the W-09 looks like an evolutionary change from last year's W-08. The W-09 exterior revisions can be broadly categorized into two groups: one, changes to accommodate the new rules; two, performance and aerodynamic changes.

The biggest exterior change to the W-09 is the addition of a head protection device called the Halo: a carbon fiber structure that sits on top of the driver cage to protect the driver's head. The Halo is meant to prevent the risk of direct impact to the driver's helmet from flying objects such as wheels or other parts from another car in the event of an accident or collision. The Halo also adds weight, making the new cars several kilograms heavier than last year's design.

Another major visual change is at the rear of the car. Once again, for the 2018 season, the new rules have changed the use of a structure called the shark fin. Last year's cars had a very prominent shark fin, a structure aimed at managing the aerodynamic vortices formed around the rear of the car; it acted as a separation layer for airflow between the left and right sides. In 2018, the new rules mandate a less prominent shark fin structure on the rear engine cover.

Related to the shark fin rule is another change to the rear that prevents the use of T-wings. T-wings were used last year in conjunction with the shark fin; these secondary wing structures modulated the airflow to the rear wing. In 2018, the new rules prevent teams from using those large T-wings. The W-09 incorporates this rule change in the form of a smaller, less prominent T-wing that sits below the big rear wing structure.

The smaller evolutionary changes to the W-09 are aimed at improving the aerodynamic profile of the car. Last year's W-08 was often referred to by the team as a "diva" due to its unpredictable handling characteristics. The W-09 design changes are aimed at making the new car more predictable over a wider range of tracks. One of the key changes to accomplish this is a sculpted, sleek, more aerodynamic side-pod cover and engine cover. This structure has a simpler, smaller aerodynamic profile than last year's side-pod/engine cover design, and it also achieves tighter packaging of the powertrain components.

The new car also features a raised front suspension. This allows smoother airflow to the aerodynamic elements behind it. Both the front and rear suspension elements have incorporated slightly revised aerodynamic surfaces to manage airflow over them. In the W-09, the front suspension elements have a simpler airflow profile than those in the last year’s car. The new car also has a bigger and more aerodynamic steering control arm in the front. There is also a bigger air scoop along the bottom of the front nose cone structure, that directs the airflow to the bottom of the car. The W-09 also has a very aggressive rake. These tweaks are all aimed at taming the unpredictability associated with last year’s design.

In summary, the W-09 design improves upon the W-08, a car that won both the drivers' and constructors' championships in 2017. The philosophy behind the W-09 seems to be simplifying the overall design and making the handling characteristics more predictable. Underneath all these changes is the usual reliable Mercedes turbo V6 hybrid powertrain. The reliability and performance of the powertrain will be very important for the 2018 season: the regulation changes prevent teams from using more than three engines for the whole season without incurring penalties, and Ferrari and Renault have made good progress over the winter in improving the performance and reliability of their powertrains.

It looks like Andy and his team of engineers overseeing the powertrain development have done an excellent job. Despite the tighter packaging, the car has been tested without any major issues. During winter testing the Mercedes car logged the most testing mileage and demonstrated good overall pace during full race-simulation sessions. Both Lewis and Valtteri are happy with the way the car is handling. The only big unknown for folks outside of the Mercedes team is the qualifying potential of the car.

Even though Ferrari topped the time sheets in this year's winter testing, and the testing numbers are harder to interpret than those from last year, it is safe to assume that the performance of the new Mercedes Formula One competitor is something other teams can't take for granted. The W-09 is in some ways a dark horse for the 2018 season. Mercedes is the team that has to defend its turf, and it seems like they have created a great package that will do exactly that. My predictions for the 2018 season are as follows: Lewis Hamilton will win the drivers' championship, Valtteri Bottas will be the runner-up and Daniel Ricciardo will be third. Red Bull Renault Racing will emerge as the second-best team, relegating Ferrari to third place in constructors' points.


Raspberry Pi — How to reformat a bootable SD card.

Recently, I received a Raspberry Pi to play with from a good friend of mine. One of the projects I am working on is deploying artificial intelligence applications on low-powered devices, and the Raspberry Pi seemed to be a great platform to experiment with. But I ran into a lot of interesting issues. I will write about those issues and how to solve them in a later post. Today, I am going to write about a simple issue: reformatting a Raspberry Pi SD card after you are done tinkering around. Instead of just leaving the SD card unused, I decided to salvage the storage device used to run the Raspberry Pi OS. Formatting the SD card is a fairly involved process.

Here are the detailed steps to reformat a bootable device, in my case a SanDisk 128 GB microSD card. I am using a Windows 10 machine to reformat the storage device.

Step 1: In the first step, we will launch diskpart, list all attached disks and select the appropriate disk.

Run the command prompt as an administrative user.
Launch diskpart by typing diskpart at the command line:

Here is an example of how it looks from the command line:

To list all the attached disks to the operating system, type list disk:

The output will look something like this:

To select a specific disk, type 'select disk #', where # represents the disk number. In the example above, the SD card is disk 2. To select disk 2, type the following command at the diskpart prompt:

If you type ‘list disk’ again, it will show the list of disks, now with an asterisk preceding the selected disk.
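Putting step 1 together, a session might look like this (the disk sizes and version number are illustrative; a 128 GB card typically reports about 119 GB):

```
C:\WINDOWS\system32> diskpart

Microsoft DiskPart version 10.0.17134.1

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          476 GB      0 B        *
  Disk 1    Online          931 GB      0 B        *
  Disk 2    Online          119 GB      0 B

DISKPART> select disk 2

Disk 2 is now the selected disk.
```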


Step 2: In this step, we will remove all existing partitions and prepare the disk for reformatting by cleaning the device.

To remove the existing partitions, we need to know all the available partitions on the storage device. Let us list the existing partitions using the 'list partition' command:

In this example, the command line output looks like this:
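An illustrative listing (the partition sizes are typical of a Raspberry Pi boot/root layout, not exact values):

```
DISKPART> list partition

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 0    Primary             43 MB  4096 KB
  Partition 1    Primary            119 GB    48 MB
```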

We have two available partitions: 0 and 1. Now, let us remove both of these partitions. To do that, first we have to select a partition using the 'select partition' command.

Once we have selected a partition, we can perform the partition removal operation using the 'delete partition' command.

In this example, the storage device has two partitions. We have removed both of these partitions one by one. The command line output looks like this:
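A sketch of that session, selecting and deleting each partition in turn:

```
DISKPART> select partition 0

Partition 0 is now the selected partition.

DISKPART> delete partition

DiskPart successfully deleted the selected partition.

DISKPART> select partition 1

Partition 1 is now the selected partition.

DISKPART> delete partition

DiskPart successfully deleted the selected partition.
```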

Once we have deleted all the available partitions, the next step is to prepare the disk for reformatting by cleaning the disk. Type ‘clean’ in diskpart command line and it will successfully clean the disk.

The command line output for this example will look something similar to this:
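An illustrative transcript of the clean step:

```
DISKPART> clean

DiskPart succeeded in cleaning the disk.
```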


Step 3: Create partitions and reformat the disk with an appropriate file system:

Diskpart has a command called 'create'. It takes one of three arguments:

  1. PARTITION – To create a partition.
  2. VOLUME – To create a volume.
  3. VDISK – To create a virtual disk file.

We will use the 'create partition' command. To run this, we need to know the arguments that 'create partition' accepts. There are five options for the 'create partition' command. They are as follows:

  1. EFI – To create an EFI system partition.
  2. EXTENDED – To create an extended partition.
  3. LOGICAL – To create a logical drive.
  4. MSR – To create a Microsoft Reserved partition.
  5. PRIMARY – To create a primary partition.

I want to create a new primary partition. Therefore, the DISKPART command is going to be:
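The command and its typical confirmation:

```
DISKPART> create partition primary

DiskPart succeeded in creating the specified partition.
```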

Once a primary partition is created, the next step is to reformat the new partition with the NTFS file system. Microsoft recommends exFAT for removable media like SD card storage, but I am selecting NTFS over exFAT for reasons of convenience. Before formatting, select the newly created partition. Then run the format command:

The command line output for my example is as follows:
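A sketch of the select-and-format session (a full format of a large card can take a while before the completion messages appear):

```
DISKPART> select partition 1

Partition 1 is now the selected partition.

DISKPART> format fs=ntfs

  100 percent completed

DiskPart successfully formatted the volume.
```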

Once the formatting has successfully completed, there will be two messages: '100 percent completed' followed by 'DiskPart successfully formatted the volume'. Now the SD card should be mounted properly in Windows Explorer. Under 'This PC', there should be a new empty disk storage device that can be attached as a removable device to any PC.


Innovation – What is the secret sauce?

This week, Harris County, TX, saw an unprecedented amount of rainfall during hurricane/tropical storm Harvey. The city of Houston and surrounding areas have been frequently affected by flooding in the past two decades or so, which led ProPublica to describe Houston as going from "Boom Town" to "Flood Town". A major portion of that article is devoted to exposing the pitfalls in flood risk modeling and management. Instead of using the latest tools, data, discoveries and ideas, flood risk modeling and management in Harris County, TX, still relies on archaic technology and outright denial. This laggardly approach to protecting the public has real-world consequences; one example is the constant "once in a lifetime" flooding events in the area. But, as a leader of an organization that focuses on developing technology-driven solutions for large-scale problems, what can I learn from this group of outright denialists? It is their stark lack of compassion, with a hint of cynicism: a dangerous attitude not just for public officials but also for private enterprises. To illustrate this point, I am going to use a positive example from history, one of the best examples of applying compassion to create a brilliant new solution. The story is all about family, father-son bonding and compassion towards fellow human beings. A story that everybody should be familiar with.

Robert Thomson (1822 – 1873), a Scottish inventor and self-taught engineer, invented the pneumatic tire at the age of 23. He built his first pneumatic tire out of hollowed-out India rubber, using vulcanization, a process discovered in 1839 that added sulfur to rubber to make it pliable but not sticky. The air cushion between the two layers of vulcanized rubber vastly increased the efficiency and comfort of horse-drawn carriages over the conventional solid rubber tires in use at the time. Thomson received a French patent in 1846 and a US one in 1847. The early prototypes of Thomson's new "aerial wheels" were used on horse carriages in Hyde Park, London. But the invention didn't catch the broader public's attention for the next fifty years or so.
In 1888, John Boyd Dunlop, a successful veterinary doctor trained in Scotland and practicing in Northern Ireland, encountered an interesting problem. His son's interest in competitive cycling at school made him acutely aware of how uncomfortable it was to ride a bicycle at speed. At the time, bicycle tires were again made of solid rubber or wood, just like those of horse-drawn carriages. He was thinking of ways to cushion the undulations of the track, and he independently stumbled across the idea of pneumatic tires; but instead of applying it to horse-drawn carriages, he applied it to his son's bicycle. Thanks to the more comfortable ride of Dunlop's newly devised pneumatic tires, his son won the competition.

In 1889, Dunlop collaborated with Irish cyclist Willie Hume. It was a marketing coup for Dunlop and a highly beneficial sports deal for Hume as a cyclist. Hume dominated the cycling competitions of the time and became a poster boy for pneumatic bicycle tires. In 1938, the magazine Cycling, now known as Cycling Weekly, awarded Hume his own page in the Golden Book. The success of Dunlop's pneumatic tires on the cycling scene led to a massive opportunity to market these products with the help of Irish businessman and financier Harvey du Cros. The late 1800s and early 1900s also coincided with the advent of motor cars, and pneumatic tires became an important part of the personal transport revolution spearheaded by the internal combustion engine.

It might seem like a series of fortunate coincidences leading to the creation of a completely new industry, but it all started with a single act of compassion by a father concerned for his son. After reading this story, it is easy to see where this is heading. For individuals, organizations, enterprises or governments, compassion is a great quality to have. Having a compassionate view towards fellow individuals helps us understand their issues and problems and think about finding solutions. I firmly believe the important secret sauce of innovation is a compassionate mind, a mind that looks to solve problems and save people time, money and effort in accomplishing things. I want this quality to be part of the culture at Ekaveda and to help individuals and teams come up with great solutions no one has imagined before. This same approach should be the guiding principle for governments too, like the city of Houston. A compassionate approach towards its residents, and an open-minded approach to looking at changing patterns and data to create resilient city infrastructure and prevent the frequent "once-in-a-lifetime" flooding events, will help us build better communities today that will survive and flourish tomorrow. I want everyone reading this post to think like John Boyd Dunlop and emulate the parental compassion he exhibited. It is the road to great new innovations and brilliant new products.

TL;DR: Compassion is the secret sauce of innovation. Just like the song "Kill Em With Kindness" by Selena Gomez, compassion and kindness are two killer qualities for anyone, even for businesses.

(Captions for images from top to bottom: 1) Picture of a flooded Eckerd Pharmacy, Fort Worth, TX, in 2004, from the National Oceanic and Atmospheric Administration website, obtained via Google Image Search and reproduced under fair usage rights; 2) A picture of the Brougham carriage (a new lightweight design by Lord Brougham) used by Robert Thomson to demonstrate his pneumatic tires, retrieved from the public domain and reproduced under fair usage rights from National Museums Scotland; 3) John Boyd Dunlop, who created the first pneumatic tires for bicycles and popularized them as a commercial product, picture from National Museums Scotland, public domain, reproduced under fair usage rights; 4) An undated photograph of Willie Hume, the poster boy of pneumatic bicycle tires, obtained via the public domain from the Merlin Cycles website; 5) A vintage-style caricature depicting Willie Hume and his cycling success using Dunlop pneumatic tires, by artist Deepa Mann-Kler, retrieved from the public domain and reproduced under fair usage rights.)

Learning from mistakes – Fixing pygpu attribute error.

I am part of a great AI community in New York. Today was the fourth week of an ongoing effort to share practical tricks for developing a natural language processing pipeline using deep learning. This effort grew out of my desire to engage the community in developing a greater understanding of how artificial intelligence systems are built. At a time when there is a lot of fear, uncertainty and doubt (FUD) about AI, I firmly believe community engagements like these help not only to reduce mistrust in AI systems, but also to disseminate new ideas and achieve some collective learning.

During the course, I encountered an interesting error. The Python development environment I was running was refusing to load a key deep-learning library called Keras. The error was related to the deep-learning back end Theano, specifically linked to a Python package called pygpu. I was really surprised that I wasn't able to load Keras on my system. It was a completely new error for me, possibly related to some extra installations I performed last week for a separate project.

The error was: AttributeError: module ‘pygpu.gpuarray’ has no attribute ‘dtype_to_ctype’

Even though the solution to the problem was found after the session was over, I am documenting my process here. This Keras error also taught me two important things today:

  1. Test your environment before D-day. Assumption is the mother of all fuck-ups, and today I assumed everything was going to work fine; it didn't.
  2. Be persistent and never give up before you find a solution.

Step 1: Since I am using the Anaconda distribution for Python 3, I uninstalled and re-installed the deep-learning back end, Theano, and the front end, Keras. To uninstall Theano and Keras, run the following lines in the command line or terminal.
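The exact commands were not preserved; with pip inside an Anaconda environment, the uninstall step might look like this (an assumption):

```
pip uninstall theano
pip uninstall keras
```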

Step 2: Reinstall the deep-learning back end and front end, along with a missing dependency called libgpuarray. Run the following lines in the command line or terminal to install libgpuarray, Theano and Keras.
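A sketch of the reinstall step; which packages come from conda versus pip is an assumption, since the original listing is missing:

```
conda install libgpuarray
pip install theano
pip install keras
```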

Step 3: Remove the offending package called pygpu from the python environment.
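A sketch of the removal step (use pip uninstall pygpu instead if the package was installed with pip):

```
conda uninstall pygpu
```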

Step 4: Create a .theanorc.txt file in the home directory. Since I am running a Microsoft Windows development environment, the home directory is C:\Users\Username\. Once inside the home directory, create a .theanorc.txt file and paste in the following lines:
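The exact contents of the author's .theanorc.txt are not preserved here; a minimal configuration that forces the CPU back end (sidestepping pygpu entirely) would be an assumption along these lines:

```
[global]
device = cpu
floatX = float32
```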

By following these four simple steps, I was able to solve the issue. Even though the error message was lengthy and intimidating, and I initially thought I might have to completely re-install my development environment, I am really proud that I discovered a fix for a really annoying, yet poorly documented, problem with the pygpu package.

Kudos to Anthony, the founder and co-organizer of the meetup group, for building a great AI community here in New York. We work really hard to bring you this awesome course every week. To keep these lessons accessible and free, and at the same time support our efforts, consider becoming our patron.

I am also part of a great deep learning company called Moad computers. Moad builds incredibly powerful deep learning dev-ops servers with all the necessary software packages pre-installed for developing artificial intelligence applications. The mission of Moad is to make AI development as frictionless as possible by creating a powerful open-source dev-ops server for AI. Check out the Moad computers' store for the most recent version of these servers, called Ada for AI.

Marissa Mayer for Uber CEO – It will work.

I recently came across two news items, one from Vanity Fair and the other from Inc, on a possible future role for Marissa Mayer as the CEO of Uber. Uber has had a very interesting year, including a high-profile intellectual property dispute with Google's autonomous driving division, Waymo; a series of horrible sexual harassment cases; personal tragedy striking the CEO and founder, Travis Kalanick; and finally Travis's most important and very wise decision to step down as the CEO of Uber.

Both articles cast Marissa as a bad choice for Uber. I disagree with both. Marissa's time at Yahoo was controversial. Initially portrayed as a savior for yesteryear's fallen tech behemoth, opinions soon changed when it became clear that turning around Yahoo was almost impossible. Yahoo had already lost its technology leadership well before Marissa was appointed CEO. There are very few second comings in technology, and Yahoo wasn't one of them.

Uber, on the other hand, is a technology and market leader in ride sharing. If one looks at Marissa's time at Google, converting a technologically dominant product portfolio into something even more enticing is her skill. Even at Yahoo, she accomplished a great deal. Finding a suitable suitor for Yahoo's core business was the only way forward, short of the inevitable slow and painful death of Yahoo.

Search and online advertising is a monopolistic business dominated by Google. Yahoo never had a real chance to survive in the market on its own. Yahoo's current assets match very well with Verizon's existing business. Yahoo excels at curating content across a wide range of topics, from healthcare to finance, and it also has a great advertisement platform. Verizon needs both of these businesses to differentiate itself from its competitors as an internet and cellular service provider. Getting Verizon to buy Yahoo was a great decision for the future prospects of Yahoo's core businesses.

The key problem at Uber is a cultural one, not a marketing or technology issue. Despite all the media hoopla around an inclusive Silicon Valley, it still has a very homogeneous work culture. It will take years to fix these fundamental problems, and Marissa isn't the answer for that. But during her time at Yahoo, Marissa demonstrated that she could operate within the stereotypically sexist work environment that Uber also suffers from. Uber needs a CEO who understands how to survive the horrible culture there in order to institute long-term changes. One takeaway from Marissa's tenure at Yahoo is her ability to survive in the cluster-fuck swamp. For Uber, therefore, Marissa Mayer is a great fit. Marissa knows how to keep the party going, and Uber needs to keep the party going, at least in the short term, to keep all the brogrammers happy.

In summary, fixing corporate culture is a long-term mission for any company. In the short term, a new leader at Uber needs to understand and survive the current horrible culture there. Anyone who expects the new CEO of Uber to magically wipe the slate clean is living in a self-created bubble. Hiring Marissa Mayer won't fix all of Uber's problems in one day, but I am confident that she would be the great leader Uber needs right now. She will survive the swamp long enough to drain it, clean it and develop it into a beautiful golf course one day.

The video above is a great explanation of sexual harassment in the workplace. As the founder and chief imagination officer at Ekaveda, Inc., I strive to ensure an inclusive workplace with zero racial and gender discrimination. Our core team actively promotes these ideas and fosters a culture of radical openness. This honest expression of opinion on Uber's next CEO is an attempt, on behalf of Ekaveda, Inc., to promote fair and humane conditions in the modern workplace.

(Captions for pictures from top to bottom: 1) Marissa Mayer at the 2016 World Economic Forum, via Business Insider, 2) Marissa Mayer on Vogue cover in 2013, 3) The recently updated Uber logo via the Independent, UK, 4) The logos of Verizon and Yahoo at their respective headquarters buildings, via Time, 5) Uber's advanced technology initiative to develop a self driving car, obtained via Business Insider, 6) Ronald Reagan quote: "It's hard, when you're up to your armpits in alligators, to remember you came here to drain the swamp." via Quotefancy.)

Hardware for artificial intelligence – My wishlist.

We recently developed an incredible machine learning workstation. It was born out of necessity, when we were developing image recognition algorithms for cancer detection. The idea was so powerful that we decided to market it as a product to help developers implement artificial intelligence tools. During the launch process, I came across an interesting article from Wired on Google's foray into hardware optimization for AI, called the Tensor Processing Unit (TPU), and the release of the second-generation TPU.
I became a little confused by the benchmark published by Wired. The original TPU is an integer-ops co-processor, yet Wired cites teraflops as a metric. The article does not make clear what specific operations it is referring to: tensor operations or FP64 compute. If those metrics really were FP64 compute, then the TPU would be faster than any current GPU, which I suspect isn't the case, given its low power consumption. If the numbers refer to tensor ops, then the TPU has only about one third the performance of the latest generation of GPUs, like the recently announced Nvidia Volta.
Then I came across a very interesting and more detailed post from Nvidia's CEO, Jensen Huang, that directly compared Nvidia's latest Volta processor with Google's Tensor Processing Unit. Because of the ambiguity about what the teraflops metrics stand for, the Wired article felt like a public relations piece, even though it came from a fairly well-established publication. Nvidia's blog post puts the performance metrics into better context than the technology journalist's article. I wish there were a little more specificity and context in some of the benchmarks that Wired cites, instead of a copy-paste of the marketing material passed on to them by Google. From a market standpoint, I still think TPUs are a bargaining chip that Google can wave at GPU vendors like AMD and Nvidia to bring down the prices of their workstation GPUs. I also think Google isn't really serious about building the processor. Like most Google projects, they want to invest the least amount of money for the maximum profit. Therefore, the chip won't have any IP related to really fast FP64 compute.
Low-power, many-core processors have been under development for many years. The Sunway processor from China is built on a similar philosophy, but optimized for FP64 compute. Outside of one supercomputer center, I don't know of any developer group working on Sunway. Another recent example is in the US, where Intel tried the same idea with their Knights Corner and Knights Landing range of products and fell flat on their face. I firmly believe Google is on the wrong side of history. It will be really hard to gain dev-op traction, especially for custom-built hardware.
Let us see how this evolves: whether it is just the usual Valley hype or something real. I like the Facebook engineer's quote in the Wired article. I am firmly on the consumer GPU side of things. If history is a teacher, custom co-processors that are hard to program have never really succeeded in gaining market or customer traction. A great example is Silicon Graphics (SGI). They were once at the bleeding edge of high-performance computing and then lost their market to commodity hardware that became faster and cheaper than SGI's custom-built machines.
More interest in making artificial intelligence run faster is always good news, and this is very exciting for AI applications in the enterprise. But I have another problem: Google has no official plans to market the TPU. A company like ours, at Moad, relies on companies like Nvidia developing cutting-edge hardware and letting us integrate it into a coherent, marketable product. In Google's case, the most likely scenario is that Google will deploy the TPU only on its own cloud platform. In a couple of years, hardware evolution will make it faster than, or as fast as, any other product in the market, making their cloud servers a default monopoly. I have a problem with this model. Not only will these developments keep independent developers from leveraging the benefits of AI, but they will also shrink the marketplace significantly.
I only wish Google published clear-cut plans to market their TPU devices to third-party system integrators and data center operators like us, so that the AI revolution would be more accessible and democratized, just like what Microsoft did with the PC revolution in the 90s.

Automation – A completely different thinking.

I have been spending some time thinking about leadership. Recently, I had the opportunity to sit down and listen to an angel investor who specializes in life science companies, particularly in New York. I was excited because I had heard incredibly fluffy pieces about this individual. But the moment this individual started to speak, I suddenly realized something: he has no real interest in this area of work. It showed in his presentation, where, in front of a very young and eager bunch of aspiring entrepreneurs, he showed up late for the talk and started going through slides created nearly two decades ago.

This disappointment reminded me of a recent article I came across, from Harvard Business Review, which portrayed a dismal view of the enterprise, where executives are living in a bubble. I realized it is not just the corporate executives: even the investors are living in a bubble. These individuals are speaking words that would make sense only to a raving madman. The layers and filters these individuals have created between themselves and the world outside have built an alternate reality for them. Recent news cycles suggest that the incompetent CEO is far more common than incompetence in any other job known to humankind. This points to an interesting moment in our social evolution, where the CEO has become one of the rare jobs in which incompetence is tolerated and yet highly remunerated.

This brings me to another interesting article on what exactly a job is and how to value one. This amazing article, published by BBC Capital, exposes an interesting conundrum of a capitalist economy: the jobs that are absolutely essential for human survival are the lowest paid, while the jobs that we consider completely pointless are extremely highly paid. So the whole question of what is valuable to society arises here. Do we prioritize nonsense over our own survival? Our current social structure seems to indicate exactly that. According to Peter Fleming, Professor of Business and Society at City, University of London, who is quoted in the article: "The more pointless a task, the higher the pay."

So almost all of our low-wage employment clusters around human activities that are absolutely essential for the survival of a society. If we apply these two diverse sets of thoughts to another interesting question, the future of jobs, then I believe I already have the answer. By implementing mass automation of all the jobs that are mission-critical for the survival of a human society, we eliminate all the low-wage jobs, and the individuals who previously held them move to positions where they are given a pointless job.

For the sake of argumentative clarity, imagine a situation where automation has been implemented in highly important jobs. A great example is the restaurant worker. Imagine a future where all of the restaurant workers who prepare food are replaced by an automated supply chain and a set of robots. The role of individuals is just to manage this automated supply chain and these robots. Imagine our teenage daughters or sons applying for a job at one of these restaurants. Instead of being hired as cheap human labor, these young individuals will now be hired as managers of the automated supply chain. Instead of needing years of school and college education only to end up incompetent and pointless to society, even a high school graduate will be able to do this job, because society is run reliably and efficiently by a highly robust and resilient automated supply chain. All the rest of the management hierarchy will remain the same, for corporate and regulatory reasons.

It seems to me that a highly automated society will create more value than one that still relies on human labor for its mission-critical jobs. This may seem counter-intuitive, but it is not. The value of human labor is a more abstract idea than one based on absolutes. In our current system of labor-dependent economies, only a handful of individuals are given free rein for incompetence and risk taking. In an automated economy, the underlying resilience will help more individuals take more risks and be unfazed by the outcomes, because none of these activities are absolutely essential for the survival of a human society.

This trend of automation will not only massively simplify our daily activities, but will also give us more time to do what we wanted to do in the first place, without worrying about thirst, hunger, hygiene or any of the mission-critical activities of our lives. The friction of human existence is reduced by machines, which helps us create more value for society.

More individuals will have the opportunity to work at an executive level than today, even with minimal training. What automation will bring is the democratization of executive jobs, not the loss of jobs. More people can have the freedom to be employed as CEOs and still be highly paid, because society runs on a highly reliable automated schedule. It will be a very different society, just like the massive shift from the agrarian economy to the industrial economy during the industrial revolution, but one that is much more highly paid and, for sure, has fewer unemployed.

This research on the current dismal state of executives and managers in the corporate sector is part of our mission to understand the overarching impact of automation and machine learning on healthcare. It is done as part of our cancer therapeutics company, nanøveda. To continue nanøveda's wonderful work, we are running a crowdfunding campaign on gofundme's awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

(Image captions from top to bottom: 1) A screenshot from the television programme The Thick Of It with Malcolm Tucker (played by Peter Capaldi), photographed by Des Willie, a great artistic portrayal of human dysfunction at the core of some of the most pointless, yet highly coveted jobs, (C) BBC and reproduced under fair usage rights, 2) A satirical portrayal of the recent troubles of United Airlines, published by Bloomberg, retrieved from the public domain through Google search and reproduced under fair usage rights, 3) A quote that says: "It's as if someone were out there making up pointless jobs just for the sake of keeping us all working." posted inside a London tube train car, with the hashtag bullshitjobs, 4) Image of a burger-flipping robot in action, obtained from the public domain through Google image search and reproduced under fair usage rights.)

Forced errors – Lessons from an accounting scandal.

In 2015, Toshiba corporation, based in Minato, Tokyo, Japan, disclosed to its investors a major corporate accounting malpractice. The accounting scandal dated back to the 2008 financial collapse. When market forces became unfavorable, Toshiba resorted to the terrible art of creative accounting, a.k.a. cooking the books.

Toshiba created a very interesting mechanism to cook the books. Instead of issuing direct orders to restate revenue and profits, the top-level executives of the firm created a strategy called "Challenges". The "Challenges" were quarterly financial performance targets handed to the managers of various divisions just a few days ahead of the submission due dates for the quarterly financial reports. The system was designed specifically to pressure the division heads into finally embracing the incredibly stupid act of creative accounting, instead of aiming at real improvements in corporate performance. The top-level executives were extremely confident that the performance pressure created by their "Challenges" system would eventually lead the company's mid-level managers to resort to these despicable accounting practices.

I am amazed by the level of top-level executive creativity in engineering a pressure-cooker situation to do all the wrong things. The fiscal situation might have leveled off if Toshiba had started making money after the financial crisis. But the problem was that the ingeniously devious top-level executives also started to believe in the cooked books. Armed with brilliant yet fake profit results, the company unleashed its ambitions upon the world.

Toshiba went ahead with its ambitious, capital-intensive endeavor of expanding nuclear power generation in North America. In 2006, two years before the impending global economic disaster that originated in North America, Toshiba purchased the North American nuclear power behemoth Westinghouse Electric Company from the British taxpayer-funded British Nuclear Fuels Limited (BNFL). At the time, economists and analysts questioned the wisdom of BNFL selling its supposedly profitable nuclear power generation business to Toshiba. The doubts stemmed from the then widely held belief that the market for nuclear power generation was poised to grow rapidly over the next decade, based on growing global demand for electricity, mostly from rapidly growing Asia, especially China and India.

The reasons for the sale were multifactorial. As a UK taxpayer-funded operation, BNFL had very little leverage in Asian markets like China and India, and the British government of the day was unwilling to take on the extreme financial, marketing and operational risks of continuing BNFL's operations. A few years later, the biggest secret was out: BNFL was in a huge financial mess and its operations were in turmoil, albeit invisible to the public eye back in 2006.

Behind the sale of Westinghouse, another key market force was in play. Even in 2006, the emerging pattern in the world of power generation was a slow shift away from complex solutions like nuclear power toward simpler and more reliable ones like natural gas-fired power plants and solar energy. The business of building nuclear power plants has also always been riddled with extremely high risks, including large-scale cost overruns and unanticipated delays.

Even before the 2008 North American financial crisis, the appetite for large-scale, capital-intensive risks like funding nuclear power plants was coming to a crawling stop. The companies that operate these plants return a profit only after forty years of commercial operation, while the operational life of a plant is approximately sixty years. Therefore, for almost two thirds of a nuclear power plant's life cycle, it operates at a loss. Very few investment firms have the resources and the expertise to handle such a complex, long-term operation. In such a tough economic situation, a UK taxpayer-funded operation like BNFL had limited options to survive: either sell its then widely-considered-lucrative nuclear power business, or significantly scale down operations by limiting its interests to the UK and lose a huge amount of market valuation in the process.

In conclusion: market realities, the complexities of operating a nuclear power plant and a risk-averse UK government at the time led to the sale of Westinghouse to Toshiba. In hindsight, BNFL's exit from the UK taxpayer-funded nuclear power generation business was one of its best business decisions ever. BNFL made over five times the money it had paid to buy Westinghouse Electric Company in 1999. Sadly, the sale of the Westinghouse division and the fire sale of other assets that followed didn't save the company. BNFL became defunct in 2010.

At a time when nuclear power had slowly fallen out of favor around the world, Toshiba energetically and optimistically forged new deals in North America. After a flurry of new orders, it seemed like Toshiba had acquired a winner in the Westinghouse Electric Company. Then, in 2011, bad news hit the nuclear power industry in the form of the Fukushima disaster. Westinghouse's most advanced reactor design, the AP1000, shared design similarities with the GE-developed reactor at Fukushima. The US regulatory scrutiny that followed revealed flaws in its core shielding system, particularly in the strength of the building structure that holds the nuclear reactor core.

Concurrent with these setbacks, the accounting wizards at Toshiba were creating an alternate reality in corporate finance and accounting. It became increasingly clear to the executives early on that Toshiba had ruinously overpaid to acquire Westinghouse Electric Company. To mitigate the financial blow of having unknowingly bought a lemon while also dealing with a global financial meltdown, Toshiba from 2008 onward took up the practice of misstating its revenue and deferring expenses.

Some accrued, real expenses were shown as assets instead of liabilities, which led to a constant misstatement of profits across all of its divisions. Because the accounting malpractice was so sophisticated, so well engineered and spread over sprawling business interests ranging from semiconductors to healthcare to social infrastructure, it took nearly a decade to reveal its ugly face. It makes me wonder: what if Westinghouse Electric Company had turned a profit and avoided all the cost overruns and delays? We might never have heard about this scandal at all.

The ingenuity behind this large-scale corporate malpractice is rooted in human psychology. The C-suite at Toshiba, instead of handing down direct orders to misstate profits, created a system of impossible expectations, in which deceitful behavior was the only way to remain employed, run the business and climb the corporate ladder inside Toshiba. The bet these executives made was a cynical one: human ethics will fail to intervene if the entire management system surrounding everyone forces them to behave unethically.

It reminds me of Stanley Milgram's brilliant work on obedience and authority. In Toshiba's case, they used it to create an alternate corporate financial reality. The problem with this behavior is that unforeseen business risks can sometimes expose the quicksand upon which a false empire is built. Here is the movie "The Experimenter", based on Stanley Milgram's groundbreaking work on human behavior under authority. I recommend this movie to anyone interested in understanding the complex human dynamics of obedience and authority.

It became exceedingly clear by late 2016 that both Westinghouse Electric Company and Toshiba were in a deep financial mess. All the creative magical thinking and accounting practices couldn't solve the problems of regulatory issues, construction setbacks, constant delays, cost escalations, and increasingly frustrated operators and suppliers.

The profit-making healthcare and semiconductor businesses couldn't carry the entire financial burden of supporting a clearly failing corporate parent. The profitable healthcare division was sold to Canon in a hurry, to prevent a rapid loss in its value in the event of a future bankruptcy of the parent company. It is very likely that even this sudden, large infusion of cash from the healthcare sale came too late to prevent an imminent, catastrophic collapse of Toshiba. Next in line for the fire sale appears to be Toshiba's semiconductor division.

I have learned three lessons from studying Toshiba's imminent collapse due to its terrible accounting practices. I am sharing those three lessons:

  1. Always question the corporate culture and create an environment where employees, partners, suppliers and anyone directly or indirectly involved with the business is free to ask questions. In other words, embrace nanøveda's philosophy of radical openness.
  2. Remember Murphy's law: anything that can go wrong will go wrong; and the scarier cousin of Murphy's law, Finagle's law: anything that can go wrong will, at the worst possible moment.
  3. Businesses are a human enterprise, which means they have a built-in optimism bias. Always be aware of this bias. When things go wrong, human psychology directs us to hide it rather than share it. Therefore, cultivate a culture of sharing mistakes, such as a monthly reporting session on all the SNAFUs, or a satirical evening of profanity-riddled tirades against the management overlords. Comedy is the best way to reveal the ugliest of our secrets.

This is an honest and heartfelt look at a corporation that I once admired and how it all fell apart. It is part of my journey to create a better healthcare and cancer therapeutics company here at nanøveda. To continue nanøveda's wonderful work, we are running a crowdfunding campaign on gofundme's awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

(Image captions from top to bottom: 1) Toshiba TC4013BP microprocessor on a printed circuit board, obtained through Wikipedia, 2) A painting that accurately depicts the fraud and deceit discussed in this blog post: William Hogarth's The Inspection (The Visit to the Quack Doctor), the third canvas in his Marriage à-la-mode, obtained through Wikipedia, 3) Double-sided Westinghouse sign that was once located at the intersection of Borden Avenue and 31st Street on the north side of the Long Island Expressway in New York City, dated 1972, from the collection of Richard Huppertz, obtained from Wikipedia, 4) A cutaway section of the pressurized water reactor design that was used at Fukushima Daiichi, obtained from Wikipedia, 5) Image of an English toast with Murphy's law engraved on the jelly, obtained from the public domain and reused with permission through Flickr.)