
Analysis — Bhopal tragedy: neither sabotage nor a routine maintenance operation gone wrong

I recently came across a news article on the disproportionately devastating impact of the COVID-19 pandemic on the victims of the Bhopal gas tragedy.1 This news motivated me to revisit and learn more about an event that happened over 35 years ago. The Bhopal gas leak tragedy is the world’s worst industrial disaster. Methyl isocyanate (MIC), a compound used to manufacture pesticides, was released from the Union Carbide India Limited (UCIL) chemical factory located in the city of Bhopal, the capital of the Indian state of Madhya Pradesh. The Danbury, Connecticut, United States (US) based Union Carbide Corporation (UCC) had established the Mumbai-based UCIL as its Indian subsidiary to manufacture and market pesticides in the country. The US corporation held a majority stake of 50.9%; the rest of the shares were held by various Indian insurance companies, Indian banks, the Indian government and Indian investors. The land on which the factory stood was leased from the Madhya Pradesh state government.

Chemical structure of methyl isocyanate (MIC)
Chemical structure of methyl isocyanate (MIC), the toxic substance responsible for Bhopal gas leak tragedy. The image is from Wikipedia. (Copyright disclaimer: Copyright belongs to the original copyright holder. This image is reproduced here under the provision for fair usage of copyright content for research and commentary purposes. ) 

The gas released during the Bhopal gas tragedy: MIC is a highly toxic and irritant chemical. Even at concentrations of 2 ppm, it is extremely toxic to humans.2 This catastrophic event resulted in nearly 3,800 deaths and over 500,000 injuries. The gas leak that happened over three decades ago continues to have adverse health effects on the survivors of this tragedy. The chronic effects of MIC exposure include an increased incidence of cancer, respiratory disease, reproductive disorders, a greater number of stillbirths, a higher than usual rate of spontaneous abortions,3 and now, during the COVID-19 pandemic, an increased risk of mortality among those victims. The survivors also suffer from various mental health issues, including post-traumatic stress disorder, from having witnessed and experienced the gassing to death of thousands and the injuring of even more. The tragedy left unfathomably deep scars on the psyche of Bhopal and India. Even though it happened before I was born, it was one of the dominant news events of my childhood during the 1990s. Yet despite constant exposure to the story from a very young age, my recent research into the tragedy surfaced a few previously unknown facts that took me by great surprise.

Image from Bhopal after the methyl isocyanate gas leak.
The methyl isocyanate gas leak in Bhopal resulted in the deaths of more than 3,800 people and injured nearly 500,000. Its legacy still haunts both Bhopal and India. This image is from India Today’s coverage of the Bhopal gas tragedy on its 30th anniversary.
(Copyright disclaimer: Copyright belongs to the original copyright holder. This image is reproduced here under the provision for fair usage of copyright content for research and commentary purposes. )

The biggest surprise of all to me was the cause of the Bhopal gas leak. On 2nd December 1984, UCIL employees undertook an unreported, riskier, modified maintenance procedure during the second shift (14:45 hours to 22:45 hours), at around 21:30 hours. Water was injected through a pressure gauge tap near the MIC storage tank 610. This pressure gauge tap was located very close to the common valve over the tank 610, and it provided more direct access to a malfunctioning diaphragm motor valve (DMV) that could not be cleaned properly using the routine water-washing maintenance procedure. Restoring normal functionality of the DMV was critical for nitrogen pressurization, and therefore for transferring MIC out of tank 610. During this off-the-books operation, the UCIL employees inadvertently introduced 500 kg of water into the MIC tank 610.4 This human error triggered a massive heat-releasing chemical reaction within the tank that resulted in the Bhopal gas tragedy.

Figure from Kenneth Bloch's book depicting the entry of water into the tank 610.
The route of accidental injection of water into the methyl isocyanate storage tank 610 during a modified water-washing operation undertaken by the employees of the Union Carbide India Limited’s Bhopal plant. This image is from the book: Rethinking Bhopal: a Definitive Guide to Investigating, Preventing, and Learning from Industrial Disasters (2016) by Kenneth Bloch.
(Copyright disclaimer: Copyright belongs to the original copyright holder. This image is reproduced here under the provision for fair usage of copyright content for research and commentary purposes. )

Approximate timeline of events

2nd December 1984
Unknown period before 21:30 hours : The UCIL Bhopal plant manager and the MIC supervisor formulated a modified water-washing procedure that was supposed to establish a more direct water access route to a diaphragm motor valve (DMV) that was malfunctioning due to trimethyl isocyanurate crystal deposition. The role of UCC, the parent company of UCIL, in the formulation of this modified maintenance plan is unknown. The plan involved removing a pressure gauge close to the common valve near the storage tank 610 and then injecting water through the pressure gauge tap. The employees overseeing this modified maintenance plan sketched diagrams in their daily log book depicting how to inject water through the pressure gauge tap. Most importantly, however, their planning did not account for the possibility of trimethyl isocyanurate crystal deposits on the common valve as well.

21:30 hours: The employees started the modified cleaning procedure for the DMV through the pressure gauge tap near the MIC storage tank 610 and established water pressure across the DMV. The workers performed a cursory check to see if the common valve was closed. The trimethyl isocyanurate crystal deposits blocking the common valve gave the operators a false sense of security by offering feedback resistance to their valve closure attempts. In reality, the common valve was jammed open by the crystal deposits inside the valve body.

Between 21:30 hours and 22:30 hours: The trimethyl isocyanurate crystal plug around the common valve dissolved away, restoring the normal functionality of the common valve, which was now left in a partially open position. Since the modified water-washing procedure was unsupervised, water started to pour into the MIC storage tank 610 with no one noticing it.

Between 22:30 hours and 22:45 hours: The workers manning the gauges noticed brief spikes in the pressure readings for the tank 610. The pressure increase was due to the release of gaseous products from the heat-producing reaction between MIC and water. The pressure rise was quickly nullified by the escape of those gases through the same route established for the modified water-washing procedure. The gases from the chemical reaction escaping through the process valve header (PVH) caused tearing (eye-irritation) effects around the MIC storage area.

Between 22:45 hours and 23:00 hours: The workers looking for a leak detected a constant dripping of a yellowish-white substance near the process valve header (PVH) and relief vent valve header (RVVH) area. This leak was due to the escape of the gaseous products of the chemical reaction in the tank through the modified water-washing route. Had the modified water-washing process been working as expected, there should have been a steady flow of water from the PVH-RVVH area. Instead, the reported leakage of a yellowish fluid clearly indicated the escape of the reaction products of MIC and water. The workers from the newly changed night shift noticed the water hose connected to the pressure gauge tap near the common valve of the MIC storage tank 610. Fearing the worst, they checked the common valve and found it left in an open position. They quickly closed the common valve and disconnected the water hose. They also found that water was still running through that hose at the time of their discovery.

23:30 hours: The workers started reporting significant tearing effects around the MIC storage tank area.

23:45 hours: The workers investigating the leak in the MIC storage area reported the leak near the PVH-RVVH area to the MIC supervisor. The MIC supervisor mistook the MIC leak for a water leak, because a water leak was the expected outcome of the modified maintenance process.

Between 23:45 hours and 00:15 hours: With the common valve near the storage tank 610 properly closed, the tearing effect temporarily subsided. But pressure was building up inside the tank 610 due to the autocatalytic, exothermic reaction between ~500 kg of water and 80 metric tonnes of MIC.
3rd December 1984
Between 00:15 hours and 00:30 hours: The usual late night tea service was noted to be unusually quiet and the mood was gloomy. The employees of the plant had realized that water had entered the storage tank 610 through the pressure gauge tap, during the modified maintenance operation.

Between 00:30 hours and 00:40 hours: The pressure gauge readings were steadily increasing, inching closer to the operationally significant 40 psi level. The workers attempted to transfer the MIC in the storage tank to one of the transfer pots used for Sevin production, using the pressure build-up inside the storage tank itself. But the amount of MIC transferred was insufficient to relieve the pressure build-up inside the storage tank.

Between 00:40 hours and 00:45 hours: The temperature inside the storage tank exceeded 25 degrees Celsius, the maximum of the temperature gauge’s recording range. The pressure in the tank exceeded 40 psi, breaking the rupture disc, a safety device designed to burst at that pressure. The safety valve at the next checkpoint had also popped open. The concrete covering the tanks cracked open. The chemical vapors from the reaction inside the tank, along with vaporized MIC, started shooting out of the vents.

00:45 hours: The internal pressure of the storage tank 610 exceeded 55 psi. The MIC supervisor ordered all the water sources around the MIC storage area shut off. The workers attempted to break down the MIC by spraying water on the toxic white cloud hanging over the storage area, but the water pressure was insufficient for the spray to reach the height of the cloud of leaked toxic gases.

00:50 hours: The vent gas scrubber (VGS), a system designed to inject a caustic soda solution to break down the escaping gases, was turned on. But the meter showing caustic soda circulation indicated that none was circulating inside the VGS. The VGS system failed to contain any of the gases escaping from the storage tank 610.

01:30 hours: The toxic white MIC cloud started engulfing the employee area. The employees panicked and ran away from the white cloud of death. Some of them had access to oxygen masks; they put the masks on and moved upwind to safer areas.

02:00 hours: The winds slowly blew the toxic white MIC cloud away from the factory site, and it began its travel towards the city. It was just the beginning of the gassing to death of Bhopal’s residents.

After 02:00 hours: Once the chemical reaction subsided, the tank 610 managed to retain an absolute internal pressure of 3 psi (~12 psi below atmospheric pressure) for nearly 20 days. This was possible because the common valve had been inadvertently restored to its normal functionality during the modified maintenance procedure at around 21:30 hours on 2nd December 1984, and had then been securely closed during the tank inspection at around 22:45 hours that night. This was a surprising vindication for the designers of the storage system: their design could withstand an explosive decompression. But that robust storage system alone was clearly demonstrated to be insufficient to prevent a catastrophic gas leak.

06:00 hours: The senior instrumentation supervisor arrived early to inspect the storage tank 610. He discovered that a pressure gauge was missing from the storage tank 610 area, and found a hose, with water still running out of it, right next to the pressure gauge tap.
Tank 610 abandoned at the Union Carbide India Limited's Bhopal plant site
A photograph of the tank 610 from where the methyl isocyanate leaked out following a modified maintenance operation undertaken by the Union Carbide India Limited’s Bhopal plant employees. Following the tragedy, the tank was excavated from its original underground location. The image of the abandoned Union Carbide India Limited’s Bhopal facility showing the tank 610 is from Flickr. (Copyright disclaimer: Copyright belongs to the original copyright holder. This image is reproduced here under the provision for fair usage of copyright content for research and commentary purposes. )

The purpose of the modified water-washing maintenance procedure was to establish a more direct water access line to the diaphragm motor valve (DMV) that led to the process valve header (PVH). This valve had malfunctioned due to trimethyl isocyanurate (MIC trimer) crystal deposition, and the faulty valve prevented the nitrogen pressurization of the tank needed to transfer out the stored MIC. After MIC production was terminated in October, two attempts to pressurize the tank 610 using nitrogen, the first on 30th November 1984 during the first shift (06:45 hours to 14:45 hours) and the second on 1st December 1984 during the second shift (14:45 hours to 22:45 hours), were unsuccessful. The employees were working against a literal ticking clock to fix the malfunctioning valves. Time was running out for the once promising pesticide manufacturing plant in Bhopal. The investors and the parent corporation had run out of patience and decided to shut down the plant, dismantle everything and ship it off to either Brazil or Indonesia.6 The industrial manufacturing license from the Madhya Pradesh government was going to expire on 1st January 1985, and the parent company Union Carbide Corporation was going to withdraw from the Indian partnership.

The employees were left with only a handful of days to convert the stored MIC into a technical-grade pesticide product at the nearby Sevin manufacturing facility. To accomplish that, they first needed to transfer the stored MIC from tank 610 to the 1-ton production pots used by the Sevin team. When the plant was new, the employees accomplished this using transfer pumps. But once the transfer pumps started failing frequently, also due to the deposition of trimethyl isocyanurate crystals, the factory abandoned them altogether and started using nitrogen pressurization of the storage tanks to push out the stored MIC. When the nitrogen pressurization also started failing due to valve leaks, the only way forward was to address those leaks as quickly as possible so that nitrogen pressurization of the MIC storage tanks could be restarted.

Chemical reaction depicting the formation of trimethyl isocyanurate from methyl isocyanate.
The chemical reaction that shows the formation of trimethyl isocyanurate (MIC trimer) from methyl isocyanate (MIC). This reaction turned out to be the proverbial nail in the coffin for the Union Carbide India Limited’s Bhopal plant; in their case, the catalyst was rust formed inside their carbon steel pipes and valves. The image is from Wikipedia. (Copyright disclaimer: Copyright belongs to the original copyright holder. This image is reproduced here under the provision for fair usage of copyright content for research and commentary purposes. )

In the minds of the employees who undertook this slightly revised operation, restoring the normal function of the DMV was essential to have a fighting chance of keeping the plant operational. This covert maintenance procedure was very similar to a routine water-washing operation used to clean the relief vent valve header (RVVH) and the process valve header (PVH). That similarity could explain the initial reports attributing the accident to a routine maintenance operation gone wrong, though it cannot explain the Indian government’s later insistence that the maintenance procedure was nothing but routine. But everyone involved in the process missed a key fact: both the common valve and the DMV were made of iron, which made both valves equally prone to trimethyl isocyanurate crystal deposition. These crystal deposits often give plant operators managing the affected valves a false feeling of security. To the untrained eye, the feedback resistance to the force applied to close the valve gives the impression of a securely shut-off valve. In reality, the crystals are simply blocking the valve from operating normally.

Critically, the employees overseeing the operation did not insert a blind to reduce reliance on the integrity of the common valve. Even though a blind was mandated by the operating procedures and was the safer way to accomplish an otherwise risky task, it would have increased the amount of work for an operation they considered routine and urgent. Also, since the plant was seriously understaffed after several brutal cost-cutting exercises, waiting for the overburdened maintenance staff to come around and install a valve blind would have further delayed their ability to transfer MIC for Sevin production. Time was of the essence for the remaining UCIL staff in Bhopal, even if that meant taking a few extra risks here and there. This covert maintenance operation was probably also an attempt by the employees to demonstrate their value to the management, so that, even if the Bhopal plant was shut down, they had a fighting chance of being employed elsewhere in the large network of factories under the Union Carbide umbrella.

A satellite view of the Union Carbide India Limited's abandoned factory site in Bhopal.
A recent satellite image of the abandoned Union Carbide India Limited’s factory site, in Bhopal, Madhya Pradesh, India. This satellite image was collected originally by CNES/Airbus, Maxar Technologies and obtained via Google Maps satellite imaging view. (Copyright disclaimer: Copyright belongs to the original copyright holder. This image is reproduced here under the provision for fair usage of copyright content for research and commentary purposes. )

Initially everything proceeded normally. Sadly, it was the proverbial calm before the storm. Despite this cleaning attempt being clearly riskier and off-the-books, the employees tasked with it added another layer of risk by sticking with the method used during routine water-washing maintenance operations: they let the water run uninterrupted for three hours, without any supervision. Once water pressure was established across the DMV, the employees who undertook the cleaning operation ended their shifts and handed plant operations over to the next shift (22:45 hours to 06:45 hours). During this time, the trimethyl isocyanurate deposits in the common valve washed away, leaving the common valve in a partially open position. Since the water connection was close to the common valve, basic hydraulics dictated that the water would go directly through the common valve and into the MIC storage tank 610. Assuming that water had been leaking into the tank right from the start of the modified maintenance operation, the water entry continued for well over two hours, and approximately 500 kg of water entered the storage tank during that period. Even though MIC is a relatively stable chemical, mixing water with MIC results in a powerful exothermic chemical reaction. Soon enough, the unintended entry of this large amount of water triggered a violent heat-producing reaction.
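The implied water flow rate is modest and entirely plausible for an ordinary plant hose, which helps explain why the leak went unnoticed. A rough sketch of the arithmetic (the ~500 kg total and the roughly two-hour window are the figures quoted above; the conversion to litres per minute is my own):

```python
# Back-of-envelope check: what average flow rate does ~500 kg of water
# entering over roughly two hours imply?

water_kg = 500.0        # approximate mass of water that entered tank 610
duration_min = 2 * 60   # assumed ~2 hours of unsupervised flow

# Water density is ~1 kg/L, so kilograms map directly to litres.
flow_lpm = water_kg / duration_min

print(f"Implied average flow: {flow_lpm:.1f} L/min")  # ~4.2 L/min
```

A trickle of about four litres per minute through a partially open valve is easy to miss, especially during an unsupervised operation spanning a shift change.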

The stored MIC was already at room temperature even before the modified water-washing operation started. The company-recommended storage temperature for MIC was zero degrees Celsius, but due to an earlier cost-cutting decision, the refrigeration system meant to control the MIC storage temperature had been disconnected, and the tanks were no longer maintained at a steady zero degrees Celsius. Once the water started pouring in, the resulting chemical reaction between MIC and water caused the temperature in the storage tank to rise quickly. With no temperature-controlling mechanism in place, it soon exceeded the boiling point of MIC, converting the remaining unreacted liquid MIC into its gaseous form.
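The claim that the reaction heat alone could push the tank contents past MIC's boiling point (~39 degrees Celsius) can be sanity-checked with a deliberately crude energy balance. The 500 kg of water is the figure from the account above; the reaction enthalpy, tank inventory, and specific heat below are illustrative, order-of-magnitude assumptions of mine, not measured values:

```python
# Crude adiabatic energy balance: can the heat of the MIC-water reaction
# alone push the tank contents past MIC's ~39 C boiling point?

water_kg = 500.0                # water that entered tank 610 (from the account)
water_mol = water_kg / 18.0e-3  # moles of water

# ASSUMED order-of-magnitude exothermic reaction enthalpy, J per mole of
# water consumed (illustrative, not a measured value).
dH_per_mol_J = 80e3
heat_released_J = water_mol * dH_per_mol_J

# ASSUMED tank inventory and specific heat (organic-liquid ballpark).
mic_inventory_kg = 40_000.0
cp_J_per_kg_K = 2_000.0

delta_T = heat_released_J / (mic_inventory_kg * cp_J_per_kg_K)
print(f"Estimated adiabatic temperature rise: ~{delta_T:.0f} K")
# Starting from ~25 C ambient, even this crude estimate comfortably
# clears MIC's ~39 C boiling point.
```

Even with these rough numbers, the estimated rise of a few tens of kelvin makes it unsurprising that an unrefrigerated tank boiled over; the real runaway was worse, since the heat of reaction fed further decomposition reactions.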

Since the MIC tank was 80% full, the volume of the gases produced during the exothermic reaction could not be contained in the vessel. There was also no option of safely transferring the boiling, violently reacting MIC to the other two storage tanks, which also contained large quantities of MIC. The pumps designed to transfer materials out of the storage tank had been decommissioned as part of the plant liquidation process. So the employees had to resort to yet another risky procedure: transferring the boiling MIC to the Sevin manufacturing vats using the pressure build-up from the chemical reaction itself. They transferred one batch of MIC this way, but the amount removed was grossly insufficient to relieve the pressure build-up inside the tank. The storage tank essentially acted as a giant chemical reaction pot, with the employees left with no options to control the violent reaction.

Finally, once the internal pressure of the storage tank exceeded its design limits, the vaporized MIC and the exothermic reaction products escaped through the safety release mechanism. This resulted in the release of approximately 40 metric tons of MIC into the atmosphere. The employees decided not to inform the police; an informal company policy forced employees not to alert local law enforcement in the event of a gas leak. The alarm for alerting the public of a gas leak had also been disconnected, so no warning reached outside observers. The toxic white cloud hung over the MIC storage tanks, like the pristine white robe of an angel of death, slowly traveling along the breeze towards a sleeping city, its people deep in slumber, unaware of the ghastly deaths awaiting them.
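The sheer scale of a 40-metric-ton MIC release is easier to grasp as a gas volume. A simple ideal-gas estimate (the 40-ton figure is from the account above; the molar mass and the ideal-gas treatment are my simplifications, and the real cloud was a cold, dense aerosol-vapor mix, so treat this as order-of-magnitude only):

```python
# Ideal-gas estimate of the vapor volume of ~40 metric tons of MIC.
mic_released_kg = 40_000.0
mic_molar_mass_kg = 57.05e-3   # kg/mol for CH3NCO

R = 8.314          # J/(mol*K), universal gas constant
T = 298.15         # K (~25 C ambient, simplifying assumption)
P = 101_325.0      # Pa (1 atm)

moles = mic_released_kg / mic_molar_mass_kg
volume_m3 = moles * R * T / P   # ideal gas law: V = nRT/P
print(f"Approximate vapor volume at 1 atm: {volume_m3:,.0f} m^3")
```

Roughly seventeen thousand cubic metres of vapor, before any atmospheric dilution: enough to blanket a dense, low-rise neighbourhood at ground level, which is consistent with the cloud's lethal spread over the city.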

Immediately after the gas leak, the thin veil of secrecy around the covert maintenance operation started falling apart. The next morning, a senior instrumentation supervisor at the plant found that the pressure gauge attached to the tank 610 was missing. The plant logs showed that this pressure gauge had been present on 30th November 1984, two days before the tragedy. The same supervisor also stated to investigators that he found a hose lying beside the tank man-head that morning, with water still running out of it.9 Even though this testimony was used by the Union Carbide Corporation to build a case for sabotage, they selectively ignored the evidence that pointed towards the elaborate planning that went into the modified maintenance operation. The ‘independent’ investigation report noted that a sketch depicting water injection through the pressure gauge tap was recovered from a daily log,9 and that this sketch had been made prior to the incident. The Indian government, on the other hand, through the CBI, resorted to extreme intimidation and interrogation tactics against this supervisor. They dubbed this exercise ‘lie detection’, and used it to force the individual to change his testimony to suit the government narrative that the water-washing operation was nothing but routine.10

Union Carbide: Science helps build a new India, print advertisement, 1962
Union Carbide’s ‘Science helps build a new India’ (1962) print advertisement. Union Carbide and its Indian subsidiary created and dominated the Indian pesticide industry. But, by the early 1980s, through a series of short-sighted decisions and misguided cost-cutting exercises, the company squandered one of its largest markets. Union Carbide was quickly out of its depth in its own chemistry, and the Bhopal tragedy was a direct consequence of that lack of mastery over the company’s own inventions. The advertisement is a throwback to the better days Union Carbide had in India. This image is from Amazon Canada. (Copyright disclaimer: Copyright belongs to the original copyright holder. This image is reproduced here under the provision for fair usage of copyright content for research and commentary purposes. )

The other surprising fact I learned was the repetitive nature of history, in this instance the lethal combination of human error and a clueless corporation. Union Carbide Corporation was a legacy corporation with a very poor understanding of its own chemical processes. This observation is in sharp contrast to the health and safety report published by UCC after the tragedy, in which the author portrayed UCC as a safety-conscious organization. He even waxed poetic about his mother instilling in him the quality of never telling lies. But the facts of the Bhopal tragedy told me a totally different story. The Union Carbide Corporation lied. It lied to investors and the public about the depth of its understanding of its own chemistry. It lied about the problems it was having with the Bhopal plant. It also lied about the extent of its involvement in running the Bhopal plant. UCC’s running of UCIL could be characterized as hands-off corporate management, with a minuscule amount of direct communication between the US headquarters in Danbury and the Indian counterpart in Mumbai. Yet the US company forcefully exerted a huge influence on how its Indian subsidiary ran its daily operations, or at the very least on their direction.

The sabotage theory was first floated by UCC’s ‘independent’ investigations into the tragedy. To protect itself against defamation lawsuits in the US, the firm decided never to officially name the ‘saboteur’; publicly naming one could have exposed the company to a series of defamation lawsuits by the named parties. Instead, they let the media and former employees do the revealing through various publications and the Indian court system.6,7 The sabotage theory was never really taken seriously by the Indian public, but the US court system loved the ingenuity of the argument. The sabotage theory spread by UCC is a worst-in-class example of corporate gaslighting and racism, so convincing that it has outlived the very individuals who created the fantasy tale.

The sabotage theory held that a disgruntled employee decided to spoil the MIC batches by pumping water into the MIC storage tank. It is a simple and easy-to-understand theory. No understanding of valve mechanics, chemical crystal formation, or basic human decency is required to digest the crapshoot of a Hollywoodesque fantasy manufactured by UCC. Like a brilliant movie script, it had very clear motives, a very clear victim and a very clear villain. According to UCC representatives and the ‘independent’ investigators hired by UCC, the real victim was their incredibly safety-conscious corporation that could never do anything wrong. The villain, in their eyes, was a lowly paid, disgruntled individual of color, living in a third-world country, who harbored resentment against a capitalist multinational corporation for ruining his career prospects. This clearly absurd theory of sabotage was often dismissed by the Indian mainstream media as a distraction tactic by the US corporation that held majority shares in UCIL at the time, a tactic to shift the blame for the industrial accident onto a lowly paid employee at the facility.8

On the other hand, the media and the Indian government threw their weight behind the notion that a routine maintenance procedure was responsible for the tragedy. The official communications around the Bhopal gas tragedy highlighted a flaw in the plant design that supposedly allowed water to enter the MIC tanks during a routine water-washing maintenance operation a few hours before the tragedy.11 Even though the theory sounded plausible, the experts from the government and the ones hired by UCC who evaluated the plant design and ran various tests accurately concluded that the routine maintenance operation of water-washing the process valve header (PVH) and relief vent valve header (RVVH) lines could not have resulted in the entry of 500 kg of water into the storage tanks.

Union Carbide India Limited’s Bhopal factory gate shortly after the massive release of methyl isocyanate from one of their underground storage tanks. The image was originally taken by Peter Kemp for the Associated Press and obtained via The Atlantic’s 30th-anniversary coverage of the Bhopal gas leak tragedy. (Copyright disclaimer: Copyright belongs to the original copyright holder. This image is reproduced here under the provision for fair usage of copyright content for research and commentary purposes. )

During one of the Indian government’s federal investigations into the tragedy, the Central Bureau of Investigation (CBI) and the Council of Scientific and Industrial Research (CSIR) investigators found no traces of water, during a nitrogen purge, in the pipes that were supposed to have carried these large quantities of water into the MIC storage tanks. The investigators and the factory employees overseeing the purge expected gallons of water to have accumulated inside the plumbing that led to the MIC storage tanks. Following the nitrogen purge, they waited with large vats to collect the accumulated water. Instead, the pipes turned out to be bone-dry.9 Even though this investigative exercise was never reported in the Indian media, this little-known fact delivered a death blow to the most widely circulated theory in India for the Bhopal gas leak tragedy. For the routine-water-washing-gone-wrong theory to be true, water should have accumulated in some of the pipes that led to the MIC storage tanks. The absence of any water in these pipes conclusively proved that some of the safety designs of the plant worked as expected.

Basic hydraulics confounded another easy-to-explain but totally flawed theory. The relatively low water pressure in the hoses used for cleaning, combined with the drains being open, ensured that the water would flow out through those drains rather than against the pressure of a closed valve. The back pressure exerted against these valves would not have been significant enough for water to enter the tanks. The impossibility of water entering the MIC tanks through just a valve leak during a routine PVH-RVVH line cleaning was confirmed both by the independent experts hired by UCC9 and by the Indian federal investigators in a follow-up investigation that evaluated the associated valves to see whether a valve failure could result in water flowing into the MIC storage tanks. These investigations could not demonstrate any water accumulation in the MIC storage tanks.

The official CSIR report released in 1985, a year after the tragedy, made no mention of any obvious functional defects in the valve responsible for preventing water entry into the MIC tanks during the pipe-cleaning process. In retrospect, this was to be expected, because the common valve was inadvertently restored to its normal functional state during the fateful modified maintenance operation. The report also failed to note any obvious obstructions in the drains, which could have resulted in a build-up of water pressure. Even though the CSIR report agreed with the independent investigators that the MIC release was due to the entry of large quantities (~500kg) of water into tank 610, it failed to provide a reasonable explanation for the presence of such large quantities of water.

Another fact supporting the off-the-books cleaning operation and its effectiveness was the relative ease with which the MIC storage tank 610 held an internal pressure lower than the atmosphere following the explosion. The CSIR investigators working to understand the causes of the MIC release noted on December 20th, 1984, that tank 610 was maintaining an internal absolute pressure of 160mm Hg (~3 psi) and decided to raise the tank pressure to atmospheric levels using nitrogen.12 If the common valve had been left in a partially open position due to the deposition of trimethyl isocyanurate crystals, tank 610 would have been unable to maintain a negative internal pressure; at the time of inspection, its internal pressure would have equaled the atmospheric pressure. That tank 610 maintained a significantly lower internal pressure than the atmosphere was further indirect evidence that the modified maintenance procedure accidentally restored the common valve to its normal functional state. This is why maintaining a negative pressure, and the subsequent nitrogen pressurization back to atmospheric levels, were easy tasks to achieve within the storage tank 610. The fatal maintenance operation was in some ways a success: tank 610 no longer had any of the leaks that had previously plagued it.

The Bhopal gas leak was certainly neither a sabotage nor a routine water-washing maintenance operation gone wrong. The leading root causes promoted by both UCC and the Indian federal government were total fabrications and falsehoods. Both parties were extremely reluctant to reveal the real cause of the release of large quantities of MIC in Bhopal.6,7 Based on my research, the immediate cause of the Bhopal gas tragedy was this: a group of employees' last-ditch effort to keep a dying factory alive resulted in the release of large quantities of a lethal gas into the atmosphere of a city that was clearly unprepared for it.

The real truth behind the Bhopal tragedy was darker and even more complex than the simple explanations floated by both UCC and the Indian government. As the inventive and imaginative lawyers hired by UCC argued, the Bhopal gas tragedy could indeed be interpreted as a case of sabotage. But the sabotage was not committed by a single employee. Union Carbide Corporation, its Indian subsidiary, the Indian bureaucracy and the investors of UCIL all acted together in this sabotage. A constant dilution of safety, misguided attempts at short-term cost cutting, the enforcement of a culture that subjected employees to unnecessary risks without proper training, and a toxic, blame-driven work culture combined to cause the tragedy that unfolded in Bhopal over 35 years ago. The tragedy was definitely not caused because a smart Indian mathematics graduate, denied a meaningless promotion, showed up for work on time and decided to spoil a batch of precious materials that were necessary to support his livelihood.

The question of how it happened has an easy answer: on 2nd December, 1984, a group of UCIL workers, while attempting to restore the functionality of the 610 MIC storage tank, resorted to a desperate measure. The question of why it happened is more complex. The employees were simply trying to clean a valve rendered inoperable by the constant gremlin of trimethyl isocyanurate crystal formation. These crystals showed up everywhere in the MIC manufacturing plant, frequently interfered with its normal operation and were the proverbial nail in the coffin for the Bhopal plant. Yet the management and the investors of UCIL tried to soldier on without any meaningful mitigation efforts.

The insurmountable trimethyl isocyanurate crystal deposition problem was a direct consequence of the management's miscalculated cost-cutting efforts. Instead of addressing the root cause plaguing the production facility, the UCIL management forced the workers to work harder, denied them promotions and burdened them with unknown and unacceptable risks. This nickels-and-dimes-first, people-last attempt to keep a dying industrial beast alive was a megalomaniacal and ill-informed exercise, doomed to fail from the start. The employees were deliberately set up for failure by the UCIL management. The investors, the company and its parent corporation were ruthlessly trying to recoup every nickel and dime of the investment sunk into a failed plant through shameless exploitation of their own employees. The employees patiently tolerated this cruel corporate tyranny in the hope of protecting their meager livelihoods. Out of sheer desperation, the workers attempted a poorly thought-out, unscientific and extremely risky maintenance operation.

Unknown to every single one of them was the fact that the water intended to wash away the gremlins and restart pesticide production also dissolved the very crystals that were jamming the common valve leading to the 610 storage tank. Once the crystals dissolved, the malfunctioning valve was restored to its normal functional state. That, unfortunately, meant the common valve was left in a partially opened position. There were no monitoring mechanisms for the water-washing process, and no reliable temperature monitoring protocols to detect anomalous exothermic reactions occurring in the MIC storage tank. Soon, the free-flowing water surged past the common valve and flowed directly into a tank brimming with a substance that was ready to react violently with water.

Before anyone noticed the mistake, large quantities of water had already entered the storage tank. After that gross miscalculation, it dawned upon those employees that nothing could stop the violent chemical reaction that was to follow. Hence the eerie silence and overarching gloomy mood reported by the immigrant tea vendor from Nepal during the routine late-night tea service that happened around 12:15am on the morning of 3rd December 1984.9

After 35 years of lengthy court battles and untold suffering endured by the victims and their relatives, the haze of the tragedy is slowly subsiding and the facts are finally becoming more visible. Those facts are as ugly as the tragedy itself. The corporate falsehoods of Union Carbide Corporation were not very surprising. But, as a citizen of India, a great nation whose motto is 'let truth prevail', I am genuinely appalled and ashamed that the erstwhile Indian administration would shamelessly lie and promote falsehoods to hide its own culpability in the world's worst industrial disaster.




4. Bloch, Kenneth. Rethinking Bhopal: A Definitive Guide to Investigating, Preventing, and Learning from Industrial Disasters. Elsevier, 2016. Print.

5. Madhava Menon, N. R. Documents and Court Opinions on Bhopal Gas Leak Disaster Case: For Course on Tort-II (Industrial and Mass Torts). Bangalore: National Law School of India University, 1991. Print.





10. Union of India versus Union Carbide Corporation, Court of the District Judge, Bhopal, Gas Claim Case No. 1113, filed in 1986, decided on 3rd April 1987.


12. “Report on Scientific Studies on the Release Factors Related to Bhopal Toxic Gas Leakage”, Council of Scientific and Industrial Research, December 1985.


Install notes — Tensorflow in Ubuntu 18.04 LTS with Nvidia CUDA

In this install note, I will discuss how to compile and install from source a GPU-accelerated instance of TensorFlow on Ubuntu 18.04. TensorFlow is a deep-learning framework developed by Google. It has become an industry-standard tool for both deep-learning research and production-grade application development.

Step 0 — Basic house-keeping:

Before starting the actual process of compiling and installing TensorFlow, it is always good to update the already installed packages.
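On Ubuntu, updating the installed packages comes down to two apt commands (these require sudo and an internet connection):

```shell
# Refresh the package index and upgrade everything already installed
sudo apt-get update
sudo apt-get upgrade -y
```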

The next step is to check for Nvidia CUDA support. This is done using a package called pciutils.

In this particular example deployment, the GPU we will be using is an Nvidia Tesla P4. The output from the console should look similar to the example below:
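Installing pciutils and listing the Nvidia devices on the PCI bus looks like this (the exact lspci line varies with the GPU; the sample shown is what a Tesla P4 typically reports):

```shell
# pciutils provides the lspci command
sudo apt-get install -y pciutils
# Filter the PCI device list for Nvidia hardware
lspci | grep -i nvidia
# Sample output (illustrative, for a Tesla P4):
# 00:04.0 3D controller: NVIDIA Corporation GP104GL [Tesla P4]
```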

This helps us understand whether the GPU attached to the linux instance is properly visible to the system.

Now, we need to verify the linux version support. Run the following command in the terminal:

The output from the console will look similar to the example below:
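The kernel and distribution details can be printed with uname:

```shell
# Print the kernel name, version, architecture and build details
uname -a
```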

Step 1 — Install dependencies:

The first step in compiling an optimized TensorFlow installer is to fulfill all the installation dependencies. They are:

  1. build-essential
  2. cmake
  3. git
  4. unzip
  5. zip
  6. python3-dev
  7. pylint

In addition to the packages above, we will also need to install the linux kernel headers.

The header files define an interface: they specify how the functions in the source file are declared. These files are required for the compiler to check whether the usage of a function is correct, as the function signature (return value and parameters) is present in the header file. For this task the actual implementation of the function is not necessary. A user could achieve the same with the complete kernel sources, but that process would install a lot of unnecessary files.

Example: if a user wants to use a function foo in a program:
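A minimal sketch of the idea (the signature matches the description below; the truncating body of foo is an illustrative assumption, included only so the sketch is self-contained):

```c
#include <assert.h>

/* Declaration: in a real project this line lives in a header file
   (say foo.h) that the program #includes. The compiler needs only
   this signature to type-check every call to foo. */
int foo(double param);

/* Definition: normally sits in a separately compiled source file;
   it is included here only so the sketch compiles on its own. */
int foo(double param) {
    return (int)param;  /* truncates toward zero */
}
```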

the program does not need to know what the implementation of foo looks like. It just needs to know that foo accepts a single parameter (a double) and returns an integer.

To fulfill these dependencies, run the following commands in the terminal.
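All seven packages, plus the kernel headers matching the running kernel, can be installed in two commands:

```shell
# Install the build dependencies listed above
sudo apt-get install -y build-essential cmake git unzip zip python3-dev pylint
# Install the headers for the currently running kernel
sudo apt-get install -y linux-headers-$(uname -r)
```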

Step 2 — Install Nvidia CUDA 9.2:

Nvidia CUDA is a parallel computing platform and programming model for general computing on graphical processing units (GPUs) from Nvidia. CUDA handles the GPU acceleration of deep-learning tasks using tensorflow.

Before we install CUDA, we need to remove all the existing Nvidia drivers that come pre-installed with the Ubuntu 18.04 distribution.
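Removing the pre-installed drivers:

```shell
# Purge any existing Nvidia driver packages and clean up leftovers
sudo apt-get purge nvidia*
sudo apt-get autoremove -y
```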

Now, let us fetch the necessary keys, installer and install all the necessary Nvidia drivers and CUDA.
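A sketch of these steps, assuming Nvidia's Ubuntu 18.04 repository layout (the exact .deb file name and key file below are assumptions; verify them against the CUDA 9.2 download page before running):

```shell
# Fetch the CUDA 9.2 network repository package and Nvidia's signing key
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_9.2.148-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
# Register the repository, then install the drivers and CUDA 9.2
sudo dpkg -i cuda-repo-ubuntu1804_9.2.148-1_amd64.deb
sudo apt-get update
sudo apt-get install -y cuda-9-2
```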

Once this step is done, the system needs to reboot.

After the system has been rebooted, let us verify if the Nvidia drivers and CUDA 9.2 are installed properly:

The console output will look similar to the example below:
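Two commands verify the installation:

```shell
# Query the driver: should list the Tesla P4 and the driver version
nvidia-smi
# Query the CUDA compiler: should report release 9.2
nvcc --version
```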

Step 3 — Install Nvidia CuDNN 7.2.1:

Nvidia CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. cuDNN is part of the Nvidia Deep Learning SDK.

This is the next component needed for installing GPU-accelerated TensorFlow. Even though cuDNN builds on CUDA, installing CUDA alone doesn't install cuDNN. To install cuDNN, we first need an account on Nvidia's developer website. Once signed in, download the cuDNN installer from:

In this example it will look something similar below:

Once the download is finished, you will have a file: cudnn-9.2-linux-x64-v7.2.1.38.tgz in your working directory.

The installation steps for cuDNN are very straightforward: just uncompress the tarball and copy the necessary cuDNN files to the correct locations.
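Assuming CUDA lives in the default /usr/local/cuda prefix, the copy steps are:

```shell
# Unpack the cuDNN archive (creates a local cuda/ directory)
tar -xzvf cudnn-9.2-linux-x64-v7.2.1.38.tgz
# Copy the header and libraries into the CUDA installation
sudo cp cuda/include/cudnn.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
# Make them readable by all users
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
```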

Step 4 — Install Nvidia NCCL:

The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs. NCCL provides routines such as all-gather, all-reduce, broadcast, reduce, reduce-scatter, that are optimized to achieve high bandwidth over PCIe and NVLink high-speed interconnect.

Developers of deep learning frameworks and HPC applications can rely on NCCL’s highly optimized, MPI compatible and topology aware routines, to take full advantage of all available GPUs within and across multiple nodes. This allows them to focus on developing new algorithms and software capabilities, rather than performance tuning low-level communication collectives.

TensorFlow uses NCCL to deliver near-linear scaling of deep learning training on multi-GPU systems.

To install NCCL 2.2.13, download the OS-agnostic version of NCCL from the Nvidia developer website.

This process will look something similar to the example below:

Once the download is finished, the working directory will have a file: nccl_2.2.13-1+cuda9.2_x86_64.txz

The steps for installing NCCL are similar to the cuDNN installation: uncompress the tarball, copy all the files to the correct directories and then update the configuration. Follow the steps below to install NCCL:
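A sketch of those steps (the /usr/local/nccl-2.2 install prefix is my choice, not a requirement):

```shell
# Unpack the OS-agnostic NCCL archive
tar -xf nccl_2.2.13-1+cuda9.2_x86_64.txz
# Copy it to a system-wide location
sudo mkdir -p /usr/local/nccl-2.2
sudo cp -r nccl_2.2.13-1+cuda9.2_x86_64/* /usr/local/nccl-2.2
# Tell the dynamic linker where the NCCL libraries live
echo '/usr/local/nccl-2.2/lib' | sudo tee /etc/ld.so.conf.d/nccl.conf
sudo ldconfig
```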

Step 5 — Install Nvidia CUDA profiling tool:

One last step, before we start compiling tensorflow is to install the CUDA profiling tool: CUPTI. Nvidia CUDA Profiling Tools Interface (CUPTI) provides performance analysis tools with detailed information about how applications are using the GPUs in a system.

CUPTI provides two simple yet powerful mechanisms that allow performance analysis tools such as the NVIDIA Visual Profiler, TAU and Vampir Trace to understand the inner workings of an application and deliver valuable insights to developers.
The first mechanism is a callback API that allows tools to inject analysis code into the entry and exit point of each CUDA C Runtime (CUDART) and CUDA Driver API function.

Using this callback API, tools can monitor an application’s interactions with the CUDA Runtime and driver. The second mechanism allows performance analysis tools to query and configure hardware event counters designed into the GPU and software event counters in the CUDA driver. These event counters record activity such as instruction counts, memory transactions, cache hits/misses, divergent branches, and more.

Run the following commands on the terminal to install CUPTI:
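On Ubuntu, CUPTI is packaged as libcupti-dev (an assumption that holds for apt-based CUDA installs; repo layouts can vary), and its libraries need to be on the loader path:

```shell
# Install the CUPTI development package
sudo apt-get install -y libcupti-dev
# Make the CUPTI libraries visible to the dynamic loader
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
```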

Step 6 — Install Tensorflow dependencies:

Tensorflow and the related Keras API installation requires:

  1. numpy
  2. python3-dev
  3. pip
  4. python3-wheel
  5. keras_applications and keras_preprocessing without any associated package dependencies
  6. h5py
  7. scipy
  8. matplotlib

These dependencies can be fulfilled by running the following commands in the terminal.
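The Python-side dependencies install via apt and pip; note the --no-deps flag on the Keras helper packages, which keeps pip from pulling in a pre-built TensorFlow as a dependency:

```shell
# System packages for the Python 3 toolchain
sudo apt-get install -y python3-dev python3-pip python3-wheel
# Scientific Python stack
pip3 install --user numpy scipy matplotlib h5py
# Keras helper packages, deliberately without their dependencies
pip3 install --user --no-deps keras_applications keras_preprocessing
```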

We also need to install the dependencies for the build tool Tensorflow uses: Bazel.

The dependencies for Bazel can be fulfilled by running the commands below in linux terminal.
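Bazel's documented Ubuntu prerequisites are a single apt line:

```shell
# Compiler, archiver and zlib headers needed by the Bazel installer
sudo apt-get install -y pkg-config zip g++ zlib1g-dev unzip python3
```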

Step 7 — Install the Tensorflow build tool, Bazel:

Bazel is a scalable, highly extensible, multi-language build tool created by Google that promises to speed up builds and tests. It only rebuilds what is necessary. With advanced local and distributed caching, optimized dependency analysis and parallel execution, Bazel achieves fast, incremental builds.

Bazel can be used to build and test Java, C++, Android, iOS, Go and a wide variety of other language platforms. Bazel is officially supported for Linux. The promise of Bazel as a build tool is that it helps you scale your organization, code base and Continuous Integration system. It handles code bases of any size, in multiple repositories or a huge mono-repo.

Using Bazel, it is easy to add support for new languages and platforms with Bazel's familiar extension language, and to share and re-use language rules written by the growing Bazel community. Google has adopted Bazel as the build tool for TensorFlow in the belief that it is a better fit for the project than tools such as cmake. The inclusion of Bazel as the build tool adds one extra step of complexity to deploying GPU-optimized TensorFlow from source on linux.

To keep things neat and tidy, we will use the latest Bazel binary from GitHub, set the correct permissions to run the file as an executable in linux, run the file and update the configuration files. To install Bazel using these steps, run the following commands in the linux terminal.
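A sketch of those steps (the 0.17.2 version number is illustrative; substitute the latest release from the bazelbuild GitHub releases page):

```shell
# Download the Bazel installer script from GitHub
wget https://github.com/bazelbuild/bazel/releases/download/0.17.2/bazel-0.17.2-installer-linux-x86_64.sh
# Make it executable and run it for the current user
chmod +x bazel-0.17.2-installer-linux-x86_64.sh
./bazel-0.17.2-installer-linux-x86_64.sh --user
# Put the user-local bazel binary on the PATH
echo 'export PATH="$PATH:$HOME/bin"' >> ~/.bashrc
source ~/.bashrc
bazel version
```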

Step 8 — Fetch latest Tensorflow version from GitHub and configure the build:

One of the key advantages of compiling from source is that one can leverage all the latest features and updates released directly on GitHub. Typically, the updated installer can take a few hours or days to show up in the usual distribution channels. Therefore, in this example, we will build the latest TensorFlow version by getting the source files directly from GitHub.

To fetch the latest source files from GitHub and configure the build process, run the following commands in the terminal.
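Cloning the repository and launching the interactive configuration script:

```shell
# Fetch the TensorFlow sources
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
# Answer the interactive prompts (Python path, CUDA support, etc.)
./configure
```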

Here is an example configuration.
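The interactive session looks roughly like this for a CUDA 9.2 build (abridged; the exact prompt wording varies between TensorFlow versions):

```
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3
Do you wish to build TensorFlow with CUDA support? [y/N]: y
Please specify the CUDA SDK version you want to use: 9.2
Please specify the cuDNN version you want to use: 7.2
Do you wish to build TensorFlow with TensorRT support? [y/N]: n
```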

Step 9 — Build Tensorflow installer using Bazel:

Since we are targeting the TensorFlow build process for Python 3, this is a two-step process.

  1. Build from the Tensorflow source files, the necessary files for creating a Python 3 pip package
  2. Build the wheel installer using the Python 3 pip package files and run this installer

The build process will take a very long time to complete and is dependent on the compute resources available to complete the build process.

Once the build process is complete, run the following commands in the terminal to create the wheel installer using Bazel and then run the installer file.
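Both steps come from TensorFlow's own build-from-source documentation; the /tmp/tensorflow_pkg output directory is an arbitrary choice:

```shell
# Step 1: build the pip-package builder (this is the long step)
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
# Step 2: create the wheel and install it with pip
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip3 install --user /tmp/tensorflow_pkg/tensorflow-*.whl
```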

Step 10 — Testing Tensorflow installation and install Keras:

Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. I am a huge fan of the Keras API due to its seamless integration with TensorFlow. It also allows application developers to implement one of the foundational concepts of software engineering, Don't Repeat Yourself (DRY), when building and testing deep-learning applications. The Keras API also helps build a scalable, maintainable and readable code base for deep-learning applications.

To test Tensorflow installation:
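A quick smoke test (this uses the TF1-style Session API that was current at the time of writing):

```shell
# Print the installed version
python3 -c "import tensorflow as tf; print(tf.__version__)"
# Run a trivial graph; any CUDA problems will surface here
python3 -c "import tensorflow as tf; print(tf.Session().run(tf.constant('Hello, TensorFlow')))"
```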

To install Keras from source and test this installation, run the following commands in the linux terminal.
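Installing Keras from the keras-team GitHub repository and checking that it imports against the new TensorFlow build:

```shell
# Fetch and install Keras from source
git clone https://github.com/keras-team/keras.git
cd keras
sudo python3 setup.py install
cd ..
# Verify the installation
python3 -c "import keras; print(keras.__version__)"
```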

This is a long install note. Compared to my install note from last year on the same topic, the number of steps involved has increased dramatically. Most of this is due to the added features incorporated in TensorFlow, such as its ability to create and process distributed compute graphs.

As a concluding note: as part of our mission to empower developers of deep-learning and AI, at Moad Computer we have a cloud environment, Jomiraki, and a server platform, Ada. Both leverage Nvidia CUDA accelerated TensorFlow to achieve faster deep-learning application performance. We have also released a Raspberry Pi deep-learning module, which, instead of CUDA acceleration, uses a slightly different technology: the Intel Neural Compute Stick. Check out all of these cool tools in our store.

If you have any questions or comments, feel free to post them below or reach out to our support team at Moad Computer:


Why Python Is Great: Subprocess — Running Linux Shell From Python Code.

Python is a brilliant object-oriented programming language. In artificial intelligence/deep-learning circles, Python is often referred to as the default language (lingua franca) of artificial intelligence. But the charm of Python extends way beyond running highly complicated deep-learning code. Python is first and foremost a general-purpose programming language.

One of the key features of Python is its close integration with linux. In this post I am going to explore one particular feature of Python called 'subprocess'. Subprocess, just like the name suggests, initiates a linux shell command from Python. This makes a lot of things easier inside Python, including, say, creating and manipulating file-system entries.

Here is a Python 3 function that utilizes subprocess to run a list of commands in linux terminal.
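A minimal sketch of such a function (the name run_cmd is my label; the Output/Error keys match the description that follows):

```python
import subprocess

def run_cmd(cmd_list):
    """Run a linux command, given as a list (e.g. ['ls', '-l']),
    and capture its console output and error streams."""
    process = subprocess.Popen(cmd_list,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE)
    output, error = process.communicate()
    # Decode the raw bytes so the caller gets ordinary strings
    return {'Output': output.decode('utf-8'),
            'Error': error.decode('utf-8')}
```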

This function returns a dictionary with two keys: Output and Error. Output records the console output; Error records the console error messages.

To run the function above, next, I will create a Python command:

This command will:

  1. Create a folder in Desktop called ‘New_photos’.
  2.  Randomly copy 10 .jpg files from  ~/Desktop/My_folder/Photos/ to ~/Desktop/New_photos/
  3.  Count the total number of files in the folder: ~/Desktop/New_photos/
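The three steps above can be sketched as a small helper (the function name sample_photos and its parameterized paths are my framing; the commented call at the end shows the Desktop paths from the steps above):

```python
import os
import random
import subprocess

def sample_photos(src, dest, count=10):
    """Copy `count` randomly chosen .jpg files from src to dest and
    return the number of files that ended up in dest."""
    # 1. Create the destination folder
    subprocess.run(['mkdir', '-p', dest], check=True)
    # 2. Randomly copy `count` .jpg files from src to dest
    jpgs = [f for f in os.listdir(src) if f.endswith('.jpg')]
    for name in random.sample(jpgs, min(count, len(jpgs))):
        subprocess.run(['cp', os.path.join(src, name), dest], check=True)
    # 3. Count the files in the destination folder
    #    (splitting ls output is fine here because the names have no spaces)
    listing = subprocess.run(['ls', dest], stdout=subprocess.PIPE)
    return len(listing.stdout.decode('utf-8').split())

# e.g. sample_photos(os.path.expanduser('~/Desktop/My_folder/Photos'),
#                    os.path.expanduser('~/Desktop/New_photos'))
```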


That’s it. Running Linux commands inside python is as straightforward as passing a list of commands to a Python function.


Formula One 2018 — Analysis on my favorites for this season.

Formula one is back for an all-new 2018 season. Teams unveiled their cars in late February. Mercedes announced the W-09, their 2018 contender, on February 22nd 2018. Details about the new car and how the season will evolve are slowly trickling in. After two weeks of winter testing in Barcelona, team Mercedes has once again emerged as a strong favorite to win the 2018 formula one season.

The driver line-up for this year is unchanged from the last: British driver Lewis Hamilton in car 44 and his Finnish colleague Valtteri Bottas in car 77. The engineering, manufacturing and management portions of the team have also remained unchanged. Toto Wolff, the executive director of the Mercedes AMG Petronas Formula One Team, still leads it. James Allison is the technical director. Andy Cowell, the managing director of Mercedes AMG High Performance Powertrains, oversees the engine development. Aldo Costa is still the engineering director for the team.

The lack of significant changes in the team is reflected in the new car. From the outside, the W-09 looks like an evolutionary change from last year's W-08. The W-09 exterior revisions can be broadly categorized into two groups: changes to accommodate the new rules, and performance and aerodynamic changes.

The biggest exterior change to the W-09 is the addition of a head protection device called the Halo. The Halo is a carbon fiber structure that sits on top of the driver cage to protect the driver's head. It is meant to prevent the risk of direct impact to the driver's helmet from flying objects like wheels or other parts of another car in the event of an accident or collision. The Halo device and its mountings also add weight, making the new cars several kilograms heavier than last year's design.

Another major visual change is at the rear of the car. Once again for the 2018 season, the new rules have changed the use of a structure called the shark fin. Last year's cars had a very prominent shark fin, a structure aimed at managing the aerodynamic vortices formed around the rear of the car; it acted as a separation layer for airflow between the left and right sides. In 2018, the new rules mandate a much less prominent shark fin structure on the rear engine cover.

Related to the shark fin rule is another change to the rear, that prevents the use of T-wings. T-wings were used last year, in conjunction with the shark fin. These secondary wing structures managed the airflow to the rear wing by modulating it. In 2018, the new rules prevent teams from using those large T-wings. The W-09 incorporates this rule change in the form of a smaller, less prominent T-wing that sits below the big rear wing structure.

The smaller evolutionary changes to the W-09 are aimed at improving the aerodynamic profile of the car. Last year's W-08 was often referred to by the team as a "diva" due to its unpredictable handling characteristics. The W-09 design changes are aimed at making the new car more predictable over a wider range of tracks. One of the key features to accomplish this is a sculpted, sleek, more aerodynamic side-pod cover and engine cover. This structure has a simpler, smaller aerodynamic profile than last year's side pod/engine cover design. It also achieves tighter packaging of the powertrain components.

The new car also features a raised front suspension. This allows smoother airflow to the aerodynamic elements behind it. Both the front and rear suspension elements have incorporated slightly revised aerodynamic surfaces to manage airflow over them. In the W-09, the front suspension elements have a simpler airflow profile than those in the last year’s car. The new car also has a bigger and more aerodynamic steering control arm in the front. There is also a bigger air scoop along the bottom of the front nose cone structure, that directs the airflow to the bottom of the car. The W-09 also has a very aggressive rake. These tweaks are all aimed at taming the unpredictability associated with last year’s design.

In summary, the W-09 improves upon the W-08, a car that won both the drivers and constructors championships in 2017. The philosophy behind the W-09 seems to be simplifying the overall design and making the handling characteristics more predictable. Underneath all these changes is the usual reliable Mercedes turbo V6 hybrid powertrain. The reliability and performance of the powertrain will be very important for the 2018 season: the regulation changes prevent teams from using more than three engines for the whole season without incurring penalties, and Ferrari and Renault have made good progress over the winter in improving the performance and reliability of their powertrains.

It looks like Andy Cowell and his team of engineers overseeing the powertrain development have done an excellent job. Despite tighter packaging, the car has been tested without any major issues. During winter testing the Mercedes logged the most testing mileage and demonstrated good overall pace during full-race simulation sessions. Both Lewis and Valtteri are happy with the way the car is handling. The only big unknown for folks outside the Mercedes team is the qualification potential of the car.

Even though Ferrari topped the time sheets in this year's winter testing, and the testing numbers are harder to interpret than those from last year, it is safe to assume that the performance of the new Mercedes formula one contender is something other teams can't take for granted. The W-09 is in some ways a dark horse for the 2018 season. Mercedes is the team that has to defend its turf, and it seems they have created a great package to do exactly that. My predictions for the 2018 season are as follows: Lewis Hamilton will win the drivers championship, Valtteri Bottas will be the runner-up and Daniel Ricciardo will be third. Red Bull Renault Racing will emerge as the second best team, relegating Ferrari to third place in constructors points.



Raspberry Pi — How to reformat a bootable SD card.

Recently, I received a Raspberry Pi to play with from a good friend of mine. One of the projects I am working on is deploying artificial intelligence applications on low-powered devices, and the Raspberry Pi seemed like a great platform to experiment with. But I ran into a lot of interesting issues; I will write about those and how to solve them in a later post. Today, I am going to write about the simple issue of reformatting a Raspberry Pi SD card after you are done tinkering around. Instead of just leaving the SD card unused, I decided to salvage the storage device used to run the Raspberry Pi OS. Formatting the SD card is a fairly involved process.

Here are the detailed steps to re-format a bootable device, in my case: a SanDisk 128 GB microSD card. I am using a Windows 10 machine to reformat the storage device.

Step 1: In the first step we will learn how to launch diskpart, list all attached disks and select the appropriate disk.

Run the command prompt as an administrator.
Launch diskpart by typing diskpart at the command line:

Here is an example of how it looks from the command line:
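A representative session (the version banner and computer name below are illustrative):

```
C:\Windows\system32> diskpart

Microsoft DiskPart version 10.0.17134.1

Copyright (C) Microsoft Corporation.
On computer: DESKTOP-PC
```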

To list all the attached disks to the operating system, type list disk:

The output will look something like this:
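A representative listing (disk sizes are illustrative; note that a 128 GB card reports roughly 119 GB):

```
DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          476 GB      0 B        *
  Disk 1    Online          931 GB      0 B
  Disk 2    Online          119 GB      0 B
```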

To select the specific disk, type ‘select disk #’, where # represents the disk number. In the example above, the SD card is disk 2. To select disk 2 type the following command in the diskpart command prompt:
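The command and its confirmation:

```
DISKPART> select disk 2

Disk 2 is now the selected disk.
```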

If you type ‘list disk’ again, it will show the list of disks, now with an asterisk preceding the selected disk.


Step 2: In this step, we will remove all existing partitions and prepare the disk for reformatting by cleaning the device.

To remove the existing partitions, we need to know all the available partitions on the storage device. Let us list them using the list partition command:

In this example, the command line output looks like this:
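A representative listing for a Raspberry Pi SD card (a small boot partition plus the main OS partition; the sizes shown are illustrative):

```
DISKPART> list partition

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 0    Primary             43 MB  4096 KB
  Partition 1    Primary            119 GB    48 MB
```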

We have two available partitions: 0 and 1. Now, let us remove both of these partitions. To do that, first we have to select the partition using the 'select partition' command.

Once we have selected the partitions, we can perform the partition removal operation using the ‘delete partition’ command.

In this example, the storage device has two partitions. We have removed both of these partitions one by one. The command line output looks like this:
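The select/delete sequence, repeated once per partition, looks like this:

```
DISKPART> select partition 0

Partition 0 is now the selected partition.

DISKPART> delete partition

DiskPart successfully deleted the selected partition.

DISKPART> select partition 1

Partition 1 is now the selected partition.

DISKPART> delete partition

DiskPart successfully deleted the selected partition.
```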

Once we have deleted all the available partitions, the next step is to prepare the disk for reformatting by cleaning it. Type 'clean' in the diskpart command line and it will clean the disk.

The command line output for this example will look something similar to this:
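The clean step and its confirmation:

```
DISKPART> clean

DiskPart succeeded in cleaning the disk.
```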


Step 3: Create partitions and reformat the disk with an appropriate file system:

Diskpart has a command-line option called create. It takes three arguments. Those arguments are:

  1. PARTITION – To create a partition.
  2. VOLUME – To create a volume.
  3. VDISK – To create a virtual disk file.

We will use the ‘create partition’ command. To run this, we need to know the command line arguments that ‘create partition’ accepts. There are five options for ‘create partition’ command. They are as follows:

  1. EFI – To create an EFI system partition.
  2. EXTENDED – To create an extended partition.
  3. LOGICAL – To create a logical drive.
  4. MSR – To create a Microsoft Reserved partition.
  5. PRIMARY – To create a primary partition.

I want to create a new primary partition. Therefore, the DISKPART command is going to be:
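The command and its confirmation:

```
DISKPART> create partition primary

DiskPart succeeded in creating the specified partition.
```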

Once a primary partition is created, the next step is to reformat the new partition with the NTFS file system. Microsoft recommends exFAT for removable media like SD cards, but I am selecting NTFS over exFAT for convenience. Before formatting, select the newly created partition. Then run the command:

The command line output for my example is as follows:
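The select/format sequence (the two success messages are the ones described below):

```
DISKPART> select partition 1

Partition 1 is now the selected partition.

DISKPART> format fs=ntfs

  100 percent completed

DiskPart successfully formatted the volume.
```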

Once the formatting is successfully completed, there will be two messages: '100 percent completed' followed by 'DiskPart successfully formatted the volume'. Now, the SD card should be mounted properly in Windows Explorer. Under 'This PC', there should be a new empty disk storage device that can now be attached as a removable device to any PC.



Innovation – What is the secret sauce?

This week, Harris County, TX, saw an unprecedented amount of rainfall during hurricane/tropical storm Harvey. The city of Houston and the surrounding areas have been frequently affected by flooding over the past two decades or so, which led ProPublica to describe Houston as going from "Boom Town to Flood Town". A major portion of the article is devoted to exposing the pitfalls in flood risk modeling and management. Instead of using the latest tools, data, discoveries and ideas, the flood risk modeling and management by Harris County, TX, still relies on archaic technology and outright denial. This laggardly approach to protecting the public has real-world consequences; one example is the constant "once in a lifetime" flooding events in the area. But as the leader of an organization that focuses on developing technology-driven solutions for large-scale problems, what can I learn from this group of outright denialists? It is the stark lack of compassion, with a hint of cynicism. That is a dangerous attitude not just for public officials but also for private enterprises. To illustrate this point, I am going to use a positive example from history: one of the best examples of applying compassion to create a brilliant new solution. The story is all about family, father-son bonding and compassion towards fellow human beings. A story that everybody should be familiar with.

Robert Thomson (1822–1873), a Scottish inventor and self-taught engineer, invented the pneumatic tire at the age of 23. He built his first pneumatic tire out of hollowed-out India rubber, treated with vulcanization, a process introduced in 1839 that added sulfur to rubber to make it pliable but not sticky. The air cushion between the two layers of vulcanized rubber vastly increased the efficiency and comfort of horse-drawn carriages over the conventional solid rubber tires in use at the time. Thomson received a French patent in 1846 and a US one in 1847. The early prototypes of Thomson’s new “aerial wheels” were used on horse carriages in Hyde Park, London. But the invention didn’t catch the broader public’s attention for the next fifty years or so.
In 1888, John Boyd Dunlop, a successful veterinary doctor trained in Scotland and practicing in Northern Ireland, encountered another interesting problem. His son’s interest in competitive cycling at school made him acutely aware of how uncomfortable it was to ride a bicycle at speed. At the time, bicycle tires were again made of solid rubber or wood, just like those of horse-drawn carriages. While thinking of ways to cushion the undulations of the track, he independently stumbled across the idea of the pneumatic tire, but instead of applying it to horse-drawn carriages, he applied it to his son’s bicycle. Riding on Dunlop’s newly devised pneumatic tires, and thanks to the more comfortable ride, his son won the competition.

In 1889, Dunlop collaborated with Irish cyclist Willie Hume. It was a marketing coup for Dunlop and a highly beneficial sports deal for Hume as a cyclist. Hume dominated the cycling competitions of the time and became a poster boy for pneumatic bicycle tires. In 1938, the magazine Cycling, now known as Cycling Weekly, awarded Hume his own page in its Golden Book. The success of Dunlop’s pneumatic tires on the cycling scene led to a massive opportunity to market these products with the help of Irish businessman and financier Harvey du Cros. The late 1800s and early 1900s also coincided with the advent of motor cars, and pneumatic tires became an important part of the personal transport revolution spearheaded by the internal combustion engine and the motor car.

It might seem like a series of fortunate coincidences leading to the creation of a completely new industry, but it all started with a single act of compassion by a father concerned for his son. After reading this story, it is easy to see where this is heading. For individuals, organizations, enterprises or governments, compassion is a great quality to have. A compassionate view towards fellow individuals helps us understand their issues and problems and think about finding solutions. I firmly believe the secret sauce of innovation is a compassionate mind: a mind that looks to solve problems and save people time, money and effort in accomplishing things. I want this quality to be part of the culture at Ekaveda, helping individuals and teams come up with great solutions no one has imagined before. The same approach should be the guiding principle for governments too, like the city of Houston. A compassionate approach towards its residents, and an open-minded willingness to look at changing patterns and data to create resilient city infrastructure and prevent the frequent “once-in-a-lifetime” weather events, will help us build better communities today that will survive and flourish tomorrow. I want everyone reading this post to think like John Boyd Dunlop and emulate the parental compassion he exhibited. It is the road to great new innovations and brilliant new products.

TL;DR: Compassion is the secret sauce of innovation. Just like the Selena Gomez song “Kill Em with Kindness” says, compassion and kindness are two killer qualities for anyone, even for businesses.

(Captions for images from top to bottom: 1) Picture of a flooded Eckerd Pharmacy, Fort Worth, TX, in 2004, from the National Oceanographic and Atmospheric Administration website, obtained via Google Image Search and reproduced under fair usage rights, 2) A picture of the Brougham carriage used by Robert Thomson to demonstrate his pneumatic tires; the Brougham carriage was a new lightweight design by Lord Brougham, retrieved from the public domain and reproduced under fair usage rights from National Museums Scotland, 3) John Boyd Dunlop, who created the first pneumatic tires for bicycles and popularized them as a commercial product, picture from National Museums Scotland, from the public domain, reproduced under fair usage rights, 4) An undated photograph of Willie Hume, the poster boy of pneumatic bicycle tires, obtained via the public domain from the Merlin Cycles website, 5) A vintage-style caricature depicting Willie Hume and his cycling success using Dunlop pneumatic tires, by artist Deepa Mann-Kler, retrieved from the public domain and reproduced under fair usage rights.)

Artificial Intelligence GPU Machine Learning Open source pygpu Python Python3 Software Theano

Learning from mistakes – Fixing pygpu attribute error.

I am part of a great AI community in New York. Today was the fourth week of an ongoing effort to share practical tricks for developing a natural language processing pipeline using deep learning. The series was born out of my efforts to engage the community in developing a greater understanding of how artificial intelligence systems are built. At a time when there is a lot of fear, uncertainty and doubt (FUD) about AI, I firmly believe community engagements like these help not only to reduce mistrust in AI systems, but also to disseminate new ideas and achieve some collective learning.

During the course, I encountered an interesting error. The Python development environment I was running refused to load a key deep learning library: Keras. The error was related to the deep learning backend Theano, specifically to a Python package called pygpu. I was really surprised that I wasn’t able to load Keras on my system. It was a completely new error for me, possibly related to some extra installations I performed last week for a separate project.

The error was: AttributeError: module 'pygpu.gpuarray' has no attribute 'dtype_to_ctype'

Even though the solution was found only after the session was over, I am documenting my process here. This Keras error also taught me two important things today:

  1. Test your environment before D-day. Assumption is the mother of all fuck-ups, and today I assumed everything was going to work fine, and it didn’t.
  2. Be persistent and never give up before you find a solution.

Step 1: Since I am using the Anaconda distribution for Python 3, I uninstalled and re-installed the deep learning backend (Theano) and the front end (Keras). To uninstall Theano and Keras, run the following lines in the command line or terminal.
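The original code listing is missing from this copy of the post; assuming Theano and Keras were installed with pip inside the Anaconda environment, the uninstall step would be along these lines:

```
pip uninstall theano -y
pip uninstall keras -y
```

If they were installed with conda instead, `conda remove theano keras` would be the equivalent.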

Step 2: Reinstall the deep learning backend and front end, along with a missing dependency called libgpuarray. Run the following lines in the command line or terminal to install libgpuarray, Theano and Keras.
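Again, the original listing is missing; one plausible way to perform this step with Anaconda is shown below. The conda-forge channel and the package names are assumptions for illustration, not a record of the exact commands, so check `conda search` for your setup first:

```
conda install -c conda-forge libgpuarray theano
pip install keras
```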

Step 3: Remove the offending package, pygpu, from the Python environment.
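The missing command for this step would be something like the following (use `conda remove pygpu` instead if pygpu was installed through conda):

```
pip uninstall pygpu -y
```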

Step 4: Create a .theanorc.txt file in the home directory. Since I am running a Microsoft Windows development environment, the home directory is C:\Users\Username\. Once inside the home directory, create a .theanorc.txt file and paste in the following lines:
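The original file contents are not preserved in this copy. A minimal .theanorc.txt that side-steps the GPU backend, and with it the broken pygpu import path, would look like this (the `device = cpu` setting is an assumption; a GPU setup would point `device` at a cuda device instead):

```
[global]
device = cpu
floatX = float32
```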

By following these four simple steps, I was able to solve the issue. Even though the error message was really lengthy and intimidating, and I initially thought I might have to completely re-install my development environment, I am really proud that I discovered a fix for a really annoying, yet poorly documented, problem with the pygpu package.

Kudos to Anthony, the founder and co-organizer of the meetup group, for building a great AI community here in New York. We work really hard to bring this awesome course every week. To keep these lessons accessible and free while supporting our efforts, consider becoming our patron.

I am also part of a great deep learning company called Moad computers. Moad builds incredibly powerful deep learning dev-ops servers with all the necessary software packages pre-installed for developing artificial intelligence applications. The mission of Moad is to make AI development as frictionless as possible by creating a powerful open-source dev-ops server for AI. Check out the Moad computers store for the most recent version of these servers, called Ada for AI.


Marissa Mayer for Uber CEO – It will work.

I recently came across two news items, one from Vanity Fair and the other from Inc, on a possible future role for Marissa Mayer as the CEO of Uber. Uber has had a very interesting year, including a high-profile intellectual property dispute with Google’s autonomous driving division, Waymo; a series of horrible sexual harassment cases; personal tragedy striking the CEO and founder, Travis Kalanick; and finally the most important and very wise decision by Travis to step down as CEO of Uber.

Both articles cast Marissa as a bad choice for Uber. I disagree with both. Marissa’s time at Yahoo was controversial. Initially portrayed as a savior for yesteryear’s fallen tech behemoth, opinion soon changed when it became clear that turning around Yahoo was almost impossible. Yahoo had already lost its technology leadership well before Marissa was appointed CEO. There are very few second comings in technology, and Yahoo wasn’t one of them.

Uber, on the other hand, is a technology and market leader in ride sharing. If one looks at Marissa’s time at Google, converting a technologically dominant product portfolio into something even more enticing is her skill. Even at Yahoo, she accomplished a great deal. Finding a suitable suitor for Yahoo’s core business was the only way forward, short of the inevitable slow, painful death of Yahoo.

Search and online advertising is a monopolistic business dominated by Google. Yahoo never had a real chance to survive in the market on its own. The current assets of Yahoo match very well with Verizon’s existing business. Yahoo excels at curating content over a wide range of topics, from healthcare to finance. Yahoo also has a great advertisement platform. Verizon needs both of these businesses to differentiate itself from its competitors as an internet and cellular service provider. Getting Verizon to buy Yahoo was a great decision for the future prospects of Yahoo’s core businesses.

The key problem at Uber is a cultural one, not a marketing or technology issue. Despite all the media hoopla around an inclusive Silicon Valley, it is still a very homogeneous work culture, and it will take years to fix these fundamental problems. Marissa isn’t the answer for that. But during her time at Yahoo, Marissa demonstrated that she could operate in exactly the kind of stereotypically sexist work environment Uber also suffers from. Uber needs a CEO who understands how to survive the horrible culture there in order to institute long-term changes. One takeaway from Marissa’s tenure at Yahoo is her ability to survive in the cluster-fuck swamp. For Uber, therefore, Marissa Mayer is a great fit. Marissa knows how to keep the party going, and Uber needs to keep the party going, at least in the short term, to keep all the brogrammers happy.

In summary, fixing corporate culture is a long-term mission for any company. In the short term, a new leader at Uber needs to understand and survive the current horrible culture there. Anyone who expects the new CEO of Uber to magically wipe the slate clean is living in a self-created bubble. Hiring Marissa Mayer won’t fix all of Uber’s problems in one day, but I am confident that she would be the great leader Uber needs right now. She will survive the swamp long enough to drain it, clean it and develop it into a beautiful golf course one day.

The video above is a great explanation of sexual harassment in the workplace. As the founder and chief imagination officer at Ekaveda, Inc, I strive to ensure an inclusive workplace with zero racial and gender discrimination. Our core team actively promotes these ideas and fosters a culture of radical openness. This honest expression of opinions on Uber’s next CEO choice is an attempt, on behalf of Ekaveda, Inc, to promote fair and humane conditions in a modern workplace.

(Captions for pictures from top to bottom: 1) Marissa Mayer at the 2016 World Economic Forum, via Business Insider, 2) Marissa Mayer on the cover of Vogue in 2013, 3) The recently updated Uber logo, via the Independent, UK, 4) The logos of Verizon and Yahoo at their respective headquarters buildings, via Time, 5) Uber’s advanced technology initiative to develop a self-driving car, obtained via Business Insider, 6) Ronald Reagan quote: “It’s hard, when you’re up to your armpits in alligators, to remember you came here to drain the swamp.” via Quotefancy.)

Artificial Intelligence Cloud computing Deep learning GPU Hardware Machine Learning Nvidia Software Tensor processing unit TensorFlow Volta

Hardware for artificial intelligence – My wishlist.

We recently developed an incredible machine learning workstation. It was born out of necessity while we were developing image recognition algorithms for cancer detection. The idea was so powerful that we decided to market it as a product to help developers implement artificial intelligence tools. During the launch process, I came across an interesting article from Wired on Google’s foray into hardware optimization for AI, the Tensor Processing Unit (TPU), and the release of the second-generation TPU.
I was a little confused by the benchmark Wired published. The original TPU is an integer-ops co-processor, yet Wired cites teraflops as a metric. The article does not make clear which specific operations these numbers refer to: tensor operations or FP64 compute. If the metrics really are FP64 compute, then the TPU is faster than any current GPU available, which I suspect isn’t the case, given the low power consumption. If those numbers refer to tensor ops, then the TPU has only about one third the performance of the latest generation of GPUs, like the recently announced Nvidia Volta.
Then I came across a very interesting and more detailed post from Nvidia’s CEO, Jensen Huang, that directly compared Nvidia’s latest Volta processor with Google’s Tensor Processing Unit. Because of the non-specificity about what the teraflops metrics stand for, the Wired article felt like a public relations piece, even though it comes from a fairly well-established publication. Nvidia’s blog post puts the performance metrics into better context than the technology journalist’s article does. I wish there were a little more specificity and context in some of the benchmarks Wired cites, instead of just copy-pasting the marketing material passed on to them by Google. From a market standpoint, I still think TPUs are a bargaining chip that Google can wave at GPU vendors like AMD and Nvidia to bring down the prices of their workstation GPUs. I also think Google isn’t really serious about building the processor. Like most Google projects, they want to invest the least amount of money for the maximum profit. Therefore, the chip won’t have any IP related to really fast FP64 compute.
Low-power, many-core processors have been under development for many years. The Sunway processor from China is built on a similar philosophy, but optimized for FP64 compute. Outside of one supercomputer center, I don’t know of any developer group working on Sunway. Another recent example is in the US, where Intel tried it with their Knights Corner and Knights Landing range of products and fell flat on their face. I firmly believe Google is on the wrong side of history here. It will be really hard to gain dev-ops traction, especially for custom-built hardware.
Let us see how this evolves: whether it is just the usual Valley hype or something real. I like the Facebook engineer’s quote in the Wired article; I am firmly on the consumer GPU side of things. If history is a teacher, custom co-processors that are hard to program have never really succeeded in gaining market or customer traction. A great example is Silicon Graphics (SGI). They were once at the bleeding edge of high performance computational tools, and then lost their market to commodity hardware that became faster and cheaper than SGI’s custom-built machines.
More interest in making artificial intelligence run faster is always good news, and this is very exciting for AI applications in the enterprise. But I have another problem: Google has no official plans to market the TPU. A company like ours, Moad, relies on companies like Nvidia developing cutting-edge hardware and letting us integrate it into a coherent, marketable product. In Google’s case, the most likely scenario is that Google will deploy the TPU only on its cloud platform. In a couple of years, hardware evolution will make it as fast as or faster than any other product in the market, making their cloud servers a default monopoly. I have a problem with this model. Not only will these developments keep independent developers from leveraging the benefits of AI, they will also shrink the marketplace significantly.
I only wish Google published clear-cut plans to market their TPU devices to third-party system integrators and data center operators like us, so that the AI revolution will be more accessible and democratized, just like what Microsoft did with the PC revolution in the 90s.
(Image captions from top to bottom: 1) Image header of our store front for the machine learning dev-ops server, 2) Tensor Processing Unit version 2, obtained from Google’s TPU2 blog post, 3) Image of the Nvidia Tesla Volta V100 GPU, obtained from wccftech, 4) Image of the TaihuLight supercomputer, powered by Sunway microprocessors, from Top500, 5) An advertisement for the Silicon Graphics O2 visual workstation, obtained from the public domain via Pinterest, 6) Gerty, the artificially intelligent robot in the Duncan Jones-directed movie Moon (2009), obtained from the public domain via Pinterest, 7) A meme about the marketing department, obtained via the public domain from the blog ‘A small orange’.)
Artificial Intelligence Automation Economics Healthcare

Automation – A completely different thinking.

I have been spending some time thinking about leadership. Recently, I had the opportunity to sit down and listen to an angel investor who specializes in life science companies, particularly in New York. I was excited because I had heard incredibly fluffy pieces about this individual. But the moment this individual started to speak, I suddenly realized something: he has no interest in this area of work. It showed in his presentation. In front of a very young and eager bunch of aspiring entrepreneurs, he showed up late for the talk and started going through slides of a presentation created nearly two decades ago.

This disappointment reminded me of a recent article I came across, from Harvard Business Review, which portrayed a dismal view of the enterprise, where executives are living in a bubble. I realized it is not just corporate executives: even the investors are living in a bubble. These individuals are speaking words, but they would make sense only to a raving madman. The layers and filters these individuals have created between themselves and the outside world have built an alternate reality for them. Judging by recent news cycles, the incompetent CEO is far more common than incompetence in any other job known to humankind. This points to an interesting moment in our social evolution, where CEO has become one of the rare jobs in which incompetence is tolerated and yet highly rewarded.

This brings me to another interesting article, on what exactly a job is and how to value one. This amazing article, published by BBC Capital, exposes an interesting conundrum of a capitalist economy: the jobs that are absolutely essential for human survival are the lowest paid, while the jobs we consider completely pointless are extremely highly paid. So the whole question of what is valuable to society arises here. Do we prioritize nonsense over our own survival? Our current social structure seems to indicate exactly that. According to Peter Fleming, Professor of Business and Society at City, University of London, who is quoted in the article: “The more pointless a task, the higher the pay.”

So almost all of our low-wage employment clusters around human activities that are absolutely essential for the survival of a society. If we apply these two diverse sets of thoughts to another interesting question, the future of jobs, then I believe I already have the answer. By implementing mass automation of all the jobs that are mission-critical to the survival of a human society, we eliminate all the low-wage jobs, and the individuals who previously held them move into positions where they are given a pointless job.

For the sake of argumentative clarity, imagine a situation where automation has been implemented in highly important jobs. A great example is the restaurant worker. Imagine a future where all of the restaurant workers who prepare food are replaced by an automated supply chain and a set of robots. The role of individuals is just to manage this automated supply chain and the robots. Imagine our teenage daughters or sons applying for a job at one of these restaurants. Instead of being hired as cheap human labor, these young individuals will now be hired as managers of the automated supply chain. Instead of needing years of school and college education only to end up incompetent and pointless to society, even a high school graduate will be able to do this job, because the society is run reliably and efficiently by a highly robust and resilient automated supply chain. The rest of the management hierarchy will remain the same, for corporate and regulatory reasons.

It seems to me that a highly automated society will create more value than a society that still relies on human labor for its mission-critical jobs. This may seem counter-intuitive, but it is not. The value of human labor is a more abstract idea than one based on absolutes. In our current human-labor-dependent economies, only a handful of individuals are given free rein for incompetence and risk taking. In an automated economy, the underlying resilience will help more individuals take more risks and be unfazed by the outcomes, because none of these activities are absolutely essential for the survival of a human society.

This trend of automation will not only massively simplify our daily activities; it will also leave us more time on our hands to do what we wanted to do in the first place, without worrying about thirst, hunger, hygiene or any of the other mission-critical activities of our lives. The friction of human existence is reduced by machines, which helps us create more value for society.

More individuals will have the opportunity to operate at an executive level than today, even with minimal training. What will happen with automation is the democratization of executive jobs, not the loss of jobs. More people can have the freedom to be employed as CEOs and still be highly paid, because the society runs on a highly reliable automated schedule. It will be a very different society, just like the massive shift from an agrarian economy to an industrial economy during the industrial revolution, but much more highly paid and with fewer unemployed, for sure.

This research on the current dismal state of executives and managers in the corporate sector is part of our mission to understand the overarching impact of automation and machine learning in healthcare. It is done as part of our cancer therapeutics company, nanøveda. To continue nanøveda’s wonderful work, we are running a crowdfunding campaign on gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

Image captions from top to bottom: 1) A screenshot from the television programme The Thick Of It, with Malcolm Tucker (played by Peter Capaldi), photographed by Des Willie, a great artistic portrayal of the human dysfunction at the core of some of the most pointless, yet highly coveted, jobs, (C) BBC and reproduced under fair usage rights, 2) A satirical portrayal of the recent troubles of United Airlines, published by Bloomberg, retrieved from the public domain through Google search and reproduced under fair usage rights, 3) A quote that says: “It’s as if someone were out there making up pointless jobs just for the sake of keeping us all working.” posted inside a London tube train car, with the hashtag bullshitjobs, 4) Image of a burger-flipping robot in action, obtained from the public domain through Google image search and reproduced under fair usage rights.