Learning from mistakes – Fixing pygpu attribute error.

I am part of a great AI community in New York. Today was the fourth week of an ongoing effort to share practical tricks for developing a natural language processing pipeline using deep learning. This effort was born out of my desire to engage the community and develop a greater understanding of how artificial intelligence systems are built. At a time when there is a lot of fear, uncertainty and doubt (FUD) about AI, I firmly believe community engagements like these help not only to reduce mistrust in AI systems, but also to disseminate new ideas and achieve some collective learning.

During the course, I encountered an interesting error. The python development environment I was running refused to load Keras, a key library for deep learning. The error was related to the deep learning backend Theano, specifically to a python package called pygpu. I was really surprised that I wasn't able to load Keras on my system. It was a completely new error for me, possibly related to some extra installations I performed last week for a separate project I was working on.

The error was: AttributeError: module 'pygpu.gpuarray' has no attribute 'dtype_to_ctype'

Even though I found the solution after the session was over, I am documenting my process here. This Keras error also taught me two important things today:

  1. Test your environment before D-day. Assumption is the mother of all fuck-ups, and today I assumed everything was going to work fine and it didn't.
  2. Be persistent and never give up before you find a solution.

Step 1: Since I am using the Anaconda distribution for Python 3, I uninstalled and re-installed the deep learning backend (Theano) and the front end (Keras). To uninstall theano and keras, run the following lines in the command line or terminal.
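
The exact commands from the original post are not reproduced here; a minimal sketch, assuming both packages were installed with pip inside the Anaconda environment:

    # remove the existing (possibly broken) backend and front end
    pip uninstall theano
    pip uninstall keras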

Step 2: Reinstall the deep learning backend and front end, along with a missing dependency called libgpuarray. Run the following lines in the command line or terminal to install libgpuarray, theano and keras.
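
A hedged sketch of the reinstallation, assuming the conda-forge channel for libgpuarray and pip for theano and keras (the original commands may have differed):

    # libgpuarray is distributed through conda-forge
    conda install -c conda-forge libgpuarray
    # reinstall the backend and the front end
    pip install theano
    pip install keras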

Step 3: Remove the offending package called pygpu from the python environment.
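
A plausible command for this step, assuming pygpu was installed with pip (if it came in through conda, conda remove pygpu would be the equivalent):

    pip uninstall pygpu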

Step 4: Create a .theanorc.txt file in the home directory. Since I am running a Microsoft Windows development environment, the home directory is C:\Users\Username\. Once inside the home directory, create a .theanorc.txt file and copy and paste the following lines:
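
The original file contents are not reproduced here; a minimal illustrative .theanorc.txt, assuming a CPU-only Theano configuration:

    [global]
    device = cpu
    floatX = float32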

By following these four simple steps, I was able to solve the issue. Even though the error message was lengthy and intimidating, and I initially thought I might have to completely re-install my development environment, I am really proud that I discovered a fix for a really annoying, yet poorly documented, problem with the pygpu package.

Kudos to Anthony, the founder and co-organizer of the meetup group for building a great AI community here in New York. We work really hard to bring this awesome course every week. To make these lessons always accessible, free and at the same time support our efforts, consider becoming our patron.

I am also part of a great deep learning company called Moad computers. Moad builds excellent, incredibly powerful deep learning dev-ops servers with all the necessary software packages pre-installed to develop artificial intelligence applications. The mission of Moad is to make AI development as frictionless as possible by creating a powerful open-source dev-ops server for AI. Check out the Moad computers' store for the most recent version of these servers, called Ada for AI.

Marissa Mayer for Uber CEO – It will work.

I recently came across two news items, one from Vanity Fair and the other from Inc, on a possible future role for Marissa Mayer as the CEO of Uber. Uber has had a very interesting year, including a high-profile intellectual property dispute with Google's autonomous driving division, Waymo, a series of horrible sexual harassment cases, personal tragedy striking the CEO and founder, Travis Kalanick, and finally Travis's very important and wise decision to step down as the CEO of Uber.

Both articles cast Marissa as a bad choice for Uber. I disagree with both. Marissa's time at Yahoo was controversial. Initially portrayed as a savior for a yesteryear's fallen tech behemoth, opinions soon changed when it became clear that turning around Yahoo was almost impossible. Yahoo had already lost its technology leadership well before Marissa was appointed CEO. There are very few second comings in technology, and Yahoo wasn't one of them.

Uber, on the other hand, is a technology and market leader in ride sharing. If one looks at Marissa's time at Google, converting a technologically dominant product portfolio into something even more enticing is her skill. Even at Yahoo, she did a great job. Finding a suitable suitor for Yahoo's core business was the only way forward, other than the inevitable, slow, painful death of Yahoo.

Search and online advertising is a monopolistic business dominated by Google. Yahoo never had a real chance to survive in the market on its own. The current assets of Yahoo match very well with Verizon's existing business. Yahoo excels in curating content over a wide range of topics from healthcare to finance. Yahoo also has a great advertising platform. Verizon needs both of these businesses to differentiate itself as an internet and cellular service provider from its competitors. Getting Verizon to buy Yahoo was a great decision for the future prospects of Yahoo's core businesses.

The key problem at Uber is a cultural one, not a marketing or technology issue. Despite all the media hoopla around an inclusive Silicon Valley, it is still a very homogeneous work culture. It will take years to fix these fundamental problems, and Marissa isn't the answer for that. But during her time at Yahoo, Marissa demonstrated that she can operate in the kind of stereotypically sexist work environment that Uber also suffers from. Uber needs a CEO who understands how to survive the horrible culture there in order to institute long-term changes. One takeaway from Marissa's tenure at Yahoo is her ability to survive in the cluster-fuck swamp. For Uber, therefore, Marissa Mayer is a great fit. Marissa knows how to keep the party going, and Uber needs to keep the party going, at least in the short term, to keep all the brogrammers happy.

In summary, fixing corporate culture is a long-term mission for any company. In the short term, a new leader at Uber needs to understand and survive the current horrible culture there. Anyone who expects the new CEO of Uber to magically wipe the slate clean is living in a self-created bubble. Hiring Marissa Mayer won't fix all of Uber's problems in one day, but I am confident that she would be the great leader Uber needs right now. She will survive the swamp long enough to drain it, clean it and develop it into a beautiful golf course one day.

The video above is a great explanation of sexual harassment in the workplace. As the founder and chief imagination officer at Ekaveda, Inc, I strive to ensure an inclusive workplace with zero racial and gender discrimination. Our core team actively promotes these ideas and fosters a culture of radical openness. This honest expression of opinion on Uber's next CEO choice is an attempt, on behalf of Ekaveda, Inc, to promote fair and humane conditions in a modern workplace.

(Captions for pictures from top to bottom: 1) Marissa Mayer at the 2016 World Economic Forum, via Business Insider, 2) Marissa Mayer on the cover of Vogue in 2013, 3) The recently updated Uber logo, via the Independent, UK, 4) The logos of Verizon and Yahoo at their respective headquarters buildings, via Time, 5) Uber's advanced technology initiative to develop a self-driving car, via Business Insider, 6) Ronald Reagan quote: "It's hard, when you're up to your armpits in alligators, to remember you came here to drain the swamp." via Quotefancy.)

Hardware for artificial intelligence – My wishlist.

We recently developed an incredible machine learning workstation. It was born out of necessity, when we were developing image recognition algorithms for cancer detection. The idea was so incredibly powerful that we decided to market it as a product to help developers implement artificial intelligence tools. During this launch process, I came across an interesting article from Wired on Google's foray into hardware optimization for AI, the Tensor Processing Unit (TPU), and the release of the second-generation TPU.
I became a little bit confused about the benchmark published by Wired. The TPU is an integer ops co-processor, but Wired cites teraflops as a metric. The article does not make clear what specific operations it is referring to: tensor operations or FP64 compute. If these metrics really are FP64 compute, then the TPU is faster than any currently available GPU, which I suspect isn't the case, given the low power consumption. If those numbers refer to tensor ops, then the TPU has only about a third of the performance of the latest generation of GPUs, like the recently announced Nvidia Volta.
Then I came across a very interesting and more detailed post from Nvidia's CEO, Jensen Huang, that directly compared Nvidia's latest Volta processor with Google's Tensor Processing Unit. Because it never specifies what the teraflops metric stands for, the Wired article felt like a public relations piece, even though it comes from a fairly well-established publication. Nvidia's blog post puts the performance metrics into better context than the technology journalist's article does. I wish there were a little more specificity and context to some of the benchmarks Wired cites, instead of just copy-pasting the marketing material passed on to them by Google. From a market standpoint, I still think TPUs are a bargaining chip that Google can wave at GPU vendors like AMD and Nvidia to bring down the prices of their workstation GPUs. I also think Google isn't really serious about building the processor. Like most Google projects, they want to invest the least amount of money to get the maximum profit. Therefore, the chip won't have any IP related to really fast FP64 compute.
Low-power, many-core processors have been under development for many years. The Sunway processor from China is built on a similar philosophy, but optimized for FP64 compute. Outside of one supercomputer center, I don't know of any developer group working on Sunway. Another recent example in the US is Intel, which tried it with their Knights Corner and Knights Landing range of products and fell flat on their face. I firmly believe Google is on the wrong side of history. It will be really hard to gain developer traction, especially for custom-built hardware.
Let us see how this evolves, whether it is just the usual Valley hype or something real. I like the Facebook engineer's quote in the Wired article. I am firmly on the consumer GPU side of things. If history is a teacher, custom co-processors that are hard to program have never really succeeded in gaining market or customer traction. A great example is Silicon Graphics (SGI). They were once at the bleeding edge of high performance computing and then lost their market to commodity hardware that became faster and cheaper than SGI's custom-built machines.
More interest in making artificial intelligence run faster is always good news, and this is very exciting for AI applications in the enterprise. But I have another problem. Google has no official plans to market the TPU. A company like ours, at Moad, relies on companies like Nvidia developing cutting-edge hardware and letting us integrate it into a coherent, marketable product. In Google's case, the most likely scenario is that Google will deploy the TPU only on their cloud platform. In a couple of years, hardware evolution will make it faster than, or as fast as, any other product in the market, making their cloud servers a default monopoly. I have a problem with this model. Not only will these developments keep independent developers from leveraging the benefits of AI, they will also shrink the marketplace significantly.
I only wish Google published clear-cut plans to market their TPU devices to third-party system integrators and data center operators like us, so that the AI revolution will be more accessible and democratized, just like what Microsoft did with the PC revolution in the 90s.

Automation – A completely different thinking.

I have been spending some time thinking about leadership. Recently, I had the opportunity to sit down and listen to an angel investor who specializes in life science companies, particularly in New York. I was excited because I had heard incredibly flattering things about this individual. But the moment this individual started to speak, I suddenly realized something: he has no real interest in this area of work. It showed in his presentation, where, in front of a very young and eager bunch of aspiring entrepreneurs, he showed up late for the talk and started going through slides created nearly two decades ago.

This disappointment reminded me of a recent article I came across. It was from Harvard Business Review and it portrayed a dismal view of the enterprise, where executives are living in a bubble. I realized it is not just the corporate executives; even the investors are living in a bubble. These individuals are speaking words that would make sense only to a raving madman. The layers and filters these individuals have created between themselves and the outside world have created an alternate reality for them. Judging from recent news cycles, the incompetent CEO is far more common than incompetence in any other job known to humankind. This points to an interesting moment in our social evolution, where the CEO has become one of the rare jobs in which incompetence is tolerated and yet highly rewarded.

This brings me to another interesting article on what exactly a job is and how to value it. This amazing article, published by BBC Capital, exposes an interesting conundrum of a capitalist economy. The jobs that are absolutely essential for human survival are the lowest paid ones. The jobs that we consider to be completely pointless are the ones that are extremely highly paid. So the whole question of what is valuable to society arises here. Do we prioritize nonsense over our own survival? Our current social structure seems to indicate exactly that. According to Peter Fleming, Professor of Business and Society at City, University of London, who is quoted in the article: "The more pointless a task, the higher the pay."

So, almost all of our low-wage employment clusters around human activities that are absolutely essential for the survival of a society. If we apply these two diverse sets of thoughts to another interesting question, the future of jobs, then I believe I already have the answer. By implementing mass automation of all the jobs that are mission critical for the survival of a human society, we eliminate all the low-wage jobs, and the individuals who previously held them are moved into positions where they are given a pointless job.

For the sake of clarity, imagine a situation where automation has been implemented in highly important jobs. A great example is the restaurant worker. Imagine a future where all of the restaurant workers who prepare food are replaced by an automated supply chain and a set of robots. The role of individuals is just to manage this automated supply chain and these robots. Imagine our teenage daughters or sons applying for a job at one of these restaurants. Instead of being hired just for cheap human labor, these young individuals will now be hired as managers for the automated supply chain. Instead of needing years of school and college education only to end up incompetent and pointless to society, even a high school graduate will be able to do this job, because the society is run reliably and efficiently by a highly robust and resilient automated supply chain. The rest of the management hierarchy will remain the same, for corporate and regulatory reasons.

It seems to me that a highly automated society will create more value than a society that still relies on human labor for its mission critical jobs. This may seem counter-intuitive, but it is not. The value of human labor is a more abstract idea than one based on absolutes. In our current system of human-labor-dependent economies, only a handful of individuals are given free rein for incompetence and risk taking. In an automated economy, the underlying resilience will help more individuals take more risks and be unfazed by the outcomes, because none of these activities are absolutely essential for the survival of a human society.

This trend of automation will not only massively simplify our daily activities, it will also give us more time to do what we want to do in the first place, without worrying about thirst, hunger, hygiene or any of the mission critical activities of our lives. The friction of human existence is reduced by machines, which helps us create more value for society.

More individuals will have the opportunity to be at an executive level than today, even with minimal training. What will happen with automation is the democratization of executive jobs, not the loss of jobs. More people can have the freedom to be employed as CEOs and still get highly paid, because the society runs on a highly reliable, automated schedule. It will be a very different society, just like the massive shift from an agrarian economy to an industrial economy that happened during the industrial revolution, but one that is much more highly paid and with far fewer unemployed, for sure.

This research on the current dismal state of executives and managers in the corporate sector is part of our mission to understand the overarching impact of automation and machine learning in healthcare. It is done as part of our cancer therapeutics company: nanøveda. For continuing nanøveda's wonderful work, we are running a crowdfunding campaign using gofundme's awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

(Image captions from top to bottom: 1) A screenshot from the television programme The Thick Of It, with Malcolm Tucker (played by Peter Capaldi), photographed by Des Willie, a great artistic portrayal of the human dysfunction at the core of some of the most pointless, yet highly coveted, jobs. (C) BBC, reproduced under fair usage rights, 2) A satirical portrayal of the recent troubles of United Airlines, published by Bloomberg, retrieved from the public domain through Google search and reproduced under fair usage rights, 3) A quote that says: "It's as if someone were out there making up pointless jobs just for the sake of keeping us all working." posted inside a London tube train car, with the hashtag bullshitjobs, 4) Image of a burger-flipping robot in action, obtained from the public domain through Google image search and reproduced under fair usage rights.)

Forced errors – Lessons from an accounting scandal.

In 2015, Toshiba Corporation, based in Minato, Tokyo, Japan, disclosed to its investors a major corporate accounting malpractice. The accounting scandal dated back to the 2008 financial collapse. When market forces became unfavorable, Toshiba resorted to the terrible art of creative accounting practices, a.k.a. cooking the books.

Toshiba created a very interesting mechanism to cook the books. Instead of a direct order to restate revenue and profits, the top level executives of the firm created a strategy called “Challenges”. The “Challenges” were quarterly financial performance targets handed over to the managers of various divisions. These targets were handed over just a few days ahead of the submission due dates for the quarterly financial reports. The system was designed specifically to pressure various division heads to finally embrace the incredibly stupid act of creative accounting practices, instead of aiming at real improvements in corporate performance. The top level executives were extremely confident that the performance pressure created by their “Challenges” system would eventually lead the mid-level managers of  the company to resort to these despicable accounting practices.

I am amazed by the level of top-level executive creativity in implementing a pressure cooker situation to do all the wrong things. The fiscal situation would have leveled off if Toshiba had started making money after the financial crisis. But the problem was, the ingeniously devious top-level executives also started to believe in these cooked books. Now, armed with extremely brilliant, yet fake, profit results, the company unleashed its ambitions upon the world.

Toshiba went ahead with its ambitious, capital-intensive endeavor of expanding nuclear power generation in North America. In 2006, two years before the impending global economic disaster that originated in North America, Toshiba purchased the North American nuclear power behemoth Westinghouse Electric Company, a manufacturer of nuclear power plants, from the British taxpayer-funded company British Nuclear Fuels Limited (BNFL). At the time, economists and analysts questioned the wisdom of BNFL selling its supposedly profitable nuclear power business to Toshiba. These questions were due to the then widely held belief that the market for nuclear power generation was poised to grow rapidly over the next decade. These projections were based on growing global demand for electricity, mostly from rapidly growing Asia, especially China and India.

The reasons for the sale were multi-factorial. Being a UK taxpayer-funded operation, BNFL had very little leverage in Asian markets like China and India. The then British government was unwilling to take the extreme financial, marketing and operational risks of continuing BNFL's operations. A few years later, the biggest secret was out: BNFL was in a huge financial mess and its operations were in turmoil, albeit invisible to the public eye back in 2006.

Behind the sale of Westinghouse, another key market force was in play too. Even in 2006, the emerging pattern in the world of power generation was the slow shift away from complex solutions like nuclear power to simpler and more reliable solutions like natural-gas-fired power plants and solar energy. Also, the business of building nuclear power plants has always been riddled with extremely high risks, including large-scale cost overruns and unanticipated delays.

Even before the 2008 North American financial crisis, the appetite for large-scale, capital-intensive risks like funding nuclear power plants was coming to a crawling stop. The companies that operate these plants return a profit only after forty years of commercial operation, while the operational life of the plants is approximately sixty years. Therefore, for almost two thirds of a nuclear power plant's life-cycle, it operates at a loss. Very few investment firms have the resources and the expertise to handle such a complex long-term operation. In such a tough economic situation, a UK taxpayer-funded operation like BNFL had limited options to survive: either sell its nuclear power business, then widely considered lucrative, or significantly scale down operations by limiting its interests to the UK and lose a huge amount of market valuation in the process.

In conclusion: market realities, the complexities of operating a nuclear power plant and a risk-averse UK government at the time led to the sale of Westinghouse to Toshiba. In hindsight, BNFL exiting the UK taxpayer-funded nuclear power generation business was one of the best business decisions ever. BNFL made over five times the money it had paid to buy Westinghouse Electric Company in 1999. Sadly, the sale of the Westinghouse division and the fire-sale of its other assets that followed didn't save the company. BNFL became defunct in 2010.


At a time when nuclear power had slowly fallen out of favor around the world, Toshiba energetically and optimistically forged new deals in North America. After a flurry of new orders, it seemed like Toshiba had acquired a winner in the Westinghouse Electric Company. Then, in 2011, bad news hit the nuclear power industry in the form of the Fukushima disaster. Westinghouse Electric Company's most advanced reactor design, the AP1000, shared design similarities with the GE-developed reactor at Fukushima. The US regulatory scrutiny that followed revealed design flaws in its core shielding system, particularly the strength of the building structure that holds the nuclear reactor core.

Concurrent with these setbacks, the accounting wizards at Toshiba were creating an alternate reality in corporate finance and accounting. It became increasingly clear early on to the executives that Toshiba had ruinously overpaid to acquire Westinghouse Electric Company. To mitigate the financial blow of having unknowingly bought a lemon and having to deal with a global financial meltdown, from 2008 onward Toshiba started the practice of misstating its revenue and deferred expenses.

Some accrued and real expenses were shown as assets instead of liabilities, which also led to constant misstatement of profits across all of its divisions. Since the accounting malpractice was so sophisticated, extremely well engineered and spread over its sprawling business interests ranging from semiconductors to healthcare to social infrastructure, it took nearly a decade for it to reveal its ugly face. It makes me wonder: what if Westinghouse Electric Company had turned a profit and hadn't encountered all the cost overruns and delays? We may never have heard about this scandal at all.

The ingenuity behind this large-scale corporate malpractice is based on human psychology. The C-suite at Toshiba, instead of handing down direct orders to misstate profits, created a new system: a system of impossible expectations, where deceitful behavior was the only way to remain employed, run the business and climb the corporate ladder inside Toshiba. The bet made by these executives was a cynical one: human ethics will fail to intervene if the entire management system surrounding everyone forces them to behave unethically.

It reminds me of Stanley Milgram's brilliant work on the psychology of obedience and authority. In Toshiba's case, they used it to create an alternate corporate financial reality. The problem with this behavior is that sometimes unforeseen business risks will expose the quicksand upon which a false empire is built. Here is the movie "The Experimenter", based on Stanley Milgram's groundbreaking work on human behavior under authority. I recommend this movie to anyone interested in understanding the complex human behavior of obedience and authority.

By late 2016, it became exceedingly clear that both Westinghouse Electric Company and Toshiba were in a deep financial mess. All the creative magical thinking and accounting practices couldn't solve the financial mess of dealing with regulatory issues, construction problems, constant delays, cost escalations, and increasingly frustrated operators and suppliers.

The profit-making healthcare and semiconductor businesses couldn't carry all of the financial burden of supporting a clearly failing corporate parent. The profitable healthcare division was sold to Canon in a hurry, to prevent a rapid loss in its value due to a future bankruptcy of the parent company. It is very likely that even this sudden yet large infusion of cash from the healthcare division sale came too late to prevent an imminent catastrophic collapse of Toshiba. Next in line for the fire-sale appears to be Toshiba's semiconductor division.

I have learned three lessons from studying the imminent collapse of Toshiba due to its terrible accounting practices. I am sharing those three lessons:

  1. Always question the corporate culture and create an environment where employees, partners, suppliers and anyone directly or indirectly involved with the business is free to ask questions. In other words, embrace nanøveda's philosophy of radical openness.
  2. Remember Murphy's law: anything that can go wrong will go wrong; and its scarier cousin, Finagle's law: anything that can go wrong will go wrong, at the worst possible moment.
  3. Businesses are a human enterprise, which means businesses have a built-in optimism bias. Always be aware of this bias. When things go wrong, human psychology will direct us to hide them rather than share them. Therefore, cultivate a culture of sharing mistakes, such as a monthly reporting session on all the SNAFUs or a satirical evening of profanity-riddled tirades against the management overlords. Comedy is the best way to reveal the ugliest of our secrets.

This is honest and heartfelt research into a corporation that I once admired and how it all fell apart. It is part of my journey to create a better healthcare and cancer therapeutics company, here at nanøveda. For continuing nanøveda's wonderful work, we are running a crowdfunding campaign using gofundme's awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

(Image captions from top to bottom: 1) Toshiba TC4013BP integrated circuit on a printed circuit board, obtained through Wikipedia, 2) A painting that accurately depicts the fraud and deceit discussed in this blog post. The painting is William Hogarth's The Inspection, the third canvas in his Marriage à-la-mode (The Visit to the Quack Doctor), obtained through Wikipedia, 3) Double-sided Westinghouse sign that was once located at the intersection of Borden Avenue and 31st Street on the north side of the Long Island Expressway in New York City, dated 1972, from the collection of Richard Huppertz, obtained from Wikipedia, 4) A cutaway section of the pressurized water reactor design similar to that used at Fukushima Daiichi, obtained from Wikipedia, 5) Image of an English toast with Murphy's law engraved on the jelly, obtained from the public domain and reused with permission through Flickr.)

Failure Mode Effects – What I learned from a failed car engine.

The 2010 Dodge Charger HEMI is the epitome of a modern muscle car. It is powered by an absolute monster of an engine: a 6.1L V8 that produces 317 kW of power and 569 Nm of torque. The engine is manufactured at the Saltillo engine plant and has a great track record as a low-maintenance, high-displacement, naturally aspirated North American engine. In fact, there is nothing to complain about in the engineering or quality of the engine. The same goes for almost all modern cars and car engines. All the cars on the market right now are very well developed and mature products.

Unlike yesteryear's car models, like the 90's Mercedes with moody electronics, or Lancias that rot away faster than unrefrigerated cheese, or Triumphs that leak oil like a rain-soaked umbrella, or Jaguars that hibernate whenever there is a light shower; cars these days are reliable, reliable like an appliance. That reliability masks another interesting problem: the problem of 'what if something goes wrong'.

I have known folks who obsess about tire pressure, the same individuals who pop open the engine bay and check the dipstick to see if there is enough engine oil. These are the folks who had the reluctant and unpleasant experience of living with one of yesteryear's unreliable vehicles. They know very well that things can go wrong, and that they will go wrong. Their life's journey has made them curious and cautious about every single little mechanical clatter.

As for me, I am a nineties kid. My first vehicular experience was on a universal Japanese motorcycle, commonly abbreviated as UJM. It was a bright red Yamaha: a reliable, fun piece of engineering that has made me a Yamaha fan for life. My first car was a Hyundai. Again, a reliable piece of Korean engineering and a sensible one too. Some might say soulless, but I had great fun with it. Even though the car I had was stereotypically reliable, recent history suggests Hyundai has some serious issues with their engines. Recently Hyundai issued a recall of 1.2 million vehicles, covering both of its brands, Hyundai and Kia Motors, in North America. So, I am fortunate that at no point in my life have I experienced the horrible vehicular gremlins of electrical issues, oil leaks or anything of the sort.

The story is similar for almost everyone I grew up with, and for anyone who spent their early adult driving years with a well-engineered car. This brings to attention the problem of what we do when things go wrong. The reason I talked about the HEMI is that one of my friends owns one. He got the car from his dad, so there is a family connection and major emotional value to it. The problem was a rogue oil pressure sensor. It happens even to the best of us. When we see a rogue oil pressure sensor, we are optimistic that we won't have any oil issues. We procrastinate about fixing the wonky sensor.

But the problem is, even though being optimistic in life is a brilliant quality, it is not so helpful when it comes to dealing with engineered systems. The oil pressure sensor is there for a reason. It is part of a two-layer protection: the first layer is to engineer something that works great in the first place; the second layer is to prepare for malfunctions. Most of the time even I ignore oil pressure warnings. I know it is a bad thing, but at the same time, I also know the odds of it being something really bad are very low. Also, all the advertisements about how reliable modern cars are add to our sense of highly optimistic thinking.

There are occasions when these non-zero odds turn out to be something really bad. Something like an oil line leak that leaves the engine with no oil. This is exactly what happened to my friend. The one time he decided to ignore a warning light that had been going off for months on his dashboard, it turned out to be something really, really bad. While he was cruising along, the HEMI engine shut down, since it had no oil to lubricate all the moving parts. The optimists among us, whose life experience has been with reliably engineered devices, suddenly become flummoxed. It is not a pretty sight when an engine runs out of oil. Here is a video of a mechanic describing the horrors of dealing with an engine that ran out of oil.

I really admire the folks who are meticulous with their cars, those who take the warning lights seriously every single time. It is an approach that will prevent disasters like an engine running out of oil. But then I realized this story has more to it. When we are dealing with any system, warning signals need to be taken seriously, especially when it comes to engineered systems. These warning lights are there for a purpose. Our optimistic selves, and a modern life surrounded by highly reliable devices, have subconsciously taught us to ignore these warning signs. But one day, one of these warning signs flashing in front of us can turn into a really messy problem.

So, how can a regular person like you and me avoid a situation like the HEMI engine running out of oil? My idea to solve this important issue is to create better reminders and warning signals for cars. Since we are all addicted and tethered to our smartphones, a simple way would be to connect the sensor readings from the engine to an app. The app warns us through a notification, maybe periodically, perhaps every time one starts the car. A simple LED flashing on the dash isn't enough for me, or anyone I know, to be motivated to see a mechanic and figure out what is wrong with the car engine. Since most of my decision making revolves around the phone, the computer and everything linked to the internet, I am firmly in support of a better car: a better internet-connected car. It will make the warning lights appear more serious and also be well prepared for the #applife.

I am confident the next car of mine will be a connected car, that sends notifications to my phone.

The Dodge HEMI Charger has been very important for our continued work on cancer research as part of our startup project nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

Fine tuning the cloud – Making machines learn faster.

In a previous post I discussed how easy it was to set up machine learning libraries in python using virtual machines (vm) hosted in the cloud. Today, I am going to add more details. This post covers how to make machine learning code run faster; it will help any user compile a tensorflow installer that runs faster in the cloud. The speed advantages come from installer optimizations that take advantage of the instruction set extensions on the processors that power the cloud vm.

The steps described below will help us compile a tensorflow installer with accelerated floating point and integer operations using instruction set extensions for the AMD64 architecture. The CPU we will be dealing with today is an Intel Xeon processor running at 2.3GHz. It is a standard server CPU that supports AVX, AVX2, SSE + floating point math and SSE4.2.

In the previous post, we were using a simple pip installer for tensorflow. The pip installer is a quick and easy way to set-up tensorflow. But, this installation method for tensorflow is not optimized for the extended instruction sets that are present in these advanced CPUs powering the vm in Google Cloud Platform.

If we compile an installer optimized for these instruction set extensions, we can speed up many of the computation tasks. When tensorflow is used as a backend in keras for machine learning problems, the console constantly reminds you to optimize your tensorflow installation. The instructions below will also help you get rid of those warning messages in your console.

The first step in compiling an optimized tensorflow installer is to fulfill all the linux package dependencies. Run the line below to install the missing packages needed to compile the tensorflow installer.
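
The original package list is not shown; a minimal sketch for an Ubuntu vm (package names may need adjusting for other distributions):

    sudo apt-get update
    # build tools plus the python3 headers used later in the build
    sudo apt-get install -y build-essential curl git python3-dev python3-pip python3-numpy python3-wheel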

To compile tensorflow, we need a build tool called Bazel. Bazel is an open-source tool developed by Google, used to automate software building and testing. Since tensorflow is at the leading edge of the machine learning world, features are added, bugs are fixed and progress is made at a dizzying speed compared to the relatively pedestrian pace of traditional software development. In this rapid development, testing and deployment environment, Bazel helps users manage the process more efficiently. Here is the set of code to install Bazel.
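
A sketch of the Bazel installation, following the apt-repository route Bazel documented at the time (the repository URL and key location here are assumptions, not copied from the original post):

    # add the Bazel apt repository and its signing key, then install
    echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
    curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
    sudo apt-get update
    sudo apt-get install -y bazel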

Once Bazel is installed on your computer, the next step is to clone the tensorflow source files from GitHub.
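
For example:

    git clone https://github.com/tensorflow/tensorflow
    cd tensorflow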

After the source files are copied from the GitHub repository to your local machine, we have to do some housekeeping. We need to ensure that the python environment running tensorflow has all the necessary libraries installed. To fulfill the library dependencies in python, we need to install numpy, the development headers, pip and wheel using the code below:
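
A minimal sketch using the stock Ubuntu packages (the original command is not reproduced):

    sudo apt-get install -y python3-numpy python3-dev python3-pip python3-wheel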

The next step is to configure the build process for the installer. Before we do the actual configuration, we are going to preview the configuration dialog. This will help us understand which parameters we should know beforehand to complete the configuration process successfully. The configuration dialog for building the installer is as follows:

We need four important pieces of information before we go ahead and configure the build process: 1) the location of the python installation, 2) the python library path, 3) the g++ location and 4) the gcc location. The last two are optional and only needed to enable OpenCL. If your cloud vm supports OpenCL and CUDA, the instructions to compile tensorflow are slightly different, and I will not cover them in this post. Identifying the python installation location and library paths can be done using the following code. I have also included the steps for finding the locations of the gcc and g++ compilers:
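
A hedged set of commands that report those four locations (the site module call prints the library paths of the system python):

    which python3                                              # 1) python installation location
    python3 -c "import site; print(site.getsitepackages())"   # 2) python library path(s)
    which g++                                                  # 3) g++ location (only needed for OpenCL)
    which gcc                                                  # 4) gcc location (only needed for OpenCL)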

We now have all the information needed to configure the build process. We can proceed using the following line:
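
From inside the cloned tensorflow directory:

    ./configure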

If you encounter the following error:

Purge openjdk-9 and reinstall the jdk-8 version, using the instructions below:
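
A sketch of the Java switch, assuming the stock Ubuntu package names:

    sudo apt-get purge openjdk-9-jdk
    sudo apt-get install -y openjdk-8-jdk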

Now, try ./configure again. Once the build process is configured properly, we can go ahead and build the installer using the following commands:
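
The original commands are not reproduced; a sketch that matches the instruction set extensions listed earlier, using the standard tensorflow 1.x pip-package target and an assumed output directory of /tmp/tensorflow_pkg:

    # build an optimized pip package with AVX, AVX2 and SSE4.2 enabled
    bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-msse4.2 \
        //tensorflow/tools/pip_package:build_pip_package
    # package the build into a wheel file
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg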

Everything should proceed smoothly, but the build process is going to take some serious time. An octa-core 2.3GHz Intel Xeon powered virtual machine needs around 30 minutes to complete this process. So plan ahead; a short-notice deployment is impossible if one is looking to build the installer from scratch.

If the last step above threw a file not found error, it can be resolved by peeking into the build directory for the correct installer name.
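
Assuming the output directory used above:

    ls /tmp/tensorflow_pkg/    # shows the exact name of the generated wheel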

Once you have the correct installer name, append it to the last line of code above, and the installation should finish without any error messages. If we manage to finish all the steps above, it means we have successfully built and installed an optimized tensorflow package, compiled to take advantage of the processor instruction set extensions.
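
A hedged example of the install step; the wildcard stands in for the exact wheel name, which depends on the tensorflow and python versions on your machine:

    sudo pip3 install /tmp/tensorflow_pkg/tensorflow-*.whl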

Finally, to check and see if tensorflow can be imported into python, use the next few lines of code. The code follows these steps: first, we have to exit the tensorflow install directory, then invoke python and import tensorflow.
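
For example:

    cd ~
    python3 -c "import tensorflow as tf; print(tf.__version__)"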

The import tensorflow line in python environment should proceed with no errors. Hurrah, we have a working tensorflow library that can be imported in python. We will need to do some tests to ensure that everything is in order, but, no errors up to this point means easy sailing ahead.

I have described a step-by-step process for building and installing tensorflow based on my reasoning about the logical progression of things. These steps have worked extremely well for me so far. The optimized tensorflow installation has cut the run-time of some of my tasks by a factor of ten. As an added advantage, I no longer get messages in the python console asking me to optimize my tensorflow installation.

Happy machine learning in the cloud everyone.

This work is done as part of our startup project nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

In the cloud – Geography agnostic enterprise using Google Cloud Platform.

The concept of a geography/location agnostic enterprise is very simple. Whether I am in Beijing or Boston, Kent or Kentucky, New Delhi or New York, Silicon Valley or Silicon Wadi, Qatar or Quebec, I should have access to a standard set of tools to run the company. Moving between geographic locations is a hard challenge for any enterprise. One of the first problems we wanted to solve was how quickly we can deploy some of our tools to countries around the world. Part of the reason I envision my mission with nanøveda as a geography/location agnostic enterprise is that the problem we are trying to solve is universal. We want people around the world to have uniform access to our technology. It will also help us become better at business too.

In search of a solution to this problem, I found a brilliant answer a few days back. Recently, I got an invite for a trial of Google Cloud Platform (GCP). Google was kind enough to give me a $300 credit towards creating applications on their cloud platform. I was very excited to try this cloud computing platform from one of the leaders in computing. Finally, last Friday, I decided to explore GCP. For me, cloud computing brings two advantages: 1) zero time and cost overhead of maintaining a set of in-house linux servers; 2) creating a truly geography agnostic business. I am running an Ubuntu 16.10 workstation for machine learning experiments. Most of the tasks handled by this workstation have already started to overwhelm its intended design purpose. Therefore, I was actively looking for ways to expand the total compute infrastructure available to me. It was right at this moment that I turned to Google to help solve our compute infrastructure problem.

I have never used GCP before. Therefore, I had to go through a phase of experimentation and learning, which took approximately a day. Once I learned how to create a virtual machine, everything started to fall into place. To check if the vm's hardware is seen properly by the guest os, I ran some basic diagnostic tests.
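
The original commands are not shown; a few standard linux diagnostics along these lines would do the job:

    lscpu      # processor model, core count and flags seen by the guest os
    free -h    # available memory
    df -h      # attached disk space
    uname -a   # kernel and architecture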

GCP has an interesting flat-namespace object storage feature called Buckets. Buckets allow the virtual machine to share data with a remote computer very conveniently over a web interface. Google has a command-line tool called gsutil to help its users streamline the management of their cloud environment. One of the first commands I learned was to transfer files from my local computer to the object storage space. Here is an example:
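
A hedged example of a gsutil upload; the bucket and file names below are placeholders, not the ones from the original post:

    gsutil cp ./training_data.csv gs://my-example-bucket/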

Once I learned file transfer, the next step was to learn how to host a virtual machine. After I set up an Ubuntu 16.10 virtual machine in the cloud, I needed to configure it properly. Installing the necessary linux packages and python libraries was easy and fast.

After the vm was configured to run the code I wrote, the next step was to test file transfer to the vm itself. Since vm and the Object storage are beautifully integrated, file transfer was super quick and convenient.
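
A sketch of pulling files from the bucket onto the vm, again with placeholder names:

    gsutil cp gs://my-example-bucket/test.py .
    gsutil cp gs://my-example-bucket/training_data.csv .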

After the code and all the dependent files were inside the cloud vm, the next step was to test the code in python. The shell command below executed the code in the file test.py.
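
Assuming the script was copied to the current directory:

    python3 test.py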

Since some of the code takes hours to finish executing, I needed a way to create a persistent ssh connection. Google offers a web-browser based ssh client. The browser ssh client is a simple, basic way of accessing a virtual machine, but for longer sessions, creating a persistent session is ideal. Since my goal is to make most computing tasks as geography agnostic as possible, I found a great linux tool called screen. Installing screen was very straightforward.
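
On an Ubuntu vm:

    sudo apt-get install -y screen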

Once screen was installed, I created a screen session by typing screen in the terminal. The screen session works like the terminal, but if you are using ssh, it lets the commands being executed keep running even after the ssh connection is disconnected. To detach from a screen session while leaving it running, use the keyboard shortcut ctrl+a followed by ctrl+d.

To resume a screen session, just type screen -r in the vm terminal. If there are multiple screen sessions running, you will have to specify which session to restore.
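
For example (the session identifier here is a placeholder):

    screen -ls             # list running screen sessions
    screen -r 1234.pts-0   # reattach to a specific session by its identifier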

The ssh + screen option is a life saver for tasks that require routine management, and needs a lot of time to execute. It allows a vm administrator to convert any network connection into a highly persistent ssh connection.

The combination of Google cloud object storage, easy networking with the vm, ssh and screen has allowed me to move some of my complex computing tasks to the cloud in less than a day. The simplicity and lack of cognitive friction of GCP took me by surprise. The platform is extremely powerful and sophisticated, and yet very easy to use. I have future updates planned on the progress and evolution of my usage of GCP for our start-up's computing needs.

I am still amazed by how easy it was for me to take one of the most important steps in creating a truly geography/location agnostic enterprise with the help of Google Cloud Platform. I have to thank the amazing engineering team at Google for this brilliant and intuitive cloud computing solution.

This work is done as part of our startup project nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

Pi day – Calculate 2017 digits of pi using Python3

Here is a short and elegant piece of code to calculate 2017 digits of pi, implemented in python3. In three lines of very simple python code, we are going to calculate 2017 digits of pi.
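
The original snippet is not reproduced here; a minimal three-line sketch with the same behavior, assuming the mpmath library is installed (pip install mpmath):

    from mpmath import mp
    mp.dps = 2017   # work with 2017 significant decimal digits
    print(mp.pi)    # print pi to that precision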

The output is:

Happy 2017 pi day.

The code is available on GitHub.

This work is done as part of our startup project nanøveda. For continuing nanøveda’s wonderful work, we are running a crowdfunding campaign using gofundme’s awesome platform. Donation or not, please share our crowdfunding campaign and support our cause.

Donate here: gofundme page for nanøveda.

Installation notes – OpenCV in Python 3 and Ubuntu 17.04.

These are my installation notes for OpenCV with python3 on Ubuntu 17.04 linux. These notes will help you start a computer vision project from scratch. OpenCV is one of the most widely used libraries for image recognition tasks. The first step is to fire up the linux terminal and type in the following commands. This first set of commands fulfills the OS dependencies required for installing OpenCV on Ubuntu 17.04.

Step 1:

Update the available packages by running:
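
For example:

    sudo apt-get update
    sudo apt-get upgrade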

Step 2:

Install developer tools for compiling OpenCV 3.0:
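
A typical set of build tools (the original list is not reproduced):

    sudo apt-get install -y build-essential cmake git pkg-config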

Step 3:

Install packages for handling image and video formats:
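
A hedged sketch of the image-format packages (libjasper and libpng are handled separately in the next step):

    sudo apt-get install -y libjpeg8-dev libtiff5-dev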

Add an installer repository from an earlier version of Ubuntu to install libjasper and libpng, and install these two packages:
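
A hedged sketch; these packages were dropped from Ubuntu 17.04, and one common workaround is to pull them from the 16.04 (xenial) security repository:

    sudo add-apt-repository "deb http://security.ubuntu.com/ubuntu xenial-security main"
    sudo apt-get update
    sudo apt-get install -y libjasper-dev libpng12-dev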

Install libraries for handling video files:
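
A typical set of video packages (the original list is not reproduced):

    sudo apt-get install -y libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev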

Step 4:

Install GTK for OpenCV GUI and other package dependencies for OpenCV:
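
A typical set of packages for this step (again, not copied from the original post):

    sudo apt-get install -y libgtk-3-dev libatlas-base-dev gfortran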

Step 5:

The next step is to check if our python3 installation is properly configured. To check python3 configuration, we type in the following command in terminal:

The output of the command above shows two location pointers: the target directory and the current directory of the python configuration file. These two locations must match before we proceed with the installation of OpenCV.

An example of a working python configuration location, without any modification, looks something like this:

The output from the terminal has a specific order, with the first pointer indicating the target location and the second pointer indicating the current location of the config file. If the two location pointers don't match, use the cp shell command to copy the configuration file to the target location. We use the sudo prefix to give the terminal administrative rights to perform the copy. If your linux environment is password protected, you will very likely need the sudo prefix; otherwise, you can simply skip it and execute the rest of the line in the terminal. I am using sudo here because my linux installation requires it.

Step 6 (Optional):

Set up a virtual environment for python:
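
A sketch using virtualenv and virtualenvwrapper, which is one common way to do this; the original post's exact tooling is not shown:

    sudo pip3 install virtualenv virtualenvwrapper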

The next step is to update the ~/.bashrc file. Open the file in a text editor; here we are using nano.
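
For example:

    nano ~/.bashrc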

At the end of the file, paste the following text to update the virtual environment parameters:
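
Assuming the virtualenvwrapper route from the previous step, the lines would look something like this (the virtualenvwrapper.sh path can differ between systems):

    # virtualenv and virtualenvwrapper settings
    export WORKON_HOME=$HOME/.virtualenvs
    export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
    source /usr/local/bin/virtualenvwrapper.sh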

Now, either open a new terminal window or apply the changes made to the bashrc file by running:
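
    source ~/.bashrc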

Create a virtual environment for python named OpenCV:
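
Using virtualenvwrapper, with python3 as the interpreter:

    mkvirtualenv OpenCV -p python3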

Step 7:

Add python developer tools and numpy to the python environment in which we want to run OpenCV. Run the code below:
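
A sketch, assuming the OpenCV virtual environment created above:

    workon OpenCV
    sudo apt-get install -y python3-dev
    pip install numpy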

Step 8:

Now we have to create a directory for the OpenCV source files and download them from GitHub to our local machine. This can be achieved using the mkdir and cd commands along with the git command. Here is an example:
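
A sketch with an assumed directory name:

    mkdir ~/opencv_build && cd ~/opencv_build
    git clone https://github.com/opencv/opencv.git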

We also need the OpenCV contrib repo for access to standard keypoint detectors and local invariant descriptors (such as SIFT, SURF, etc.) and newer OpenCV 3.0 features like text detection in images.
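
From the same directory:

    git clone https://github.com/opencv/opencv_contrib.git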

Step 9:

The final step is to build, compile and install the packages from the files downloaded from GitHub. One thing to keep in mind here is that we are now working in the newly created folder in the terminal, not in the home directory. An example of the linux terminal commands to configure the OpenCV build is given below. Again, I am using the sudo prefix to give the terminal elevated privileges while executing the commands; it may not be necessary on all systems, depending on the nature of the linux installation and how system privileges are configured.
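
A hedged example of the cmake configuration, using the directory layout assumed above; the flags shown are typical for an OpenCV 3 build with the contrib modules, not copied from the original post:

    cd ~/opencv_build/opencv
    mkdir build && cd build
    cmake -D CMAKE_BUILD_TYPE=RELEASE \
          -D CMAKE_INSTALL_PREFIX=/usr/local \
          -D OPENCV_EXTRA_MODULES_PATH=~/opencv_build/opencv_contrib/modules \
          -D INSTALL_PYTHON_EXAMPLES=ON \
          -D BUILD_EXAMPLES=ON ..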

Step 10:

The final step of compiling and installing from source is a very time consuming process. I have tried to speed it up by using all the available processors to compile. This is achieved by passing $(nproc --all) as the job count to the make command.

Here are the command line instructions to compile and install OpenCV from the build we just configured:
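
A sketch of the compile-and-install step:

    make -j$(nproc --all)
    sudo make install
    sudo ldconfig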

For OpenCV to work in python, we need to update the binding files. Go to the install directory and get the file name of the OpenCV binding that was installed. It is located in either dist-packages or site-packages.

The terminal command lines are the following:
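
A hedged example, assuming Ubuntu 17.04's default python 3.5 paths; check both locations, and note that the exact .so file name will vary:

    ls /usr/local/lib/python3.5/site-packages/
    ls /usr/local/lib/python3.5/dist-packages/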

Now we need to update the bindings for the python environment we are going to use. We also need to rename the symbolic link to cv2, to ensure we can import OpenCV in python as cv2. The terminal commands are as follows:
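
A hedged example, assuming the binding landed in site-packages with the name shown below and that the OpenCV virtual environment from step 6 is in use:

    cd ~/.virtualenvs/OpenCV/lib/python3.5/site-packages/
    ln -s /usr/local/lib/python3.5/site-packages/cv2.cpython-35m-x86_64-linux-gnu.so cv2.so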

Once OpenCV is installed, type cd ~ to return to the home directory in the terminal. Then type python3 to launch the python environment. Once you have launched python3 in your terminal, try importing OpenCV to verify the installation.

Let us deactivate the current environment by running:
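
    deactivate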

First we need to ensure, we are in the correct environment. In this case we should activate the virtual environment called OpenCV and then launch the python terminal interpreter. Here are the terminal commands:
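
For example:

    workon OpenCV
    python3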

Let us try to import OpenCV and get the version number of the installed build. Run these commands in the python interpreter:
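
Inside the python3 interpreter:

    import cv2
    print(cv2.__version__)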

If the import command in the python3 console returned no errors, then python3 can successfully import OpenCV. We will still have to do some basic testing to verify that OpenCV has been installed correctly in your Ubuntu 17.04 linux environment, but if you manage to reach this point, it is a great start. Installing OpenCV is one of the first steps in preparing a linux environment to solve machine vision and machine learning problems.

A more concise version of these instructions is also available on GitHub. On the Moad Ada dev-ops server for deep learning, the linux environment comes pre-installed with OpenCV. This makes it easier for artificial intelligence and deep learning application developers to build machine vision applications. The dev-ops server can be ordered here: Moad online store.