At the dawn of the twentieth century, pessimism pervaded the field of physics. The existing theories had been so fully fleshed out and generalized that there appeared to be little room left for exploration. The widespread belief was that nothing of importance remained to be discovered. The Nobel Prize-winning physicist Albert Michelson famously stated: “Our future discoveries must be looked for in the sixth place of decimals.” It seemed that incremental refinements were all that awaited the world. And yet, only a decade later Albert Einstein published “The Formal Foundation of the General Theory of Relativity,” profoundly changing our understanding of the universe and laying an entirely new theoretical foundation to build upon. The new theory (and the immediate progress that followed) radically changed the trajectory of physics – so much so that everything before it came to be referred to as classical physics.
The technology industry has had a few such inflection points. The one that is most apparent today is the founding of Google in 1998. When Google came to market with a search engine, there were more than a few established players with substantial traction. At the time, most believed search was largely a solved problem. The big technical breakthroughs had gotten us most of the way, and it seemed the future lay in incremental improvements.
Of course, things played out very differently. Google’s starting point was PageRank, a smarter algorithm that returned vastly superior search results. But the more important distinction (less obvious at the time) was that it marked the first leg in our journey towards building intelligent machines that can automatically learn new tasks and associations and make decisions independently. It was an ambitious mission to take humans out of the decision-making loop everywhere. Now, almost twenty years later, over 15% of Google Search results are served by RankBrain – a deep-learning-based black-box system that wasn’t explicitly programmed for search but has instead learned the algorithmic rules of search automatically by absorbing large amounts of raw data. The explicitly programmed system is giving way to a ‘meta system’ that programs itself – and it is seeing applications in everything from photo search to language translation to autonomous vehicles. Jeff Dean, a highly respected early Google engineer, has gone as far as to say that if Google were built today, “much of it would not be coded but learned.”
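The core idea behind PageRank – that a page is important if important pages link to it – can be sketched with a toy power-iteration implementation. The graph, damping factor, and iteration count below are illustrative stand-ins, not Google’s production values:

```python
# Minimal PageRank sketch via power iteration (illustrative only).
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a baseline share of rank (the "random surfer").
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly across all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-page web: "c" is linked to by both "a" and "b".
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```

Because rank flows along links, page "c" ends up ranked above "b" even though both sit one click from "a" – importance accumulates from the whole link structure, not from any hand-written rule about the pages themselves.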
Other consumer internet companies (Netflix, Facebook, Amazon to name a few) have followed Google’s lead. Today these leading companies are machine learning factories that crunch large amounts of data to automatically make business decisions, optimizing everything from product recommendations and user-experience personalization to revenue. Take, for instance, Airbnb, which uses machine learning to recommend an optimal price for a given rental. It draws on over five billion data points – even the “quality” of the rental’s photos – to automatically suggest a price. Or Amazon, which uses machine learning to dynamically price over 20 million product SKUs based on competitor prices and demand.
Software is running these companies – but more critically, software is running the cognitive tasks previously left to human expertise.
Meanwhile, in striking contrast, the enterprise software world is stuck in an endless cycle of incrementalism – pursuing improvements in “the sixth place of decimals.” The early enterprise applications (going back to the 1950s) were all about record keeping and automating paper-based manual tasks, from bookkeeping, invoicing, payroll processing and inventory management to customer relationship management. Today, more than fifty years later, the mainframes programmed with punch cards have given way to mobile and cloud software that is infinitely more powerful and easier to use. Sophisticated visualizations, charts and real-time reporting have made decision-making far easier and quicker. Productivity software and collaboration tools have made organizations more efficient, transparent and adept at reacting to changing business needs. And yet, in a fundamental sense, little has changed: the software has eaten the routine and repetitive, but most of the critical business decision-making remains in the hands of human operators aided by spreadsheet models and business intelligence tools. Employees are still left crafting business logic and programming it over and over again in response to changing business and market needs.
So while companies like Google pursue a path towards offloading more and more cognitive tasks to software – and in the process gaining unprecedented new levers for differentiation, operational efficiency and business growth – the enterprise software world is stuck in the past, automating workflows.
The obvious question is why?
The most plausible explanation is that enterprise software is dominated by aging franchises. The top four enterprise software vendors by revenue today (IBM, Microsoft, Oracle and SAP) all predate the internet – the youngest in the group (Oracle) was founded almost forty years ago, a time when most businesses ran paper-based manual workflows. The founding mission of these companies revolved around automating business operations, digitizing the paper trail and empowering the human operator to make quicker decisions. Over the years, as technology evolved, they adapted well by reengineering their software stack for the new platforms (from mainframes to PCs to the web and now the cloud) – but did little to reimagine the core applications themselves. Even relatively new upstarts like Salesforce and Workday have focused more on reengineering old business application categories (CRM and ERP) for the cloud (and on innovating with subscription-based business models) than on any sort of reimagination. Salesforce didn’t fundamentally change how you thought of a customer relationship management (CRM) application relative to its predecessor Siebel. The same can be said of Workday relative to SAP or Oracle.
Reimagination, it turns out, is hard: it requires picturing a new world on a fresh canvas and conceptualizing the possibilities that flow from it. If you had set out to build a search engine in 1993 – internalizing the world as it existed – you would have ended up with an online version of the yellow pages (interestingly, this was Yahoo! Directory). Instead, if you looked at the world and decided that what was needed was a new way to “organize the world’s information and make it universally accessible and useful,” you would end up with Google.
There is a silver lining to all of this: an unprecedented opportunity exists for startups to transform the enterprise software landscape – much the same way Google transformed how we think about the web. And a number of favorable secular trends make it possible today.
First, we know it can be done. Often the biggest hurdle is a mental one. Consumer internet companies (Google, Facebook, Amazon, Netflix etc.) and full-stack companies (Airbnb, Uber etc.) are living proof of the competitive advantage that comes from building systems that take human decision-making out of the loop. In each of these companies, if you take out the machine intelligence layers or delete the data on which they’re trained, you effectively kill the business and its competitive advantage. Can you imagine Netflix without its recommendation engine? Or Amazon without its product recommendation feature – which alone is responsible for a third of its product sales?
Second, the datasets required to train machine learning models are now readily available in the public domain. Even proprietary customer datasets inside the enterprise are easily accessible through APIs, and they are in many cases large enough to effectively train robust machine learning models. There has also been exciting progress in the field of transfer learning, wherein models can be pre-trained on a large (often publicly available) dataset and then retargeted to a customer use case using only a small amount of customer data.
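The transfer-learning idea can be sketched in a toy form: keep a “pre-trained” feature extractor frozen and train only a small classification head on a handful of customer examples. The extractor, dataset and hyperparameters below are hypothetical stand-ins – in practice the frozen layers would come from a network trained on a large public dataset:

```python
import math

def frozen_extractor(x):
    # Stand-in for features learned on a large public dataset;
    # during "transfer" these transforms are never updated.
    return [x[0] + x[1], x[0] * x[1], abs(x[0] - x[1])]

def train_head(examples, labels, lr=0.5, epochs=200):
    """Train a small logistic-regression head on top of frozen features."""
    feats = [frozen_extractor(x) for x in examples]
    w = [0.0] * len(feats[0])
    b = 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - y                         # logistic-loss gradient
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = frozen_extractor(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0

# Tiny hypothetical "customer" dataset: label 1 when both coordinates are large.
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = [0, 0, 1, 1]
w, b = train_head(X, y)
```

The point of the sketch is the division of labor: the expensive representation learning happens once on abundant data, and only the thin head needs the scarce customer data.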
Third, deep-learning-based approaches allow systems to automatically infer the relevant features (or signals) in the data, without the need for data scientists and domain experts to manually build the models. As a result, we can expect to see more off-the-shelf models in use, with less hand tuning and last-mile optimization.
Finally, the core algorithmic building blocks are now open source. Google’s TensorFlow – a comprehensive open-source machine learning toolkit – is making it much easier to build sophisticated machine-learning-based applications. With rapidly falling cloud computing prices, it is now possible to build, deploy and run these applications without breaking the bank.
If we can credibly talk about self-driving cars, we should also be able to talk about self-driving business software. If deep learning can learn the rules of search, identify arbitrary objects in an image, learn to translate from one language to another, or play the game Go better than any human, there is no reason why deep learning can’t effectively learn the business logic required to optimally run any business in any industry.
The opportunity is here for the taking and the enabling technologies are no longer considered magic. It is an exciting time for sure: a $150bn enterprise application software market is waiting to be reimagined.