AWS continues to be the highlight of Amazon’s performance

Amazon Web Services (AWS) continues to be the highlight of the company’s earnings, once again showing the kind of second-quarter growth Amazon looks for in a newer business — especially one that has dramatically better margins than its core retail business.

Despite now running a grocery chain, the company’s AWS division — which carries an operating margin of over 25%, compared to the razor-thin margins in retail — grew 49% in the quarter compared to last year’s second quarter. It’s also up 49% when comparing the most recent six months to the same period last year. AWS is now on a run rate well north of $10 billion annually, generating more than $6 billion in revenue in this year’s second quarter. Meanwhile, Amazon’s retail operations generated nearly $47 billion in revenue with a net income of just over $1.3 billion (unaudited). AWS generated $1.6 billion in operating income on its $6.1 billion in revenue.
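For a rough sense of how those figures fit together, here’s a quick back-of-the-envelope check in Python, using only the numbers cited above (in billions of dollars):

```python
# Sanity-check the AWS figures cited above (all in billions of USD).
aws_revenue_q2 = 6.1            # AWS revenue for the quarter
aws_operating_income_q2 = 1.6   # AWS operating income for the quarter

operating_margin = aws_operating_income_q2 / aws_revenue_q2
annual_run_rate = aws_revenue_q2 * 4  # naive annualization of a single quarter

print(f"Operating margin: {operating_margin:.1%}")      # ~26.2%, i.e. "over 25%"
print(f"Annualized run rate: ${annual_run_rate:.1f}B")  # ~$24.4B, well north of $10B
```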

So, in short, Amazon’s dramatically more efficient AWS business is the biggest contributor to its actual net income. The company reported earnings of $5.07 per share, compared to analyst estimates of around $2.50 per share, on revenue of $52.9 billion. That revenue number fell short of what investors were looking for, so the stock isn’t really doing anything in after-hours trading, and Amazon remains in the race to become a company with a market cap of $1 trillion, alongside Google, Apple and Microsoft.

This isn’t extremely surprising, as Amazon was one of the original harbingers of the move to a cloud computing-focused world, and as a result Microsoft and Google are now chasing it to capture as much share as possible. While Microsoft doesn’t break out Azure, the company says it’s one of its fastest-growing businesses, and Google’s “other revenue” segment, which includes Google Cloud Platform, also continues to be one of its fastest-growing divisions. Running a bunch of servers with access to on-demand compute, it turns out, is a pretty efficient business, one that can offset the very slim margins Amazon earns on the rest of its core business.

Microsoft caps off a fine fiscal year seemingly without any major missteps in its last quarter

Microsoft is capping off a rather impressive year without any major missteps in its final report for fiscal 2018, posting a quarter that seems to have been largely non-offensive to Wall Street.

In the past year, Microsoft’s stock has gone up more than 40%. In the past two years, it’s nearly doubled. All of this came after roughly a decade of the stock going essentially nowhere, as Microsoft initially missed major trends like the shift to mobile and the cloud. Since then, CEO Satya Nadella has turned that around, sharpening the company’s focus on both, and Azure is now one of the company’s biggest highlights. Microsoft is now an $800 billion company, which, while still considerably behind Apple, Amazon and Google, is a remarkable high considering the past decade.

In addition, Microsoft passed $100 billion in revenue for a fiscal year for the first time. So, as you might expect, the stock didn’t really do much, given that nothing seemed to be wrong with what was going on. For a company valued at around $800 billion, not doing anything wrong at this point is likely a good thing. That Microsoft is even in the discussion as one of the companies chasing a $1 trillion market cap is something we probably wouldn’t have been talking about just three or four years ago.

The company said it generated $30.1 billion in revenue, up 17% year-over-year, and adjusted earnings of $1.13 per share. Analysts were looking for earnings of $1.08 per share on revenue of $29.23 billion.

So, under Nadella, this is more or less a tale of two Microsofts — one squarely pointed at a future of productivity software with an affinity toward cloud and mobile tools (though Windows is obviously still a part of this), and one centered on the home PC. Here are a couple of highlights from the report:

  • LinkedIn: Microsoft said LinkedIn revenue increased 37%, with sessions growth of 41%. The professional network was also listed among the segments Microsoft credited for increased operating expenditures, alongside cloud engineering and commercial sales capacity, and it was bucketed into a 12% increase in research and development along with cloud engineering, as well as a bump in sales and marketing expenses. This all seems pretty normal for a network Microsoft hopes to continue to grow.
  • Azure: Microsoft’s cloud platform continued to drive its server products and cloud services revenue, which increased 26%. The company said Azure’s revenue was up 89% “due to growth from consumed and SaaS revenue.” Once again, Microsoft didn’t break out specifics on its Azure products, though it seems pretty clear that this is one of its primary growth drivers.
  • Office 365: Office 365 saw commercial revenue growth of 38%, and consumer subscribers increased to 31.4 million. Alongside LinkedIn, Microsoft seems to be assembling a substantial number of subscription SaaS products that offset a shift in its model away from personal computing and into a more cloud-oriented company.
  • GitHub: Nada here in the report. Microsoft said earlier this year that it is acquiring the company for a very large sum of money (in stock), but it isn’t talking about it yet. Still, bucket it alongside Office 365 and LinkedIn as part of that increasingly large stable of productivity tools for businesses, as GitHub is one of the most widely adopted developer tools available.

Oracle could be feeling cloud transition growing pains

Oracle is learning that it’s hard for an enterprise company born in the data center to make the transition to the cloud, an entirely new way of doing business. Yesterday it reported earnings, and the results were a mixed bag, made harder to parse by a change in the way the company counts cloud revenue.

In its earnings press release from yesterday, it put it this way: “Q4 Cloud Services and License Support revenues were up 8% to $6.8 billion. Q4 Cloud License and On-Premise License revenues were down 5% to $2.5 billion.”

Let’s compare that with the language from its Q3 report in March: “Cloud Software as a Service (SaaS) revenues were up 33% to $1.2 billion. Cloud Platform as a Service (PaaS) plus Infrastructure as a Service (IaaS) revenues were up 28% to $415 million. Total Cloud Revenues were up 32% to $1.6 billion.”

Notice how the company broke out cloud revenue loudly and proudly in March, yet chose to fold cloud revenue into license revenue in June.

On the earnings call that followed, Oracle co-CEO Safra Catz, responding to a question from analyst John DiFucci, took exception to the idea that the company was somehow obfuscating cloud revenue by reporting it this way. “So first of all, there is no hiding. I told you the Cloud number, $1.7 billion. You can do the math. You see we are right where we said we’d be.”

She said the new reporting method reflects the company’s new combined licensing products, which let customers use their licenses on-premises or in the cloud. Fair enough, but if your business is booming, you probably want to let investors know about it. Investors seem uneasy about the approach, with the stock down over 7 percent as of this article’s publication.

Oracle stock chart (source: Google)

Oracle could, of course, settle all of this by spelling out its cloud revenue, but it chose a different path. John Dinsdale, an analyst with Synergy Research, a firm that watches the cloud market, was dubious about Oracle’s reasoning.

“Generally speaking, when a company chooses to reduce the amount of financial detail it shares on its key strategic initiatives, that is not a good sign. I think one of the justifications put forward is that [it] is becoming difficult to differentiate between cloud and non-cloud revenues. If that is indeed what Oracle is claiming, I have a hard time buying into that argument. Its competitors are all moving in the opposite direction,” he said.

Indeed, most are. While it’s often hard to tell the exact nature of cloud revenue, the bigger players have been more open about it. In its most recent earnings report, for instance, Microsoft reported that its Azure cloud revenue grew 93 percent. Amazon reported that cloud revenue from AWS was up 49 percent to $5.4 billion, getting very specific about the number.

Further, you can see from Synergy’s most recent cloud market share numbers, from the fourth quarter of last year, that Oracle was lumped in with “the Next 10,” not large enough to register on its own.

That Oracle chose not to break out cloud revenue this quarter can’t be seen as a good sign. To be fair, we haven’t really seen Google break out its cloud revenue either, with one exception in February. But when the players at the top of the market shout about their growth and the ones further down don’t, you can draw your own conclusions.

After twenty years of Salesforce, what Marc Benioff got right and wrong about the cloud

As we enter the 20th year of Salesforce, there’s an interesting opportunity to reflect back on the change that Marc Benioff created with the software-as-a-service (SaaS) model for enterprise software with his launch of Salesforce.com.

This model has been validated by the annual revenue stream of SaaS companies, which is fast approaching $100 billion by most estimates, and it will likely continue to transform many slower-moving industries for years to come.

However, in the cornerstone market of IT — large enterprise-software deals — SaaS represents less than 25 percent of total revenue, according to most market estimates. The split is evident even in the most recent high-profile “SaaS” acquisition, Microsoft’s purchase of GitHub, where over 50 percent of GitHub’s revenue comes from sales of its on-prem offering, GitHub Enterprise.

Data privacy and security is also becoming a major issue, with Benioff himself even pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.

It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?


The original cloud software thesis

In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:

  1. Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, with the customer then obligated to pay an additional 20 percent per year in support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and protracted.
  2. Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
  3. Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in the expensive computing hardware required to run a given monolithic application. For context, in 1999 a gigabyte of RAM cost about $1,000 and a terabyte of disk storage cost $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 to run Siebel’s CRM product for perhaps 200 end-users (a quick per-seat calculation follows this list).
  4. Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took six to 18 months to complete the installation process. Upgrades, meanwhile, were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, with each IT org generally becoming the ops team executing a physical run-book for each application it purchased.
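To put that Siebel example in perspective, here is the quick per-seat calculation referenced in point 3, using only the figures cited there:

```python
# Rough 1999-era hardware economics from Benioff's Siebel example (point 3).
hardware_outlay = 385_000   # typical hardware purchase to run Siebel CRM (USD)
users_served = 200

print(f"Hardware cost per seat: ${hardware_outlay / users_served:,.0f}")  # ~$1,925

# For scale: at 1999 prices ($1,000 per GB of RAM), the 16 GB of memory in a
# single modern laptop would have cost $16,000 on its own.
```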

These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services in their early days in the mid-to-late ’00s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting out cloud computing, storage and network capacity has been many more SaaS applications than would ever have been possible if everybody had to follow the model Salesforce established years earlier.

Suddenly able to access computing resources by the hour—and free from large upfront capital investments or having to manage complex customer installations—startups forsook software for SaaS in the name of economics, simplicity and much faster user growth.


It’s a different IT world in 2018

Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:

  • Subscription is the right pricing model: Subscription pricing has proven to be the most effective way to create customer and vendor success. Stalwart products like Microsoft Office and the Adobe suite made the switch from upfront licenses to thriving subscription businesses years ago. Today, subscription pricing is the norm for many flavors of software and services.
  • Better user experience matters: Software accessed through the browser or through thin, native mobile apps (leveraging the same APIs and delivered seamlessly through app stores) has long since become ubiquitous. The consumerization of IT was a real trend, and it has carried the habits of our personal lives into our business lives.

In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and software-development practices developed inside those “web scale” companies played major roles in spurring these changes:

  • Computing is now cheap: The costs of compute and storage have been driven down so dramatically that there are limited savings left in shared resources. Today, a gigabyte of RAM costs about $5 and a terabyte of disk storage about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, around the time Salesforce was founded, Google was running on its first data center — with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
  • Installing software is now much easier: Installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packaged as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes (see the sketch below).
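To illustrate that last point, here is a minimal sketch using the official Kubernetes Python client. The deployment name, labels and image tag are placeholders, and it assumes a cluster is already reachable via a local kubeconfig; treat it as a sketch of the workflow, not anyone’s production setup:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a reachable cluster configured locally

# Three replicas of a (hypothetical) containerized enterprise app.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="enterprise-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "enterprise-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "enterprise-app"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="app",
                    image="registry.example.com/enterprise-app:1.0",  # placeholder image
                )
            ]),
        ),
    ),
)

# One API call schedules the app; the same spec runs on any conformant
# Kubernetes cluster, whether on-prem or in any public cloud.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```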


What Benioff didn’t foresee

Several other factors have emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and self-management can be traced directly to the success of SaaS itself, and of cloud computing in general:

  1. Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS (see the sketch after this list). This allows enterprises (like Capital One) to shed hardware management and the headaches it often entails, but retain control over networks, software and data. It is also far easier for enterprises to get the necessary assurance about the security posture of Amazon, Microsoft and Google than it is to get the same level of assurance for each of the tens of thousands of possible SaaS vendors in the world.
  2. Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
  3. Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
  4. Applications are much more portable: The SaaS revolution pushed software vendors to develop their applications cloud-first, but they’re now building those applications with technologies (such as containers) that make it possible to replicate their deployment onto any infrastructure. This shift to what’s called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC far more easily than was previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private-cloud-native-instance-first approach to their application offerings. Meanwhile, SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning to Kubernetes-driven, cloud-native architectures that provide this optionality going forward.
  5. The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g., GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license—or the nearest thing to one—for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers—another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.
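On point 1 above, carving out a VPC is itself just a couple of API calls. A minimal boto3 sketch, with an arbitrary region and example CIDR blocks:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

# The provider manages the physical machines; the enterprise keeps control
# of the network layout, the OS and the data inside this address space.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# A subnet inside the VPC for application workloads.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
```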


The future of software is location agnostic

As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.

The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by some serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic about where their applications are deployed and about who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave that control), while preserving what vendors have come to love about cloud-native development and cloud-based resources.

So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.

Amazon starts shipping its $249 DeepLens AI camera for developers

Back at its re:Invent conference in November, AWS announced its $249 DeepLens, a camera that’s specifically geared toward developers who want to build and prototype vision-centric machine learning models. The company started taking pre-orders for DeepLens a few months ago, but now the camera is actually shipping to developers.

Ahead of today’s launch, I had a chance to attend a workshop in Seattle with DeepLens senior product manager Jyothi Nookula and Amazon’s VP for AI Swami Sivasubramanian to get some hands-on time with the hardware and the software services that make it tick.

DeepLens is essentially a small Ubuntu- and Intel Atom-based computer with a built-in camera that’s powerful enough to easily run and evaluate visual machine learning models. In total, DeepLens offers about 106 GFLOPS of performance.

The hardware has all of the usual I/O ports (think micro HDMI, USB 2.0, audio out, etc.) to let you create prototype applications, whether those are simple toy apps that send you an alert when the camera detects a bear in your backyard or industrial applications that keep an eye on a conveyor belt in your factory. The 4-megapixel camera isn’t going to win any prizes, but it’s perfectly adequate for most use cases. Unsurprisingly, DeepLens is deeply integrated with the rest of AWS’s services, including the AWS IoT service Greengrass, which you use to deploy models to DeepLens, and SageMaker, Amazon’s newest tool for building machine learning models.

These integrations are also what makes getting started with the camera pretty easy. Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take more than 10 minutes to set up your DeepLens and deploy one of those models to the camera. The project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs), a style transfer example that renders the camera image in the style of van Gogh, a face detection model, a model that can distinguish between cats and dogs, and one that can recognize about 30 different actions (like playing guitar). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.

But that’s obviously just the beginning. As the DeepLens team stressed during our workshop, even developers who have never worked with machine learning can take the existing templates and easily extend them. In part, that’s due to the fact that a DeepLens project consists of two parts: the model and a Lambda function that runs instances of the model and lets you perform actions based on the model’s output. And with SageMaker, AWS now offers a tool that also makes it easy to build models without having to manage the underlying infrastructure.
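AWS didn’t walk through the exact code at the workshop, but the shape of such a Lambda function looks roughly like the sketch below. It assumes the awscam module that ships on the device; the model path, input size and loop structure are illustrative, not definitive:

```python
# Sketch of a DeepLens inference loop; paths and parameters are assumptions.
import awscam  # module pre-installed on the DeepLens device
import cv2

# Load the model artifact that the DeepLens console deployed to the device.
model = awscam.Model("/opt/awscam/artifacts/my-model.xml", {"GPU": 1})

def infinite_infer_run():
    while True:
        ret, frame = awscam.getLastFrame()  # latest frame from the camera
        if not ret:
            continue
        resized = cv2.resize(frame, (224, 224))  # match the model's input size
        inference = model.doInference(resized)
        # Act on the output here: raise an alert, publish to AWS IoT, etc.
```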

You could do a lot of the development on the DeepLens hardware itself, given that it is essentially a small computer, though you’re probably better off using a more powerful machine and then deploying to DeepLens using the AWS Console. If you really wanted to, you could use DeepLens as a low-powered desktop machine as it comes with Ubuntu 16.04 pre-installed.

For developers who know their way around machine learning frameworks, DeepLens makes it easy to import models from virtually all the popular tools, including Caffe, TensorFlow, MXNet and others. It’s worth noting that the AWS team also built a model optimizer for MXNet models that allows them to run more efficiently on the DeepLens device.
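For MXNet, for example, the import starts from a standard checkpoint: a symbol file describing the network graph plus a params file holding the weights. A short sketch, with a hypothetical "my-net" prefix and epoch number:

```python
import mxnet as mx

# A trained MXNet model is a pair of artifacts: "my-net-symbol.json" (the
# network graph) and "my-net-0010.params" (the weights).
sym, arg_params, aux_params = mx.model.load_checkpoint("my-net", 10)

# Uploading that artifact pair (e.g. to S3) is the starting point for
# importing the model into DeepLens through the AWS Console.
```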

So why did AWS build DeepLens? “The whole rationale behind DeepLens came from a simple question that we asked ourselves: How do we put machine learning in the hands of every developer,” Sivasubramanian said. “To that end, we brainstormed a number of ideas and the most promising idea was actually that developers love to build solutions as hands-on fashion on devices.” And why did AWS decide to build its own hardware instead of simply working with a partner? “We had a specific customer experience in mind and wanted to make sure that the end-to-end experience is really easy,” he said. “So instead of telling somebody to go download this toolkit and then go buy this toolkit from Amazon and then wire all of these together. […] So you have to do like 20 different things, which typically takes two or three days and then you have to put the entire infrastructure together. It takes too long for somebody who’s excited about learning deep learning and building something fun.”

So if you want to get started with deep learning and build some hands-on projects, DeepLens is now available on Amazon. At $249, it’s not cheap, but if you are already using AWS — and maybe even use Lambda already — it’s probably the easiest way to get started building these kinds of machine learning-powered applications.

AT&T launches its LTE-powered Amazon Dash-style button

When we first told you about AT&T’s LTE-M Button, the information was socked away in a deluge of AWS re:Invent announcements. The telecom giant was a bit more upfront when announcing its availability earlier this week — but just a bit.

After all, it’s not a direct-to-consumer device. Unlike the product-branded hunk of plastic you can presently pick up from Amazon to refresh your supply of Goldfish crackers and Tide Pods, this one’s currently aimed at developers at companies looking to build their own. What it does have going for it, however, is LTE-M, a cheaper, lower-power version of 4G that’s set to power a future generation of IoT devices.

That means it can be used for your standard Dash-like activities — letting customers replenish items with a press — and it can also be deployed in more interesting scenarios, outside the bounds of regular Wi-Fi. AT&T offers up a couple of use cases, including customer feedback in public venues and deployment in places like construction sites where home or office Wi-Fi isn’t an option.

Of course, without the direct retail feedback loop, it’s not really a Dash competitor — and besides, AWS is helping power the thing, so Amazon’s still getting a kickback here. Oh, and then there’s the price — the buttons start at $30 apiece, which amounts to a lot of Tide Pods. As such, we likely won’t see them take off too quickly, but they do present an interesting use case as AT&T looks to LTE-M to push IoT outside the home.

AWS adds more EC2 instance types with local NVMe storage

AWS is adding a new kind of virtual machine to its growing list of EC2 options. These new machines feature local NVMe storage, which offers significantly faster throughput than standard SSDs.

These new, so-called C5d instances join the existing lineup of compute-optimized C5 instances the service already offers. AWS cites high-performance computing workloads, real-time analytics, multiplayer gaming and video encoding as potential use cases for its regular C5 machines, and with the addition of this faster storage option, chances are users who switch will see even better performance.

Since the local storage is physically attached to the machine, its contents are lost when the instance is stopped, so this is meant for storing intermediate files, not for long-term storage.
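Launching one of the new instances works like launching any other EC2 machine. A minimal boto3 sketch; the AMI ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # Oregon, a supported region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="c5d.large",         # smallest C5d: compute-optimized + local NVMe
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])

# The local NVMe volume shows up in the guest as a device such as /dev/nvme1n1.
# It is ephemeral: its contents are gone once the instance stops, so treat it
# as scratch space for intermediate files.
```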

Both C5 and C5d instances share the same underlying platform, with 3.0 GHz Intel Xeon Platinum 8000 processors.

The new instances are now available in a number of AWS’s U.S. regions, as well as in the service’s Canada region. Prices are, unsurprisingly, a bit higher than for regular C5 machines, starting at $0.096 per hour for the most basic machine with 4 GiB of RAM in AWS’s Oregon region, for example. Regular C5 machines start at $0.085 per hour.

It’s worth noting that the EC2 F1 instances, which offer access to FPGAs, also use NVMe storage. Those are highly specialized machines, though, while the C5 instances are interesting to a far wider audience of developers.

On top of the NVMe announcement, AWS today also noted that its EC2 Bare Metal Instances are now generally available. These machines provide direct access to all the features of the underlying hardware, making them ideal for running applications that simply can’t run on virtualized hardware and for running secured container clusters. These bare metal instances also offer support for NVMe storage.

Our digital future will be shaped by increasingly mobile technologies coming from China

Since the dawn of the internet, the titans of this industry have fought to win the “starting point” — the place that users start their online experiences. In other words, the place where they begin “browsing.” The advent of the dial-up era had America Online mailing a CD to every home in America, which passed the baton to Yahoo’s categorical listings, which was swallowed by Google’s indexing of the world’s information — winning the “starting point” was everything.

As the mobile revolution continues to explode across the world, the battle for the starting point has intensified. For a period of time, people believed it would be the hardware; then it became clear that the software mattered most. Then the conversation shifted to a debate between operating systems (Android or iOS), and moved on to social properties and messaging apps, where people were spending most of their time. Today, my belief is we’re hovering somewhere between apps and operating systems. That being said, the interface layer will always be evolving.

The starting point, just like a rocket’s launchpad, is only important because of what comes after. The battle to win that coveted position, although often disguised as many other things, is really a battle to become the starting point of commerce.  

Google’s philosophy includes a commitment to get users “off their page” as quickly as possible…to get that user to form a habit and come back to their starting point. The real (yet somewhat veiled) goal, in my opinion, is to get users to search and find the things they want to buy.

Of course, Google “does no evil” while aggregating the world’s information, but they pay their bills by sending purchases to Priceline, Expedia, Amazon and the rest of the digital economy.  

Facebook, on the other hand, has become a starting point through its monopolization of users’ time, attention and data. Through this effort, it’s developed an advertising business that shatters records quarter after quarter.

Google and Facebook, this famed duopoly, represented 89 percent of new advertising spending in 2017. Their dominance is unrivaled… for now.

Change is urgently being demanded by market forces — shifts in consumer habits, intolerably rising costs to advertisers and a nearly universal dissatisfaction with the advertising models that have dominated (plagued) the U.S. digital economy. All of this is being accelerated by mobile. Terrible online experiences still persist for users, efficacy remains low for advertisers and fraud is rampant. The march away from the glut of advertising excess may be most symbolically seen in the explosion of ad blockers. Further evidence of the “need for a correction of this broken industry” is Oracle’s willingness to pay $850 million for a company that polices ads (probably the best entrepreneurs I know ran this company, so no surprise).

As an entrepreneur, my job is to predict the future. When reflecting on what I’ve learned thus far in my journey, it’s become clear that two truths can guide us in making smarter decisions about our digital future:

Every day, retailers, advertisers, brands and marketers get smarter. This means that every day, they will push the platforms, their partners and the places they rely on for users to be more “performance driven.” More transactional.

Paying for views, bots (Russian or otherwise) or anything other than “dollars” will become less and less popular over time. It’s no secret that Amazon, the world’s most powerful company (imho), relies heavily on its Associates Program (its home-built partnership and affiliate platform). This channel is the highest-performing form of paid acquisition that retailers have, and in fact, it’s rumored that the success of Amazon’s affiliate program led to the development of AWS, owing to large spikes in partner traffic.

Chinese flag overlooking The Bund, Shanghai, China (Photo: Rolf Bruderer/Getty Images)

When thinking about our digital future, look down and look east. Look down and admire your phone — this will serve as your portal to the digital world for the next decade, and our dependence will only continue to grow. The explosive adoption of this form factor is continuing to outpace any technological trend in history.

Now, look east and recognize that what happens in China will happen here, in the West, eventually. The Chinese market skipped the PC-driven digital revolution — and adopted the digital era via the smartphone. Some really smart investors have built strategies around this thesis and have quietly been reaping rewards due to their clairvoyance.  

China has historically been categorized as a market full of knock-offs and copycats — but times have changed. Some of the world’s largest and most innovative companies have come out of China over the past decade. The entrepreneurial work ethic in China (as praised recently by arguably the world’s greatest investor, Michael Moritz), the speed of innovation and the ability to quickly scale and reach meaningful populations have caused Chinese companies to leapfrog the market cap of many of their U.S. counterparts.  

The most interesting component of the Chinese digital economy’s growth is that it is fundamentally more “pure” than the U.S. market’s. I say this because the Chinese market is inherently “transactional.” As Andreessen Horowitz writes, WeChat, the flagship product of Tencent, China’s most valuable company, has become the “starting point” and hub for all user actions. Its revenue diversity is much more “Amazon” than “Google” or “Facebook” — it’s much more pure. It makes money off the transactions driven from its platform, and advertising is far less important to its strategy.

The obsession with replicating WeChat took the tech industry by storm two years ago — and for some misplaced reason, everyone thought we needed to build messaging bots to compete.  

What shouldn’t be lost, though, is the purity and power of the business models being created in China. The fabric that binds the Chinese digital economy and has fostered its seemingly boundless growth is the magic combination of commerce and mobile. Singles Day, the Chinese version of Black Friday, drove $25 billion in sales on Alibaba — 90 percent of which came from mobile.

The lesson we’ve learned thus far in both the U.S. and China is that “consumers spending money” creates the most durable consumer businesses. Google, putting aside all its moonshots and heroic mission statements, is a “starting point” powered by a shopping engine. If you disagree, look at where its revenue comes from…

Google’s recent announcement of Shopping Actions and their movement to a “pay per transaction model” signals a turning point that could forever change the landscape of the digital economy.  

Google’s multi-front battle against Apple, Facebook and Amazon is not evenly weighted; Amazon is the most threatening. It’s the most durable business of the four — and its model is unbounded on two fronts that almost everyone I know would bet their future on: 1) people buying more online, where Amazon makes a disproportionate amount of every dollar spent, and 2) companies needing more cloud computing power (more servers), where Amazon again makes a disproportionate amount of every dollar spent.

To add insult to injury, Amazon is threatening Google by becoming a starting point itself — 55 percent of product searches now originate at Amazon, up from 30 percent just a year ago.

Google, recognizing that consumer behavior was changing in mobile (less searching) and that its model was inferior to Amazon’s in durability and growth prospects, needed to respond. Google needed a model that supported boundless growth and created a “win-win” for its advertising partners — one that resembled Amazon’s relationship with its merchants, not one that continued to increase costs to retailers while capitalizing on a monopolization of search traffic.

Google knows that with its position as the starting point — with Google.com, Google Apps and Android — it has to become a part of the transaction to prevail in the long term. With users in mobile demanding fewer ads and more utility (demanding experiences that look and feel a lot more like what has prevailed in China), Google has every reason in the world to look down and to look east — to become a part of the transaction — to take its piece.  

A collision course between Google and the retailers it relies upon for revenue was on the horizon. Search activity per user was declining in mobile, and user acquisition costs were growing quarter over quarter. Businesses were repeatedly failing to compete with Amazon, and unless Google could create an economically viable growth model for retailers, no one would stand a chance against the commerce juggernaut — neither the retailers nor Google itself.

As I’ve believed for a long time, becoming a part of the transaction is the most favorable business model for all parties; sources of traffic make money when retailers sell things, and, most importantly, this only happens when users find the things they want.  

Shopping Actions is Google’s first ambitious step to satisfy all three parties — businesses and business models all over the world will feel this impact.  

Good work, Sundar.