Google adds new Singapore data center as Southeast Asia reaches 330M internet users

Google is adding a third data center to its presence in Singapore in response to continued internet growth in Southeast Asia.

It’s been three years since it added a second center in Singapore, and during that time the company estimates that something in the region of 70 million people across Southeast Asia have come online for the first time. That takes the region to over 330 million internet users, but with a population of over 650 million, there’s plenty more to come.

The local data centers don’t exclusively serve their immediate region (Asia data centers can handle U.S. traffic, and vice versa), but adding more local capacity does help Google services, and the companies that run their businesses on Google’s cloud, run faster for internet users in that specific region. So not only is it good for locals, it’s also important for Google’s business, which counts the likes of Singapore Airlines, Ninja Van, Wego, Go-Jek and Carousell as notable cloud customers.

The search giant also operates a data center in Taiwan. The company had planned to augment Taiwan and Singapore with a center in Hong Kong, but that project was canned in 2013 due to challenges in securing real estate.

Google opened its first Singapore data center in 2011, and this newest facility will take it to around $850 million spent in Singapore to date, the company confirmed, and to over $1 billion when including Taiwan.

The Istio service mesh hits version 1.0

Istio, the service mesh for microservices from Google, IBM, Lyft, Red Hat and many other players in the open-source community, launched version 1.0 of its tools today.

If you’re not into service meshes, that’s understandable. Few people are. But Istio is probably one of the most important new open-source projects out there right now. It sits at the intersection of a number of industry trends, like containers, microservices and serverless computing, and makes it easier for enterprises to embrace them. Istio now has more than 200 contributors and the code has seen more than 4,000 check-ins since the launch of  version 0.1.

Istio, at its core, handles the routing, load balancing, flow control and security needs of microservices. It sits on top of existing distributed applications and basically helps them talk to each other securely, while also providing logging, telemetry and the necessary policies that keep things under control (and secure). It also features support for canary releases, which allow developers to test updates with a few users before launching them to a wider audience, something that Google and other webscale companies have long done internally.
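
A minimal sketch of what such a canary rule can look like in practice, assuming an Istio-enabled Kubernetes cluster and the official kubernetes Python client; the “reviews” service, the v1/v2 subsets and the namespace are hypothetical placeholders, and the subsets would normally be defined in a companion DestinationRule.

```python
# Minimal sketch: shift 5% of traffic for a hypothetical "reviews" service to a
# canary subset (v2) by applying an Istio VirtualService through the Kubernetes
# custom-resources API. Assumes an Istio-enabled cluster, the official
# `kubernetes` Python client, and a DestinationRule that already defines the
# v1/v2 subsets -- all illustrative, not taken from the article.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews-canary"},
    "spec": {
        "hosts": ["reviews"],  # in-mesh service name (hypothetical)
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 95},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 5},
            ]
        }],
    },
}

api.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1alpha3",
    namespace="default",
    plural="virtualservices",
    body=virtual_service,
)
```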

“In the area of microservices, things are moving so quickly,” Google product manager Jennifer Lin told me. “And with the success of Kubernetes and the abstraction around container orchestration, Istio was formed as an open-source project to really take the next step in terms of a substrate for microservice development as well as a path for VM-based workloads to move into more of a service management layer. So it’s really focused around the right level of abstractions for services and creating a consistent environment for managing that.”

Even before the 1.0 release, a number of companies had already adopted Istio in production, including the likes of eBay and Auto Trader UK. Lin argues that this is a sign that Istio solves a problem a lot of businesses are facing today as they adopt microservices. “A number of more sophisticated customers tried to build their own service management layer and, while we hadn’t yet declared 1.0, we had a number of customers — including a surprising number of large enterprise customers — say, ‘you know, even though you’re not 1.0, I’m very comfortable putting this in production because what I’m comparing it to is much more raw.’”

IBM Fellow and VP of Cloud Jason McGee agrees with this and notes that “our mission since Istio’s launch has been to enable everyone to succeed with microservices, especially in the enterprise. This is why we’ve focused the community around improving security and scale, and heavily leaned our contributions on what we’ve learned from building agile cloud architectures for companies of all sizes.”

A lot of the large cloud players now support Istio directly, too. IBM supports it on top of its Kubernetes Service, for example, and Google even announced a managed Istio service for its Google Cloud users, as well as some additional open-source tooling for serverless applications built on top of Kubernetes and Istio.

Two names missing from today’s party are Microsoft and Amazon. I think that’ll change over time, though, assuming the project keeps its momentum.

Istio also isn’t part of any major open-source foundation yet. The Cloud Native Computing Foundation (CNCF), the home of Kubernetes, is backing linkerd, a project that isn’t all that dissimilar from Istio. Once a 1.0 release of these kinds of projects rolls around, the maintainers often start looking for a foundation that can shepherd the development of the project over time. I’m guessing it’s only a matter of time before we hear more about where Istio will land.

Tech companies can bid on the Pentagon’s $10 billion cloud contract, starting today

On Thursday, the Pentagon opened bidding for a huge cloud computing contract that could be worth as much as $10 billion. Given its size, the Joint Enterprise Defense Infrastructure contract, known as JEDI, is alluring for major cloud computing companies that might not normally do much business with the Department of Defense.

Announced in March, JEDI is structured as a winner-take-all contract with a potential 10-year term, though the Pentagon clarified that the original award will span just the first two years, so all 10 years aren’t set in stone up front.

While it hasn’t yet sparked the same level of outcry as Google’s AI contract with the Pentagon, known as Project Maven, JEDI isn’t uncontroversial. The now-infamous Project Maven was a smaller, more specific contract with direct implications for the military’s use of drones, while JEDI is broader and bigger, seeking a single vendor to provide cloud services for all branches of the military. Google has since abandoned plans to renew the Maven contract, but Maven was likely a kind of test run for the company in the lead-up to JEDI.

While Amazon is largely regarded as the most likely winner, Google, Microsoft, IBM and Oracle are also among the major tech companies expected to throw their hats into the ring. Earlier in the process, it looked possible that companies could band together to form unlikely alliances against the perceived frontrunner, though it appears in the final request for proposal that the Pentagon plans to award the contract to a single company capable of handling it. Interested parties will have until September 17 to submit proposals, so in the months to come we can certainly expect to hear more from companies in the running and workers who oppose JEDI involvement.

Google’s Cloud Launcher is now the GCP Marketplace, adds container-based applications

Cloud Launcher has long been Google’s marketplace for cloud-based applications from third-party vendors, letting you deploy them in Google’s cloud with just a few clicks. That name never quite conveyed that Cloud Launcher also included commercial applications, and that Google could handle the billing for those and combine it with a user’s regular GCP bill, so the company has now decided to rebrand the service as the GCP Marketplace.

That’s not all, though. With today’s update, Google is adding both commercial and open source container-based applications to the service, which users can easily deploy to the Google Kubernetes Engine (or any other Kubernetes service). Until today, the marketplace only featured traditional virtual machines, but a lot of customers were surely looking for container support, too.

As Google rightly argues, Kubernetes Engine can take a lot of the hassle out of managing containers, but deploying them to a Kubernetes cluster is still often a manual process. Google promises that it’ll only take a click to deploy an application from the marketplace to the Kubernetes Engine or any other Kubernetes deployment.

As Google Cloud product manager Brian Singer told me, his team worked closely with the Kubernetes Engine team to make this integration as seamless as possible. The solutions that are in the marketplace today include developer tools like GitLab, graph database Neo4j, the Kasten data management service, as well as open-source projects like WordPress, Spark, Elasticsearch, Nginx and Cassandra.


Google Cloud’s LA region goes online

Google Cloud’s new region in Los Angeles is now online, the company announced today. This isn’t exactly a surprise, given that Google had previously announced a July launch for the region, but it’s a big step for Google, which now boasts five cloud regions in the United States. It was only three years ago that Google opened its second U.S. region and, while it was slow to expand its physical cloud footprint, the company now features 17 regions around the world.

When it first announced this new region, Google positioned it as the ideal region for the entertainment industry. And while that’s surely true, I’m sure we’ll see plenty of other companies use this new region, which features three availability zones, to augment their existing deployments in Google’s other West Coast region in Oregon or as part of their overall global cloud strategy.

The new region is launching with all the core Google Cloud compute services, like App Engine, Compute Engine and Kubernetes Engine, as well as all of Google’s standard database and file storage tools, including the recently launched NAS-like Cloud Filestore service. For businesses that have a physical presence close to L.A., Google also offers two dedicated interconnects to Equinix’s and CoreSite’s local LA1 data centers.

It’s worth noting that Microsoft, which has long favored a strategy of quickly launching as many regions as possible, already offered its users a region in Southern California. AWS doesn’t currently have a presence in the area, though unlike Google, it does offer a region in Northern California.

Big tech companies are looking at Hollywood as the next stage in their play for the cloud

This week, both Microsoft and Google made moves to woo Hollywood to their cloud computing platforms in the latest act of the unfolding drama over who will win the multi-billion dollar business of the entertainment industry as it moves to the cloud.

Google raised the curtain with a splashy announcement that it would set up its fifth U.S. cloud region in Los Angeles. Keeping the focus squarely on tools for artists and designers, the company talked up products like Zync Render, which it acquired back in 2014, and Anvato, a video streaming and monetization platform it acquired in 2016.

While Google just launched its LA hub, Microsoft has operated a cloud region in Southern California for a while, and started wooing Hollywood last year at the National Association of Broadcasters conference, according to Tad Brockway, a general manager for Azure’s storage and media business.

Now Microsoft has responded with a play of its own, partnering with the provider of a suite of hosted graphic design and animation software tools called Nimble Collective.

Founded by a former Pixar and DreamWorks animator, Rex Grignon, Nimble launched in 2014 and has raised just under $10 million from investors including the UCLA VC Fund and New Enterprise Associates, according to Crunchbase.

“Microsoft is committed to helping content creators achieve more using the cloud with a partner-focused approach to this industry’s transformation,” said Tad Brockway, General Manager, Azure Storage, Media and Edge at Microsoft, in a statement. “We’re excited to work with innovators like Nimble Collective to help them transform how animated content is produced, managed and delivered.”

There’s a lot at stake for Microsoft, Google and Amazon as entertainment companies look to migrate to managed computing services. Tech firms like IBM have been pitching the advantages of cloud computing for Hollywood since 2010, but it’s only recently that companies have begun courting the entertainment industry in earnest.

While leaders like Netflix migrated to cloud services in 2012 and 21st Century Fox worked with HP to get its infrastructure on cloud computing, other companies have lagged. Now companies like Microsoft, Google, and Amazon are competing for their business as more companies wake up to the pressures and demands for more flexible technology architectures.

As broadcasters face more demanding consumers, fragmented audiences, and greater time pressures to produce and distribute more content more quickly, cloud architectures for technology infrastructure can provide a solution, tech vendors argue.

Stepping into the breach, cloud computing and technology service providers like Google, Amazon, and Microsoft are trying to buy up startups servicing the entertainment market specifically, or lock in vendors like Nimble through exclusive partnerships that they can leverage to win new customers. For instance, Microsoft bought Avere Systems in January, and Google picked up Anvato in 2016 to woo entertainment companies.

The result should be lower-cost tools for a broader swath of the market and more cross-pollination across different geographies, according to Grignon, Nimble’s chief executive.

“That worldwide reach is very important,” Grignon said. “In media and entertainment there are lots of isolated studios around the world. We afford this pathway between the studio in LA and the studio in Bangalore. We open these doorways.”

There are other, more obvious advantages as well. Streaming, exemplified by the well-understood relationship between Amazon and Netflix, is one, but moving to cloud architectures can also bring costs down, offer several other distribution advantages and simplify processes across pre- and post-production, insiders said.


After twenty years of Salesforce, what Marc Benioff got right and wrong about the cloud

As we enter the 20th year of Salesforce, there’s an interesting opportunity to reflect back on the change that Marc Benioff created with the software-as-a-service (SaaS) model for enterprise software with his launch of Salesforce.com.

This model has been validated by the annual revenue stream of SaaS companies, which is fast approaching $100 billion by most estimates, and it will likely continue to transform many slower-moving industries for years to come.

However, for the cornerstone market in IT (large enterprise-software deals), SaaS represents less than 25 percent of total revenue, according to most market estimates. This split is even evident in the most recent high-profile “SaaS” acquisition of GitHub by Microsoft, with over 50 percent of GitHub’s revenue coming from the sale of its on-prem offering, GitHub Enterprise.

Data privacy and security is also becoming a major issue, with Benioff himself even pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.

It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?


The original cloud software thesis

In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:

  1. Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, after which the customer was obligated to pay an additional 20 percent per year in support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and drawn out.
  2. Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
  3. Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in the expensive computing hardware required to run a given monolithic application. For context, in 1999 a gigabyte of RAM cost about $1,000 and a terabyte of disk storage was $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 in order to run Siebel’s CRM product that might serve 200 end-users (a quick back-of-the-envelope on these numbers follows this list).
  4. Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took 6-18 months to complete the installation process. Additionally, upgrades were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, generally with each IT org becoming the ops team executing a physical run-book for each application they purchased.
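
To put those hardware numbers in perspective, here is a quick back-of-the-envelope calculation using only the 1999 figures cited above and the 2018 prices cited later in this piece; the 32 GB / 2 TB server spec is a hypothetical example, not a figure from the text.

```python
# Back-of-the-envelope on the multi-tenancy argument, using the figures cited
# in this piece (1999: ~$1,000/GB RAM, ~$30,000/TB disk, ~$385,000 of hardware
# for ~200 Siebel users; 2018: ~$5/GB RAM, ~$30/TB disk). The 32 GB / 2 TB
# server spec is a hypothetical example, not a number from the article.
ram_gb, disk_tb = 32, 2

cost_1999 = ram_gb * 1_000 + disk_tb * 30_000   # RAM + disk alone in 1999
cost_2018 = ram_gb * 5 + disk_tb * 30           # the same capacity in 2018

per_user_1999 = 385_000 / 200                   # hardware cost per Siebel seat

print(f"1999 RAM+disk: ${cost_1999:,}  |  2018 RAM+disk: ${cost_2018:,}")
print(f"1999 hardware per Siebel seat: ~${per_user_1999:,.0f}")
```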

These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services during their early days in the mid-to-late ’00s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting cloud computing, storage and network capacity has been many more SaaS applications than ever would have been possible if everybody had to follow the model Salesforce did several years earlier.

Suddenly able to access computing resources by the hour—and free from large upfront capital investments or having to manage complex customer installations—startups forsook software for SaaS in the name of economics, simplicity and much faster user growth.


It’s a different IT world in 2018

Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:

  • Subscription is the right pricing model: The subscription pricing model for software has proven to be the most effective way to create customer and vendor success. Years ago, stalwart products like Microsoft Office and the Adobe Suite successfully made the switch from upfront licenses to thriving subscription businesses. Today, subscription pricing is the norm for many flavors of software and services.
  • Better user experience matters: Software accessed through the browser or thin, native mobile apps (leveraging the same APIs and delivered seamlessly through app stores) has long since become ubiquitous. The consumerization of IT was a real trend, and it has driven the habits from our personal lives into our business lives.

In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and software-development practices developed inside those “web scale” companies played major roles in spurring these changes:

  • Computing is now cheap: The cost of compute and storage has been driven down so dramatically that there are limited cost savings in shared resources. Today, a gigabyte of RAM is about $5 and a terabyte of disk storage is about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, at the same time that Salesforce was founded, Google was running on its first data center, with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
  • Installing software is now much easier: The process of installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packaged as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes (a minimal deployment sketch follows this list).
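
As a concrete illustration of that last point, here is a minimal sketch of installing a containerized application on any Kubernetes cluster with the official kubernetes Python client; the app name, image and replica count are hypothetical placeholders rather than anything described in this piece.

```python
# Minimal sketch: install a containerized app on a Kubernetes cluster in one
# API call, using the official `kubernetes` Python client. The app name, image
# and replica count are hypothetical placeholders, not taken from the article.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-app"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "demo-app"}},
        "template": {
            "metadata": {"labels": {"app": "demo-app"}},
            "spec": {
                "containers": [{
                    "name": "demo-app",
                    "image": "nginx:1.15",  # any container image works here
                    "ports": [{"containerPort": 80}],
                }]
            },
        },
    },
}

# The same manifest works on GKE, a private cluster, or a laptop running minikube.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```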


What Benioff didn’t foresee

Several other factors have also emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and management can be traced directly to the success of SaaS itself, and of cloud computing in general:

  1. Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS, while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS. This allows enterprises (like Capital One) to relinquish hardware management and the headache it often entails, but retain control over networks, software and data. It is also far easier for enterprises to get the necessary assurance for the security posture of Amazon, Microsoft and Google than it is to get the same level of assurance for each of the tens of thousands of possible SaaS vendors in the world.
  2. Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
  3. Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
  4. Applications are much more portable: The SaaS revolution has resulted in software vendors developing their applications to be cloud-first, but they’re now building those applications using technologies (such as containers) that can help replicate the deployment of those applications onto any infrastructure. This shift to what’s called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC far more easily than was previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private, cloud-native-instance-first approach to their application offerings. Meanwhile, SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning to Kubernetes-driven, cloud-native architectures that provide this optionality in the future.
  5. The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g., GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license—or the nearest thing to one—for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers—another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.


The future of software is location agnostic

As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.

The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by some serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic toward where their applications are deployed and who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave control), while maintaining what vendors have come to love about cloud-native development and cloud-based resources.

So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.

Investing in frontier technology is (and isn’t) cleantech all over again

I entered the world of venture investing a dozen years ago.  Little did I know that I was embarking on a journey to master the art of balancing contradictions: building up experience and pattern recognition to identify outliers, emphasizing what’s possible over what’s actual, generating comfort and consensus around a maverick founder with a non-consensus view, seeking the comfort of proof points in startups that are still very early, and most importantly, knowing that no single lesson learned can ever be applied directly in the future as every future scenario will certainly be different.

I was fortunate to start my venture career at a fund specializing in funding “Frontier” technology companies. Real estate was white hot, banks were practically giving away money, and VCs were hungry to fund hot startups.

I quickly found myself in the same room as mainstream software investors looking for what’s coming after search, social, ad-tech, and enterprise software. Cleantech was very compelling: an opportunity to make money while saving our planet.  Unfortunately for most, neither happened: they lost their money and did little to save the planet.

Fast forward a decade: after investors scored their wins in online lending, cloud storage, and on-demand, I find myself, again, in the same room with consumer and cloud investors venturing into “Frontier Tech”. They are dazzled by the founders’ presentations, and proud to have a role in funding the turning of the seemingly impossible into the possible through science. However, what lessons did they take away from the Cleantech cycle? What should Frontier Tech founders and investors be thinking about to avoid the same fate?

Coming from a predominantly academic background, I was excited to be part of the emerging trend of funding founders leveraging technology to make how we generate, move, and consume our natural resources more efficient and sustainable. I was thrilled to be digging into technologies underpinning new batteries, photovoltaics, wind turbines, superconductors, and power electronics.  

To prove out their business models, these companies needed to build out factories, supply chains, and distribution channels. It wasn’t long until the core technology development became a small piece of an otherwise complex, expensive operation. The hot energy startup factory started to look and feel mysteriously like a magnetic hard drive factory down the street. Wait a minute, that’s because much of the equipment and staff did come from factories making components for PCs; but this time they were making products for generating, storing, and moving energy more renewably. So what went wrong?

Whether it was solar, wind, or batteries, the metrics were pretty similar: dollars per megawatt, mass per megawatt, or, multiplying by time, dollars and mass per unit of energy, whether for the factories or the systems. Energy is pretty abundant, so the race was on to produce and handle a commodity. Getting started as a real competitive business meant going big, since many of the metrics above depended on size and scale. Hundreds of millions of dollars of venture money only went so far.

The onus was on banks, private equity, engineering firms, and other entities that do not take technology risk to make the leap of faith and take a product or factory from one-tenth scale to full scale. The rest is history: most cleantech startups hit a funding valley of death. They needed to raise big money while sitting at high valuations, without the kernel of a real business to attract the investors who write those big checks to scale up businesses.


Frontier Tech, like Cleantech, can be capital-intensive. Whether it’s satellite communications, driverless cars, AI chips, or quantum computing, Frontier Tech startups, like their Cleantech counterparts, need relatively large amounts of capital to reach the point where they can demonstrate the kernel of a competitive business. In other words, they typically need at least tens of millions of dollars to show they can sell something and profitably scale that business into a big market. Some money is dedicated to technology development but, like cleantech, a disproportionate amount will go into building up an operation to support the business. Here are a couple of examples:

  • Satellite communications: It takes a few million dollars to demonstrate a new radio and spacecraft. It takes tens of millions of dollars to produce the satellites, put them into orbit, and build up the ground station infrastructure, software, systems, and operations needed to serve fickle enterprise customers. All of this while facing competition from incumbent or in-house efforts. At what point will the economics of the business attract a conventional growth investor to fund expansion? If Cleantech taught us anything, it’s that the big money would prefer to watch from the sidelines for longer than you’d think.
  • Quantum compute: Moore’s law is improving new computers at a breakneck pace, but the way they get implemented is pretty incremental. Basic compute architectures date back to the dawn of computing, and new devices can take decades to find their way into servers. For example, NAND flash technology dates back to the ’80s, found its way into devices in the ’90s, and has been slowly penetrating data centers in the past decade. The same goes for GPUs, even with all the hype around AI. Quantum compute companies could offer a service directly to users, i.e., homomorphic computing, advanced encryption/decryption, or molecular simulations. However, that would be one of the rare occasions where a novel computing-machine company offered computing as a service rather than just selling machines. If I had to guess, building the quantum computers will be relatively quick; building the business will be expensive.
  • Operating systems for driverless cars: Tremendous progress has been made since Google first presented its early work in 2011. Dozens of companies are building software that does some combination of perception, prediction, planning, mapping, and simulation. Every operator of autonomous cars, whether vertically integrated like Zoox or working in partnerships like GM/Cruise, has its own proprietary technology stack. Unlike building an iPhone app, where the tools are abundant and the platform is well understood, integrating a complete software module into an autonomous driving system may take more effort than putting together the original code in the first place.

How are Frontier-Tech companies advantaged relative to their Cleantech counterparts? For starters, most aren’t producing a commodity: it’s easier to build a Frontier-tech company that doesn’t need to raise big dollars before demonstrating the kernel of an interesting business. On rare occasions, if the Frontier tech startup is a pioneer in its field, then it can be acquired for top dollar for the quality of its results and its team.

Recent examples are Salesforce’s acquisition of MetaMind, GM’s acquisition of Cruise, and Intel’s acquisition of Nervana (a Lux investment). However, as more competing companies get to work on a new technology, the sense of urgency to acquire rapidly diminishes as the scarce, emerging technology quickly becomes widely available: there are now scores of AI, autonomous car, and AI chip companies out there. Furthermore, as technology becomes more complex, its cost of integration into a product (think about the driverless car example above) also skyrockets. Knowing this likely liability, acquirers will tend to pay less.

Creative founding teams will find ways to incrementally build interesting businesses as they are building up their technologies.  

I encourage founders and investors to emphasize the businesses they are building through their inventions. I encourage founders to rethink plans that require tens of millions of dollars before they can sell products, while warning them not to chase revenue for the sake of revenue.

I suggest they look closely at their plans and find creative ways to start penetrating, or building, exciting markets, and hence interesting businesses, with modest amounts of capital. I advise them to work with investors who, regardless of whether they saw how Cleantech unfolded, are convinced that their dollars can take the company to the point where it can engage customers with an interesting product and a sense for how it can scale into an attractive business.

Is America’s national security Facebook and Google’s problem?

Outrage that Facebook made the private data of over 87 million of its users available to the Trump campaign has stoked fears that big US-based technology companies are tracking our every move and misusing our personal data to manipulate us without adequate transparency, oversight, or regulation.

These legitimate concerns about the privacy threat these companies potentially pose must be balanced by an appreciation of the important role data-optimizing companies like these play in promoting our national security.

In his testimony to the combined US Senate Commerce and Judiciary Committees, Facebook CEO Mark Zuckerberg was not wrong to present his company as a last line of defense in an “ongoing arms race” with Russia and others seeking to spread disinformation and manipulate political and economic systems in the US and around the world.

The vast majority of the two billion Facebook users live outside the United States, Zuckerberg argued, and the US should be thinking of Facebook and other American companies competing with foreign rivals in “strategic and competitive” terms. Although the American public and US political leaders are rightly grappling with critical issues of privacy, we will harm ourselves if we don’t recognize the validity of Zuckerberg’s national security argument.

Facebook CEO and founder Mark Zuckerberg testifies during a US House Committee on Energy and Commerce hearing about Facebook on Capitol Hill in Washington, DC, April 11, 2018. (Photo: SAUL LOEB/AFP/Getty Images)

Examples are everywhere of big tech companies increasingly being seen as a threat. US President Trump has been on a rampage against Amazon, and multiple media outlets have called for the company to be broken up as a monopoly. A recent New York Times article, “The Case Against Google,” argued that Google is stifling competition and innovation and suggested it might be broken up as a monopoly. “It’s time to break up Facebook,” Politico argued, calling Facebook “a deeply untransparent, out-of-control company that encroaches on its users’ privacy, resists regulatory oversight and fails to police known bad actors when they abuse its platform.” US Senator Bill Nelson made a similar point when he asserted during the Senate hearings that “if Facebook and other online companies will not or cannot fix the privacy invasions, then we are going to have to. We, the Congress.”

While many concerns like these are valid, seeing big US technology companies solely in the context of fears about privacy misses the point that these companies play a far broader strategic role in America’s growing geopolitical rivalry with foreign adversaries. And while Russia is rising as a threat in cyberspace, China represents a more powerful and strategic rival in the 21st century tech convergence arms race.

Data is to the 21st century what oil was to the 20th: a key asset for driving wealth, power, and competitiveness. Only companies with access to the best algorithms and the biggest and highest quality data sets will be able to glean the insights and develop the models driving innovation forward. As Facebook’s failure to protect its users’ private information shows, these data pools are both extremely powerful and open to abuse. But because countries with the leading AI and pooled data platforms will have the most thriving economies, big technology platforms are playing a more important national security role than ever in our increasingly big data-driven world.


BEIJING, CHINA – 2017/07/08: Robots dance for the audience on the expo. On Jul. 8th, Beijing International Consumer electronics Expo was held in Beijing China National Convention Center. (Photo by Zhang Peng/LightRocket via Getty Images)

China, which has set a goal of becoming “the world’s primary AI innovation center” by 2025, occupying “the commanding heights of AI technology” by 2030, and the “global leader” in “comprehensive national strength and international influence” by 2050, understands this. To build a world-beating AI industry, Beijing has kept American tech giants out of the Chinese market for years and stolen their intellectual property while putting massive resources into developing its own strategic technology sectors in close collaboration with national champion companies like Baidu, Alibaba, and Tencent.

Examples of China’s progress are everywhere.

Close to a billion Chinese people use Tencent’s instant communication and cashless platforms. In October 2017, Alibaba announced a three-year investment of $15 billion for developing and integrating AI and cloud-computing technologies that will power the smart cities and smart hospitals of the future. Beijing is investing $9.2 billion in the golden combination of AI and genomics to lead personalized health research to new heights. More ominously, Alibaba is prototyping a new form of ubiquitous surveillance that deploys millions of cameras equipped with facial recognition within testbed cities and another Chinese company, Cloud Walk, is using facial recognition to track individuals’ behaviors and assess their predisposition to commit a crime.

In all of these areas, China is ensuring that individual privacy protections do not get in the way of bringing together the massive data sets Chinese companies will need to lead the world. As Beijing well understands, training technologists, amassing massive high-quality data sets, and accumulating patents are key to competitive and security advantage in the 21st century.

“In the age of AI, a U.S.-China duopoly is not just inevitable, it has already arrived,” said Kai-Fu Lee, founder and CEO of Beijing-based technology investment firm Sinovation Ventures and a former top executive at Microsoft and Google. The United States should absolutely not follow China’s lead and disregard the privacy protections of our citizens. Instead, we must follow Europe’s lead and do significantly more to enhance them. But we also cannot blind ourselves to the critical importance of amassing big data sets for driving innovation, competitiveness, and national power in the future.

UNITED STATES – SEPTEMBER 24: Aerial view of the Pentagon building photographed on Sept. 24, 2017. (Photo By Bill Clark/CQ Roll Call)

In its 2017 unclassified budget, the Pentagon spent about $7.4 billion on AI, big data and cloud computing, a tiny fraction of America’s overall expenditure on AI. Clearly, winning the future will not be a government activity alone, but there is a big role government can and must play. Even though Google remains the most important AI company in the world, the U.S. still crucially lacks a coordinated national strategy on AI and emerging digital technologies. While the Trump administration has gutted the White House Office of Science and Technology Policy, proposed massive cuts to US science funding, and engaged in a sniping contest with American tech giants, the Chinese government has outlined a “military-civilian integration development strategy” to harness AI to enhance Chinese national power.

FBI Director Christopher Wray correctly pointed out that America has now entered a “whole of society” rivalry with China. If the United States thinks of our technology champions solely within our domestic national framework, we might spur some types of innovation at home while stifling other innovations that big American companies with large teams and big data sets may be better able to realize.

America will be more innovative the more we nurture a healthy ecosystem of big, AI driven companies while also empowering smaller startups and others using blockchain and other technologies to access large and disparate data pools. Because breaking up US technology giants without a sufficient analysis of both the national and international implications of this step could deal a body blow to American prosperity and global power in the 21st century, extreme caution is in order.

America’s largest technology companies cannot and should not be dragooned into participating in America’s growing geopolitical rivalry with China. Based on recent protests by Google employees against the company’s collaboration with the US Defense Department analyzing military drone footage, perhaps they will not.

But it would be self-defeating for American policymakers not to at least partly consider America’s tech giants in the context of the important role they play in America’s national security. America definitely needs significantly stronger regulation to foster innovation and protect privacy and civil liberties, but breaking up America’s tech giants without appreciating the broader role they play in strengthening our national competitiveness and security would be a tragic mistake.