Y Combinator invests in non-invasive breast cancer screening bra EVA

According to a report by the American Cancer Society, an estimated 266,120 women will be newly diagnosed with breast cancer in the United States this year and (according to a 2016 estimate) can expect to pay between $60,000 and $134,000 on average for treatment and care. Yet even after those hundreds of thousands of dollars, and the unquantifiable emotional stress on patients and their families, the American Cancer Society still estimates that 40,920 women will lose their battle with the disease this year.

Worldwide, roughly 1.7 million women are diagnosed with the disease each year, according to a 2012 estimate by The World Cancer Research Fund International.

While these numbers are stark, they do little to fully capture just how devastating a breast cancer diagnosis is for women and their loved ones. This is a feeling that Higia Technologies’ co-founder and CEO Julián Ríos Cantú is unfortunately very familiar with.

“My mom is a two-time breast cancer survivor,” Cantú told TechCrunch. “The first time she was diagnosed I was eight years old.”

Cantú says that his mother’s second diagnosis was originally missed through standard screenings because her high breast density obscured the tumors from the X-ray. As a result, she lost both of her breasts, but has since fully recovered.

“At that moment I realized that if that was the case for a woman with private insurance and a prevention mindset, then for most women in developing countries, like Mexico where we’re from, the outcome could’ve not been a mastectomy but death,” said Cantú.

Following his mother’s experience, Cantú resolved to find a way to protect women’s lives by supporting them in identifying breast abnormalities and cancers early enough to ensure the highest likelihood of survival.

To do this, at the age of 18 Cantú designed EVA — a bio-sensing bra insert that uses thermal sensing and artificial intelligence to identify abnormal temperatures in the breast that can correlate to tumor growth. Cantú says that EVA is not only an easy tool for self-screening but also fills in gaps in current screening technology.

Today, women have fairly limited options when it comes to breast cancer screening. They can opt for a breast ultrasound (which has lower specificity than other options) or a breast MRI (which has higher associated costs), but the standard option is a mammogram every one or two years for women 45 and older. This method requires a visit to a doctor, manual manipulation of the breasts by a technologist and exposure to low levels of radiation for an X-ray scan of the breast tissue.

While this method is relatively reliable, there are still crucial shortcomings, Higia Technologies’ medical adviser Dr. Richard Kaszynski M.D., PhD told TechCrunch.

“We need to identify a real-world solution to diagnosing breast cancer earlier,” said Dr. Kaszynski. “It’s always a trade-off when we’re talking about mammography because you have the radiation exposure, discomfort and anxiety in regards to exposing yourself to a third-party.”

Dr. Kaszynski went on to say that these mammograms, performed only every year or two, also leave a gap in care in which interval cancers — cancers that begin to take hold between screenings — have time to grow unhindered.

Additionally, Dr. Kaszynski says mammograms are not highly sensitive when it comes to detecting tumors in dense breast tissue, like that of Cantú’s mom. Dense breast tissue, which is more common in younger women and is present in 40 percent of women globally and 80 percent of Asian women, can mask the presence of tumors in the breast from mammograms.

Through its use of non-invasive thermal sensors, EVA can collect thermal data across a variety of breast densities, enabling women of all ages to perform breast examinations more easily (and more frequently).

Here’s how it works:

To start, the user inserts the thermal sensing cups (which come in three standard sizes, ranging from A to D) into a sports bra, opens the associated EVA Health App, follows the instructions and waits for 60 minutes while the cups collect thermal data. From there, EVA sends the data via Bluetooth to the app, and an AI analyzes the results to provide the user with an evaluation. If EVA believes the user may have an abnormality that puts them at risk, the app will recommend follow-up steps for further screening with a healthcare professional.
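To make that collect-then-analyze loop concrete, here is a minimal sketch of the flow described above. It is purely illustrative: the names, the 60-minute parameter and the toy threshold “model” are assumptions for the example, not Higia’s actual app or algorithm.

```python
# Illustrative sketch of the screening flow described above (collect thermal
# data, analyze it, recommend follow-up). All names, thresholds and the toy
# "model" are hypothetical -- this is not Higia's actual app or algorithm.
from dataclasses import dataclass
from typing import List

@dataclass
class ThermalSample:
    sensor_id: int        # which sensor in the cup took the reading
    temperature_c: float  # degrees Celsius

def collect_session(samples: List[ThermalSample], minutes: int = 60) -> List[float]:
    """Stand-in for the 60-minute Bluetooth collection step (the 'minutes'
    argument is only here to mirror the wait described in the article)."""
    return [s.temperature_c for s in samples]

def assess(readings: List[float], asymmetry_threshold_c: float = 1.0) -> str:
    """Toy analysis: flag a large temperature spread as 'needs follow-up'.
    A real system would use a trained model, not a single threshold."""
    spread = max(readings) - min(readings)
    if spread > asymmetry_threshold_c:
        return "Abnormal pattern detected - please follow up with a healthcare professional."
    return "No abnormal pattern detected in this session."

if __name__ == "__main__":
    session = [ThermalSample(i, t) for i, t in enumerate([34.1, 34.3, 35.6, 34.2])]
    print(assess(collect_session(session)))
```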

While sacrificing your personal health data to the whims of an AI might seem like a scary (and dangerous, if the device were to be hacked) idea to some, Cantú says Higia Technologies has taken steps to protect its users’ data, including advanced encryption of its server and a HIPAA-compliant privacy infrastructure.

So far, EVA has undergone clinical trials in Mexico, and through these trials the device has shown 87.9 percent sensitivity and 81.7 percent specificity. In Mexico, the company has already sold 5,000 devices and plans to begin shipping the first several hundred by October of this year.
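For readers who don’t live in diagnostics: sensitivity is the share of real cancers a test catches, and specificity is the share of healthy cases it correctly clears. The snippet below computes both from a confusion matrix, using made-up trial counts chosen only to land near EVA’s reported figures.

```python
# Sensitivity and specificity from a confusion matrix -- the two metrics
# quoted for EVA's Mexican trials. The counts below are invented for
# illustration; they are not Higia's trial data.
def sensitivity(tp: int, fn: int) -> float:
    """Share of true cases the test catches (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Share of healthy cases the test correctly clears (true negative rate)."""
    return tn / (tn + fp)

# Hypothetical counts: 100 women with tumors, 100 without.
tp, fn = 88, 12   # tumors caught vs. missed
tn, fp = 82, 18   # healthy correctly cleared vs. false alarms
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # ~88%
print(f"specificity = {specificity(tn, fp):.1%}")  # ~82%
```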

And the momentum for EVA is only increasing. In 2017, Cantú was awarded Mexico’s Presidential Medal for Science and Technology and so far this year Higia Technologies has won first place in the SXSW’s International Pitch Competition, been named one of “30 Most Promising Businesses of 2018” by Forbes Magazine Mexico and this summer received a $120,000 investment from Y Combinator.

Moving forward, the company is looking to enter the U.S. market and has plans to begin clinical trials with Stanford Medicine X in October 2018 that should run for about a year. Following these trials, Dr. Kaszynski says that Higia Technologies will continue the process of seeking FDA approval to sell the inserts first as a medical device, accessible at a doctor’s office, and then as a device that users can have at home.

The final pricing for the device is still being decided, but Cantú says he wants the product to be as affordable and accessible as possible so it can be the first choice for women in developing countries where preventative cancer screening is desperately needed.

UK report warns DeepMind Health could gain ‘excessive monopoly power’

DeepMind’s foray into digital health services continues to raise concerns. The latest worries are voiced by a panel of external reviewers appointed by the Google-owned AI company to report on its operations after its initial data-sharing arrangements with the U.K.’s National Health Service (NHS) ran into a major public controversy in 2016.

The DeepMind Health Independent Reviewers’ 2018 report flags a series of risks and concerns, as they see it, including the potential for DeepMind Health to be able to “exert excessive monopoly power” as a result of the data access and streaming infrastructure that’s bundled with provision of the Streams app — and which, contractually, positions DeepMind as the access-controlling intermediary between the structured health data and any other third parties that might, in the future, want to offer their own digital assistance solutions to the Trust.

While the underlying FHIR (Fast Healthcare Interoperability Resources) API deployed by DeepMind for Streams is an open standard, the contract between the company and the Royal Free Trust funnels connections via DeepMind’s own servers, and prohibits connections to other FHIR servers — a commercial structure that seemingly works against the openness and interoperability DeepMind’s co-founder Mustafa Suleyman has claimed to support.
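For context on what an “open” FHIR API normally looks like, here is a minimal sketch of the standard FHIR read interaction. The base URL points at a public demo server and the patient ID is a placeholder; the Streams endpoints themselves are not public, which is the reviewers’ point — connections to them are contractually routed through DeepMind’s own servers.

```python
# Minimal sketch of a standard FHIR REST read. The base URL is a public HAPI
# demo server used purely as a placeholder, and the patient ID is likewise a
# placeholder -- this is the generic FHIR interaction, not Streams' own API.
import requests

FHIR_BASE = "http://hapi.fhir.org/baseR4"  # placeholder/demo server

def read_patient(patient_id: str) -> dict:
    """GET [base]/Patient/[id] -- the canonical FHIR 'read' interaction."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
    )
    resp.raise_for_status()
    return resp.json()  # a FHIR Patient resource as JSON

if __name__ == "__main__":
    patient = read_patient("example")  # replace with a real resource id
    print(patient.get("resourceType"), patient.get("id"))
```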

“There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position,” the reviewers write.

Though they point to DeepMind’s “stated commitment to interoperability of systems,” and “their adoption of the FHIR open API” as positive indications, writing: “This means that there is potential for many other SMEs to become involved, creating a diverse and innovative marketplace which works to the benefit of consumers, innovation and the economy.”

“We also note DeepMind Health’s intention to implement many of the features of Streams as modules which could be easily swapped, meaning that they will have to rely on being the best to stay in business,” they add. 

However, stated intentions and future potentials are clearly not the same as on-the-ground reality. And, as it stands, a technically interoperable app-delivery infrastructure is being encumbered by prohibitive clauses in a commercial contract — and by a lack of regulatory pushback against such behavior.

The reviewers also raise concerns about an ongoing lack of clarity around DeepMind Health’s business model — writing: “Given the current environment, and with no clarity about DeepMind Health’s business model, people are likely to suspect that there must be an undisclosed profit motive or a hidden agenda. We do not believe this to be the case, but would urge DeepMind Health to be transparent about their business model, and their ability to stick to that without being overridden by Alphabet. For once an idea of hidden agendas is fixed in people’s mind, it is hard to shift, no matter how much a company is motivated by the public good.”

“We have had detailed conversations about DeepMind Health’s evolving thoughts in this area, and are aware that some of these questions have not yet been finalised. However, we would urge DeepMind Health to set out publicly what they are proposing,” they add.

DeepMind has suggested it wants to build healthcare AIs that are capable of charging by results. But Streams does not involve any AI. The service is also being provided to NHS Trusts for free, at least for the first five years — raising the question of how exactly the Google-owned company intends to recoup its investment.

Google of course monetizes a large suite of free-at-the-point-of-use consumer products — such as the Android mobile operating system; its cloud email service Gmail; and the YouTube video sharing platform, to name three — by harvesting people’s personal data and using that information to inform its ad targeting platforms.

Hence the reviewers’ recommendation for DeepMind to set out its thinking on its business model to avoid its intentions vis-a-vis people’s medical data being viewed with suspicion.

The company’s historical modus operandi also underlines the potential monopoly risks if DeepMind is allowed to carve out a dominant platform position in digital healthcare provision — given how effectively its parent has been able to turn a free-for-OEMs mobile OS (Android) into global smartphone market OS dominance, for example.

So, while DeepMind only has a handful of contracts with NHS Trusts for the Streams app and delivery infrastructure at this stage, the reviewers’ concerns over the risk of the company gaining “excessive monopoly power” do not seem overblown.

They are also worried about DeepMind’s ongoing vagueness about how exactly it works with its parent Alphabet, and what data could ever be transferred to the ad giant — an inevitably queasy combination when stacked against DeepMind’s handling of people’s medical records.

“To what extent can DeepMind Health insulate itself against Alphabet instructing them in the future to do something which it has promised not to do today? Or, if DeepMind Health’s current management were to leave DeepMind Health, how much could a new CEO alter what has been agreed today?” they write.

“We appreciate that DeepMind Health would continue to be bound by the legal and regulatory framework, but much of our attention is on the steps that DeepMind Health have taken to take a more ethical stance than the law requires; could this all be ended? We encourage DeepMind Health to look at ways of entrenching its separation from Alphabet and DeepMind more robustly, so that it can have enduring force to the commitments it makes.”

Responding to the report’s publication on its website, DeepMind writes that it’s “developing our longer-term business model and roadmap.”

“Rather than charging for the early stages of our work, our first priority has been to prove that our technologies can help improve patient care and reduce costs. We believe that our business model should flow from the positive impact we create, and will continue to explore outcomes-based elements so that costs are at least in part related to the benefits we deliver,” it continues.

So it has nothing to say to defuse the reviewers’ concerns about making its intentions for monetizing health data plain — beyond deploying a few choice PR soundbites.

On its links with Alphabet, DeepMind also has little to say, writing only that: “We will explore further ways to ensure there is clarity about the binding legal frameworks that govern all our NHS partnerships.”

“Trusts remain in full control of the data at all times,” it adds. “We are legally and contractually bound to only using patient data under the instructions of our partners. We will continue to make our legal agreements with Trusts publicly available to allow scrutiny of this important point.”

“There is nothing in our legal agreements with our partners that prevents them from working with any other data processor, should they wish to seek the services of another provider,” it also claims in response to additional questions we put to it.

“We hope that Streams can help unlock the next wave of innovation in the NHS. The infrastructure that powers Streams is built on state-of-the-art open and interoperable standards, known as FHIR. The FHIR standard is supported in the UK by NHS Digital, NHS England and the INTEROPen group. This should allow our partner trusts to work more easily with other developers, helping them bring many more new innovations to the clinical frontlines,” it adds in additional comments to us.

“Under our contractual agreements with relevant partner trusts, we have committed to building FHIR API infrastructure within the five year terms of the agreements.”

Asked about the progress it’s made on a technical audit infrastructure for verifying access to health data, which it announced last year, it reiterated the wording on its blog, saying: “We will remain vigilant about setting the highest possible standards of information governance. At the beginning of this year, we appointed a full time Information Governance Manager to oversee our use of data in all areas of our work. We are also continuing to build our Verifiable Data Audit and other tools to clearly show how we’re using data.”

So developments on that front look as slow as we expected.

The Google-owned U.K. AI company began its push into digital healthcare services in 2015, quietly signing an information-sharing arrangement with a London-based NHS Trust that gave it access to around 1.6 million people’s medical records for developing an alerts app for a condition called Acute Kidney Injury.

It also inked an MoU with the Trust in which the pair set out their ambition to apply AI to NHS data sets. (They even went so far as to get ethical sign-off for an AI project — but have consistently claimed the Royal Free data was not fed to any AIs.)

However, the data-sharing collaboration ran into trouble in May 2016 when the scope of patient data being shared by the Royal Free with DeepMind was revealed (via investigative journalism, rather than by disclosures from the Trust or DeepMind).

None of the ~1.6 million people whose non-anonymized medical records had been passed to the Google-owned company had been informed or asked for their consent. And questions were raised about the legal basis for the data-sharing arrangement.

Last summer the U.K.’s privacy regulator concluded an investigation of the project — finding that the Royal Free NHS Trust had broken data protection rules during the app’s development.

Yet despite ethical questions and regulatory disquiet about the legality of the data sharing, the Streams project steamrollered on. And the Royal Free Trust went on to implement the app for use by clinicians in its hospitals, while DeepMind has also signed several additional contracts to deploy Streams to other NHS Trusts.

More recently, the law firm Linklaters completed an audit of the Royal Free Streams project, after being commissioned by the Trust as part of its settlement with the ICO. Though this audit only examined the current functioning of Streams. (There has been no historical audit of the lawfulness of people’s medical records being shared during the build and test phase of the project.)

Linklaters did recommend that the Royal Free terminate its wider MoU with DeepMind — and the Trust has confirmed to us that it will be following the firm’s advice.

“The audit recommends we terminate the historic memorandum of understanding with DeepMind which was signed in January 2016. The MOU is no longer relevant to the partnership and we are in the process of terminating it,” a Royal Free spokesperson told us.

So DeepMind, probably the world’s most famous AI company, is in the curious position of being involved in providing digital healthcare services to U.K. hospitals that don’t actually involve any AI at all. (Though it does have some ongoing AI research projects with NHS Trusts too.)

In mid 2016, at the height of the Royal Free DeepMind data scandal — and in a bid to foster greater public trust — the company appointed the panel of external reviewers who have now produced their second report looking at how the division is operating.

And it’s fair to say that much has happened in the tech industry since the panel was appointed to further undermine public trust in tech platforms and algorithmic promises — including the ICO’s finding that the initial data-sharing arrangement between the Royal Free and DeepMind broke U.K. privacy laws.

The eight members of the panel for the 2018 report are: Martin Bromiley OBE; Elisabeth Buggins CBE; Eileen Burbidge MBE; Richard Horton; Dr. Julian Huppert; Professor Donal O’Donoghue; Matthew Taylor; and Professor Sir John Tooke.

In their latest report the external reviewers warn that the public’s view of tech giants has “shifted substantially” versus where it was even a year ago — asserting that “issues of privacy in a digital age are if anything, of greater concern.”

At the same time politicians are also gazing rather more critically on the works and social impacts of tech giants.

The U.K. government has also been keen to position itself as a supporter of AI, providing public funds for the sector and, in its Industrial Strategy white paper, identifying AI and data as one of four so-called “Grand Challenges” where it believes the U.K. can “lead the world for years to come” — even specifically name-checking DeepMind as one of a handful of leading-edge homegrown AI businesses for the country to be proud of.

Still, questions over how to manage and regulate public sector data and AI deployments — especially in highly sensitive areas such as healthcare — remain to be clearly addressed by the government.

Meanwhile, the encroaching ingress of digital technologies into the healthcare space — even when the tech doesn’t involve any AI — is already presenting major challenges by putting pressure on existing information governance rules and structures, and raising the specter of monopolistic risk.

Asked whether it offers any guidance to NHS Trusts around digital assistance for clinicians, including specifically whether it requires multiple options be offered by different providers, the NHS’ digital services provider, NHS Digital, referred our question on to the Department of Health (DoH), saying it’s a matter of health policy.

The DoH in turn referred the question to NHS England, the executive non-departmental body which commissions contracts and sets priorities and directions for the health service in England.

And at the time of writing, we’re still waiting for a response from the steering body.

Ultimately it looks like it will be up to the health service to put in place a clear and robust structure for AI and digital decision services that fosters competition by design by baking in a requirement for Trusts to support multiple independent options when procuring apps and services.

Without that important check and balance, the risk is that platform dynamics will quickly dominate and control the emergent digital health assistance space — just as big tech has dominated consumer tech.

But publicly funded healthcare decisions and data sets should not simply be handed to the single market-dominating entity that’s willing and able to burn the most resource to own the space.

Nor should government stand by and do nothing when there’s a clear risk that a vital area of digital innovation is at risk of being closed down by a tech giant muscling in and positioning itself as a gatekeeper before others have had a chance to show what their ideas are made of, and before even a market has had the chance to form. 

What we know about Google’s Duplex demo so far

The highlight of Google’s I/O keynote earlier this month was the reveal of Duplex, a system that can make calls to set up a salon appointment or a restaurant reservation for you by calling those places, chatting with a human and getting the job done. That demo drew lots of laughs at the keynote, but after the dust settled, plenty of ethical questions popped up because of how Duplex tries to fake being human. Over the course of the last few days, those were joined by questions about whether the demo was staged or edited after Axios asked Google a few simple questions about the demo that Google refused to answer.

We have reached out to Google with a number of very specific questions about this and have not heard back. As far as I can tell, the same is true for other outlets that have contacted the company.

If you haven’t seen the demo, take a look at this before you read on.

So did Google fudge this demo? Here is why people are asking and what we know so far:

During his keynote, Google CEO Sundar Pichai noted multiple times that we were listening to real calls and real conversations (“What you will hear is the Google Assistant actually calling a real salon.”). The company made the same claims in a blog post (“While sounding natural, these and other examples are conversations between a fully automatic computer system and real businesses.”).

Google has so far declined to disclose the names of the businesses it worked with and whether it had permission to record those calls. California is a two-party consent state, so our understanding is that permission to record these calls would have been necessary (unless those calls were made to businesses in a state with different laws). So on top of the ethics questions, there are also a few legal questions here.

We have some clues, though. In the blog post, Google Duplex lead Yaniv Leviathan and engineering manager Matan Kalman posted a picture of themselves eating a meal “booked through a call from Duplex.” Thanks to the wonder of crowdsourcing and a number of intrepid sleuths, we know that this restaurant was Hongs Gourmet in Saratoga, California. We called Hongs Gourmet last night, but the person who answered the phone referred us to her manager, who she told us had left for the day. (We’ll give it another try today.)

Sadly, the rest of Google’s audio samples don’t contain any other clues as to which restaurants were called.

What prompted much of the suspicion here is that nobody who answers the calls from the Assistant in Google’s samples identifies their name or the name of the business. My best guess is that Google cut those parts from the conversations, but it’s hard to tell. Some of the audio samples do however sound as if the beginning was edited out.

Google clearly didn’t expect this project to be controversial. The keynote demo was clearly meant to dazzle — and it did so in the moment because, if it really works, this technology represents the culmination of years of work on machine learning. But the company clearly didn’t think through the consequences.

My best guess is that Google didn’t fake these calls. But it surely only presented the best examples of its tests. That’s what you do in a big keynote demo, after all, even though in hindsight, showing the system fail or trying to place a live call would have been even better (remember Steve Jobs’ Starbucks call?).

For now, we’ll see if we can get more answers, but so far all of our calls and emails have gone unanswered. Google could easily do away with all of those questions around Duplex by simply answering them, but so far, that’s not happening.

The new AI-powered Google News app is now available for iOS

Google teased a new version of its News app with AI smarts at its I/O event last week, and today that revamped app landed for iOS and Android devices in 127 countries. The redesigned app replaces the previous Google Play Newsstand app.

The idea is to make finding and consuming news easier than ever, whilst providing an experience that’s customized to each reader and supportive of media publications. The AI element is designed to learn from what you read to help serve you a better selection of content over time, while the app is presented with a clear and clean layout.

Opening the app brings up the tailored ‘For You’ tab which acts as a quick briefing, serving up the top five stories “of the moment” and a tailored selection of opinion articles and longer reads below it.

The next section — ‘Headlines’ — dives more deeply into the latest news, covering global, U.S., business, technology, entertainment, sports, science and health segments. Clicking a story pulls up ‘Full Coverage’ mode, which surfaces a range of content around a topic including editorial and opinion pieces, tweets, videos and a timeline of events.

 

Favorites is a tab for customization set by the user — without AI. It works as you’d imagine, letting you mark out preferred topics, news sources and locations to filter your reads. There’s also an option for saved searches and stories, which can be quickly summoned.

The final section is ‘Newsstand’ which, as the name suggests, aggregates media. Google said last week that it plans to offer over 1,000 magazine titles that you can follow by tapping a star icon or by subscribing. It currently looks a little sparse without specific magazine titles, but we expect that’ll come soon.

As part of that, another feature coming soon is “Subscribe with Google,” which lets publications offer subscription-based content. The process of subscribing will use a user’s Google account and the payment information they already have on file. Then, the paid content becomes available across Google platforms, including Google News, Google Search and publishers’ own websites.

What do AI and blockchain mean for the rule of law?

Digital services have frequently been in collision — if not out-and-out conflict — with the rule of law. But what happens when technologies such as deep learning software and self-executing code are in the driving seat of legal decisions?

How can we be sure next-gen ‘legal tech’ systems are not unfairly biased against certain groups or individuals? And what skills will lawyers need to develop to be able to properly assess the quality of the justice flowing from data-driven decisions?

While entrepreneurs have been eyeing traditional legal processes for some years now, with a cost-cutting gleam in their eye and the word ‘streamline’ on their lips, this early phase of legal innovation pales in significance beside the transformative potential of AI technologies that are already pushing their algorithmic fingers into legal processes — and perhaps shifting the line of the law itself in the process.

But how can legal protections be safeguarded if decisions are automated by algorithmic models trained on discrete data-sets — or flowing from policies administered by being embedded on a blockchain?

These are the sorts of questions that lawyer and philosopher Mireille Hildebrandt, a professor at the research group for Law, Science, Technology and Society at Vrije Universiteit Brussels in Belgium, will be engaging with during a five-year project to investigate the implications of what she terms ‘computational law’.

Last month the European Research Council awarded Hildebrandt a grant of €2.5 million to conduct foundational research with a dual technology focus: artificial legal intelligence and legal applications of blockchain.

Discussing her research plan with TechCrunch, she describes the project as both very abstract and very practical, with a staff that will include both lawyers and computer scientists. She says her intention is to come up with a new legal hermeneutics — so, basically, a framework for lawyers to approach computational law architectures intelligently; to understand limitations and implications, and be able to ask the right questions to assess technologies that are increasingly being put to work assessing us.

“The idea is that the lawyers get together with the computer scientists to understand what they’re up against,” she explains. “I want to have that conversation… I want lawyers who are preferably analytically very sharp and philosophically interested to get together with the computer scientists and to really understand each other’s language.

“We’re not going to develop a common language. That’s not going to work, I’m convinced. But they must be able to understand what the meaning of a term is in the other discipline, and to learn to play around, and to say okay, to see the complexity in both fields, to shy away from trying to make it all very simple.

“And after seeing the complexity to then be able to explain it in a way that the people that really matter — that is us citizens — can make decisions both at a political level and in everyday life.”

Hildebrandt says she included both AI and blockchain technologies in the project’s remit as the two offer “two very different types of computational law”.

There is also of course the chance that the two will be applied in combination — creating “an entirely new set of risks and opportunities” in a legal tech setting.

Blockchain “freezes the future”, argues Hildebrandt, admitting of the two it’s the technology she’s more skeptical of in this context. “Once you’ve put it on a blockchain it’s very difficult to change your mind, and if these rules become self-reinforcing it would be a very costly affair both in terms of money but also in terms of effort, time, confusion and uncertainty if you would like to change that.

“You can do a fork but not, I think, when governments are involved. They can’t just fork.”
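The “can’t change your mind” property she is describing comes from the way each block commits to the hash of the one before it. Here is a toy hash chain — not any production ledger, and certainly not a legal-tech system — showing how quietly editing an early entry invalidates everything downstream.

```python
# Minimal illustration of the "immutability" Hildebrandt is skeptical about:
# each block commits to the hash of the previous one, so editing an early
# entry breaks every hash after it. A toy chain for illustration only.
import hashlib
import json

def block_hash(index: int, data: str, prev_hash: str) -> str:
    payload = json.dumps({"index": index, "data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(entries):
    chain, prev = [], "0" * 64
    for i, data in enumerate(entries):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["index"], block["data"], prev):
            return False
        prev = block["hash"]
    return True

rules = build_chain(["tax rule v1", "tax rule v2", "tax rule v3"])
print(is_valid(rules))                        # True
rules[0]["data"] = "tax rule v1 (amended)"    # try to quietly change the past
print(is_valid(rules))                        # False -- everything downstream no longer checks out
```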

That said, she posits that blockchain could at some point in the future be deemed an attractive alternative mechanism for states and companies to settle on a less complex system to determine obligations under global tax law, for example. (Assuming any such accord could indeed be reached.)

Given how complex legal compliance can already be for Internet platforms operating across borders and intersecting with different jurisdictions and political expectations there may come a point when a new system for applying rules is deemed necessary — and putting policies on a blockchain could be one way to respond to all the chaotic overlap.

Though Hildebrandt is cautious about the idea of blockchain-based systems for legal compliance.

It’s the other area of focus for the project — AI legal intelligence — where she clearly sees major potential, though also of course risks too. “AI legal intelligence means you use machine learning to do argumentation mining — so you do natural language processing on a lot of legal texts and you try to detect lines of argumentation,” she explains, citing the example of needing to judge whether a specific person is a contractor or an employee.

“That has huge consequences in the US and in Canada, both for the employer… and for the employee and if they get it wrong the tax office may just walk in and give them an enormous fine plus claw back a lot of money which they may not have.”

As a consequence of confused case law in the area, academics at the University of Toronto developed an AI to try to help — by mining lots of related legal texts to generate a set of features within a specific situation that could be used to check whether a person is an employee or not.

“They’re basically looking for a mathematical function that connected input data — so lots of legal texts — with output data, in this case whether you are either an employee or a contractor. And if that mathematical function gets it right in your data set all the time or nearly all the time you call it high accuracy and then we test on new data or data that has been kept apart and you see whether it continues to be very accurate.”
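Her description is, in essence, the standard supervised-learning recipe: fit a function from text to labels, then measure accuracy on data held apart. Here is a deliberately tiny sketch of that recipe in scikit-learn, with invented example sentences — it is not the University of Toronto system, just the general pattern she is referring to.

```python
# Toy version of the pattern described above: learn a function from legal
# text to an employee/contractor label, then check accuracy on held-out data.
# The six sentences are invented; this is the generic recipe, not the actual
# Toronto tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "worker sets own hours and invoices per project",
    "company provides equipment and a fixed monthly salary",
    "paid per deliverable, free to work for other clients",
    "manager supervises daily tasks and approves leave",
    "bears own business expenses and profit or loss risk",
    "entitled to paid vacation and pension contributions",
]
labels = ["contractor", "employee", "contractor", "employee", "contractor", "employee"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=0, stratify=labels
)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)                          # fit the "mathematical function"
acc = accuracy_score(y_test, model.predict(X_test))  # accuracy on data kept apart
print(f"held-out accuracy: {acc:.0%}")
```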

Given AI’s reliance on data-sets to derive algorithmic models that are used to make automated judgement calls, lawyers are going to need to understand how to approach and interrogate these technology structures to determine whether an AI is legally sound or not.

Ensuring that high accuracy isn’t generated off of a biased data-set cannot just be a ‘nice to have’ if your AI is involved in making legal judgment calls on people.

“The technologies that are going to be used, or the legal tech that is now being invested in, will require lawyers to interpret the end results — so instead of saying ‘oh wow this has 98% accuracy and it outperforms the best lawyers!’ they should say ‘ah, ok, can you please show me the set of performance metrics that you tested on. Ah thank you, so why did you put these four into the drawer because they have low accuracy?… Can you show me your data-set? What happened in the hypothesis space? Why did you filter those arguments out?’

“This is a conversation that really requires lawyers to become interested, and to have a bit of fun. It’s a very serious business because legal decisions have a lot of impact on people’s lives but the idea is that lawyers should start having fun in interpreting the outcomes of artificial intelligence in law. And they should be able to have a serious conversation about the limitations of self-executing code — so the other part of the project [i.e. legal applications of blockchain tech].

“If somebody says ‘immutability’ they should be able to say that means that if after you have put everything in the blockchain you suddenly discover a mistake that mistake is automated and it will cost you an incredible amount of money and effort to get it repaired… Or ‘trustless’ — so you’re saying we should not trust the institutions but we should trust software that we don’t understand, we should trust all sorts of middlemen, i.e. the miners in permissionless, or the other types of middlemen who are in other types of distributed ledgers… ”

“I want lawyers to have ammunition there, to have solid arguments… to actually understand what bias means in machine learning,” she continues, pointing by way of an example to research that’s being done by the AI Now Institute in New York to investigate disparate impacts and treatments related to AI systems.

“That’s one specific problem but I think there are many more problems,” she adds of algorithmic discrimination. “So the purpose of this project is to really get together, to get to understand this.

“I think it’s extremely important for lawyers, not to become computer scientists or statisticians but to really get their finger behind what’s happening and then to be able to share that, to really contribute to legal method — which is text oriented. I’m all for text but we have to, sort of, make up our minds when we can afford to use non-text regulation. I would actually say that that’s not law.

“So how should be the balance between something that we can really understand, that is text, and these other methods that lawyers are not trained to understand… And also citizens do not understand.”

Hildebrandt does see opportunities for AI legal intelligence argument mining to be “used for the good” — saying, for example, AI could be applied to assess the calibre of the decisions made by a particular court.

Though she also cautions that huge thought would need to go into the design of any such systems.

“The stupid thing would be to just give the algorithm a lot of data and then train it and then say ‘hey yes that’s not fair, wow that’s not allowed’. But you could also really think deeply what sort of vectors you have to look at, how you have to label them. And then you may find out that — for instance — the court sentences much more strictly because the police is not bringing the simple cases to court but it’s a very good police and they talk with people, so if people have not done something really terrible they try to solve that problem in another way, not by using the law. And then this particular court gets only very heavy cases and therefore gives far more heavy sentences than other courts that get from their police or public prosecutor all light cases.

“To see that you should not only look at legal texts of course. You have to look also at data from the police. And if you don’t do that then you can have very high accuracy and a total nonsensical outcome that doesn’t tell you anything you didn’t already know. And if you do it another way you can sort of confront people with their own prejudices and make it interesting — challenge certain things. But in a way that doesn’t take too much for granted. And my idea would be that the only way this is going to work is to get a lot of different people together at the design stage of the system — so when you are deciding which data you’re going to train on, when you are developing what machine learners call your ‘hypothesis space’, so the type of modeling you’re going to try and do. And then of course you should test five, six, seven performance metrics.

“And this is also something that people should talk about — not just the data scientists but, for instance, lawyers but also the citizens who are going to be affected by what we do in law. And I’m absolutely convinced that if you do that in a smart way that you get much more robust applications. But then the incentive structure to do it that way is maybe not obvious. Because I think legal tech is going to be used to reduce costs.”

She says one of the key concepts of the research project is legal protection by design — opening up other interesting (and not a little alarming) questions such as what happens to the presumption of innocence in a world of AI-fueled ‘pre-crime’ detectors?

“How can you design these systems in such a way that they offer legal protection from the first minute they come to the market — and not as an add-on or a plug in. And that’s not just about data protection but also about non-discrimination of course and certain consumer rights,” she says.

“I always think that the presumption of innocence has to be connected with legal protection by design. So this is more on the side of the police and the intelligence services — how can you help the intelligence services and the police to buy or develop ICT that has certain constraints which make it compliant with the presumption of innocence which is not easy at all because we probably have to reconfigure what is the presumption of innocence.”

And while the research is part abstract and solidly foundational, Hildebrandt points out that the technologies being examined — AI and blockchain — are already being applied in legal contexts, albeit in “a state of experimentation”.

And, well, this is one tech-fueled future that really must not be unevenly distributed. The risks are stark.   

“Both the EU and national governments have taken a liking to experimentation… and where experimentation stops and systems are really already implemented and impacting decisions about your and my life is not always so easy to see,” she adds.

Her other hope is that the interpretation methodology developed through the project will help lawyers and law firms to navigate the legal tech that’s coming at them as a sales pitch.

“There’s going to be, obviously, a lot of crap on the market,” she says. “That’s inevitable, this is going to be a competitive market for legal tech and there’s going to be good stuff, bad stuff, and it will not be easy to decide what’s good stuff and bad stuff — so I do believe that by taking this foundational perspective it will be more easy to know where you have to look if you want to make that judgement… It’s about a mindset and about an informed mindset on how these things matter.

“I’m all in favor of agile and lean computing. Don’t do things that make no sense… So I hope this will contribute to a competitive advantage for those who can skip methodologies that are basically nonsensical.”

Google Clips gets better at capturing candids of hugs and kisses (which is not creepy, right?)

Google Clips’ AI-powered “smart camera” just got even smarter, Google announced today, revealing improved functionality around Clips’ ability to automatically capture specific moments — like hugs and kisses. Or jumps and dance moves. You know, in case you want to document all your special, private moments in a totally non-creepy way.

I kid, I kid!

Well, not entirely. Let me explain.

Look, Google Clips comes across to me as more of a proof-of-concept device that showcases the power of artificial intelligence as applied to the world of photography rather than a breakthrough consumer device.

I’m the target market for this camera — a parent and a pet owner (and look how cute she is) — but I don’t at all have a desire for a smart camera designed to capture those tough-to-photograph moments, even though neither my kid nor my pet will sit still for pictures.

I’ve tried to articulate this feeling, and I find it’s hard to say why I don’t want this thing, exactly. It’s not because the photos are automatically uploaded to the cloud or made public — they are not. They are saved to the camera’s 16 GB of onboard storage and can be reviewed later with your phone, where you can then choose to keep them, share them or delete them. And it’s not even entirely because of the price point — though, arguably, even with the recent $50 discount it’s quite the expensive toy at $199.

Maybe it’s just the camera’s premise.

That in order for us to fully enjoy a moment, we have to capture it. And because some moments are so difficult to capture, we spend too much time with phone-in-hand, instead of actually living our lives — like playing with our kids or throwing the ball for the dog, for example. And that the only solution to this problem is more technology. Not just putting the damn phone down.

What also irks me is the broader idea behind Clips that all our precious moments have to be photographed or saved as videos. They do not. Some are meant to be ephemeral. Some are meant to be memories. In aggregate, our hearts and minds tally up all these little life moments — a hug, a kiss, a smile — and then turn them into feelings. Bonds. Love.  It’s okay to miss capturing every single one.

I’m telling you, it’s okay.

At the end of the day, there are only a few times I would have even considered using this product — when baby was taking her first steps, and I was worried it would happen while my phone was away. Or maybe some big event, like a birthday party, where I wanted candids but had too much going on to take photos. But even in these moments, I’d rather prop my phone up and turn on a “Google Clips” camera mode, rather than shell out hundreds for a dedicated device.

Just saying.

You may feel differently. That’s cool. To each their own.

Anyway, what I think is most interesting about Clips is the actual technology. That it can view things captured through a camera lens and determine the interesting bits — and that it’s already getting better at this, only months after its release. That we’re teaching AI to understand what’s actually interesting to us humans, with our subjective opinions. That sort of technology has all kinds of practical applications beyond a physical camera that takes spy shots of Fido.

The improved functionality is rolling out to Clips with the May update, and will soon be followed by support for family pairing, which will let multiple family members connect the camera to their device to view content.

Here’s an intro to Clips, if you missed it the first time.

Note that it’s currently on sale for $199. Yeah, already. Hmmm. 

Microsoft’s Snip Insights puts A.I. technology into a screenshot-taking tool

A team of Microsoft interns has thought up a new way to put A.I. technology to work – in a screenshot snipping tool. Microsoft today is launching its project, Snip Insights, a Windows desktop app that lets you retrieve intelligent insights – or even turn a scan of a textbook or report into an editable document – when you take a screenshot on your PC.

The team’s manager challenged the interns to think up a way to integrate A.I. into a widely used tool, used by millions.

They decided to try a screenshotting tool, like the Windows Snipping Tool or Snip, a previous project from Microsoft’s internal incubator, Microsoft Garage. The team went with the latter, because it would be easier to release as an independent app.

Their new tool leverages Cloud AI services in order to do more with screenshots – like convert images to translated text, automatically detect and tag image content, and more.

For example, you could screenshot a photo of a great pair of shoes you saw on a friend’s Facebook page, and the tool could search the web to help you find where to buy them. (This part of its functionality is similar to what’s already offered today by Pinterest). 

The tool can also take a scanned image of a document, and turn a screenshot of that into editable text.
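Microsoft hasn’t spelled out here exactly which Cloud AI endpoints Snip Insights calls, but the general pattern — post the screenshot to a cloud OCR service, then stitch the recognized words back into editable text — looks roughly like the sketch below, which assumes an Azure Computer Vision-style OCR endpoint with placeholder credentials.

```python
# Rough sketch of turning a screenshot into editable text via a cloud OCR
# service, in the spirit of what Snip Insights does. The Azure Computer
# Vision OCR endpoint is an assumption about which Cloud AI service is used,
# and ENDPOINT/KEY are placeholders to fill in with your own resource.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-subscription-key>"                                   # placeholder

def screenshot_to_text(image_path: str) -> str:
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/ocr",
        params={"language": "unk", "detectOrientation": "true"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    resp.raise_for_status()
    result = resp.json()
    # The OCR response nests regions -> lines -> words; join the words back
    # into plain, editable text.
    lines = []
    for region in result.get("regions", []):
        for line in region.get("lines", []):
            lines.append(" ".join(w["text"] for w in line.get("words", [])))
    return "\n".join(lines)

if __name__ == "__main__":
    print(screenshot_to_text("screenshot.png"))
```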

And it can identify famous people, places or landmarks in the images you capture with a screenshot.

Although it’s a relatively narrow use case for A.I., the Snip Insights tool is an interesting example of how A.I. technology can be integrated into everyday productivity tools – and the potential that lies ahead as A.I. becomes a part of even simple pieces of software.

The tool is being released as a Microsoft Garage project, but it’s open-sourced.

The Snip Insights GitHub repository will be maintained by the Cloud AI team going forward.

 

Barnes & Noble teeters in a post-text world

Barnes & Noble, that once proud anchor to many a suburban mall, is waning. It is not failing all at once, dropping like the savaged corpse of Toys “R” Us, but it is also clear that its cultural moment has passed and only drastic measures can save it from joining Waldenbooks and Borders in the great, paper-smelling ark of our book-buying memory. I’m thinking about this because David Leonhardt at the New York Times calls for B&N to be saved. I doubt it can be.

First, there is the sheer weight of real estate and the inexorable slide away from print. B&N is no longer a place to buy books. It is a toy store with a bathroom and a cafe (and now a restaurant?), a spot where you’re more likely to find Han Solo bobbleheads than a Star Wars novel. The old joy of visiting a bookstore and finding a few magical books to drag home is fast being replicated by smaller bookstores where curation and provenance are still important while B&N pulls more and more titles.

But does all of this matter? Will the written word – what you’re reading right now – survive the next century? Is there any value in a book when VR and AR and other interfaces can recreate what amounts to the implicit value of writing? Why save B&N if writing is doomed?

Indulge me for a moment and then argue in comments. I’m positing that B&N’s failure is indicative of a move towards a post-text society, that AI and new media will redefine how we consume the world and the fact that we see more videos than text on our Facebook feed – ostensibly the world’s social nervous system – is indicative of this change.

First, some thoughts on writing vs. film. In his book of essays, Distrust That Particular Flavor, William Gibson writes about the complexity and education and experience needed to consume various forms of media:

The book has been largely unchanged for centuries. Working in language expressed as a system of marks on a surface, I can induce extremely complex experiences, but only in an audience elaborately educated to experience this. This platform still possesses certain inherent advantages. I can, for instance, render interiority of character with an ease and specificity denied to a screenwriter.

But my audience must be literate, must know what prose fiction is and understand how one accesses it. This requires a complexly cultural education, and a certain socioeconomic basis. Not everyone is afforded the luxury of such an education.

But I remember being taken to my first film, either a Disney animation or a Disney nature documentary (I can’t recall which I saw first), and being overwhelmed by the steep yet almost instantaneous learning curve: In that hour, I learned to watch film.

This is a deeply important idea. First, we must appreciate that writing and film offer various value adds beyond linear storytelling. In the book, the writer can explore the inner space of the character, giving you an imagined world in which people are thinking, not just acting. Film – also a linear medium – offers a visual representation of a story and thoughts are inferred by dint of their humanity. We know a character’s inner life thanks to the emotion we infer from their face and body.

This is why, to a degree, the CGI human was so hard to make. Thanks to books, comics, and film we, as humans, were used to giving animals and enchanted things agency. Steamboat Willie mostly thought like us, we imagined, even though he was a mouse with big round ears. Fast forward to the dawn of CGI humans – think Sid from Toy Story and his grotesque face – and then fly even further into the future, to Leia looking out over a space battle and mumbling “Hope,” and you see the scope of achievement in CGI humans as well as the deep problems with representing humans digitally. A CGI car named Lightning McQueen acts and thinks like us while a CGI Leia looks slightly off. We cannot associate agency with fake humans, and that’s a problem.

Thus we needed books to give us that inner look, that frisson of discovery that we are missing in real life.

But soon – and we can argue that films like Infinity War prove this – there will be no uncanny valley. We will be unable to tell if a human on screen or in VR is real or fake and this allows for an interesting set of possibilities.

First, with VR and other tricks, we could see through a character’s eyes and even hear her thoughts. This interiority, as Gibson writes, is no longer found in the realm of text and is instead an added attraction to an already rich medium. Imagine hopping from character to character, the reactions and thoughts coming hot and heavy as they move through the action. Maybe the story isn’t linear. Maybe we make it up as we go along. Imagine the remix, the rebuild, the restructuring.

Gibson again:

This spreading, melting, flowing together of what once were distinct and separate media, that’s where I imagine we’re headed. Any linear narrative film, for instance, can serve as the armature for what we would think of as a virtual reality, but which Johnny X, eight-year-old end-point consumer, up the line, thinks of as how he looks at stuff. If he discovers, say, Steve McQueen in The Great Escape, he might idly pause to allow his avatar a freestyle Hong Kong kick-fest with the German guards in the prison camp. Just because he can. Because he’s always been able to. He doesn’t think about these things. He probably doesn’t fully understand that that hasn’t always been possible.

In this case B&N and the bookstore don’t need to exist at all. We get the depth of books with the vitality of film melded with the immersion of gaming. What about artisanal book lovers, you argue, they’ll keep things alive because they love the feel of books.

When that feel – the scent, the heft, the old book smell – can be simulated do we need to visit a bookstore? When Amazon and Netflix spend millions to explore new media and are sure to branch out into more immersive forms do you need to immerse yourself in To The Lighthouse? Do we really need the education we once had to gain in order to read a book?

We know that Amazon doesn’t care about books. They used books as a starting point to taking over ecommerce and, while the Kindle is the best system for ebooks in existence, it is an afterthought compared to the rest of the business. In short, the champions of text barely support it.

Ultimately what I posit here depends on a number of changes coming all at once. We must all agree to fall headfirst into some shared hallucination that replaces all other media. We must feel that that world is real enough for us to abandon our books.

It’s up to book lovers, then, to decide what they want. They have to support and pay for novels, non-fiction, and news. They have to visit small booksellers and keep demand for books alive. And they have to make it possible to exist as a writer. “Publishers are focusing on big-name writers. The number of professional authors has declined. The disappearance of Borders deprived dozens of communities of their only physical bookstore and led to a drop in book sales that looks permanent,” writes Leonhardt and he’s right. There is no upside for text slingers.

In the end perhaps we can’t save B&N. Maybe we let it collapse into a heap like so many before it. Or maybe we fight for a medium that is quickly losing cachet. Maybe we fight for books and ensure that, just because the big guys on the block can’t make a bookstore work, it doesn’t mean the rest of us don’t care. Maybe we tell the world that we just want to read.

I shudder to think what will happen if we don’t.

Facebook has a new job posting calling for chip designers

Facebook has posted a job opening looking for an expert in ASIC and FPGA design — two kinds of custom silicon that companies can gear toward specific use cases, particularly in machine learning and artificial intelligence.

There’s been a lot of speculation in the valley as to what Facebook’s interpretation of custom silicon might be, especially as it looks to optimize its machine learning tools — something that CEO Mark Zuckerberg referred to as a potential solution for identifying misinformation on Facebook using AI. The whispers of Facebook’s customized hardware range depending on who you talk to, but generally center around operating on the massive graph Facebook possesses around personal data. Most in the industry speculate that it’s being optimized for Caffe2, an AI infrastructure deployed at Facebook, that would help it tackle those kinds of complex problems.

FPGAs are designed to be more flexible and modular, an approach Intel is championing as a way to adapt to a changing machine learning-driven landscape. The downside commonly cited for FPGAs is that they are niche hardware that is complex to calibrate and modify, as well as expensive, making them less of a cover-all solution for machine learning projects. An ASIC is similarly a customized piece of silicon that a company can gear toward something specific, like mining cryptocurrency.

Facebook’s director of AI research tweeted about the job posting this morning, noting that he previously worked in chip design.

While the whispers grow louder and louder about Facebook’s potential hardware efforts, this does seem to serve as at least another partial data point that the company is looking to dive deep into custom hardware to deal with its AI problems. That would mostly exist on the server side, though Facebook is looking into other devices like a smart speaker. Given the immense amount of data Facebook has, it would make sense that the company would look into customized hardware rather than use off-the-shelf components like those from Nvidia.

(The wildest rumor we’ve heard about Facebook’s approach is that it’s a diurnal system, flipping between machine training and inference depending on the time of day and whether people are, well, asleep in that region.)

Most of the other large players have found themselves looking into their own customized hardware. Google has its TPU for its own operations, while Amazon is also reportedly working on chips for both training and inference. Apple, too, is reportedly working on its own silicon, which could potentially rip Intel out of its line of computers. Microsoft is also diving into FPGA as a potential approach for machine learning problems.

Still, the fact that Facebook is looking into ASIC and FPGA does seem to be just that: dipping its toes into the water. Nvidia has a lot of control over the AI space with its GPU technology, which it can optimize for popular AI frameworks like TensorFlow. And there is also a large number of very well-funded startups exploring customized AI hardware, including Cerebras Systems, SambaNova Systems, Mythic, and Graphcore (and that isn’t even getting into the large amount of activity coming out of China). So there are, to be sure, a lot of different interpretations as to what this looks like.
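
Part of Nvidia’s advantage is that frameworks already make it trivial to target its chips. The minimal sketch below, which assumes nothing beyond a stock TensorFlow install, shows how an operation gets pinned to whatever accelerator the framework can see; any custom silicon would have to be exposed to developers in a similarly painless way to compete.

```python
# Minimal sketch: pinning a computation to an available accelerator in TensorFlow.
# Assumes only a stock TensorFlow install; falls back to CPU if no GPU is present.
import tensorflow as tf

# Ask the framework which accelerators it can see.
gpus = tf.config.list_physical_devices("GPU")
device = "/GPU:0" if gpus else "/CPU:0"

with tf.device(device):
    a = tf.random.uniform((1024, 1024))
    b = tf.random.uniform((1024, 1024))
    c = tf.matmul(a, b)  # executes on the chosen device

print("ran matmul on", device, "- result shape:", c.shape)
```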

One significant problem Facebook may face is that this job opening may just sit open in perpetuity. Another common criticism of FPGAs as a solution is that developers who specialize in them are hard to find. And while these kinds of problems are becoming much more interesting to work on, it’s not clear whether this is more of an experiment than a full commitment by Facebook to custom hardware for its operations.

But nonetheless, this seems like more confirmation of Facebook’s custom hardware ambitions, and another piece of validation that Facebook’s data set is growing so large that, if it hopes to tackle complex AI problems like misinformation, it’s going to have to figure out how to create some kind of specialized hardware to actually deal with it.

A representative from Facebook did not immediately return a request for comment.

Is America’s national security Facebook and Google’s problem?

Outrage that Facebook made the private data of over 87 million of its U.S. users available to the Trump campaign has stoked fears that big US-based technology companies are tracking our every move and misusing our personal data to manipulate us without adequate transparency, oversight, or regulation.

These legitimate concerns about the privacy threat these companies potentially pose must be balanced against an appreciation of the important role such data-optimizing companies play in promoting our national security.

In his testimony to the combined US Senate Commerce and Judiciary Committees, Facebook CEO Mark Zuckerberg was not wrong to present his company as a last line of defense in an “ongoing arms race” with Russia and others seeking to spread disinformation and manipulate political and economic systems in the US and around the world.

The vast majority of the two billion Facebook users live outside the United States, Zuckerberg argued, and the US should be thinking of Facebook and other American companies competing with foreign rivals in “strategic and competitive” terms. Although the American public and US political leaders are rightly grappling with critical issues of privacy, we will harm ourselves if we don’t recognize the validity of Zuckerberg’s national security argument.

Facebook CEO and founder Mark Zuckerberg testifies during a US House Committee on Energy and Commerce hearing about Facebook on Capitol Hill in Washington, DC, April 11, 2018. (Photo: SAUL LOEB/AFP/Getty Images)

Examples are everywhere of big tech companies increasingly being seen as a threat. US President Trump has been on a rampage against Amazon, and multiple media outlets have called for the company to be broken up as a monopoly. A recent New York Times article, “The Case Against Google,” argued that Google is stifling competition and innovation and suggested it might be broken up as a monopoly. “It’s time to break up Facebook,” Politico argued, calling Facebook “a deeply untransparent, out-of-control company that encroaches on its users’ privacy, resists regulatory oversight and fails to police known bad actors when they abuse its platform.” US Senator Bill Nelson made a similar point when he asserted during the Senate hearings that “if Facebook and other online companies will not or cannot fix the privacy invasions, then we are going to have to. We, the Congress.”

While many concerns like these are valid, seeing big US technology companies solely in the context of fears about privacy misses the point that these companies play a far broader strategic role in America’s growing geopolitical rivalry with foreign adversaries. And while Russia is rising as a threat in cyberspace, China represents a more powerful and strategic rival in the 21st century tech convergence arms race.

Data is to the 21st century what oil was to the 20th: a key asset for driving wealth, power, and competitiveness. Only companies with access to the best algorithms and the biggest, highest-quality data sets will be able to glean the insights and develop the models driving innovation forward. As Facebook’s failure to protect its users’ private information shows, these data pools are both extremely powerful and open to abuse. But because countries with the leading AI and pooled data platforms will have the most thriving economies, big technology platforms are playing a more important national security role than ever in our increasingly big data-driven world.

BEIJING, CHINA – 2017/07/08: Robots dance for the audience at the Beijing International Consumer Electronics Expo, held at the China National Convention Center. (Photo by Zhang Peng/LightRocket via Getty Images)

China, which has set a goal of becoming “the world’s primary AI innovation center” by 2025, occupying “the commanding heights of AI technology” by 2030, and the “global leader” in “comprehensive national strength and international influence” by 2050, understands this. To build a world-beating AI industry, Beijing has kept American tech giants out of the Chinese market for years and stolen their intellectual property while putting massive resources into developing its own strategic technology sectors in close collaboration with national champion companies like Baidu, Alibaba, and Tencent.

Examples of China’s progress are everywhere.

Close to a billion Chinese people use Tencent’s instant communication and cashless platforms. In October 2017, Alibaba announced a three-year investment of $15 billion for developing and integrating AI and cloud-computing technologies that will power the smart cities and smart hospitals of the future. Beijing is investing $9.2 billion in the golden combination of AI and genomics to lead personalized health research to new heights. More ominously, Alibaba is prototyping a new form of ubiquitous surveillance that deploys millions of cameras equipped with facial recognition within testbed cities and another Chinese company, Cloud Walk, is using facial recognition to track individuals’ behaviors and assess their predisposition to commit a crime.

In all of these areas, China is ensuring that individual privacy protections do not get in the way of bringing together the massive data sets Chinese companies will need to lead the world. As Beijing well understands, training technologists, amassing massive high-quality data sets, and accumulating patents are key to competitive and security advantage in the 21st century.

“In the age of AI, a U.S.-China duopoly is not just inevitable, it has already arrived,” said Kai-Fu Lee, founder and CEO of Beijing-based technology investment firm Sinovation Ventures and a former top executive at Microsoft and Google. The United States should absolutely not follow China’s lead and disregard the privacy protections of our citizens. Instead, we must follow Europe’s lead and do significantly more to enhance them. But we also cannot blind ourselves to the critical importance of amassing big data sets for driving innovation, competitiveness, and national power in the future.

UNITED STATES – SEPTEMBER 24: Aerial view of the Pentagon building photographed on Sept. 24, 2017. (Photo By Bill Clark/CQ Roll Call)

In its 2017 unclassified budget, the Pentagon spent about $7.4 billion on AI, big data and cloud computing, a tiny fraction of America’s overall expenditure on AI. Clearly, winning the future will not be a government activity alone, but there is a big role government can and must play. Even though Google remains the most important AI company in the world, the U.S. still crucially lacks a coordinated national strategy on AI and emerging digital technologies. While the Trump administration has gutted the White House Office of Science and Technology Policy, proposed massive cuts to US science funding, and engaged in a sniping contest with American tech giants, the Chinese government has outlined a “military-civilian integration development strategy” to harness AI to enhance Chinese national power.

FBI Director Christopher Wray correctly pointed out that America has now entered a “whole of society” rivalry with China. If the United States thinks of our technology champions solely within our domestic national framework, we might spur some types of innovation at home while stifling other innovations that big American companies with large teams and big data sets may be better able to realize.

America will be more innovative the more we nurture a healthy ecosystem of big, AI-driven companies while also empowering smaller startups and others using blockchain and other technologies to access large and disparate data pools. Because breaking up US technology giants without a sufficient analysis of both the national and international implications could deal a body blow to American prosperity and global power in the 21st century, extreme caution is in order.

America’s largest technology companies cannot and should not be dragooned into participating in America’s growing geopolitical rivalry with China. Judging by the recent protests by Google employees against the company’s collaboration with the US Defense Department on analyzing military drone footage, perhaps they will not be.

But it would be self-defeating for American policymakers not to at least partly consider America’s tech giants in the context of the important role they play in national security. America definitely needs significantly stronger regulation to foster innovation and protect privacy and civil liberties, but breaking up America’s tech giants without appreciating the broader role they play in strengthening our national competitiveness and security would be a tragic mistake.