AI training and social network content moderation services bring TaskUs a $250 million windfall

TaskUs, the business process outsourcing service that moderates content, annotates information and handles back office customer support for some of the world’s largest tech companies, has raised $250 million in an investment from funds managed by the New York-based private equity giant, Blackstone Group.

It’s been ten years since TaskUs was founded with a $20,000 investment from its two co-founders, and the new deal, which values the decade-old company at $500 million before the money even comes in, is proof of how much has changed for the service in the years since.

The Santa Monica-based company, which began as a browser-based virtual assistant company — “You send us a task and we get the task done,” recalled TaskUs chief executive Bryce Maddock — is now one of the main providers in the growing field of content moderation for social networks and content annotation for training the algorithms that power artificial intelligence services around the world.

“What I can tell you is we do content moderation for almost every major social network and it’s the fastest growing part of our business today,” Maddock said.

From a network of offices spanning the globe from Mexico to Taiwan and the Philippines to the U.S., the 32-year-old co-founders Maddock and Jaspar Weir have created a business whose largest growth stems from snuffing out the distribution of snuff films, child pornography, inappropriate political content and the trails of human trafficking in the user- and advertiser-generated content on some of the world’s largest social networks.

(For a glimpse into how horrific that process can be, take a look at this article from Wired, which looked at content moderation for the anonymous messaging service, Whisper.)

Maddock estimates that while the vast majority of the business was outsourcing business process services in the company’s early days (whether that was transcribing voice mails to texts for the messaging service PhoneTag, or providing customer service and support for companies like HotelTonight) now about 40% of the business comes from content moderation.


Indeed, it was the growth in new technology services that attracted Blackstone to the business, according to Amit Dixit, Senior Managing Director at Blackstone.

“The growth in ride sharing, social media, online food delivery, e-commerce and autonomous driving is creating an enormous need for enabling business services,” said Dixit in a statement. “TaskUs has established a leadership position in this domain with its base of marquee customers, unique culture, and relentless focus on customer delivery.”

While the back office business processing services remain the majority of the company’s revenue, Maddock knows that the future belongs to an increasing automation of the company’s core services. That’s why part of the money is going to be invested in a new technology integration and consulting business that advises tech companies on which new automation tools to deploy, along with shoring up the company’s position as perhaps the best employer to work for in the world of content moderation and algorithm training services.

It’s been a long five-year journey to get to the place it’s in now, with glowing reviews from employees on Glassdoor and social networks like Facebook, Maddock said. The company pays well above minimum wage in the markets it operates in (Maddock estimates at least a 50% premium) and provides a generous package of benefits for what Maddock calls the “frontline” teammates. That includes perks like educational scholarships for one child of employees who have been with the company longer than one year; healthcare plans for the employee and three beneficiaries in the Philippines; and 120 days of maternity leave.

And, as content moderation becomes more automated, TaskUs employees are spending less time in the cesspool of content that floods social networks every day.

“Increasingly the work that we’re doing is more nuanced. Does this advertisement have political intent? That type of work is far more engaging and could be seen to be a little bit less taxing,” Maddock said.

But he doesn’t deny that the bulk of the hard work his employees are tasked with is identifying and filtering the excremental trash that people would post online.

“I do think that the work is absolutely necessary. The alternative is that everybody has to look at this stuff. It has to be done in a way that’s thoughtful and puts the interests of the people who are on the frontlines at the forefront of that effort,” says Maddock. “There have been multiple people who have been involved in sex trafficking, human trafficking and pedophilia that have been arrested directly because of the work that TaskUs is doing. And the consequence of someone not doing that is a far, far worse world.”

Maddock also said that TaskUs now shields its employees from having to perform content moderation for an entire shift. “What we have tried to do universally is that there is a subject matter rotation so that you are not just sitting and doing that work all day.”

And the company’s executive knows how taxing the work can be because he said he does it himself. “I try to spend a day a quarter doing the work of our frontline teammates. I spend half my time in our offices,” Maddock said.

Now, with the new investment, TaskUs is looking to expand into additional markets in the UK, Europe, India, and Latin America, Maddock said.

“So far all we’ve been doing is hiring as fast as we possibly can,” said Maddock. “At some point in the future, there’s going to be a point when companies like ours will see the effects of automation,” he added, but that’s why the company is investing in the consulting business… so it can stay ahead of the trends in automation.

Even with the threat that automation could pose to the company’s business, TaskUs had no shortage of other suitors for the massive growth equity round, according to one person familiar with the company. Indeed, Goldman Sachs and SoftBank were among the other bidders for a piece of TaskUs, the source said.

Currently, the company has over 11,000 employees (including 2,000 in the U.S.) and is looking to expand.

“We chose to partner with Blackstone because they have a track record of building category-defining businesses. Our goal is to build TaskUs into the world’s number one provider of tech-enabled business services. This partnership will help us dramatically increase our investment in consulting, technology and innovation to support our customers’ efforts to streamline and refine their customer experience,” said Maddock in a statement.

The transaction is expected to close in the fourth quarter of 2018, subject to regulatory approvals and customary closing conditions.

BMW’s Alexa integration gets it right

BMW will start rolling out support for Amazon’s Alexa voice assistant to many of its drivers in a few days. The fact that BMW is doing this doesn’t come as a surprise, given that it has long talked about its plans to bring Alexa — and potentially other personal assistants like Cortana and the Google Assistant — to its cars. Ahead of its official launch in Germany, Austria, the U.S. and U.K. (with other countries following at a later date), I went to Munich to take a look at what using Alexa in a BMW is all about.

As Dieter May, BMW’s senior VP for digital products told me earlier this year, the company has long held that in-car digital assistants have to be more than just an “Echo Dot in a cup holder,” meaning that they have to be deeply integrated into the experience and the rest of the technology in the car. And that’s exactly what BMW has done here — and it has done it really well.

What maybe surprised me the most was that we’re not just talking about the voice interface here. BMW is working directly with the Alexa team at Amazon to also integrate visual responses from Alexa. Using the tablet-like display you find above the center console of most new BMWs, the service doesn’t just read out the answer but also shows additional facts or graphs when warranted. That means Alexa in a BMW is a lot more like using an Echo Show than a Dot (though you’re obviously not going to be able to watch any videos on it).

In the demo I saw, in a 2015 BMW X5 that was specifically rigged to run Alexa ahead of the launch, the display would activate when you ask for weather information, for example, or for queries that returned information from a Wikipedia post.

What’s cool here is that the BMW team styled these responses using the same design language that also governs the company’s other in-car products. So if you see the weather forecast from Alexa, that’ll look exactly like the weather forecast from BMW’s own Connected Drive system. The only difference is the “Alexa” name at the top-left of the screen.

All of this sounds easy, but I’m sure it took a good bit of negotiation with Amazon to build a system like this, especially because there’s an important second part to this integration that’s quite unique. The queries, which you start by pushing the usual “talk” button in the car (in newer models, the Alexa wake word feature will also work), are first sent to BMW’s servers before they go to Amazon. BMW wants to keep control over the data and ensure its users’ privacy, so it added this proxy in the middle. That means there’s a bit of an extra lag in getting responses from Amazon, but the team is working hard on reducing this, and for many of the queries we tried during my demo, it was already negligible.

As the team told me, the first thing it had to build was a switch that can route your queries to the right service. The car, after all, already has a built-in speech recognition service that lets you set directions in the navigation system, for example. Now, it has to recognize that the speaker said “Alexa” at the beginning of the query, then route it to the Alexa service. The team also stressed that we’re talking about a very deep integration here. “We’re not just streaming everything through your smartphone or using some plug-and-play solution,” a BMW spokesperson noted.

“You get what you’d expect from BMW, a deep integration, and to do that, we use the technology we already have in the car, especially the built-in SIM card.”
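To make that routing idea concrete, here is a minimal, purely hypothetical sketch of wake-word dispatching between an onboard assistant and a cloud proxy. None of these names or details come from BMW; it only illustrates the kind of switch the team describes.

```swift
import Foundation

// Illustrative only: utterances that begin with the wake word go to a cloud
// proxy (and from there to Amazon), everything else stays with the car's
// built-in assistant. All names here are made up.
enum VoiceTarget {
    case builtInAssistant   // e.g. navigation or climate commands
    case alexaViaProxy      // forwarded to the carmaker's servers, then Amazon
}

func route(utterance: String) -> (target: VoiceTarget, query: String) {
    let trimmed = utterance.trimmingCharacters(in: .whitespacesAndNewlines)
    // If the transcript is anchored by "Alexa", strip the wake word and hand
    // the remainder to the Alexa path.
    if let match = trimmed.range(of: "alexa", options: [.caseInsensitive, .anchored]) {
        let query = trimmed[match.upperBound...]
            .trimmingCharacters(in: CharacterSet(charactersIn: " ,"))
        return (.alexaViaProxy, query)
    }
    return (.builtInAssistant, trimmed)
}

// route(utterance: "Alexa, what's the weather in Munich?") -> (.alexaViaProxy, "what's the weather in Munich?")
// route(utterance: "Navigate to the office")               -> (.builtInAssistant, "Navigate to the office")
```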

One of the advantages of Alexa’s open ecosystem is its skills. Not every skill makes sense in the context of the car, and some could be outright distracting, so the team is curating a list of skills that you’ll be able to use in the car.

It’s no secret that BMW is also working with Microsoft (and many of its cloud services run on Azure). BMW argues that Alexa and Cortana have different strengths, though, with Cortana being about productivity and a connection to Office 365, for example. It’s easy to imagine a future where you could call up both Alexa and Cortana from your car — and that’s surely why BMW built its own system for routing voice commands and why it wants to have control over this process.

BMW tells me that it’ll look at how users will use the new service and tune it accordingly. Because a lot of the functionality runs in the cloud, updates are obviously easy and the team can rapidly release new features — just like any other software company.

Google Assistant’s news feeds are getting smarter

News is far and away the feature I use the most with Google Assistant. Every morning, I ask the Assistant “what’s in the news,” and it dutifully cycles through some pre-recorded news briefs from NPR, CNN and the like. It does the job, but it’s not much for specificity.

Google, however, is introducing tools to help developers target specific content based on queries. Per the example given in a new blog post, publishers can highlight a snippet of a story that will be read aloud when a user makes a request along the lines of “Hey Google, what’s the latest news on NASA?” Assistant will then read that portion aloud. The link to the full article is sent to the user’s mobile device and, once done, Assistant will ask if they want another.

It’s interesting to watch companies like Google and Amazon play around with these news reads. It seems no one has quite figured out the ideal length for audible news digests, but it appears to fall somewhere between a headline and full story. Or maybe it’s something more akin to bullet points, with the option to read on if the user wants more information.

Organizations like NPR and CNN do appear to have something of a head start, since a smart speaker briefing isn’t entirely dissimilar from getting your information from cable news or public radio. Short, distilled snippets certainly seem like the way to go. As more people use the service and the AIs become more advanced, it will be easier to tailor that information to specific users.

At the very least, this should provide a way to further customize those feeds — not to mention giving Google even more insight into what its users are searching for. The feature will only be available for U.S. English speakers at launch. 

The Google Assistant app will walk you through your day

Google’s Assistant app mostly functions as a surrogate for its line of connected Home devices. But what about all of that information it’s aggregating? The company will start putting that to use, providing a “visual snapshot” of the day to come.

The new feature, rolling out to the app on Android and iOS today, pulls in a bunch of relevant information from across Google services, including calendar, reminders, stocks, package deliveries, flights, restaurant/movie reservations and suggested Actions. It also provides travel times to and from appointments to offer up a better idea of when to leave.

More features will be rolling out soon, including notes from Google Keep, Any.do, Bring and Todoist, along with parking reminders, nearby activities and recommendations for music and podcasts. In other words, the Google Assistant app is angling to become your one-stop shop for — well, basically every single thing you do in a given day.

While Amazon’s Alexa play has been centered around commerce, it’s pretty clear that Google’s in it for the same reason as always: information. Using a voice-controlled smart speaker is yet another way to gather all of that data from a user, and now it’s being put to use in a single spot.

It’s an obvious play — and an important reminder of just how much information these companies have on us at a given time.

Digging deeper into smart speakers reveals two clear paths

In a truly fascinating exploration into two smart speakers – the Sonos One and the Amazon Echo – BoltVC’s Ben Einstein has found some interesting differences in the way a traditional speaker company and an infrastructure juggernaut look at their flagship devices.

The post is well worth a full read but the gist is this: Sonos, a very traditional speaker company, has produced a good speaker and modified its current hardware to support smart home features like Alexa and Google Assistant. The Sonos One, notes Einstein, is a speaker first and smart hardware second.

“Digging a bit deeper, we see traditional design and manufacturing processes for pretty much everything. As an example, the speaker grill is a flat sheet of steel that’s stamped, rolled into a rounded square, welded, seams ground smooth, and then powder coated black. While the part does look nice, there’s no innovation going on here,” he writes.

The Amazon Echo, on the other hand, looks like what would happen if an engineer was given an unlimited budget and told to build something that people could talk to. The design decisions are odd and intriguing and it is ultimately less a speaker than a home conversation machine. Plus it is very expensive to make.

Pulling off the sleek speaker grille, there’s a shocking secret here: this is an extruded plastic tube with a secondary rotational drilling operation. In my many years of tearing apart consumer electronics products, I’ve never seen a high-volume plastic part with this kind of process. After some quick math on the production timelines, my guess is there’s a multi-headed drill and a rotational axis to create all those holes. CNC drilling each hole individually would take an extremely long time. If anyone has more insight into how a part like this is made, I’d love to see it! Bottom line: this is another surprisingly expensive part.

Sonos, which has been making a form of smart speaker for 15 years, is a CE company with cachet. Amazon, on the other hand, sees its devices as a way into living rooms and a delivery system for sales and is fine with licensing its tech before making its own. Therefore to compare the two is a bit disingenuous. Einstein’s thesis that Sonos’ trajectory is troubled by the fact that it depends on linear and closed manufacturing techniques while Amazon spares no expense to make its products is true. But Sonos makes speakers that work together amazingly well. They’ve done this for a decade and a half. If you compare their products – and I have – with competing smart speakers and non-audiophile “dumb” speakers you will find their UI, UX, and sound quality surpass most comers.

Amazon makes things to communicate with Amazon. This is a big difference.

Where Einstein is correct, however, is in his belief that Sonos is at a definite disadvantage. Sonos chases smart technology while Amazon and Google (and Apple, if their HomePod is any indication) lead. That said, there is some value to having a fully-connected set of speakers with add-on smart features vs. having to build an entire ecosystem of speaker products that can take on every aspect of the home theatre.

On the flip side Amazon, Apple, and Google are chasing audio quality while Sonos leads. While we can say that in the future we’ll all be fine with tinny round speakers bleating out Spotify in various corners of our room, there is something to be said for a good set of woofers. Whether this nostalgic love of good sound survives this generation’s tendency to watch and listen to low resolution media is anyone’s bet, but that’s Amazon’s bet to lose.

Ultimately Sonos is a strong and fascinating company. An upstart that survived the great CE destruction wrought by Kickstarter and Amazon, it produces some of the best mid-range speakers I’ve used. Amazon makes a nice – almost alien – product, but given that it can be easily copied and stuffed into a hockey puck that probably costs less than the entire bill of materials for the Amazon Echo, it’s clear that Amazon’s goal isn’t to make speakers.

Whether the coming Sonos IPO will be successful depends partially on Amazon and Google playing ball with the speaker maker. The rest depends on the quality of product and the dedication of Sonos users. This goodwill isn’t as valuable as a signed contract with major infrastructure players, but Sonos’ goodwill is far more than Amazon and Google have with their popular but potentially intrusive product lines. Sonos lives in the home while Google and Amazon want to invade it. That is where Sonos wins.

Apple’s Shortcuts will flip the switch on Siri’s potential

At WWDC, Apple pitched Shortcuts as a way to “take advantage of the power of apps” and “expose quick actions to Siri.” These will be suggested by the OS, can be given unique voice commands, and will even be customizable with a dedicated Shortcuts app.

But since this new feature won’t let Siri interpret everything, many have been lamenting that Siri didn’t get much better — and is still lacking compared to Google Assistant or Amazon Echo.

But to ignore Shortcuts would be missing out on the bigger picture. Apple’s strengths have always been the device ecosystem and the apps that run on them.

With Shortcuts, both play a major role in how Siri will prove to be a truly useful assistant and not just a digital voice to talk to.

Your Apple devices just got better

For many, voice assistants are a nice-to-have, but not a need-to-have.

It’s undeniably convenient to get facts by speaking to the air, turning on the lights without lifting a finger, or triggering a timer or text message – but so far, studies have shown people don’t use much more than these on a regular basis.

People don’t often do more than that because the assistants aren’t really ready for complex tasks yet, and when your assistant is limited to tasks inside your home or commands spoken into your phone, the drawbacks prevent you from going deep.

If you prefer Alexa, you get more devices, better reliability, and a breadth of skills, but there’s not a great phone or tablet experience you can use alongside your Echo. If you prefer to have Google’s Assistant everywhere, you must be all in on the Android and Home ecosystem to get the full experience too.

Plus, with either option, there are privacy concerns baked into how both work on a fundamental level – over the web.

In Apple’s ecosystem, you have Siri on iPhone, iPad, Apple Watch, AirPods, HomePod, CarPlay, and any Mac. Add in Shortcuts on each of those devices (except the Mac, which still has Automator) and suddenly you have a plethora of places to execute all your commands entirely by voice.

Each accessory that Apple users own will get upgraded, giving Siri new ways to fulfill the 10 billion and counting requests people make each month (according to Craig Federighi’s statement on-stage at WWDC).

But even more important than all the places where you can use your assistant is how – with Shortcuts, Siri gets even better with each new app that people download. There’s the other key difference: the App Store.

Actions are the most important part of your apps

iOS has always had a vibrant community of developers who create powerful, top-notch applications that push the system to its limits and take advantage of the ever-increasing power these mobile devices have.

Shortcuts opens up those capabilities to Siri – every action you take in an app can be shared out with Siri, letting people interact right there inline or using only their voice, with the app running everything smoothly in the background.
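For a sense of what “sharing an action out with Siri” looks like on the developer side, here is a minimal sketch of the NSUserActivity donation path in iOS 12; the activity type, strings and phrase are made-up examples rather than anything Apple or the apps mentioned here actually ship.

```swift
import UIKit
import Intents

// A rough sketch of donating an in-app action so the system can suggest it
// as a Shortcut and let the user attach a custom voice phrase to it.
// The identifiers and strings below are hypothetical.
func donateReorderCoffeeAction(on viewController: UIViewController) {
    let activity = NSUserActivity(activityType: "com.example.coffee.reorder")
    activity.title = "Reorder my usual coffee"
    activity.isEligibleForSearch = true
    activity.isEligibleForPrediction = true             // allows Siri Suggestions / Shortcuts
    activity.suggestedInvocationPhrase = "Coffee time"   // proposed when the user records a phrase
    activity.persistentIdentifier = "reorder-usual-coffee"

    // Marking it as the current activity donates it to the system, which can
    // then surface it on the lock screen, in search and in the Shortcuts app.
    viewController.userActivity = activity
    activity.becomeCurrent()
}
```

A donation along these lines is what lets iOS learn which in-app actions save a user time and offer them back as suggestions or as custom “Hey Siri” phrases.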

Plus, the functional approach that Apple is taking with Siri creates new opportunities for developers to provide utility to people instead of requiring their attention. The suggestions feature of Shortcuts rewards “acceleration”, showing the apps that provide the most time savings and use for the user more often.

This opens the door to more specialized types of apps that don’t necessarily have to grow a huge audience and serve them ads – if you can make something that helps people, Shortcuts can help them use your app more than ever before (and without as much effort). Developers can make a great experience for when people visit the app, but also focus on actually doing something useful too.

This isn’t a virtual assistant that lives in the cloud, but a digital helper that can pair up with the apps uniquely taking advantage of Apple’s hardware and software capabilities to truly improve your use of the device.

In the most groan-inducing way possible, “there’s an app for that” is back and more important than ever. Not only are apps the centerpiece of the Siri experience, but it’s their capabilities that extend Siri’s – the better the apps you have, the better Siri can be.

Control is at your fingertips

Importantly, Siri gets all of this Shortcuts power while keeping the control in each person’s hands.

All of the information provided to the system is securely passed along by individual apps – if something doesn’t look right, you can just delete the corresponding app and the information is gone.

Siri will make recommendations based on activities deemed relevant by the apps themselves as well, so over-active suggestions shouldn’t be common (unless you’re way too active in some apps, in which case they added Screen Time for you too).

Each of the voice commands is custom per user as well, so people can ignore their apps’ suggestions and set up the phrases to their own liking. This means nothing is already “taken” because somebody signed up for the skill first (unless you’ve already used it yourself, of course).

Also, Shortcuts don’t require the web to work – the voice triggers might not work, but the suggestions and Shortcuts app give you a place to use your assistant voicelessly. And importantly, Shortcuts can use the full power of the web when they need to.

This user-centric approach paired with the technical aspects of how Shortcuts works gives Apple’s assistant a leg up for any consumers who find privacy important. Essentially, Apple devices are only listening for “Hey Siri”, then the available Siri domains + your own custom trigger phrases.

Without exposing your information to the world or teaching a robot to understand everything, Apple gave Siri a slew of capabilities that in many ways can’t be matched. With Shortcuts, it’s the apps, the operating system, and the variety of hardware that will make Siri uniquely qualified come this fall.

Plus, the Shortcuts app will provide a deeper experience for those who want to chain together actions and customize their own shortcuts.

There’s lots more under the hood to experiment with, but this will allow anyone to tweak & prod their Siri commands until they have a small army of custom assistant tasks at the ready.

Hey Siri, let’s get started

Siri doesn’t know all, can’t perform any task you bestow upon it, and won’t make somewhat uncanny phone calls on your behalf.

But instead of spending time conversing with a somewhat faked “artificial intelligence”, Shortcuts will help people use Siri as an actual digital assistant – a computer to help them get things done better than they might’ve otherwise.

With Siri’s new skills extending to each of your Apple products (except for Apple TV and the Mac, but maybe one day?), every new device you get and every new app you download can reveal another way to take advantage of what this technology can offer.

This broadening of Siri may take some time to get used to – it will be about finding the right place for it in your life.

As you go about your apps, you’ll start seeing and using suggestions. You’ll set up a few voice commands, then you’ll do something like kick off a truly useful shortcut from your Apple Watch without your phone connected and you’ll realize the potential.

This is a real digital assistant, your apps know how to work with it, and it’s already on many of your Apple devices. Now, it’s time to actually make use of it.

The Sonos Beam is the soundbar evolved

Sonos has always gone its own way. The speaker manufacturer dedicated itself to network-connected speakers before there were home networks and they sold a tablet-like remote control before there were tablets. Their surround sound systems install quickly and run seamlessly. You can buy a few speakers, tap a few buttons and have 5.1 sound in less time than it takes to pull a traditional home audio system out of its shipping box.

This latest model is an addition to the Sonos line and is sold alongside the Playbase — a lumpen soundbar designed to sit directly underneath TVs not attached to the wall — and the Playbar, a traditionally styled soundbar that preceded the Beam. Both products had all of the Sonos highlights — great sound, amazing interfaces and easy setup — but the Base had too much surface area for more elegant installations and the Bar was too long while still sporting an aesthetic that harkened back to 2008 Crutchfield catalogs.

The $399 Beam is Sonos’ answer to that, and it is more than just a pretty box. The speaker includes Alexa — and promises Google Assistant support — and it improves your TV sound immensely. Designed as an add-on to your current TV, it can stand alone or connect with the Sonos subwoofer and a few satellite surround speakers for a true surround sound experience. It truly shines alone, however, thanks to its small size and more than acceptable audio range.

To use the Beam you bring up an iOS or Android app to display your Spotify, Apple Music, Amazon and Pandora accounts (this is a small sampling; Sonos supports more). You select a song or playlist and start listening. Then, when you want to watch TV, the speaker automatically flips to TV mode — including speech enhancement features that actually work — when the TV is turned on. An included tuning system turns your phone into a scanner that improves the room audio automatically.

The range is limited by the Beam’s size and shape and there is very little natural bass coming out of this thing. However, in terms of range, the Beam is just fine. It can play an action movie with a bit of thump and then go on to play some light jazz or pop. I’ve had some surprisingly revelatory sessions with the Beam when listening to classic rock and more modern fare and it’s very usable as a home audio center.

The Beam is two feet long and three inches tall. It comes in black or white and is very unobtrusive in any home theater setup. Interestingly, the product supports HDMI-ARC aka HDMI Audio Return Channel. This standard, introduced in TVs made in the past five years, allows the TV to automatically output audio and manage volume controls via a single HDMI cable. What this means, however, is you’re going to have a bad time if you don’t have HDMI-ARC.

Sonos includes an adapter that can also accept optical audio output, but setup requires you to turn off your TV speakers and route all the sound to the optical out. This is a bit of a mess, and if you don’t have either of those outputs — HDMI-ARC or optical — then you’re probably in need of a new TV. That said, HDMI-ARC is a bit jarring for first timers, but Sonos is sure that enough TVs support it that they can use it instead of optical-only.

The Beam doesn’t compete directly with other “smart” speakers like the HomePod. It is very specifically a consumer electronics device, even though it supports AirPlay 2 and Alexa. Sonos makes speakers, and good ones at that, and that goal has always been front and center. While other speakers may offer a more fully featured sound in a much smaller package, the Beam offers both great TV audio and great music playback for less than any other higher end soundbar. Whole room audio does get expensive — about $1,200 for a Sub and two satellites — but you can simply add on pieces as you go. One thing, however, is clear: Sonos has always been the best wireless speaker for the money and the Beam is another win for the scrappy and innovative speaker company.



Duplex shows Google failing at ethical and creative AI design

Google CEO Sundar Pichai milked the woos from a clappy, home-turf developer crowd at its I/O conference in Mountain View this week with a demo of an in-the-works voice assistant feature that will enable the AI to make telephone calls on behalf of its human owner.

The so-called ‘Duplex’ feature of the Google Assistant was shown calling a hair salon to book a woman’s hair cut, and ringing a restaurant to try to book a table — only to be told it did not accept bookings for less than five people.

At which point the AI changed tack and asked about wait times, earning its owner and controller, Google, the reassuring intel that there wouldn’t be a long wait at the selected time. Job done.

The voice system deployed human-sounding vocal cues, such as ‘ums’ and ‘ahs’ — to make the “conversational experience more comfortable“, as Google couches it in a blog about its intentions for the tech.

The voices Google used for the AI in the demos were not synthesized robotic tones but distinctly human-sounding, in both the female and male flavors it showcased.

Indeed, the AI pantomime was apparently realistic enough to convince some of the genuine humans on the other end of the line that they were speaking to people.

At one point the bot’s ‘mm-hmm’ response even drew appreciative laughs from a techie audience that clearly felt in on the ‘joke’.

But while the home crowd cheered enthusiastically at how capable Google had seemingly made its prototype robot caller — with Pichai going on to sketch a grand vision of the AI saving people and businesses time — the episode is worryingly suggestive of a company that views ethics as an after-the-fact consideration.

One it does not allow to trouble the trajectory of its engineering ingenuity.

A consideration which only seems to get a look in years into the AI dev process, at the cusp of a real-world rollout — which Pichai said would be coming shortly.

Deception by design

“Google’s experiments do appear to have been designed to deceive,” agreed Dr Thomas King, a researcher at the Oxford Internet Institute’s Digital Ethics Lab, discussing the Duplex demo. “Because their main hypothesis was ‘can you distinguish this from a real person?’. In this case it’s unclear why their hypothesis was about deception and not the user experience… You don’t necessarily need to deceive someone to give them a better user experience by sounding naturally. And if they had instead tested the hypothesis ‘is this technology better than preceding versions or just as good as a human caller’ they would not have had to deceive people in the experiment.

“As for whether the technology itself is deceptive, I can’t really say what their intention is — but… even if they don’t intend it to deceive you can say they’ve been negligent in not making sure it doesn’t deceive… So I can’t say it’s definitely deceptive, but there should be some kind of mechanism there to let people know what it is they are speaking to.”

“I’m at a university and if you’re going to do something which involves deception you have to really demonstrate there’s a scientific value in doing this,” he added, agreeing that, as a general principle, humans should always be able to know that an AI they’re interacting with is not a person.

Because who — or what — you’re interacting with “shapes how we interact”, as he put it. “And if you start blurring the lines… then this can sow mistrust into all kinds of interactions — where we would become more suspicious as well as needlessly replacing people with meaningless agents.”

No such ethical conversations troubled the I/O stage, however.

Yet Pichai said Google had been working on the Duplex technology for “many years”, and went so far as to claim the AI can “understand the nuances of conversation” — albeit still evidently in very narrow scenarios, such as booking an appointment or reserving a table or asking a business for its opening hours on a specific date.

“It brings together all our investments over the years in natural language understanding, deep learning, text to speech,” he said.

What was yawningly absent from that list, and seemingly also lacking from the design of the tricksy Duplex experiment, was any sense that Google has a deep and nuanced appreciation of the ethical concerns at play around AI technologies that are powerful and capable enough of passing off as human — thereby playing lots of real people in the process.

The Duplex demos were pre-recorded, rather than live phone calls, but Pichai described the calls as “real” — suggesting Google representatives had not in fact called the businesses ahead of time to warn them its robots might be calling in.

“We have many of these examples where the calls quite don’t go as expected but our assistant understands the context, the nuance… and handled the interaction gracefully,” he added after airing the restaurant unable-to-book example.

So Google appears to have trained Duplex to be robustly deceptive — i.e. to be able to reroute around derailed conversational expectations and still pass itself off as human — a feature Pichai lauded as ‘graceful’.

And even if the AI’s performance was more patchy in the wild than Google’s demo suggested, it’s clearly the CEO’s goal for the tech.

While trickster AIs might bring to mind the iconic Turing Test — where chatbot developers compete to develop conversational software capable of convincing human judges it’s not artificial — it should not.

Because the application of the Duplex technology does not sit within the context of a high profile and well understood competition. Nor was there a set of rules that everyone was shown and agreed to beforehand (at least so far as we know — if there were any rules Google wasn’t publicizing them). Rather it seems to have unleashed the AI onto unsuspecting business staff who were just going about their day jobs. Can you see the ethical disconnect?

“The Turing Test has come to be a bellwether of testing whether your AI software is good or not, based on whether you can tell it apart from a human being,” is King’s suggestion on why Google might have chosen a similar trick as an experimental showcase for Duplex.

“It’s very easy to say look how great our software is, people cannot tell it apart from a real human being — and perhaps that’s a much stronger selling point than if you say 90% of users preferred this software to the previous software,” he posits. “Facebook does A/B testing but that’s probably less exciting — it’s not going to wow anyone to say well consumers prefer this slightly deeper shade of blue to a lighter shade of blue.”

Had Duplex been deployed within Turing Test conditions, King also makes the point that it’s rather less likely it would have taken in so many people — because, well, those slightly jarringly timed ums and ahs would soon have been spotted, uncanny valley style.

Ergo, Google’s PR flavored ‘AI test’ for Duplex is also rigged in its favor — to further supercharge a one-way promotional marketing message around artificial intelligence. So, in other words, say hello to yet another layer of fakery.

How could Google introduce Duplex in a way that would be ethical? King reckons it would need to state up front that it’s a robot and/or use an appropriately synthetic voice so it’s immediately clear to anyone picking up the phone the caller is not human.

“If you were to use a robotic voice there would also be less of a risk that all of your voices that you’re synthesizing only represent a small minority of the population speaking in ‘BBC English’ and so, perhaps in a sense, using a robotic voice would even be less biased as well,” he adds.

And of course, not being up front that Duplex is artificial embeds all sorts of other knock-on risks, as King explained.

“If it’s not obvious that it’s a robot voice there’s a risk that people come to expect that most of these phone calls are not genuine. Now experiments have shown that many people do interact with AI software that is conversational just as they would another person but at the same time there is also evidence showing that some people do the exact opposite — and they become a lot ruder. Sometimes even abusive towards conversational software. So if you’re constantly interacting with these bots you’re not going to be as polite, maybe, as you normally would, and that could potentially have effects for when you get a genuine caller that you do not know is real or not. Or even if you know they’re real perhaps the way you interact with people has changed a bit.”

Safe to say, as autonomous systems get more powerful and capable of performing tasks that we would normally expect a human to be doing, the ethical considerations around those systems scale as exponentially large as the potential applications. We’re really just getting started.

But if the world’s biggest and most powerful AI developers believe it’s totally fine to put ethics on the backburner then risks are going to spiral up and out and things could go very badly indeed.

We’ve seen, for example, how microtargeted advertising platforms have been hijacked at scale by would-be election fiddlers. But the overarching risk where AI and automation technologies are concerned is that humans become second class citizens vs the tools that are being claimed to be here to help us.

Pichai said the first — and still, as he put it, experimental — use of Duplex will be to supplement Google’s search services by filling in information about businesses’ opening times during periods when hours might inconveniently vary, such as public holidays.

Though for a company on a general mission to ‘organize the world’s information and make it universally accessible and useful’ what’s to stop Google from — down the line — deploying a vast phalanx of phone bots to ring and ask humans (and their associated businesses and institutions) for all sorts of expertise which the company can then liberally extract and inject into its multitude of connected services — monetizing the freebie human-augmented intel via our extra-engaged attention and the ads it serves alongside?

During the course of writing this article we reached out to Google’s press line several times to ask to discuss the ethics of Duplex with a relevant company spokesperson. But ironically — or perhaps fittingly enough — our hand-typed emails received only automated responses.

Pichai did emphasize that the technology is still in development, and said Google wants to “work hard to get this right, get the user experience and the expectation right for both businesses and users”.

But that’s still ethics as a tacked on afterthought — not where it should be: Locked in place as the keystone of AI system design.

And this at a time when platform-fueled AI problems, such as algorithmically fenced fake news, have snowballed into huge and ugly global scandals with very far reaching societal implications indeed — be it election interference or ethnic violence.

You really have to wonder what it would take to shake the ‘first break it, later fix it’ ethos of some of the tech industry’s major players…

Ethical guidance relating to what Google is doing here with the Duplex AI is actually pretty clear if you bother to read it — to the point where even politicians are agreed on foundational basics, such as that AI needs to operate on “principles of intelligibility and fairness”, to borrow phrasing from just one of several political reports that have been published on the topic in recent years.

In short, deception is not cool. Not in humans. And absolutely not in the AIs that are supposed to be helping us.

Transparency as AI standard

The IEEE technical professional association put out a first draft of a framework to guide ethically designed AI systems at the back end of 2016 — which included general principles such as the need to ensure AI respects human rights, operates transparently and that automated decisions are accountable. 

In the same year the UK’s BSI standards body developed a specific standard — BS 8611, a guide to the ethical design and application of robots and robotic systems — which explicitly names identity deception (intentional or unintentional) as a societal risk, and warns that such an approach will eventually erode trust in the technology.

“Avoid deception due to the behaviour and/or appearance of the robot and ensure transparency of robotic nature,” the BSI’s standard advises.

It also warns against anthropomorphization due to the associated risk of misinterpretation — so Duplex’s ums and ahs don’t just suck because they’re fake but because they are misleading and so deceptive, and also therefore carry the knock-on risk of undermining people’s trust in your service but also more widely still, in other people generally.

“Avoid unnecessary anthropomorphization,” is the standard’s general guidance, with the further steer that the technique be reserved “only for well-defined, limited and socially-accepted purposes”. (Tricking workers into remotely conversing with robots probably wasn’t what they were thinking of.)

The standard also urges “clarification of intent to simulate human or not, or intended or expected behaviour”. So, yet again, don’t try and pass your bot off as human; you need to make it really clear it’s a robot.

For Duplex the transparency that Pichai said Google now intends to think about, at this late stage in the AI development process, would have been trivially easy to achieve: It could just have programmed the assistant to say up front: ‘Hi, I’m a robot calling on behalf of Google — are you happy to talk to me?’

Instead, Google chose to prioritize a demo ‘wow’ factor — of showing Duplex pulling the wool over busy and trusting humans’ eyes — and by doing so showed itself tone-deaf on the topic of ethical AI design.

Not a good look for Google. Nor indeed a good outlook for the rest of us who are subject to the algorithmic whims of tech giants as they flick the control switches on their society-sized platforms.

“As the development of AI systems grows and more research is carried out, it is important that ethical hazards associated with their use are highlighted and considered as part of the design,” Dan Palmer, head of manufacturing at BSI, told us. “BS 8611 was developed… alongside scientists, academics, ethicists, philosophers and users. It explains that any autonomous system or robot should be accountable, truthful and unprejudiced.

“The standard raises a number of potential ethical hazards that are relevant to the Google Duplex; one of these is the risk of AI machines becoming sexist or racist due to a biased data feed. This surfaced prominently when Twitter users influenced Microsoft’s AI chatbot, Tay, to spew out offensive messages.

“Another contentious subject is whether forming an emotional bond with a robot is desirable, especially if the voice assistant interacts with the elderly or children. Other guidelines on new hazards that should be considered include: robot deception, robot addiction and the potential for a learning system to exceed its remit.

“Ultimately, it must always be transparent who is responsible for the behavior of any voice assistant or robot, even if it behaves autonomously.”

Yet despite all the thoughtful ethical guidance and research that’s already been produced, and is out there for the reading, here we are again being shown the same tired tech industry playbook applauding engineering capabilities in a shiny bubble, stripped of human context and societal consideration, and dangled in front of an uncritical audience to see how loud they’ll cheer.

Leaving important questions — over the ethics of Google’s AI experiments and also, more broadly, over the mainstream vision of AI assistance it’s so keenly trying to sell us — to hang and hang.

Questions like how much genuine utility there might be for the sorts of AI applications it’s telling us we’ll all want to use, even as it prepares to push these apps on us, because it can — as a consequence of its great platform power and reach.

A core ‘uncanny valley-ish’ paradox may explain Google’s choice of deception for its Duplex demo: Humans don’t necessarily like speaking to machines. Indeed, oftentimes they prefer to speak to other humans. It’s just more meaningful to have your existence registered by a fellow pulse-carrier. So if an AI reveals itself to be a robot the human who picked up the phone might well just put it straight back down again.

“Going back to the deception, it’s fine if it’s replacing meaningless interactions but not if it’s intending to replace meaningful interactions,” King told us. “So if it’s clear that it’s synthetic and you can’t necessarily use it in a context where people really want a human to do that job. I think that’s the right approach to take.

“It matters not just that your hairdresser appears to be listening to you but that they are actually listening to you and that they are mirroring some of your emotions. And to replace that kind of work with something synthetic — I don’t think it makes much sense.

“But at the same time if you reveal it’s synthetic it’s not likely to replace that kind of work.”

So really Google’s Duplex sleight of hand may be trying to conceal the fact AIs won’t be able to replace as many human tasks as technologists like to think they will. Not unless lots of currently meaningful interactions are rendered meaningless. Which would be a massive human cost that societies would have to — at very least — debate long and hard.

Trying to avoid such a debate from taking place by pretending there’s nothing ethical to see here is, hopefully, not Google’s designed intention.

King also makes the point that the Duplex system is (at least for now) computationally costly. “Which means that Google cannot and should not just release this as software that anyone can run on their home computers.

“Which means they can also control how it is used, and in what contexts — and they can also guarantee it will only be used with certain safeguards built in. So I think the experiments are maybe not the best of signs but the real test will be how they release it — and will they build the safeguards that people demand into the software,” he adds.

As well as a lack of visible safeguards in the Duplex demo, there’s also — I would argue — a curious lack of imagination on display.

Had Google been bold enough to reveal its robot interlocutor it might have thought more about how it could have designed that experience to be both clearly not human but also fun or even funny. Think of how much life can be injected into animated cartoon characters, for example, which are very clearly not human yet are hugely popular because people find them entertaining and feel they come alive in their own way.

It really makes you wonder whether, at some foundational level, Google lacks trust in both what AI technology can do and in its own creative abilities to breathe new life into these emergent synthetic experiences.

The NEEO universal remote is a modern Logitech Harmony alternative

The advanced universal remote market is not very crowded. In fact, for a while now, Logitech’s Harmony line has been pretty much the only game in town. Newcomer NEEO wants to upset that monopoly with its new NEEO Remote and NEEO Brain combo ($369), a system that can connect to just about any AV setup, along with a smorgasbord of connected smart devices including Nest, Philips Hue, Sonos and more.

NEEO’s two-part system includes the Brain, which, true to its name, handles all of the heavy lifting. This is a puck-shaped device with 360-degree IR blasters dotting its outer perimeter, plus an IR extender output (an extender is included in the box) for reaching devices held within a closed AV cabinet, for instance. This central hub also connects to your Wi-Fi network, and setup requires plugging it into your router via Ethernet to get everything squared away, similar to how you initially set up Sonos speakers, if you’re familiar with that process.

Most of the setup work you need to do to get NEEO working happens on your phone, and that’s where it becomes apparent that this smart remote was designed for a modern context. Logitech’s Harmony software has come a long way, and now you can do everything you need to do from the iOS and Android app, but it’s still somewhat apparent that its legacy is as something you initially set up using a desktop and somewhat awkward web-based software. The NEEO feels at home on mobile, and it makes the setup and configuration process much better overall.

The other core component of the NEEO system is the NEEO Remote. This is a fantastic piece of industrial design, first of all. It’s a sleek rectangle crafted from aerospace-grade aluminum that oozes charm, in a way that nothing in the current Logitech Harmony lineup can come close to matching. The minimalist design still doesn’t suffer from the ‘which way is up?’ problem that the Apple Remote faces, because of subtle design cues including bottom weighting and the presence of ample physical buttons.

A NEEO Remote isn’t necessary for the system to work – you can just use the Brain along with the companion app for iPhone or Android – but the remote is a joy to hold and use, thanks to its unique design and a super high-density display that’s extremely responsive and pleasant to the touch. NEEO took a lot of time to get this touchscreen experience right, and it pays off, delivering a clear and simple control interface that shifts to suit the needs of whatever activity you’re running at the time.

The NEEO Remote also has an “SOS” feature so that you can locate it if you happen to misplace it, and it can even be configured to recognize different hands if you want to set profiles for distinct members of the household, or set parental control profiles limiting access to certain content or devices. This kind of thing is where NEEO’s feature set exceeds the competition, and shows a particular attention to modern device use cases.

One NEEO Remote can also control multiple NEEO Brains, which addresses another limitation of the competition. That means you can set up NEEO Brains in each room where you have devices to control, and carry your remote from place to place instead of having to own multiple remotes. The NEEO Brain is still $200 on its own, however, so it’s definitely still a barrier to entry.

NEEO otherwise does pretty much everything you’d expect a smart remote to do in 2018: You can set recipes on the device itself, including with triggers like time-based alarms or motion detection (without using IFTTT). You can connect it to Alexa, though that functionality is limited at the moment, with more updates promised in the future to make this better.

The bottom line is that NEEO offers a competent, intelligent alternative to the big dog on the block, Logitech’s Harmony system. Logitech’s offering is still more robust and mature in terms of delivering Alexa and Google Assistant compatibility, as well as rock-solid performance, but NEEO has some clever ideas and unique takes that will serve more patient and tech-forward users better over time.

Google makes it easier to create custom Assistant commands for devices

Not really the showiest of SXSW announcements, but Google’s got a nice little update for developers looking to differentiate Assistant integration on a product level. Custom Device Actions are a way to add specific functions to products. In a blog post, the company gives the specific example of activating a specific color cycle on an Assistant-enabled washing machine.