Keeping artificial intelligence accountable to humans

As a teenager in Nigeria, I tried to build an artificial intelligence system. I was inspired by the same dream that motivated the pioneers in the field: That we could create an intelligence of pure logic and objectivity that would free humanity from human error and human foibles.

I was working with weak computer systems and intermittent electricity, and needless to say my AI project failed. Eighteen years later — as an engineer researching artificial intelligence, privacy and machine-learning algorithms — I’m seeing that so far, the premise that AI can free us from subjectivity or bias is also disappointing. We are creating intelligence in our own image. And that’s not a compliment.

Researchers have known for a while that purportedly neutral algorithms can mirror or even accentuate racial, gender and other biases lurking in the data they are fed. Internet searches on names more often identified as belonging to black people were found to prompt search engines to generate ads for bail bondsmen. Job-search algorithms were more likely to suggest higher-paying jobs to male searchers than to female ones. Algorithms used in criminal justice also displayed bias.

Five years after those findings, expunging algorithmic bias is turning out to be a tough problem. It takes careful work to comb through millions of sub-decisions to figure out why the algorithm reached the conclusion it did. And even when that is possible, it is not always clear which sub-decisions are the culprits.

Yet applications of these powerful technologies are advancing faster than the flaws can be addressed.

Recent research underscores this machine bias, showing that commercial facial-recognition systems excel at identifying light-skinned males, with an error rate of less than 1 percent. But if you’re a dark-skinned female, the chance you’ll be misidentified rises to almost 35 percent.

AI systems are often only as intelligent — and as fair — as the data used to train them. They use the patterns in the data they have been fed and apply them consistently to make future decisions. Consider an AI tasked with sorting the best nurses for a hospital to hire. If the AI has been fed historical data — profiles of excellent nurses who have mostly been female — it will tend to judge female candidates to be better fits. Algorithms need to be carefully designed to account for historical biases.
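To make this concrete, here is a toy sketch in Python, using entirely synthetic data and made-up numbers, of how that failure mode works: when historical labels correlate with a protected attribute, a model learns the attribute as a signal, and two candidates with identical skill get scored differently.

```python
# Toy sketch: a model trained on biased historical labels reproduces the bias.
# Everything here is synthetic; 'is_female' stands in for any protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
skill = rng.normal(size=n)              # genuinely job-relevant signal
is_female = rng.integers(0, 2, size=n)  # protected attribute (0 or 1)

# Historical labels: past "excellent" ratings skewed toward one gender over
# and above skill -- encoding the workforce's historical make-up.
excellent = (skill + 1.2 * is_female + rng.normal(scale=0.7, size=n)) > 1.0

X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, excellent)

# Two candidates with identical skill, differing only in the protected attribute:
candidates = np.array([[0.5, 1], [0.5, 0]])
print(model.predict_proba(candidates)[:, 1])  # the historically favored group scores higher
```

This is no one's real hiring system; the point is simply that a model faithfully reproduces whatever correlations its training labels contain, which is why those labels have to be audited before they are used.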

Occasionally, AI systems get food poisoning. The most famous case was Watson, the AI that defeated human champions on the television game show Jeopardy in 2011. Watson’s masters at IBM needed to teach it language, including American slang, so they fed it the contents of the online Urban Dictionary. But after ingesting that colorful linguistic meal, Watson developed a swearing habit. It began to punctuate its responses with four-letter words.

We have to be careful what we feed our algorithms. Belatedly, companies now understand that they can’t train facial-recognition technology by mainly using photos of white men. But better training data alone won’t solve the underlying problem of making algorithms achieve fairness.

Algorithms can already tell you what you might want to read, who you might want to date and where you might find work. When they are able to advise on who gets hired, who receives a loan or the length of a prison sentence, AI will have to be made more transparent — and more accountable and respectful of society’s values and norms.

Accountability begins with human oversight when AI is making sensitive decisions. In an unusual move, Microsoft president Brad Smith recently called for the U.S. government to consider requiring human oversight of facial-recognition technologies.

The next step is to disclose when humans are subject to decisions made by AI. Top-down government regulation may not be a feasible or desirable fix for algorithmic bias. But processes can be created that would allow people to appeal machine-made decisions — by appealing to humans. The EU’s new General Data Protection Regulation establishes the right for individuals to know and challenge automated decisions.

Today people who have been misidentified — whether in an airport or an employment database — have no recourse. They might have been knowingly photographed for a driver’s license, or covertly filmed by a surveillance camera (which has a higher error rate). They cannot know where their image is stored, whether it has been sold or who can access it. They have no way of knowing whether they have been harmed by erroneous data or unfair decisions.

Minorities are already disadvantaged by such immature technologies, and the burden they bear for the improved security of society at large is both inequitable and uncompensated. Engineers alone will not be able to address this. An AI system is like a very smart child just beginning to understand the complexities of discrimination.

To realize the dream I had as a teenager, of an AI that can free humans from bias instead of reinforcing bias, will require a range of experts and regulators to think more deeply not only about what AI can do, but what it should do — and then teach it how. 

AI spots legal problems with tech T&Cs in GDPR research project

Technology is the proverbial double-edged sword. And an experimental European research project is ensuring this axiom cuts very close to the industry’s bone indeed by applying machine learning technology to critically sift big tech’s privacy policies — to see whether AI can automatically identify violations of data protection law.

The still-in-training privacy policy and contract parsing tool — called ‘Claudette’ (aka automated clause detector) — is being developed by researchers at the European University Institute in Florence.

They’ve also now got support from European consumer organization BEUC — for a ‘Claudette meets GDPR’ project — which specifically applies the tool to evaluate compliance with the EU’s General Data Protection Regulation.

Early results from this project have been released today, with BEUC saying the AI was able to automatically flag a range of problems with the language being used in tech T&Cs.

The researchers set Claudette to work analyzing the privacy policies of 14 companies in all — namely: Google, Facebook (and Instagram), Amazon, Apple, Microsoft, WhatsApp, Twitter, Uber, Airbnb, Booking, Skyscanner, Netflix, Steam and Epic Games — saying this group was selected to cover a range of online services and sectors.

And also because they are among the biggest online players and — I quote — “should be setting a good example for the market to follow”. Ahem, should.

The AI analysis of the policies was carried out in June, after the update to the EU’s data protection rules had come into force. The regulation tightens the rules on obtaining consent for processing citizens’ personal data — for example, by increasing transparency requirements — basically requiring that privacy policies be written in clear and intelligible language, explaining exactly how the data will be used, so that people can make a genuine, informed choice to consent (or not).

In theory, all 15 parsed privacy policies should have been compliant with GDPR by June, as it came into force on May 25. However some tech giants are already facing legal challenges to their interpretation of ‘consent’. And it’s fair to say the law has not vanquished the tech industry’s fuzzy language and logic overnight. Where user privacy is concerned, old, ugly habits die hard, clearly.

But that’s where BEUC is hoping AI technology can help.

It says that out of a combined 3,659 sentences (80,398 words), Claudette marked 401 sentences (11.0%) as containing unclear language, and 1,240 (33.9%) as containing “potentially problematic” clauses or clauses providing “insufficient” information.

BEUC says identified problems include:

  • Not providing all the information which is required under the GDPR’s transparency obligations. “For example companies do not always inform users properly regarding the third parties with whom they share or get data from”
  • Processing of personal data not happening according to GDPR requirements. “For instance, a clause stating that the user agrees to the company’s privacy policy by simply using its website”
  • Policies are formulated using vague and unclear language (i.e. using language qualifiers that really bring the fuzz — such as “may”, “might”, “some”, “often”, and “possible”) — “which makes it very hard for consumers to understand the actual content of the policy and how their data is used in practice”. A minimal sketch of how that last kind of check can be mechanized follows below.
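To be clear, the sketch below is not the researchers’ actual system (Claudette uses machine learning models trained on manually annotated policies); it’s just a naive, rule-based pass in Python over the vague qualifiers BEUC cites, to show what sentence-level flagging looks like mechanically:

```python
# Minimal sketch of sentence-level vagueness flagging. This is NOT Claudette,
# which is ML-based; it only mimics the simplest part of the task using the
# qualifier words BEUC's report calls out.
import re

VAGUE_QUALIFIERS = {"may", "might", "some", "often", "possible"}

def flag_vague_sentences(policy_text: str) -> list[str]:
    # Naive sentence split; a real system would use a proper tokenizer.
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    flagged = []
    for sentence in sentences:
        words = {w.lower().strip(",.;:()\"'") for w in sentence.split()}
        if words & VAGUE_QUALIFIERS:
            flagged.append(sentence)
    return flagged

sample = ("We may share some of your data with partners. "
          "You can delete your account at any time.")
print(flag_vague_sentences(sample))  # flags only the first sentence
```

A real classifier also has to catch vagueness that doesn’t hinge on a fixed word list, which is exactly why the researchers say more policies need manual analysis before machines can do this alone.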

The bolstering of the EU’s privacy rules, with GDPR tightening the consent screw and supersizing penalties for violations, was exactly intended to prevent this kind of stuff. So it’s pretty depressing — though hardly surprising — to see the same, ugly T&C tricks continuing to be used to try to sneak consent by keeping users in the dark.

We reached out to two of the largest tech giants whose policies Claudette parsed — Google and Facebook — to ask if they want to comment on the project or its findings.

A Google spokesperson said: “We have updated our Privacy Policy in line with the requirements of the GDPR, providing more detail on our practices and describing the information that we collect and use, and the controls that users have, in clear and plain language. We’ve also added new graphics and video explanations, structured the Policy so that users can explore it more easily, and embedded controls to allow users to access relevant privacy settings directly.”

At the time of writing Facebook had not responded to our request for comment.

Commenting in a statement, Monique Goyens, BEUC’s director general, said: “A little over a month after the GDPR became applicable, many privacy policies may not meet the standard of the law. This is very concerning. It is key that enforcement authorities take a close look at this.”

The group says it will be sharing the research with EU data protection authorities, including the European Data Protection Board. And is not itself ruling out bringing legal actions against law benders.

But it’s also hopeful that automation will — over the longer term — help civil society keep big tech in legal check.

Although, where this project is concerned, it also notes that the training dataset was small — conceding that Claudette’s results were not 100% accurate — and says more privacy policies would need to be manually analyzed before policy analysis can be fully conducted by machines alone.

So file this one under ‘promising research’.

“This innovative research demonstrates that just as Artificial Intelligence and automated decision-making will be the future for companies from all kinds of sectors, AI can also be used to keep companies in check and ensure people’s rights are respected,” adds Goyens. “We are confident AI will be an asset for consumer groups to monitor the market and ensure infringements do not go unnoticed.

“We expect companies to respect consumers’ privacy and the new data protection rights. In the future, Artificial Intelligence will help identify infringements quickly and on a massive scale, making it easier to start legal actions as a result.”

For more on the AI-fueled future of legal tech, check out our recent interview with Mireille Hildebrandt.

To truly protect citizens, lawmakers need to restructure their regulatory oversight of big tech

If members of the European Parliament thought they could bring Mark Zuckerberg to heel with his recent appearance, they underestimated the enormous gulf between 21st century companies and their last-century regulators.

Zuckerberg himself reiterated that regulation is necessary, provided it is the “right regulation.”

But anyone who thinks that our existing regulatory tools can rein in our digital behemoths is engaging in magical thinking. Getting to “right regulation” will require us to think very differently.

The challenge goes far beyond Facebook and other social media: the use and abuse of data is going to be the defining feature of just about every company on the planet as we enter the age of machine learning and autonomous systems.

So far, Europe has taken a much more aggressive regulatory approach than anything the US was contemplating before Zuckerberg’s testimony, or has contemplated since.

The EU’s General Data Protection Regulation (GDPR) is now in force, extending data privacy rights to all European citizens regardless of whether their data is processed by companies within the EU or beyond.

But I’m not holding my breath that the GDPR will get us very far on the massive regulatory challenge we face. It is just more of the same when it comes to regulation in the modern economy: a lot of ambiguous costly-to-interpret words and procedures on paper that are outmatched by rapidly evolving digital global technologies.

Crucially, the GDPR still relies heavily on the outmoded technology of user choice and consent, the main result of which is that almost everyone in Europe (and beyond) has been inundated with emails asking them to reconfirm permission to keep their data. But this is an illusion of choice, just as it is when we are ostensibly given the option to decide whether to agree to terms set by large corporations in standardized take-it-or-leave-it click-to-agree documents.

There’s also the problem of actually tracking whether companies are complying. It is likely that the regulation of online activity requires yet more technology, such as blockchain and AI-powered monitoring systems, to track data usage and implement smart contract terms.

As the EU has already discovered with the right to be forgotten, however, governments lack the technological resources needed to enforce these rights. Search engines are required to serve as their own judge and jury in the first instance; Google at last count was handling some 500 such requests a day.

The fundamental challenge we face, here and throughout the modern economy, is not: “what should the rules for Facebook be?” but rather, “how can we innovate new ways to regulate effectively in the global digital age?”

The answer is that we need to find ways to harness the same ingenuity and drive that built Facebook to build the regulatory systems of the digital age. One way to do this is with what I call “super-regulation” which involves developing a market for licensed private regulators that serve two masters: achieving regulatory targets set by governments but also facing the market incentive to compete for business by innovating more cost-effective ways to do that.  

Imagine, for example, if instead of drafting a detailed 261-page law like the EU did, a government instead settled on the principles of data protection, based on core values, such as privacy and user control.

Private entities, for-profit and non-profit, could apply to a government oversight agency for a license to provide data regulatory services to companies like Facebook, showing that their regulatory approach is effective in achieving these legislative principles.

These private regulators might use technology, big-data analysis, and machine learning to do that. They might also figure out how to communicate simple options to people, in the same way that the developers of our smartphones figured that out. They might develop effective schemes to audit and test whether their systems are working — on pain of losing their license to regulate.

There could be many such regulators among which both consumers and Facebook could choose: some could even specialize in offering packages of data management attributes that would appeal to certain demographics – from the people who want to be invisible online, to those who want their every move documented on social media.

The key here is competition: for-profit and non-profit private regulators compete to attract money and brains to the problem of how to regulate complex systems like data creation and processing.

Zuckerberg thinks there’s some kind of “right” regulation possible for the digital world. I believe him; I just don’t think governments alone can invent it. Ideally, some next generation college kid would be staying up late trying to invent it in his or her dorm room.

The challenge we face is not how to get governments to write better laws; it’s how to get them to create the right conditions for the continued innovation necessary for new and effective regulatory systems.

Facebook, Google face first GDPR complaints over “forced consent”

After two years coming down the pipe at tech giants, Europe’s new privacy framework, the General Data Protection Regulation (GDPR), is now being applied — and longtime Facebook privacy critic Max Schrems has wasted no time in filing four complaints relating to (certain) companies’ ‘take it or leave it’ stance when it comes to consent.

The complaints have been filed on behalf of (unnamed) individual users — with one filed against Facebook; one against Facebook-owned Instagram; one against Facebook-owned WhatsApp; and one against Google’s Android.

Schrems argues that the companies are using a strategy of “forced consent” to continue processing the individuals’ personal data — when in fact the law requires that users be given a free choice unless consent is strictly necessary for provision of the service. (And, well, Facebook claims its core product is social networking — rather than farming people’s personal data for ad targeting.)

“It’s simple: Anything strictly necessary for a service does not need consent boxes anymore. For everything else users must have a real choice to say ‘yes’ or ‘no’,” Schrems writes in a statement.

“Facebook has even blocked accounts of users who have not given consent,” he adds. “In the end users only had the choice to delete the account or hit the “agree”-button — that’s not a free choice, it more reminds of a North Korean election process.”

We’ve reached out to all the companies involved for comment and will update this story with any response.

The European privacy campaigner most recently founded a not-for-profit digital rights organization to focus on strategic litigation around the bloc’s updated privacy framework, and the complaints have been filed via this crowdfunded NGO — which is called noyb (aka ‘none of your business’).

As we pointed out in our GDPR explainer, the provision in the regulation allowing for collective enforcement of individuals’ data rights is an important one, with the potential to strengthen the implementation of the law by enabling non-profit organizations such as noyb to file complaints on behalf of individuals — thereby helping to redress the imbalance between corporate giants and consumer rights.

That said, the GDPR’s collective redress provision is a component that Member States can choose to derogate from, which helps explain why the first four complaints have been filed with data protection agencies in Austria, Belgium, France and Hamburg in Germany — regions that also have data protection agencies with a strong record defending privacy rights.

Given that the Facebook companies involved in these complaints have their European headquarters in Ireland it’s likely the Irish data protection agency will get involved too. And it’s fair to say that, within Europe, Ireland does not have a strong reputation for defending data protection rights.

But the GDPR allows for DPAs in different jurisdictions to work together in instances where they have joint concerns and where a service crosses borders — so noyb’s action looks intended to test this element of the new framework too.

Under the penalty structure of GDPR, major violations of the law can attract fines as large as 4% of a company’s global revenue which, in the case of Facebook or Google, implies they could be on the hook for more than a billion euros apiece — if they are deemed to have violated the law, as the complaints argue.
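For a rough sense of scale (taking Facebook’s publicly reported 2017 revenue of around $40.7 billion purely as an illustrative figure): 4% works out to about $1.6 billion, comfortably more than a billion euros after currency conversion.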

That said, given how freshly fixed in place the rules are, some EU regulators may well tread softly on the enforcement front — at least in the first instances, to give companies some benefit of the doubt and/or a chance to make amends to come into compliance if they are deemed to be falling short of the new standards.

However, in instances where companies themselves appear to be attempting to deform the law with a willfully self-serving interpretation of the rules, regulators may feel they need to act swiftly to nip any disingenuousness in the bud.

“We probably will not immediately have billions of penalty payments, but the corporations have intentionally violated the GDPR, so we expect a corresponding penalty under GDPR,” writes Schrems.

Only yesterday, for example, Facebook founder Mark Zuckerberg — speaking in an onstage interview at the VivaTech conference in Paris — claimed his company hasn’t had to make any radical changes to comply with GDPR, and further claimed that a “vast majority” of Facebook users are willingly opting in to targeted advertising via its new consent flow.

“We’ve been rolling out the GDPR flows for a number of weeks now in order to make sure that we were doing this in a good way and that we could take into account everyone’s feedback before the May 25 deadline. And one of the things that I’ve found interesting is that the vast majority of people choose to opt in to make it so that we can use the data from other apps and websites that they’re using to make ads better. Because the reality is if you’re willing to see ads in a service you want them to be relevant and good ads,” said Zuckerberg.

He did not mention that the dominant social network does not offer people a free choice on accepting or declining targeted advertising. The new consent flow Facebook revealed ahead of GDPR only offers the ‘choice’ of quitting Facebook entirely if a person does not want to accept targeted advertising. Which, well, isn’t much of a choice given how powerful the network is. (Additionally, it’s worth pointing out that Facebook continues tracking non-users — so even deleting a Facebook account does not guarantee that Facebook will stop processing your personal data.)

Asked about how Facebook’s business model will be affected by the new rules, Zuckerberg essentially claimed nothing significant will change — “because giving people control of how their data is used has been a core principle of Facebook since the beginning”.

“The GDPR adds some new controls and then there’s some areas that we need to comply with but overall it isn’t such a massive departure from how we’ve approached this in the past,” he claimed. “I mean I don’t want to downplay it — there are strong new rules that we’ve needed to put a bunch of work into making sure that we complied with — but as a whole the philosophy behind this is not completely different from how we’ve approached things.

“In order to be able to give people the tools to connect in all the ways they want and build community, a lot of philosophy that is encoded in a regulation like GDPR is really how we’ve thought about all this stuff for a long time. So I don’t want to understate the areas where there are new rules that we’ve had to go and implement but I also don’t want to make it seem like this is a massive departure in how we’ve thought about this stuff.”

Zuckerberg faced a range of tough questions on these points from the EU parliament earlier this week. But he avoided answering them in any meaningful detail.

So EU regulators are essentially facing a first test of their mettle — i.e. whether they are willing to step up and defend the line of the law against big tech’s attempts to reshape it in their business model’s image.

Privacy laws are nothing new in Europe but robust enforcement of them would certainly be a breath of fresh air. And now at least, thanks to GDPR, there’s a penalties structure in place to provide incentives as well as teeth, and spin up a market around strategic litigation — with Schrems and noyb in the vanguard.

Schrems also makes the point that small startups and local companies are less likely to be able to use the kind of strong-arm ‘take it or leave it’ tactics on users that big tech is able to use to extract consent on account of the reach and power of their platforms — arguing there’s a competition concern that GDPR should also help to redress.

“The fight against forced consent ensures that the corporations cannot force users to consent,” he writes. “This is especially important so that monopolies have no advantage over small businesses.”


Instapaper on pause in Europe to fix GDPR compliance “issue”

Remember Instapaper? The Pinterest-owned, read-it-later bookmarking service is taking a break in Europe — apparently while it works on achieving compliance with the region’s updated privacy framework, GDPR, which will start being applied from tomorrow.

Instapaper’s notification does not say how long the self-imposed outage will last.

The European Union’s General Data Protection Regulation updates the bloc’s privacy framework, most notably by bringing in supersized fines for data violations, which in the most serious cases can scale up to 4% of a company’s global annual turnover.

So it significantly ramps up the risk of, for example, having sloppy security, or consent flows that aren’t clear and specific enough (if indeed consent is the legal basis you’re relying on for processing people’s personal information).

That said, EU regulators are clearly going to tread softly on the enforcement front in the short term. And any major fines are only going to hit the most serious violations and violators — and only down the line when data protection authorities have received complaints and conducted thorough investigations.

So it’s not clear exactly why Instapaper believes it needs to pause its service to European users. It’s also had plenty of time to prepare to be compliant — given the new framework was agreed at the back end of 2015. We’ve reached out to Pinterest with questions and will update this story with any response.

In an exchange on Twitter, Pinterest product engineering manager Brian Donohue — who, prior to acquisition was Instapaper’s CEO — flagged that the product’s privacy policy “hasn’t been changed in several years”. But he declined to specify exactly what it feels its compliance issue is — saying only: “We’re actively working to resolve the issue.”

In a customer support email that we reviewed, the company also told one European user: “We’ve been advised to undergo an assessment of the Instapaper service to determine what, if any, changes may be appropriate but to restrict access to IP addresses in the EU as the best course of action.”

“We’re really sorry for any inconvenience, and we are actively working on bringing the service back online for residents in Europe,” it added.

The product’s privacy policy is one of the clearer T&Cs we’ve seen. It also states that users can already access “all your personally identifiable information that we collect online and maintain”, as well as saying people can “correct factual errors in your personally identifiable information by changing or deleting the erroneous information” — which, assuming those statements are true, looks pretty good for complying with portions of GDPR that are intended to give consumers more control over their personal information.

Instapaper also already lets users delete their accounts. And if they do that it specifies that “all account information and saved page data is deleted from the Instapaper service immediately” (though it also cautions that “deleted data may persist in backups and logs until they are deleted”).

In terms of what Instapaper does with users’ data, its privacy policy claims it does not share the information “with outside parties except to the extent necessary to accomplish Instapaper’s functionality”.

But it’s also not explicitly clear from the policy whether or not it’s passing information to its parent company Pinterest, for example, so perhaps it feels it needs to add more detail there.

Another possibility is that Instapaper is working on compliance with GDPR’s data portability requirement — though the service has offered export options for years. Perhaps it feels these need to be more comprehensive.

As is inevitable ahead of a major regulatory change there’s a good deal of confusion about what exactly must be done to comply with the new rules. And that’s perhaps the best explanation for what’s going on with Instapaper’s pause.

Though, again, there’s plenty of official and detailed guidance from data protection agencies to help.

Unfortunately it’s also true that there’s a lot of unofficial advice of dubious quality from a cottage industry of self-styled ‘GDPR consultants’ that have sprung up with the intention of profiting off the uncertainty. So — as ever — do your due diligence when it comes to the ‘experts’ you choose.

EU parliament pushes for Zuckerberg hearing to be live-streamed

There’s confusion about whether a meeting between Facebook founder Mark Zuckerberg and the European Union’s parliament — which is due to take place next Tuesday — will go ahead as planned or not.

The meeting was confirmed by the EU parliament’s president this week, and is the latest stop on Zuckerberg’s contrition tour, following the Cambridge Analytica data misuse story that blew up into a major public scandal in mid March.

However the discussion with MEPs that Facebook agreed to was due to take place behind closed doors. A private format that’s not only rife with irony but was also unpalatable to a large number of MEPs. It even drew criticism from some in the EU’s unelected executive body, the European Commission, which further angered parliamentarians.

Now, as the FT reports, MEPs appear to have forced the parliament’s president, Antonio Tajani, to agree to livestreaming the event.

Guy Verhofstadt — the leader of the Alliance of Liberals and Democrats group of MEPs, who had said he would boycott the meeting if it took place in private — has also tweeted that a majority of the parliament’s groups have pushed for the event to be streamed online.

And a Green Group MEP, Sven Giegold, who posted an online petition calling for the meeting not to be held in secret — has also tweeted that there is now a majority among the groups wanting to change the format. At the time of writing Giegold’s petition has garnered more than 25,000 signatures.

MEP Claude Moraes, chair of the EU parliament’s Civil Liberties, Justice and Home Affairs (LIBE) committee — and one of the handful of parliamentarians set to question Zuckerberg (assuming the meeting goes ahead as planned) — told TechCrunch this morning that there were efforts afoot among political group leaders to try to open up the format. Though any changes would clearly depend on Facebook agreeing to them.

After speaking to Moraes, we asked Facebook to confirm whether it’s open to Zuckerberg’s meeting being streamed online — say, via a Facebook Live. Seven hours later we’re still waiting for a response, including to a follow-up email asking if it will accept the majority decision among MEPs for the hearing to be livestreamed.

The LIBE committee had been pushing for a fully open hearing with the Facebook founder — a format which would also have meant it being open to members of the public. But that was before a small majority of the parliament’s political groups accepted the Conference of Presidents’ (COP) decision on a closed meeting.

Although now that decision looks to have been rowed back, with a majority of the groups pushing the president to agree to the event being streamed — putting the ball back in Facebook’s court to accept the new format.

Of course democracy can be a messy process at times, something Zuckerberg surely has a pretty sharp appreciation of these days. And if the Facebook founder pulls out of the meeting simply because a majority of MEPs have voted to do the equivalent of ‘Facebook Live’ the hearing, well, it’s hard to see a way for the company to salvage any face at all.

Zuckerberg has agreed to be interviewed on stage at the VivaTech conference in Paris next Thursday, and is scheduled to have lunch with French president Emmanuel Macron the same week. So pivoting to a last minute snub of the EU parliament would be a pretty high stakes game for the company to play. (Though it’s continued to deny a UK parliamentary committee any facetime with Zuckerberg for months now.)

The EU Facebook agenda

The substance of the meeting between Zuckerberg and the EU parliament — should it go ahead — will include discussion about Facebook’s impact on election processes. That was the only substance detail flagged by Tajani in the statement on Wednesday when he confirmed Zuckerberg had accepted the invitation to talk to representatives of the EU’s 500 million citizens.

Moraes says he also intends to ask Zuckerberg wider questions — relating to how its business model impacts people’s privacy. And his hope is this discussion could help unblock negotiations around an update to the EU’s rules around online tracking technologies and the privacy of digital communications.

“One of the key things is that [Zuckerberg] gets a particular flavor of the genuine concern — not just about what Facebook is doing, but potentially other tech companies — on the interference in elections. Because I think that is a genuine, big, sort of tech vs real life and politics concern,” he says, discussing the questions he wants to ask.

“And the fact is he’s not going to go before the House of Commons. He’s not going to go before the Bundestag. And he needs to answer this question about Cambridge Analytica — in a little bit more depth, if possible, than we even saw in Congress. Because he needs to get straight from us the deepest concerns about that.

“And also this issue of processing for algorithmic targeting, and for political manipulation — some in depth questions on this.

“And we need to go more in depth and more carefully about what safeguards there are — and what he’s prepared to do beyond those safeguards.

“We’re aware of how poor US data protection law is. We know that GDPR is coming in but it doesn’t impact on the Facebook business model that much. It does a little bit but not sufficiently — I mean ePrivacy probably far more — so we need to get to a point where we understand what Facebook is willing to change about the way it’s been behaving up til now.

“And we have a real locus there — which is we have more Facebook users, and we have the clout as well because we have potential legislation, and we have regulation beyond that too. So I think for those reasons he needs to answer.”

“The other things that go beyond the obvious Cambridge Analytica questions and the impact on elections, are the consequences of the business model, data-driven advertising, and how that’s going to work, and there we need to go much more in depth,” he continues.

“Facebook on the one hand, it’s complying with GDPR [the EU’s incoming General Data Protection Regulation] which is fine — but we need to think about what the further protections are. So for example, how justified we are with the ePrivacy Regulation, for example, and its elements, and I think that’s quite important.

“I think he needs to talk to us about that. Because that legislation at the moment it’s seen as controversial, it’s blocked at the moment, but clearly would have more relevance to the problems that are currently being created.”

Negotiations between the EU parliament and the European Council to update the ePrivacy Directive — which governs the use of personal telecoms data and also regulates tracking cookies — and replace it with a regulation that harmonizes the rules with the incoming GDPR, expands the remit to include Internet companies, and covers both content and metadata of digital comms, are effectively stalled for now, as EU Member States are still trying to reach agreement. The directive was last updated in 2009.

“When the Cambridge Analytica case happened, I was slightly concerned about people thinking GDPR is the panacea to this — it’s not,” argues Moraes. “It only affects Facebook’s business model a little bit. ePrivacy goes far more in depth — into data-driven advertising, personal comms and privacy.

“That tool was there because people were aware that this kind of thing can happen. But because of that the Privacy directive will be seen as controversial but I think people now need to look at it carefully and say look at the problems created in the Facebook situation — and not just Facebook — and then analyze whether ePrivacy has got merits. I think that’s quite an important discussion to happen.”

While Moraes believes Facebook-Cambridge Analytica could help unblock the log jam around ePrivacy, as the scandal makes some of the risks clear and underlines what’s at stake for politicians and democracies, he concedes there are still challenging barriers to getting the right legislation in place — given the fine-grained layers of complexity involved with imposing checks and balances on what are also poorly understood technologies outside their specific industry niches.

“This Facebook situation has happened when ePrivacy is more or less blocked because its proportionality is an issue. But the essence of it — which is all the problems that happened with the Facebook case, the Cambridge Analytica case, and data-driven advertising business model — that needs checks and balances… So we need to now just review the ePrivacy situation and I think it’s better that everyone opens this discussion up a bit.

“ePrivacy, future legislation on artificial intelligence, all of which is in our committee, it will challenge people because sometimes they just won’t want to look at it. And it speaks to parliamentarians without technical knowledge which is another issue in Western countries… But these are all wider issues about the understanding of these files which are going to come up.  

“This is the discussion we need to have now. We need to get that discussion right. And I think Facebook and other big companies are aware that we are legislating in these areas — and we’re legislating for more than one country and we have the economies of scale — we have the user base, which is bigger than the US… and we have the innovation base, and I think those companies are aware of that.”

Moraes also points out that US lawmakers raised the difference between the EU and US data protection regimes with Zuckerberg last month — arguing there’s a growing awareness that US law in this area “desperately needs to be modernized”.

So he sees an opportunity for EU regulators to press on their counterparts over the pond.

“We have international agreements that just aren’t going to work in the future and they’re the basis of a lot of economic activity, so it is becoming critical… So the Facebook debate should, if it’s pushed in the correct direction, give us a better handle on ePrivacy, on modernizing data protection standards in the US in particular. And modernizing safeguards for consumers,” he argues.

“Our parliaments across Europe are still filled with people who don’t have tech backgrounds and knowledge but we need to ensure that we get out of this mindset and start understanding exactly what the implications here are of these cases and what the opportunities are.”

In the short term, discussions are also continuing for a full meeting between the LIBE committee and Facebook.

Though that’s unlikely to be Zuckerberg himself. Moraes says the committee is “aiming for Sheryl Sandberg”, though he says other names have been suggested. No firm date has been confirmed yet either — he’ll only say he “hopes it will take place as soon as possible”.

Threats are not on the agenda though. Moraes is unimpressed with the strategy the DCMS committee has pursued in trying (and so far failing) to get Zuckerberg to testify in front of the UK parliament, arguing threats of a summons were counterproductive. LIBE is clearly playing a longer game.

“Threatening him with a summons in UK law really was not the best approach. Because it would have been extremely important to have him in London. But I just don’t see why he would do that. And I’m sure there’s an element of him understanding that the European Union and parliament in particular is a better forum,” he suggests.

“We have more Facebook users than the US, we have the regulatory framework that is significant to Facebook — the UK is simply implementing GDPR and following Brexit it will have an adequacy agreement with the EU so I think there’s an understanding in Facebook where the regulation, the legislation and the audience is.”

“I think the quaint ways of the British House of Commons need to be thought through,” he adds. “Because I really don’t think that would have engendered much enthusiasm in [Zuckerberg] to come and really interact with the House of Commons which would have been a very positive thing. Particularly on the specifics of Cambridge Analytica, given that that company is in the UK. So that locus was quite important, but the approach… was not positive at all.”

Facebook faces fresh criticism over ad targeting of sensitive interests

Is Facebook trampling over laws that regulate the processing of sensitive categories of personal data by failing to ask people for their explicit consent before it makes sensitive inferences about their sex life, religion or political beliefs? Or is the company merely treading uncomfortably and unethically close to the line of the law?

An investigation by the Guardian and the Danish Broadcasting Corporation has found that Facebook’s platform allows advertisers to target users based on interests related to political beliefs, sexuality and religion — all categories that are marked out as sensitive information under current European data protection law.

And indeed under the incoming GDPR, which will apply across the bloc from May 25.

The joint investigation found Facebook’s platform had made sensitive inferences about users — allowing advertisers to target people based on inferred interests including communism, social democrats, Hinduism and Christianity. All of which would be classed as sensitive personal data under EU rules.

And while the platform offers some constraints on how advertisers can target people against sensitive interests — not allowing advertisers to exclude users based on a specific sensitive interest, for example (Facebook having previously run into trouble in the US for enabling discrimination via ethnic affinity-based targeting) — such controls are beside the point if you take the view that Facebook is legally required to ask for a user’s explicit consent to processing this kind of sensitive data up front, before making any inferences about a person.

Indeed, it’s very unlikely that any ad platform can put people into buckets with sensitive labels like ‘interested in social democrat issues’ or ‘likes communist pages’ or ‘attends gay events’ without asking them to let it do so first.

And Facebook is not asking first.

Facebook argues otherwise, of course — claiming that the information it gathers about people’s affinities/interests, even when they entail sensitive categories of information such as sexuality and religion, is not personal data.

In a response statement to the media investigation, a Facebook spokesperson told us:

Like other Internet companies, Facebook shows ads based on topics we think people might be interested in, but without using sensitive personal data. This means that someone could have an ad interest listed as ‘Gay Pride’ because they have liked a Pride associated Page or clicked a Pride ad, but it does not reflect any personal characteristics such as gender or sexuality. People are able to manage their Ad Preferences tool, which clearly explains how advertising works on Facebook and provides a way to tell us if you want to see ads based on specific interests or not. When interests are removed, we show people the list of removed interests so that they have a record they can access, but these interests are no longer used for ads. Our advertising complies with relevant EU law and, like other companies, we are preparing for the GDPR to ensure we are compliant when it comes into force.

Expect Facebook’s argument to be tested in the courts — likely in the very near future.

As we’ve said before, the GDPR lawsuits are coming for the company, thanks to beefed up enforcement of EU privacy rules, with the regulation providing for fines as large as 4% of a company’s global turnover.

Facebook is not the only online people profiler, of course, but it’s a prime target for strategic litigation both because of its massive size and reach (and the resulting power over web users flowing from a dominant position in an attention-dominating category) and on account of its nose-thumbing attitude to compliance with EU regulations thus far.

The company has faced a number of challenges and sanctions under existing EU privacy law — though for its operations outside the US it typically refuses to recognize any legal jurisdiction except corporate-friendly Ireland, where its international HQ is based.

And, from what we’ve seen so far, Facebook’s response to GDPR ‘compliance’ is no new leaf. Rather it looks like privacy-hostile business as usual; a continued attempt to leverage its size and power to force a self-serving interpretation of the law — bending rules to fit its existing business processes, rather than reconfiguring those processes to comply with the law.

The GDPR is one of the reasons why Facebook’s ad microtargeting empire is facing greater scrutiny now, with just weeks to go before civil society organizations are able to take advantage of fresh opportunities for strategic litigation allowed by the regulation.

“I’m a big fan of the GDPR. I really believe that it gives us — as the court in Strasbourg would say — effective and practical remedies,” law professor Mireille Hildebrandt tells us. “If we go and do it, of course. So we need a lot of public litigation, a lot of court cases to make the GDPR work but… I think there are more people moving into this.

“The GDPR created a market for these sort of law firms — and I think that’s excellent.”

But it’s not the only reason. Another reason why Facebook’s handling of personal data is attracting attention is the result of tenacious press investigations into how one controversial political consultancy, Cambridge Analytica, was able to gain such freewheeling access to Facebook users’ data — as a result of Facebook’s lax platform policies around data access — for, in that instance, political ad targeting purposes.

All of which eventually blew up into a major global privacy storm, this March, though criticism of Facebook’s privacy-hostile platform policies dates back more than a decade at this stage.

The Cambridge Analytica scandal at least brought Facebook CEO and founder Mark Zuckerberg in front of US lawmakers, facing questions about the extent of the personal information it gathers; what controls it offers users over their data; and how he thinks Internet companies should be regulated, to name a few. (Pro tip for politicians: You don’t need to ask companies how they’d like to be regulated.)

The Facebook founder has also finally agreed to meet EU lawmakers — though UK lawmakers’ calls have been ignored.

Zuckerberg should expect to be questioned very closely in Brussels about how his platform is impacting Europeans’ fundamental rights.

Sensitive personal data needs explicit consent

Facebook infers affinities linked to individual users by collecting and processing interest signals their web activity generates, such as likes on Facebook Pages or what people look at when they’re browsing outside Facebook — off-site intel it gathers via an extensive network of social plug-ins and tracking pixels embedded on third party websites. (According to information released by Facebook to the UK parliament this week, during just one week of April this year its Like button appeared on 8.4M websites; the Share button appeared on 931,000 websites; and its tracking Pixels were running on 2.2M websites.)

But here’s the thing: Both the current and the incoming EU legal frameworks for data protection set the bar for consent to processing so-called special category data equally high — at “explicit” consent.

What that means in practice is Facebook needs to seek and secure separate consents from users (such as via a dedicated pop-up) for collecting and processing this type of sensitive data.
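As a sketch of what ‘separate consents’ implies at the data level (hypothetical structure and names, not anything Facebook actually runs), each sensitive processing purpose needs its own affirmative, revocable record, with no processing as the default:

```python
# Hypothetical sketch of purpose-separated consent records, per GDPR's
# 'explicit consent' standard: one affirmative, revocable record per purpose,
# default deny, never bundled with service provision. Names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    purpose: str                         # e.g. "infer_political_affinity_for_ads"
    granted: bool = False                # default deny: no consent until given
    timestamp: Optional[datetime] = None

@dataclass
class UserConsents:
    records: dict = field(default_factory=dict)

    def grant(self, purpose: str) -> None:
        self.records[purpose] = ConsentRecord(purpose, True, datetime.now(timezone.utc))

    def revoke(self, purpose: str) -> None:
        self.records[purpose] = ConsentRecord(purpose, False, datetime.now(timezone.utc))

    def allowed(self, purpose: str) -> bool:
        record = self.records.get(purpose)
        return bool(record and record.granted)

user = UserConsents()
print(user.allowed("infer_political_affinity_for_ads"))  # False until explicitly granted
user.grant("infer_political_affinity_for_ads")
print(user.allowed("infer_political_affinity_for_ads"))  # True -- and separately revocable
```

The crucial properties are the default-deny and the per-purpose granularity: consenting to one purpose (say, displaying your political views on your profile) says nothing about another (using them for ad targeting).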

The alternative is for it to rely on another special condition for processing this type of sensitive data. However the other conditions are pretty tightly drawn — relating to things like the public interest; or the vital interests of a data subject; or for purposes of “preventive or occupational medicine”.

None of which would appear to apply if, as Facebook is, you’re processing people’s sensitive personal information just to target them with ads.

Ahead of GDPR, Facebook has started asking users who have chosen to display political opinions and/or sexuality information on their profiles to explicitly consent to that data being public.

Though even there its actions are problematic, as it offers users a take-it-or-leave-it style ‘choice’ — saying they either remove the info entirely or leave it and therefore agree that Facebook can use it to target them with ads.

Yet EU law also requires that consent be freely given. It cannot be conditional on the provision of a service.

So Facebook’s bundling of service provisions and consent will also likely face legal challenges, as we’ve written before.

“They’ve tangled the use of their network for socialising with the profiling of users for advertising. Those are separate purposes. You can’t tangle them like they are doing in the GDPR,” says Michael Veale, a technology policy researcher at University College London, emphasizing that GDPR allows for a third option that Facebook isn’t offering users: Allowing them to keep sensitive data on their profile but that data not be used for targeted advertising.

“Facebook, I believe, is quite afraid of this third option,” he continues. “It goes back to the Congressional hearing: Zuckerberg said a lot that you can choose which of your friends every post can be shared with, through a little in-line button. But there’s no option there that says ‘do not share this with Facebook for the purposes of analysis’.”

Returning to how the company synthesizes sensitive personal affinities from Facebook users’ Likes and wider web browsing activity, Veale argues that EU law also does not recognize the kind of distinction Facebook is seeking to draw — i.e. between inferred affinities and personal data — and thus to try to redraw the law in its favor.

“Facebook say that the data is not correct, or self-declared, and therefore these provisions do not apply. Data does not have to be correct or accurate to be personal data under European law, and trigger the protections. Indeed, that’s why there is a ‘right to rectification’ — because incorrect data is not the exception but the norm,” he tells us.

“At the crux of Facebook’s challenge is that they are inferring what is arguably “special category” data (Article 9, GDPR) from non-special category data. In European law, this data includes race, sexuality, data about health, biometric data for the purposes of identification, and political opinions. One of the first things to note is that European law does not govern collection and use as distinct activities: Both are considered processing.

“The pan-European group of data protection regulators have recently confirmed in guidance that when you infer special category data, it is as if you collected it. For this to be lawful, you need a special reason, which for most companies is restricted to separate, explicit consent. This will be often different than the lawful basis for processing the personal data you used for inference, which might well be ‘legitimate interests’, which didn’t require consent. That’s ruled out if you’re processing one of these special categories.”

“The regulators even specifically give Facebook Like inference as an example of inferring special category data, so there is little wiggle room here,” he adds, pointing to an example used by regulators of a study that combined Facebook Like data with “limited survey information” — and from which it was found that researchers could accurately predict a male user’s sexual orientation 88% of the time; a user’s ethnic origin 95% of the time; and whether a user was Christian or Muslim 82% of the time.
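For illustration, here’s a schematic in Python of the kind of pipeline such studies describe: a sparse user-by-Like matrix reduced with SVD, feeding a linear classifier that predicts a sensitive trait. The data below is random noise, so unlike the real study it predicts nothing; the accuracy figures cited above came from real behavioral data:

```python
# Schematic of Like-based trait inference as described in such studies:
# binary user x Like matrix -> SVD dimensionality reduction -> linear model.
# All data here is random, so test accuracy hovers around chance (~0.5).
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 5000
likes = sparse_random(n_users, n_likes, density=0.01, format="csr", random_state=0)
likes.data[:] = 1.0                       # binary: user liked the page or not
trait = rng.integers(0, 2, size=n_users)  # stand-in for a sensitive attribute

# Reduce the sparse Like matrix to dense components, then fit a classifier.
components = TruncatedSVD(n_components=50, random_state=0).fit_transform(likes)
X_tr, X_te, y_tr, y_te = train_test_split(components, trait, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # ~0.5 on noise; far higher on real Like data
```

The mechanics are mundane, which is rather the point: nothing in the pipeline knows or cares that the predicted label is a legally protected category.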

Which underlines why these rules exist — given the clear risk of breaches to human rights if big data platforms can just suck up sensitive personal data automatically, as a background process.

The overarching aim of GDPR is to give consumers greater control over their personal data not just to help people defend their rights but to foster greater trust in online services — and for that trust to be a mechanism for greasing the wheels of digital business. Which is pretty much the opposite approach to sucking up everything in the background and hoping your users don’t realize what you’re doing.

Veale also points out that under current EU law even an opinion on someone is their personal data… (per this Article 29 Working Party guidance):

From the point of view of the nature of the information, the concept of personal data includes any sort of statements about a person. It covers “objective” information, such as the presence of a certain substance in one’s blood. It also includes “subjective” information, opinions or assessments. This latter sort of statements make up a considerable share of personal data processing in sectors such as banking, for the assessment of the reliability of borrowers (“Titius is a reliable borrower”), in insurance (“Titius is not expected to die soon”) or in employment (“Titius is a good worker and merits promotion”).

We put that specific point to Facebook — but at the time of writing we’re still waiting for a response. (Nor would Facebook provide a public response to several other questions we asked around what it’s doing here, preferring to limit its comment to the statement at the top of this post.)

Veale adds that the WP29 guidance has been upheld in recent CJEU cases such as Nowak — which he says emphasized that, for example, annotations on the side of an exam script are personal data.

He’s clear about what Facebook should be doing to comply with the law: “They should be asking for individuals’ explicit, separate consent for them to infer data including race, sexuality, health or political opinions. If people say no, they should be able to continue using Facebook as normal without these inferences being made on the back-end.”

“They need to tell individuals about what they are doing clearly and in plain language,” he adds. “Political opinions are just as protected here, and this is perhaps more interesting than race or sexuality.”

“They certainly should face legal challenges under the GDPR,” agrees Paul Bernal, senior lecturer in law at the University of East Anglia, who is also critical of how Facebook is processing sensitive personal information. “The affinity concept seems to be a pretty transparent attempt to avoid legal challenges, and one that ought to fail. The question is whether the regulators have the guts to make the point: It undermines a quite significant part of Facebook’s approach.”

“I think the reason they’re pushing this is that they think they’ll get away with it, partly because they think they’ve persuaded people that the problem is Cambridge Analytica, as rogues, rather than Facebook, as enablers and supporters. We need to be very clear about this: Cambridge Analytica are the symptom, Facebook is the disease,” he adds.

“I should also say, I think the distinction between ‘targeting’ being OK and ‘excluding’ not being OK is also mostly Facebook playing games, and trying to have their cake and eat it. It just invites gaming of the systems really.”

Facebook claims its core product is social media, rather than data-mining people to run a highly lucrative microtargeted advertising platform.

But if that’s true why then is it tangling its core social functions with its ad-targeting apparatus — and telling people they can’t have a social service unless they agree to interest-based advertising?

It could support a service with other types of advertising, which don’t depend on background surveillance that erodes users’ fundamental rights.  But it’s choosing not to offer that. All you can ‘choose’ is all or nothing. Not much of a choice.

Facebook telling people that if they want to opt out of its ad targeting they must delete their account is neither a route to obtain meaningful (and therefore lawful) consent — nor a very compelling approach to counter criticism that its real business is farming people.

The issues at stake here for Facebook, and for the shadowy background data-mining and brokering of the online ad targeting industry as a whole, are clearly far greater than any one data misuse scandal or any one category of sensitive data. But Facebook’s decision to retain people’s sensitive personal data for ad targeting without asking for consent up-front is a telling sign of something gone very wrong indeed.

If Facebook doesn’t feel confident asking its users whether what it’s doing with their personal data is okay or not, maybe it shouldn’t be doing it in the first place.

At the very least it’s a failure of ethics, even if the final judgement on Facebook’s self-serving interpretation of EU privacy rules will have to wait for the courts to decide.

Unroll.me to close to EU users, saying it can’t comply with GDPR

Put on your best unsurprised face: Unroll.me, a company that has, for years, used the premise of ‘free’ but not very useful ‘email management’ services to gain access to people’s email inboxes in order to data-mine the contents for competitive intelligence — and controversially flog the gleaned commercial insights to the likes of Uber — is to stop serving users in Europe ahead of a new data protection enforcement regime incoming under GDPR, which applies from May 25.

In a section on its website about the regional service shutdown, the company writes that “unfortunately we can no longer support users from the EU as of the 23rd of May”, before asking whether a visitor lives in the EU or not.

Clicking ‘no’ doesn’t seem to do anything, but clicking ‘yes’ brings up another info screen where Unroll.me writes that this is its “last month in the EU” — because it says it will be unable to comply with “all GDPR requirements” (although it does not specify which portions of the regulation it cannot comply with).

Any existing EU user accounts will be deleted by May 24, it adds:

The EU is implementing new data privacy rules, known as General Data Protection Regulation (GDPR). Unfortunately, our service is intended to serve users in the U.S. Because it was not designed to comply with all GDPR requirements, Unroll.Me will not be available to EU residents. This means we may not serve users we believe are residents of the EU, and we must delete any EU user accounts by May 24. We are truly sorry that we are unable to offer our service to you.

While Unroll.me, which is owned by Slice Technologies, also claims on the very same website that its parent company “strips away personal information” (i.e. after it has been passed the personal data attached to commercial and transactional emails found in users’ inboxes) to “build anonymized market research products that analyze and track consumer trends”, it has been criticized for not being transparent about how it parses and sells people’s personal information.

And in fact, if you go to the trouble of reading the small print of Unroll.me’s privacy policy, it says the company can share users’ personal information as it pleases — not just with its parent entity (and direct affiliates) but with any other ‘partners’ it chooses…

We may share personal information we collect with our parent company, other affiliated companies, and trusted business partners. We also will share personal information with service providers that perform services on our behalf. Our non-affiliated business partners and service providers are not authorized by us to use or disclose the information except as necessary to perform services on our behalf or comply with legal requirements.

So it’s not hard to see why Unroll.me has decided it must shut up shop in the EU, given this ‘hand-in-the-cookie-jar’ approach to private data. (In a GDPR FAQ on its site it tries to suggest it needs more time to comply with the enforcement requirements — couching the regulation as “so vast and appropriately comprehensive” that it simply hasn’t had time to get its ducks in order; yet the final text of GDPR was agreed at the end of 2015, and the regulation was proposed three years before that, so companies handling the personal data of EU users have had years to prepare.)

The move also flags up contradictions in Unroll.me’s messaging to its users. For instance, we’ve asked the company why it’s shutting down in the EU if — as it claims on its website — it “respects your privacy”. We’re not holding our breath for a response.

The market exit also looks like a tacit admission that Unroll.me has essentially been ignoring the EU’s existing privacy regime. Because GDPR does not introduce privacy rules to the region. Rather, the regulation updates and builds on a data protection framework that’s more than two decades old at this point — mostly by ramping up enforcement, with penalties for privacy violations that can scale as high as 4% of a company’s global annual turnover.

So suddenly the EU is getting privacy regs with teeth. And just as suddenly Unroll.me is deciding it needs to shut up the local shop… 🤔 (And nor is it the only one…)

It’s true that GDPR does tighten existing consent requirements for processing personal data — but only slightly. Current EU rules already require that consent be freely given, specific and informed. GDPR adds that it must also be a “clear affirmative act” and “unambiguous”, along with requiring that data controllers be able to demonstrate that a service user whose personal data is being processed has given consent for that to happen.

But the core EU requirement of ‘freely given, specific and informed’ consent stands. Which does rather suggest that Unroll.me was already trampling over the privacy rights of EU users — given it’s the threat of big fines that’s the shiny new thing here…

GDPR also takes aim at the practice of burying, in dense and difficult-to-find legalese, the information users need in order to decide whether or not to consent to their personal data being processed.

And the regulation’s requirements on that front are forcing companies to be more up front about what exactly they intend to do with people’s data. (Even if some tech giants are still trying their hand at socially engineering and manipulating ‘consent’.)

“Consent [under GDPR] must also now be separable from other written agreements, and in an intelligible and easily accessible form, using clear and plain language,” data protection expert Jon Baines, an advisor at UK law firm Mishcon de Reya LLP, told us recently. “If these requirements are enforced by data protection supervisory authorities and the courts, then we could well see a significant shift in habits and practices.”
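
On the ‘demonstrate consent’ duty Baines mentions, here is a minimal sketch in Python of what a consent record in that spirit might capture. The field names are our own illustrative assumptions, not anything prescribed by the regulation’s text: each purpose gets its own record, storing when consent was given, what wording the user actually saw and what affirmative act they took.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class ConsentRecord:
        user_id: str
        purpose: str          # one specific purpose per record, never bundled
        wording_shown: str    # the plain-language text the user actually saw
        affirmative_act: str  # e.g. "ticked an unticked box", never a pre-checked default
        given_at: datetime    # when consent was captured, for later demonstration

    def record_consent(user_id: str, purpose: str, wording: str, act: str) -> ConsentRecord:
        # Persisting the record is what lets a controller later *demonstrate* consent.
        return ConsentRecord(user_id, purpose, wording, act, datetime.now(timezone.utc))

The point of storing the wording and the act, rather than a bare boolean, is that a controller challenged later can show exactly what was agreed to, and how.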

As well as signs of shifts in business processes, it looks like GDPR can also take (early) credit for expediting market exits by companies whose business models rely on not being adequately up front with their users.

In the case of Unroll.me, any non-EU users should really be asking themselves if they need this ‘service’ — and/or asking the company lots of questions about what it’s doing with their private information; who it’s selling their information to; and what those third parties are using their data for.

Google accused of using GDPR to impose unfair terms on publishers

A group of European and international publishers have accused Google of using an incoming update to the European Union’s data protection framework to try to push “draconian” new terms on them in exchange for continued access to its ad network — which many publishers rely on to monetize their content online.

Google trailed the terms as incoming in late March, while the new EU regulation — GDPR — is due to apply from May 25.

“[W]e find it especially troubling that you would wait until the last-minute before the GDPR comes into force to announce these terms as publishers have now little time to assess the legality or fairness of your proposal and how best to consider its impact on their own GDPR compliance plans which have been underway for a long time,” they write in a letter to the company dated April 30. “Nor do we believe that this meets the test of creating a fair, transparent and predictable business environment of the kind required by the draft Regulation COM (2018) 238 final published 26 April 2018 [an EU proposal which relates to business users of online intermediation services].”

The GDPR privacy framework both tightens consent requirements for processing the personal data of EU users and beefs up enforcement for data protection violations, with fines able to scale as high as 4% of a company’s global annual turnover — substantially inflating the legal liabilities around the handling of any personal data that falls under its jurisdiction.

And while the law is intended to strengthen EU citizens’ fundamental rights by giving them more control over how their data is used, publishers are accusing Google of attempting to use the incoming framework as an opportunity to enforce an inappropriate “one-size fits all” approach to compliance on its publisher customers and their advertisers.

“Your proposal severely falls short on many levels and seems to lay out a framework more concerned with protecting your existing business model in a manner that would undermine the fundamental purposes of the GDPR and the efforts of publishers to comply with the letter and spirit of the law,” the coalition of publishers write to Google.

One objection they have is that Google is apparently intending to switch its status from that of a data processor of publishers’ data — i.e. the data Google receives from publishers and collects from their sites — to a data controller, a switch they claim will enable it to “make unilateral decisions about how a publisher’s data is used”.

Though for other Google services, such as its web analytics product, the company has faced the opposite accusation: that it claims to be merely a data processor — yet gives itself expansive rights to use the data that’s gathered, rather like a data controller…

The publishers also say Google wants them to obtain valid legal consent from users to the processing of their data on its behalf — yet isn’t providing them with information about its intended uses of people’s data, which they would need to know in order to obtain valid consent under GDPR.

“[Y]ou refuse to provide publishers with any specific information about how you will collect, share and use the data. Placing the full burden of obtaining new consent on the publisher is untenable without providing the publisher with the specific information needed to provide sufficient transparency or to obtain the requisite specific, granular, and informed consent under the GDPR,” they write.

“If publishers agree to obtain consent on your behalf, then you must provide the publisher with detailed information for each use of the personal data for which you want publishers to ask for legally valid consent and model language to obtain consent for your activities.”

Nor do individual publishers necessarily want to have to use consent as the legal basis for processing their users’ personal data (other options are available under the law, though a legal basis is always required) — but they argue that Google’s one-size-fits-all proposal doesn’t allow for alternatives.

“Some publishers may want to rely upon legitimate interest as a legal basis and since the GDPR calls for balancing several factors, it may be appropriate for publishers to process data under this legal basis for some purposes,” they note. “Our members, as providers of the news, have different purposes and interests for participating in the digital advertising ecosystem. Yet, Google’s imposition of an essentially self-prescribed one-size-fits-all approach doesn’t seem to take into account or allow for the different purposes and interests publishers have.”

They are also concerned Google is trying to transfer liability for obtaining consent onto publishers — asserting: “Given that your now-changed terms are incorporated by reference into many contracts under which publishers indemnify Google, these terms could result in publishers indemnifying Google for potentially ruinous fines. We strongly encourage you to revise your proposal to include mutual indemnification provisions and limitations on liability. While the exact allocation of liability should be negotiated by individual publishers, your current proposal represents a ‘take it or leave it’ disproportionate approach.”

They also accuse Google of risking anti-competitive behavior, because the proposed terms state that Google may stop serving ads on publisher sites if it deems a publisher’s consent mechanism to be “insufficient”.

“If Google then dictates how that mechanism would look and prescribes the number of companies a publisher can work with, this would limit the choice of companies that any one publisher can gather consent for, or integrate with, to a very small number defined by Google. This gives rise to grave concerns in terms of anti-competitive behavior as Google is in effect dictating to the market which companies any publisher can do business with,” they argue.

They end the letter, which is addressed to Google’s CEO Sundar Pichai, with a series of questions for the company which they say they need answers to — including how and why Google believes its legal relationship to publishers’ data should be that of a data controller; whether it will seek publisher input ahead of making future changes to its terms for accessing its advertiser services; and how Google’s services could be integrated into an industry-wide consent management platform — should publishers decide to make use of one.

Commenting in a statement, Angela Mills Wade, executive director of the European Publishers Council and one of the signatories to the letter, said: “As usual, Google wants to have its cake and eat it. It wants to be data controller — of data provided by publishers — without any of the legal liability — and with apparently total freedom to do what they like with that data. Publishers have trusted relationships with their readers and advertisers — how can we get consent from them without being in a position to tell them what they are consenting to? And why should we be legally liable for any abuses when we have no control or prior knowledge? By imposing their own standard for regulatory compliance, Google effectively prevents publishers from being able to choose which partners to work with.”

The other publishers signing the letter are Digital Content Next, News Media Alliance and News Media Association.

We put some of their questions to Google — and the company rejected the suggestion that it’s seeking additional rights over publishers’ data, sending us the following statement:

Guidance about the GDPR is that consent is required for personalised advertising. We have always asked publishers to get consent for the use of our ad tech on their sites, and now we’re simply updating that requirement in line with the GDPR. Because we make decisions on data processing to help publishers optimize ad revenue, we will operate as a controller across our publisher products in line with GDPR requirements, but this designation does not give us any additional rights to their data. We’re working closely with our publisher partners and are committed to providing a range of tools to help them gather user consent.

A spokesperson for the company also noted that, under GDPR, controller status merely reflects that an involved entity is more than a data processor for a specific service, also pointing out that Google’s contracts define the limits of what can be done with data in such instances.

The spokesperson further emphasized that Google is not asking publishers to obtain consent from Google’s users, but for their own users on their own sites and for the use of ad tech on those sites — noting this could be one of Google’s ad products or someone else’s.

In terms of timing, the Google rep added that the company would have liked to put the new ad policy out earlier, but said that guidance on consent from the EU’s Article 29 Working Party only came out in draft in December, noting also that this guidance continues to be revised.

Snap to change how Snap Map operates in Europe ahead of GDPR

Snapchat is making changes to the information it collects about under-16s in Europe as it works to comply with an update to the EU’s data protection rules. The changes could see its location-tracking Snap Map feature being disabled for younger teen users in the region.

The messaging app, which is most popular with teens, has faced criticism in Europe for how it processes and exposes the location of children on Snap Map, a feature which launched last summer.

Following its launch some European schools wrote to parents warning them of safeguarding concerns over the feature. Police forces have also raised concerns about Snap Map.

The FT reports that the messaging app will stop gathering the location data of younger European teens. A spokesperson for the company told the newspaper it will generally no longer process any data that might require parental consent, although the company also said Snapchat does not intend to put an outright bar on 13-year-olds signing up to its service.

The latter decision stands in contrast to a move by Facebook-owned WhatsApp, which earlier this week revealed it’s raising its minimum user age to 16 for European users, also as a GDPR compliance step — although WhatsApp did not detail any plans to actively enforce this new limit, i.e. beyond asking users to state they are over 16.

GDPR includes a new provision on children’s personal data, setting an age limit of 16 on kids’ ability to consent to their data being processed, although Member States can choose to derogate from this (and some have) by writing a lower age limit into their laws.

The hard floor is set at 13 years old — making that the de facto minimum age for children to be able to sign up to digital services.
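
In code terms, the age-gating logic this creates for consent-reliant services is straightforward. Here’s a minimal sketch in Python under our own assumptions; the derogation table is purely illustrative, not an authoritative list of national choices:

    DEFAULT_CONSENT_AGE = 16   # GDPR's default age of consent for data processing
    HARD_FLOOR = 13            # the lowest limit a Member State may choose

    # Illustrative entries only, not an authoritative list of national derogations.
    MEMBER_STATE_DEROGATIONS = {"UK": 13, "IE": 16}

    def consent_age(member_state: str) -> int:
        age = MEMBER_STATE_DEROGATIONS.get(member_state, DEFAULT_CONSENT_AGE)
        return max(age, HARD_FLOOR)  # no state can go below 13

    def needs_parental_consent(user_age: int, member_state: str) -> bool:
        # Below the applicable age, consent must come from a parental-responsibility holder.
        return user_age < consent_age(member_state)

Under this scheme a 14-year-old could consent alone in a Member State that derogated to 13, but would need a parental-responsibility holder’s consent in one that kept the default of 16.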

The new privacy framework will apply in just over a month’s time.

In an FAQ on its website related to GDPR compliance and obtaining parental consent for users under the age of 16, Snap writes: “To the extent Snap relies on consent to process personal data of users between 13 and 16, we will make the reasonable efforts required to confirm that consent has been given by someone who holds parental responsibility while respecting the need to minimize further data collection.”

We’ve reached out to Snap to ask whether it will be entirely disabling Snap Map for under-16s in the region. It’s possible the company might try to come up with a compromise that obfuscates under-16s’ location on the map, although any such move would undermine the utility of the feature — and may not entirely assuage privacy concerns related to it either.

The level of detail on Snap Map has been flagged as a major privacy concern, because it can show the precise location of users (the location only updates when the app is open). It can even suggest activities — such as showing that a person is in a car or at an airport. Snapchat users themselves colloquially refer to the feature as a tool to “stalk” their friends.

And while users do need to opt in to share their location, the Snap Map feature was actively pushed out as a new feature notification when it launched — meaning the company actively solicited opt-ins from users.

Once Snap Map has been enabled, there are controls which let users switch on a so-called ‘ghost mode’ — which removes their location-pinpointed avatar from the map. However, some users have reported that subsequent updates to the app can disable this setting — rendering them visible again, and meaning they would need to notice the change and revisit the setting to switch invisibility back on.
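
That failure mode — a privacy choice silently reset by an update — is avoidable at the settings-migration layer. Here’s a minimal sketch in Python, on our own assumptions (the setting names are illustrative, not Snap’s): new defaults are applied first, and an explicit user choice is never overwritten.

    def migrate_settings(old: dict, new_defaults: dict) -> dict:
        # Start from the new version's defaults...
        merged = dict(new_defaults)
        # ...but never let them overwrite a choice the user already made.
        merged.update(old)
        return merged

    # A user who enabled ghost mode before the update stays invisible after it.
    old_settings = {"ghost_mode": True}
    new_defaults = {"ghost_mode": False, "share_with": "friends"}
    assert migrate_settings(old_settings, new_defaults)["ghost_mode"] is True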

Users can also choose to share their location with a certain subset of friends, rather than with all their friends. However, there is also a public version of Snap Map where Stories that users have shared publicly at a particular location can be viewed by anyone using the Internet, even if they’re not themselves a Snapchat user.

The social map feature was inspired by a similar offering from French startup Zenly. Snap later acquired the startup for between $250M and $350M — although Zenly’s own social map was left to run independently.

Zenly’s current privacy policy makes no mention of GDPR — citing only French data protection law at this stage — and it’s not clear whether it will also be amending its data collection practices to comply with the regulation when it comes into force in a month’s time.

We’ve also reached out to the team with questions. The current minimum age for usage of its app is 13.