Facebook quietly relaunches apps for Groups platform after lockdown

Facebook is becoming a marketplace for enterprise apps that help Group admins manage their communities.

To protect itself and its users in the wake of the Cambridge Analytica scandal, Facebook locked down the Groups API that developers use to build apps for Groups. Those apps now have to pass a human-reviewed approval process, and they have lost access to Group member lists as well as the names and profile pics of people who post. Now, approved Groups apps are reemerging on Facebook, accessible to admins through a new in-Facebook Groups apps browser that gives the platform control over discoverability.

Facebook confirmed the new Groups apps browser after our inquiry, telling TechCrunch, “What you’re seeing today is related to changes we announced in April that require developers to go through an updated app review process in order to use the Groups API. As part of this, some developers who have gone through the review process are now able to access the Groups API.”

Facebook wouldn’t comment further, but this Help Center article details how Groups can now add apps. Matt Navarra first spotted the new Groups apps option and tipped us off. Previously, admins would have to find Group management tools outside of Facebook and then use their logged-in Facebook account to give the app permissions to access their Group’s data.

Groups are often a labor of love for admins, but generate tons of engagement for the social network. That’s why the company recently began testing Facebook subscription Groups that allow admins to charge a monthly fee. With the right set of approved partners, the platform offers Group admins some of the capabilities usually reserved for big brands and businesses that pay for enterprise tools to manage their online presences.

Becoming a gateway to enterprise tool sets could make Facebook Groups more engaging, generating more time on site and ad views from users. This also positions Facebook as a natural home for ad campaigns promoting different enterprise tools. And one day, Facebook could potentially try to act more formally as a Groups App Store and try to take a cut of software-as-a-service subscription fees the tool makers charge.

Facebook can’t build every tool that admins might need, so in 2010 it launched the Groups API to enlist some outside help. Moderating comments, gathering analytics and posting pre-composed content were some of the popular capabilities of Facebook Groups apps. But in April, it halted use of the API, announcing that “there is information about people and conversations in groups that we want to make sure is better protected. Going forward, all third-party apps using the Groups API will need approval from Facebook and an admin to ensure they benefit the group.”

Now apps that have received the necessary approval are appearing in this Groups apps browser. It’s available to admins through their Group Settings page. The apps browser lets them pick from a selection of tools like Buffer and Sendible for scheduling posts to their Group, and others for handling commerce messages.
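
Under the hood, the core call a scheduling tool makes is a single Graph API request. Below is a minimal sketch, assuming an app that has passed Facebook's review and holds an admin token carrying the publish_to_groups permission; the group ID and token are placeholders, not real credentials.

```python
import requests

GRAPH_URL = "https://graph.facebook.com/v3.0"  # Graph API version current in mid-2018

def post_to_group(group_id: str, message: str, access_token: str) -> dict:
    """Publish a post to a Group's feed on behalf of an admin.

    Assumes an approved app whose token carries the publish_to_groups
    permission; group_id and access_token here are placeholders.
    """
    resp = requests.post(
        f"{GRAPH_URL}/{group_id}/feed",
        data={"message": message, "access_token": access_token},
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"id": "<group_id>_<post_id>"}

# Hypothetical usage by a scheduling tool:
# post_to_group("123456789", "Weekly thread: share your wins!", "ADMIN_TOKEN")
```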

Facebook is still trying to bar the windows of its platform, ensuring there are no more easy ways to slurp up massive amounts of sensitive user data. Yesterday it shut down more APIs and standalone apps in what appears to be an attempt to streamline the platform so there are fewer points of risk and more staff to concentrate on safeguarding the most popular and powerful parts of its developer offering.

The Cambridge Analytica scandal has subsided to some degree, with Facebook’s share price recovering and user growth holding steady. However, a new report from The Washington Post says the FBI, FTC and SEC are investigating Facebook, Cambridge Analytica and the social network’s executives’ testimony to Congress. Facebook surely wants to get back to concentrating on product, not politics, but it must take things slow and steady. There are too many eyes on it to move fast or break anything.

Zuckerberg avoided tough questions thanks to short EU testimony format

Mark Zuckerberg got to cherry-pick the questions he wanted to answer from EU Parliament after it spent an hour taking turns rattling off queries in bulk before leaving just a half-hour for his batched responses. Zuckerberg immediately trotted out his dorm room story of not expecting Facebook’s current duty to safety and democracy, and repeated his pledge to broaden the company’s responsibility. While he’s vowed to have his team follow up with point-by-point replies, he managed to escape the televised testimony without any newsworthy gaffes.

The public will have to wait for canned, written responses to the toughest questions about why Facebook didn’t disclose the Cambridge Analytica issue immediately, how it uses shadow profiles and what he thinks about Facebook, Instagram and WhatsApp being broken up. If Zuckerberg played it safe during his U.S. congressional testimony by being boring, he dodged scandal here by using the abbreviated format to bend the testimony toward his most defensible positions.

Future testimony by technology industry executives will be much more productive for the public if officials keep questions succinct and ask only the hard ones, give executives ample time to answer them all, and alternate between question and answer. No more of this question-question-question-question-answer-answer-goodbye.


Zuckerberg initially resisted the Brussels meeting with Parliament (technically not a “testimony”). Then it was slated to be private, before public outcry led to the session being livestreamed. While the questions were more pointed than those asked by the U.S. Congress, the overall feel, with Zuckerberg seated next to Parliament members rather than in the hot seat before them, gave the meeting a less consequential tone.

The Facebook CEO used his short answer period to explain that he feels there’s plenty of new competition for Facebook, and that the company actually aids competition by offering tools that enable small businesses to challenge big brands online. He cited that “dozens of percents” of European users have gone through Facebook’s GDPR settings, which it rolled out early in dismissible form ahead of the May 25th deadline because, “The last thing we want is for people to go through the flows quicker than they need to and just hit OK.” That ignores the dark-pattern design built into that GDPR privacy flow, which, while temporarily dismissible, coerces users to consent by visually downplaying the buttons for opting out of giving Facebook data.

Zuckerberg laid out his thoughts about the future of regulation for social networks, noting that “Some sort of regulation is important and inevitable, and the important thing is to get this right.” He said that regulations would need to “allow for innovation, don’t inadvertently prevent new technologies like AI from being able to develop, and of course to make sure that new startups — the next student sitting in a college dorm room like I was — doesn’t have an undue burden in being able to build the next great product.” That’s positive, since blunt regulation could create a moat for Facebook.

But when Zuckerberg concluded his testimony, noting “I want to be sensitive to time because we are 15 minutes over” the scheduled 75-minute session length, several EU officials spoke up, angry that they felt their questions had been ignored. “Will you allow users to escape targeted advertising? I asked you six yes-or-no questions and got not a single answer, and of course, well, you asked for this format for a reason,” stated one member of Parliament. “I’ll make sure we follow up and get you answers to those,” Zuckerberg coldly responded. “We’re going to have someone come to do a full hearing soon to answer more of the technical questions as well.”

The combative atmosphere at the conclusion of the testimony means Facebook could face soured regulators in the future, emboldened by their disappointment in his appearance. Zuckerberg may have kept Europe’s lawmakers from making up their minds against Facebook by dodging the most damning topics, but he certainly didn’t win their hearts.

Facebook faces fresh criticism over ad targeting of sensitive interests

Is Facebook trampling over laws that regulate the processing of sensitive categories of personal data by failing to ask people for their explicit consent before it makes sensitive inferences about their sex life, religion or political beliefs? Or is the company merely treading uncomfortably and unethically close to the line of the law?

An investigation by the Guardian and the Danish Broadcasting Corporation has found that Facebook’s platform allows advertisers to target users based on interests related to political beliefs, sexuality and religion — all categories that are marked out as sensitive information under current European data protection law.

And indeed under the incoming GDPR, which will apply across the bloc from May 25.

The joint investigation found Facebook’s platform had made sensitive inferences about users — allowing advertisers to target people based on inferred interests including communism, social democrats, Hinduism and Christianity. All of which would be classed as sensitive personal data under EU rules.

And while the platform offers some constraints on how advertisers can target people against sensitive interests — it does not allow advertisers to exclude users based on a specific sensitive interest, for example (Facebook has previously run into trouble in the US for enabling discrimination via ethnic affinity-based targeting) — such controls are beside the point if you take the view that Facebook is legally required to ask for a user’s explicit consent to processing this kind of sensitive data up front, before making any inferences about a person.

Indeed, it’s very unlikely that any ad platform can lawfully put people into buckets with sensitive labels like ‘interested in social democrat issues’ or ‘likes communist pages’ or ‘attends gay events’ without asking them to let it do so first.

And Facebook is not asking first.

Facebook argues otherwise, of course — claiming that the information it gathers about people’s affinities/interests, even when they entail sensitive categories of information such as sexuality and religion, is not personal data.

In a response statement to the media investigation, a Facebook spokesperson told us:

Like other Internet companies, Facebook shows ads based on topics we think people might be interested in, but without using sensitive personal data. This means that someone could have an ad interest listed as ‘Gay Pride’ because they have liked a Pride associated Page or clicked a Pride ad, but it does not reflect any personal characteristics such as gender or sexuality. People are able to manage their Ad Preferences tool, which clearly explains how advertising works on Facebook and provides a way to tell us if you want to see ads based on specific interests or not. When interests are removed, we show people the list of removed interests so that they have a record they can access, but these interests are no longer used for ads. Our advertising complies with relevant EU law and, like other companies, we are preparing for the GDPR to ensure we are compliant when it comes into force.

Expect Facebook’s argument to be tested in the courts — likely in the very near future.

As we’ve said before, the GDPR lawsuits are coming for the company, thanks to beefed up enforcement of EU privacy rules, with the regulation providing for fines as large as 4% of a company’s global turnover.

Facebook is not the only online people profiler, of course, but it’s a prime target for strategic litigation both because of its massive size and reach (and the resulting power over web users flowing from a dominant position in an attention-dominating category) and because of its nose-thumbing attitude to compliance with EU regulations thus far.

The company has faced a number of challenges and sanctions under existing EU privacy law — though for its operations outside the US it typically refuses to recognize any legal jurisdiction except corporate-friendly Ireland, where its international HQ is based.

And, from what we’ve seen so far, Facebook’s response to GDPR ‘compliance’ is no new leaf. Rather it looks like privacy-hostile business as usual; a continued attempt to leverage its size and power to force a self-serving interpretation of the law — bending rules to fit its existing business processes, rather than reconfiguring those processes to comply with the law.

The GDPR is one of the reasons why Facebook’s ad microtargeting empire is facing greater scrutiny now, with just weeks to go before civil society organizations are able to take advantage of fresh opportunities for strategic litigation allowed by the regulation.

“I’m a big fan of the GDPR. I really believe that it gives us — as the court in Strasbourg would say — effective and practical remedies,” law professor Mireille Hildebrandt tells us. “If we go and do it, of course. So we need a lot of public litigation, a lot of court cases to make the GDPR work but… I think there are more people moving into this.

“The GDPR created a market for these sort of law firms — and I think that’s excellent.”

But it’s not the only reason. Another reason why Facebook’s handling of personal data is attracting attention is the result of tenacious press investigations into how one controversial political consultancy, Cambridge Analytica, was able to gain such freewheeling access to Facebook users’ data — as a result of Facebook’s lax platform policies around data access — for, in that instance, political ad targeting purposes.

All of which eventually blew up into a major global privacy storm, this March, though criticism of Facebook’s privacy-hostile platform policies dates back more than a decade at this stage.

The Cambridge Analytica scandal at least brought Facebook CEO and founder Mark Zuckerberg in front of US lawmakers, facing questions about the extent of the personal information Facebook gathers, what controls it offers users over their data and how he thinks Internet companies should be regulated, to name a few. (Pro tip for politicians: You don’t need to ask companies how they’d like to be regulated.)

The Facebook founder has also finally agreed to meet EU lawmakers — though UK lawmakers’ calls have been ignored.

Zuckerberg should expect to be questioned very closely in Brussels about how his platform is impacting Europeans’ fundamental rights.

Sensitive personal data needs explicit consent

Facebook infers affinities linked to individual users by collecting and processing interest signals their web activity generates, such as likes on Facebook Pages or what people look at when they’re browsing outside Facebook — off-site intel it gathers via an extensive network of social plug-ins and tracking pixels embedded on third party websites. (According to information released by Facebook to the UK parliament this week, during just one week of April this year its Like button appeared on 8.4M websites; the Share button appeared on 931,000 websites; and its tracking Pixels were running on 2.2M websites.)
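
Mechanically, nothing exotic is required to turn those signals into ad-interest buckets. Here is a deliberately toy sketch of how like and pixel events could be rolled up into the kind of labels advertisers target; the signal-to-interest mapping below is entirely hypothetical, as are all the names in it.

```python
from collections import Counter

# Hypothetical mapping from observed signals (Page likes, pixel hits on
# third-party sites) to ad-interest buckets; the real taxonomy is
# Facebook's own and is vastly larger.
SIGNAL_TO_INTEREST = {
    "liked:PridePage": "Gay Pride",
    "pixel:yoga-retreats.example.com": "Wellness",
    "liked:SocialDemocratsPage": "Social democracy",
}

def infer_interests(events: list[str], min_hits: int = 1) -> set[str]:
    """Roll raw like/browsing events up into targetable interest labels."""
    hits = Counter(
        SIGNAL_TO_INTEREST[e] for e in events if e in SIGNAL_TO_INTEREST
    )
    return {interest for interest, n in hits.items() if n >= min_hits}

print(infer_interests(["liked:PridePage", "pixel:yoga-retreats.example.com"]))
# e.g. {'Gay Pride', 'Wellness'}
```

The point of the toy is that a label like ‘Gay Pride’ drops out of ordinary clickstream bookkeeping, with no declaration from the user at all, which is exactly why the legal status of such inferences matters.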

But here’s the thing: Both the current and the incoming EU legal frameworks for data protection set the bar for consent to processing so-called special category data equally high — at “explicit” consent.

What that means in practice is Facebook needs to seek and secure separate consents from users (such as via a dedicated pop-up) for collecting and processing this type of sensitive data.
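
To make “separate consents” concrete, here is a minimal sketch of what purpose-separated consent records could look like; the field names are illustrative, not Facebook’s, and the key property is that the absence of an explicit grant means no processing for that purpose.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One explicit, purpose-specific consent, stored separately per purpose."""
    user_id: str
    purpose: str            # e.g. "infer_political_interests_for_ads" (illustrative)
    special_category: bool  # Article 9 data demands explicit consent
    granted: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def may_process(consents: list[ConsentRecord], purpose: str) -> bool:
    # No record for the purpose, or a refusal, means no processing.
    return any(c.purpose == purpose and c.granted for c in consents)
```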

The alternative is for it to rely on another special condition for processing this type of sensitive data. However the other conditions are pretty tightly drawn — relating to things like the public interest; or the vital interests of a data subject; or for purposes of “preventive or occupational medicine”.

None of which would appear to apply if, as Facebook is doing, you’re processing people’s sensitive personal information just to target them with ads.

Ahead of GDPR, Facebook has started asking users who have chosen to display political opinions and/or sexuality information on their profiles to explicitly consent to that data being public.

Though even there its actions are problematic, as it offers users a take-it-or-leave-it style ‘choice’ — saying they must either remove the info entirely, or leave it and thereby agree that Facebook can use it to target them with ads.

Yet EU law also requires that consent be freely given. It cannot be conditional on the provision of a service.

So Facebook’s bundling of service provisions and consent will also likely face legal challenges, as we’ve written before.

“They’ve tangled the use of their network for socialising with the profiling of users for advertising. Those are separate purposes. You can’t tangle them like they are doing in the GDPR,” says Michael Veale, a technology policy researcher at University College London, emphasizing that GDPR allows for a third option that Facebook isn’t offering users: Allowing them to keep sensitive data on their profile but that data not be used for targeted advertising.

“Facebook, I believe, is quite afraid of this third option,” he continues. “It goes back to the Congressional hearing: Zuckerberg said a lot that you can choose which of your friends every post can be shared with, through a little in-line button. But there’s no option there that says ‘do not share this with Facebook for the purposes of analysis’.”

Returning to how the company synthesizes sensitive personal affinities from Facebook users’ Likes and wider web browsing activity, Veale argues that EU law also does not recognize the kind of distinction Facebook is seeking to draw — i.e. between inferred affinities and personal data — and thus to try to redraw the law in its favor.

“Facebook say that the data is not correct, or self-declared, and therefore these provisions do not apply. Data does not have to be correct or accurate to be personal data under European law, and trigger the protections. Indeed, that’s why there is a ‘right to rectification’ — because incorrect data is not the exception but the norm,” he tells us.

“At the crux of Facebook’s challenge is that they are inferring what is arguably “special category” data (Article 9, GDPR) from non-special category data. In European law, this data includes race, sexuality, data about health, biometric data for the purposes of identification, and political opinions. One of the first things to note is that European law does not govern collection and use as distinct activities: Both are considered processing.

“The pan-European group of data protection regulators have recently confirmed in guidance that when you infer special category data, it is as if you collected it. For this to be lawful, you need a special reason, which for most companies is restricted to separate, explicit consent. This will be often different than the lawful basis for processing the personal data you used for inference, which might well be ‘legitimate interests’, which didn’t require consent. That’s ruled out if you’re processing one of these special categories.”

“The regulators even specifically give Facebook like inference as an example of inferring special category data, so there is little wiggle room here,” he adds, pointing to an example used by regulators of a study that combined Facebook Like data with “limited survey information” — and from which it was found that researchers could accurately predict a male user’s sexual orientation 88% of the time; a user’s ethnic origin 95% of the time; and whether a user was Christian or Muslim 82% of the time.
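
The basic recipe behind numbers like those is easy to reproduce in miniature. The sketch below uses purely synthetic data and a plain logistic regression (not the study’s actual data or model) to show how a planted sensitive attribute can be recovered from nothing but a matrix of page Likes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a user x page-Like matrix (1 = user liked the page).
n_users, n_pages = 5_000, 200
likes = rng.binomial(1, 0.05, size=(n_users, n_pages))

# Plant a signal: a handful of pages whose audiences skew toward the
# (synthetic) sensitive attribute we will try to recover.
signal_pages = [3, 17, 42]
logits = likes[:, signal_pages].sum(axis=1) * 2.0 - 1.0
attribute = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_train, X_test, y_train, y_test = train_test_split(likes, attribute, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.0%}")
```

On real data the signal comes from correlations nobody chose to disclose, which is the regulators’ point: the inference itself is the collection of special category data.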

Which underlines why these rules exist — given the clear risk of breaches to human rights if big data platforms can just suck up sensitive personal data automatically, as a background process.

The overarching aim of GDPR is to give consumers greater control over their personal data not just to help people defend their rights but to foster greater trust in online services — and for that trust to be a mechanism for greasing the wheels of digital business. Which is pretty much the opposite approach to sucking up everything in the background and hoping your users don’t realize what you’re doing.

Veale also points out that under current EU law even an opinion on someone is their personal data… (per this Article 29 Working Party guidance, emphasis ours):

From the point of view of the nature of the information, the concept of personal data includes any sort of statements about a person. It covers “objective” information, such as the presence of a certain substance in one’s blood. It also includes “subjective” information, opinions or assessments. This latter sort of statements make up a considerable share of personal data processing in sectors such as banking, for the assessment of the reliability of borrowers (“Titius is a reliable borrower”), in insurance (“Titius is not expected to die soon”) or in employment (“Titius is a good worker and merits promotion”).

We put that specific point to Facebook — but at the time of writing we’re still waiting for a response. (Nor would Facebook provide a public response to several other questions we asked around what it’s doing here, preferring to limit its comment to the statement at the top of this post.)

Veale adds that the WP29 guidance has been upheld in recent CJEU cases such as Nowak — which he says emphasized that, for example, annotations on the side of an exam script are personal data.

He’s clear about what Facebook should be doing to comply with the law: “They should be asking for individuals’ explicit, separate consent for them to infer data including race, sexuality, health or political opinions. If people say no, they should be able to continue using Facebook as normal without these inferences being made on the back-end.”

“They need to tell individuals about what they are doing clearly and in plain language,” he adds. “Political opinions are just as protected here, and this is perhaps more interesting than race or sexuality.”

“They certainly should face legal challenges under the GDPR,” agrees Paul Bernal, senior lecturer in law at the University of East Anglia, who is also critical of how Facebook is processing sensitive personal information. “The affinity concept seems to be a pretty transparent attempt to avoid legal challenges, and one that ought to fail. The question is whether the regulators have the guts to make the point: It undermines a quite significant part of Facebook’s approach.”

“I think the reason they’re pushing this is that they think they’ll get away with it, partly because they think they’ve persuaded people that the problem is Cambridge Analytica, as rogues, rather than Facebook, as enablers and supporters. We need to be very clear about this: Cambridge Analytica are the symptom, Facebook is the disease,” he adds.

“I should also say, I think the distinction between ‘targeting’ being OK and ‘excluding’ not being OK is also mostly Facebook playing games, and trying to have their cake and eat it. It just invites gaming of the systems really.”

Facebook claims its core product is social media, rather than data-mining people to run a highly lucrative microtargeted advertising platform.

But if that’s true why then is it tangling its core social functions with its ad-targeting apparatus — and telling people they can’t have a social service unless they agree to interest-based advertising?

It could support a service with other types of advertising, which don’t depend on background surveillance that erodes users’ fundamental rights.  But it’s choosing not to offer that. All you can ‘choose’ is all or nothing. Not much of a choice.

Facebook telling people that if they want to opt out of its ad targeting they must delete their account is neither a route to obtain meaningful (and therefore lawful) consent — nor a very compelling approach to counter criticism that its real business is farming people.

The issues at stake here for Facebook, and for the shadowy background data-mining and brokering of the online ad targeting industry as a whole, are clearly far greater than any one data misuse scandal or any one category of sensitive data. But Facebook’s decision to retain people’s sensitive personal data for ad targeting without asking for consent up-front is a telling sign of something gone very wrong indeed.

If Facebook doesn’t feel confident asking its users whether what it’s doing with their personal data is okay or not, maybe it shouldn’t be doing it in the first place.

At the very least it’s a failure of ethics. Even if the final judgement on Facebook’s self-serving interpretation of EU privacy rules will have to wait for the courts to decide.

Toward transitive data privacy and securing the data you don’t share

We are spending a lot of time discussing what happens to data when you explicitly or implicitly share it. But what about data that you have never ever shared?

Your cousin’s DNA

We all share DNA — after all, we are all descended from a small number of ancestral tribes. The more closely related you are, the closer the DNA match. Siblings share about 50 percent of their DNA, but first cousins share only about 12.5 percent, and there is still some meaningful match even between distant relatives, shrinking with each step of family-tree distance.
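
The falloff follows the standard coefficient-of-relationship arithmetic: each common ancestor contributes one half raised to the number of generational steps linking the two relatives through that ancestor. A quick sketch:

```python
# Expected autosomal DNA shared, via the coefficient of relationship:
# each common ancestor contributes (1/2) ** (steps_a + steps_b).
def expected_share(steps_a: int, steps_b: int, common_ancestors: int = 2) -> float:
    """Average fraction of DNA two relatives share.

    steps_a / steps_b: generations from each person up to the common
    ancestor(s); common_ancestors is usually 2 (an ancestral couple).
    """
    return common_ancestors * 0.5 ** (steps_a + steps_b)

print(f"siblings:       {expected_share(1, 1):.1%}")  # 50.0%
print(f"first cousins:  {expected_share(2, 2):.1%}")  # 12.5%
print(f"second cousins: {expected_share(3, 3):.1%}")  # ~3.1%
```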

In short, if you have never taken a DNA test but one or more of your blood relatives has, and shared that data  —  some of your DNA is effectively now available for a match.

While this may have sounded theoretical a few weeks ago, police caught the Golden State Killer suspect using exactly this method: matching crime-scene DNA against relatives’ profiles in a public genealogy database.

Cambridge Analytica

A similar thing happened when data was misused by Cambridge Analytica. Even if you never used the quiz app on the Facebook platform but your friends did, they effectively revealed private information about you without your consent or knowledge.

The number of users who took the quiz was shockingly small — only 300,000 people participated. And yet upwards of 50 million (as many as 87 million) people eventually had their data collected by Cambridge Analytica. The arithmetic is stark: 87 million divided by 300,000 works out to roughly 290 people exposed per participant, about the size of a typical friend list.

And all of this was done legally and while complying with the platform requirements at that time.

Transitive data privacy

The word transitive simply means that if A is related to B in a certain way, and B to C, then A is related to C. Cousinhood within a single family line is a transitive property: if Alice and Bob are cousins through their shared grandparents, and Bob and Chamath are cousins through those same grandparents, then Alice and Chamath are cousins too.

As private citizens, and corporations, we now have to think about transitive data privacy loss.

The simplest version of this is if your boyfriend or girlfriend forwards your private photo or conversation screenshot to someone else.
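
In graph terms this is simple reachability: one person’s decision to share exposes everyone within some number of hops, which is exactly how roughly 300,000 quiz takers exposed tens of millions of friends. A minimal sketch, using breadth-first search over a toy friend graph:

```python
from collections import deque

def exposed_users(graph: dict[str, set[str]], sharer: str, hops: int = 1) -> set[str]:
    """Everyone whose data leaks when `sharer` grants an app access that
    also reaches connections up to `hops` steps away (friends, friends
    of friends, ...). Plain breadth-first search over the social graph.
    """
    seen, frontier = {sharer}, deque([(sharer, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for friend in graph.get(node, set()):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, depth + 1))
    return seen

# Toy graph: only Alice installs the quiz app, yet Bob and Chamath leak too.
graph = {"alice": {"bob"}, "bob": {"alice", "chamath"}, "chamath": {"bob"}}
print(exposed_users(graph, "alice", hops=2))  # {'alice', 'bob', 'chamath'}
```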

Transitive sharing upside

While we have discussed a couple of clear negative examples, there are many ways transitive data relationships help us.

Every time you ask a friend to connect you to someone on LinkedIn for a job or fundraise, you are leveraging the transitive relationship graph.

The DNA databases being created are primarily for social good  —  to help us connect with our roots and family, detect disease early and help medical research.

In fact, you could argue that a lot of challenges we face today require more data sharing, not less. If your hospital cannot share data with your primary care doctor at the right time, or your clinical trial data cannot be accessed to monitor downstream effects, we cannot take care of our citizens’ health as we should. Organizations like the NIH, the VA and CMS (Medicare) are working hard to make appropriate sharing easier for healthcare providers.

Further, the good news is that there have been significant advances in encryption, hashing and related security techniques that let companies protect against these unintended side effects. More research is definitely called for. We can anonymize data, we can perturb it, and we can apply these techniques for protection while still being able to derive value and help customers.
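
For illustration, here is a minimal sketch of two such techniques: a differentially private count, which perturbs an aggregate with Laplace noise so no single individual’s contribution is exposed, and a salted hash that lets datasets be joined on a pseudonym rather than a raw identifier. (Parameter choices here are illustrative, not a production design.)

```python
import hashlib
import secrets

import numpy as np

def laplace_count(true_count: int, epsilon: float = 1.0) -> float:
    """Differentially private count: Laplace noise with scale 1/epsilon
    masks any single individual's contribution to the aggregate."""
    return true_count + np.random.default_rng().laplace(scale=1.0 / epsilon)

def keyed_pseudonym(identifier: str, salt: bytes) -> str:
    """Salted hash so datasets can be joined on a pseudonym without
    exposing the raw identifier (the salt must stay secret)."""
    return hashlib.sha256(salt + identifier.encode()).hexdigest()

salt = secrets.token_bytes(16)
print(laplace_count(1_204, epsilon=0.5))           # e.g. 1206.3
print(keyed_pseudonym("alice@example.com", salt))  # 64-hex-char token
```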

UK watchdog orders Cambridge Analytica to give up data in US voter test case

Another big development in the personal data misuse saga attached to the controversial Trump campaign-linked UK-based political consultancy, Cambridge Analytica — which could lead to fresh light being shed on how the company and its multiple affiliates acquired and processed US citizens’ personal data to build profiles on millions of voters for political targeting purposes.

The UK’s data watchdog, the ICO, has today announced that it’s served an enforcement notice on Cambridge Analytica affiliate SCL Elections, under the UK’s 1998 Data Protection Act.

The company has been ordered to give up all the data it holds on one US academic within 30 days — with the ICO warning that: “Failure to do so is a criminal offence, punishable in the courts by an unlimited fine.”

The notice follows a subject access request (SAR) filed in January last year by the US-based academic David Carroll, after he became suspicious about how the company was able to build psychographic profiles of US voters. And while Carroll is not a UK citizen, he discovered his personal data had been processed in the UK — so he decided to bring a test case by requesting his personal data under UK law.

Carroll’s complaint, and the ICO’s decision to issue an enforcement notice in support of it, looks to have paved the way for millions of US voters to also ask Cambridge Analytica for their data (the company claimed to have up to 7,000 data points on the entire US electorate, circa 240M people — so just imagine the class action that could be filed here…).

The Guardian reports that Cambridge Analytica had tried to dismiss Carroll’s argument by claiming he had no more rights “than a member of the Taliban sitting in a cave in the remotest corner of Afghanistan”. The ICO clearly disagrees.

Cambridge Analytica/SCL Group responded to Carroll’s original SAR in March 2017 but he was unimpressed by the partial data they sent him — which ranked his interests on a selection of topics (including gun rights, immigration, healthcare, education and the environment) yet did not explain how the scores had been calculated.

It also listed his likely partisanship and propensity to vote in the 2016 US election — again without explaining how those predictions had been generated.

So Carroll complained to the UK’s data watchdog in September 2017 — which began sending its own letters to CA/SCL, leading to further unsatisfactory responses.

“The company’s reply refused to address the ICO’s questions and incorrectly stated Prof Carroll had no legal entitlement to it because he wasn’t a UK citizen or based in this country. The ICO reiterated this was not legally correct in a letter to SCL the following month,” the ICO writes today. “In November 2017, the company replied, denying that the ICO had any jurisdiction or that Prof Carroll was legally entitled to his data, adding that SCL did ‘not expect to be further harassed with this sort of correspondence’.”

In a strongly worded statement, information commissioner Elizabeth Denham further adds:

The company has consistently refused to co-operate with our investigation into this case and has refused to answer our specific enquiries in relation to the complainant’s personal data — what they had, where they got it from and on what legal basis they held it.

The right to request personal data that an organisation holds about you is a cornerstone right in data protection law and it is important that Professor Carroll, and other members of the public, understand what personal data Cambridge Analytica held and how they analysed it.

We are aware of recent media reports concerning Cambridge Analytica’s future but whether or not the people behind the company decide to fold their operation, a continued refusal to engage with the ICO will potentially breach an Enforcement Notice and that then becomes a criminal matter.

Since mid-March this year, Cambridge Analytica’s name (along with the names of various affiliates) has been all over headlines relating to a major Facebook data misuse scandal, after press reports revealed in granular detail how an app developer had used the social media platform’s 2014 API structure to extract and process large amounts of users’ personal data, passing psychometrically modeled scores on US voters to Cambridge Analytica for political targeting.

But Carroll’s curiosity about what data Cambridge Analytica might hold about him predates the scandal blowing up last month. Although journalists had actually raised questions about the company as far back as December 2015 — when the Guardian reported that the company was working for the Ted Cruz campaign, using detailed psychological profiles of voters derived from tens of millions of Facebook users’ data.

Though it was not until last month that Facebook confirmed as many as 87 million users could have had personal data misappropriated.

Carroll, who has studied the Internet ad tech industry as part of his academic work, reckons Facebook is not the sole source of the data in this case, telling the Guardian he expects to find a whole host of other companies are also implicated in this murky data economy where people’s personal information is quietly traded and passed around for highly charged political purposes — bankrolled by billionaires.

“I think we’re going to find that this goes way beyond Facebook and that all sorts of things are being inferred about us and then used for political purposes,” he told the newspaper.

Under mounting political, legal and public pressure, Cambridge Analytica claimed to be shutting down this week — but the move appears more like a rebranding exercise, as parent entity, SCL Group, maintains a sprawling network of companies and linked entities. (Such as one called Emerdata, which was founded in mid-2017 and is listed at the same address as SCL Elections, and has many of the same investors and management as Cambridge Analytica… But presumably hasn’t yet been barred from social media giants’ ad platforms, as its predecessor has.)

Closing one of the entities embroiled in the scandal could also be a tactic to impede ongoing investigations, such as the one by the ICO — as Denham’s statement alludes, by warning that any breach of the enforcement notice could lead to criminal proceedings being brought against the owners and operators of Cambridge Analytica’s parent entity.

In March ICO officials obtained a warrant to enter and search Cambridge Analytica’s London offices, removing documents and computers for examination as part of a wider, year-long investigation into the use of personal data and analytics by political campaigns, parties, social media companies and other commercial actors. And last month the watchdog said 30 organizations — including Facebook — were now part of that investigation.

The Guardian also reports that the ICO has suggested to Cambridge Analytica that if it has difficulties complying with the enforcement notice it should hand over passwords for the servers seized during the March raid on its London office – raising questions about how much data the watchdog has been able to retrieve from the seized servers.

SCL Group’s website contains no obvious contact details beyond a company LinkedIn profile — a link which appears to be defunct. But we reached out to SCL Group’s CEO Nigel Oakes, who has maintained a public LinkedIn presence, to ask if he has any response to the ICO enforcement notice.

Meanwhile Cambridge Analytica continues to use its public Twitter account to distribute a stream of rebuttals and alternative ‘facts’.

What we learned from Facebook’s latest data misuse grilling

Facebook’s CTO Mike Schroepfer has just undergone almost five hours of often forensic and frequently awkward questions from members of a UK parliament committee that’s investigating online disinformation, and whose members have been further fired up by misinformation they claim Facebook gave the committee.

The veteran senior exec, who’s clocked up a decade at the company, also as its VP of engineering, is the latest stand-in for CEO Mark Zuckerberg who keeps eschewing repeat requests to appear.

The DCMS committee’s enquiry began last year as a probe into ‘fake news’ but has snowballed in scope as the scale of concern around political disinformation has also mounted — including, most recently, fresh information being exposed by journalists about the scale of the misuse of Facebook data for political targeting purposes.

During today’s session committee chair Damian Collins again made a direct appeal for Zuckerberg to testify, pausing the flow of questions momentarily to cite news reports suggesting the Facebook founder has agreed to fly to Brussels to testify before European Union lawmakers in relation to the Cambridge Analytica Facebook data misuse scandal.

“We’ll certainly be renewing our request for him to give evidence,” said Collins. “We still do need the opportunity to put some of these questions to him.”

Committee members displayed visible outrage during the session, accusing Facebook of concealing the truth, or at the very least concealing evidence, from it at a prior hearing that took place in Washington in February — when the company sent its UK head of policy, Simon Milner, and its head of global policy management, Monika Bickert, to field questions.

During questioning Milner and Bickert failed to inform the committee about a legal agreement Facebook had made with Cambridge Analytica in December 2015 — after the company had learned (via an earlier Guardian article) that Facebook user data had been passed to the company by the developer of an app running on its platform.

Milner also told the committee that Cambridge Analytica could not have any Facebook data — yet last month the company admitted data on up to 87 million of its users had indeed been passed to the firm.

Schroepfer said he wasn’t sure whether Milner had been “specifically informed” about the agreement Facebook already had with Cambridge Analytica — adding: “I’m guessing he didn’t know”. He also claimed he had only himself become aware of it “within the last month”.

“Who knows? Who knows about what the position was with Cambridge Analytica in February of this year? Who was in charge of this?” pressed one committee member.

“I don’t know all of the names of the people who knew that specific information at the time,” responded Schroepfer.

“We are a parliamentary committee. We went to Washington for evidence and we raised the issue of Cambridge Analytica. And Facebook concealed evidence to us as an organization on that day. Isn’t that the truth?” rejoined the committee member, pushing past Schroepfer’s claim to be “doing my best” to provide it with information.

A pattern of evasive behavior

“You are doing your best but the buck doesn’t stop with you does it? Where does the buck stop?”

“It stops with Mark,” replied Schroepfer — leading to a quick fire exchange where he was pressed about (and avoided answering) what Zuckerberg knew and why the Facebook founder wouldn’t come and answer the committee’s questions himself.

“What we want is the truth. We didn’t get the truth in February… Mr Schroepfer I remain to be convinced that your company has integrity,” was the pointed conclusion after a lengthy exchange on this.

“What’s been frustrating for us in this enquiry is a pattern of behavior from the company — an unwillingness to engage, and a desire to hold onto information and not disclose it,” said Collins, returning to the theme at another stage of the hearing — and also accusing Facebook of not providing it with “straight answers” in Washington.

“We wouldn’t be having this discussion now if this information hadn’t been brought into the light by investigative journalists,” he continued. “And Facebook even tried to stop that happening as well [referring to a threat by the company to sue the Guardian ahead of publication of its Cambridge Analytica exposé]… It’s a pattern of behavior, of seeking to pretend this isn’t happening.”

The committee expressed further dissatisfaction with Facebook immediately following the session, emphasizing that Schroepfer had “failed to answer fully on nearly 40 separate points”.

“Mr Schroepfer, Mark Zuckerberg’s right hand man whom we were assured could represent his views, today failed to answer many specific and detailed questions about Facebook’s business practices,” said Collins in a statement after the hearing.

“We will be asking him to respond in writing to the committee on these points; however, we are mindful that it took a global reputational crisis and three months for the company to follow up on questions we put to them in Washington D.C. on February 8.

“We believe that, given the large number of outstanding questions for Facebook to answer, Mark Zuckerberg should still appear in front of the Committee… and will request that he appears in front of the DCMS Committee before May 24.”

We reached out to Facebook for comment — but at the time of writing the company had not responded.

Palantir’s data use under review

Schroepfer was questioned on a wide range of topics during today’s session. And while he was fuzzy on many details, giving lots of partial answers and promises to “follow up”, one thing he did confirm was that Facebook board member Peter Thiel’s secretive big data analytics firm, Palantir, is one of the companies Facebook is investigating as part of a historical audit of app developers’ use of its platform.

Have there ever been concerns raised about Palantir’s activity, and about whether it has gained improper access to Facebook user data, asked Collins.

“I think we are looking at lots of different things now. Many people have raised that concern — and since it’s in the public discourse it’s obviously something we’re looking into,” said Schroepfer.

“But it’s part of the review work that Facebook’s doing?” pressed Collins.

“Correct,” he responded.

The historical app audit was announced in the wake of last month’s revelations about how much Facebook data Cambridge Analytica was given by app developer (and Cambridge University academic), Dr Aleksandr Kogan — in what the company couched as a “breach of trust”.

However Kogan, who testified to the committee earlier this week, argues he was just using Facebook’s platform as it was architected and intended to be used — going so far as to claim its developer terms are “not legally valid”. (“For you to break a policy it has to exist. And really be their policy. The reality is Facebook’s policy is unlikely to be their policy,” was Kogan’s construction, earning him a quip from a committee member that he “should be a professor of semantics”.)

Schroepfer said he disagreed with Kogan’s assessment that Facebook didn’t have a policy, saying the goal of the platform has been to foster social experiences — and that “those same tools, because they’re easy and great for the consumer, can go wrong”. So he did at least indirectly confirm Kogan’s general point that Facebook’s developer and user terms are at loggerheads.

“This is why we have gone through several iterations of the platform — where we have effectively locked down parts of the platform,” continued Schroepfer. “Which increases friction and makes it less easy for the consumer to use these things but does safeguard that data more. And been a lot more proactive in the review and enforcement of these things. So this wasn’t a lack of care… but I’ll tell you that our primary product is designed to help people share safely with a limited audience.

“If you want to say it to the world you can publish it on a blog or on Twitter. If you want to share it with your friends only, that’s the primary thing Facebook does. We violate that trust — and that data goes somewhere else — we’re sort of violating the core principles of our product. And that’s a big problem. And this is why I wanted to come to you personally today to talk about this because this is a serious issue.”

“You’re not just a neutral platform — you are players”

The same committee member, Paul Farrelly, had earlier pressed Kogan about why he hadn’t bothered to find out which political candidates stood to benefit from his data harvesting and processing for Cambridge Analytica. He put it to Schroepfer that Facebook’s own conduct, specifically the way it embeds its own staff with political campaigns to help them use its tools, amounts to the company being “Dr Kogan writ large”.

“You’re not just a neutral platform — you are players,” said Farrelly.

“The clear thing is we don’t have an opinion on the outcome of these elections. That is not what we are trying to do. We are trying to offer services to any customer of ours who would like to know how to use our products better,” Schroepfer responded. “We have never turned away a political party because we didn’t want to help them win an election.

“We believe in strong open political discourse and what we’re trying to do is make sure that people can get their messages across.”

However in another exchange the Facebook exec appeared not to be aware of a basic tenet of UK election law — which prohibits campaign spending by foreign entities.

“How many UK Facebook users and Instagram users were contacted in the UK referendum by foreign, non-UK entities?” asked committee member Julie Elliott.

“We would have to understand and do the analysis of who — of all the ads run in that campaign — where is the location, the source of all of the different advertisers,” said Schroepfer, tailing off with a “so…” and without providing a figure. 

“But do you have that information?” pressed Elliott.

“I don’t have it on the top of my head. I can see if we can get you some more of it,” he responded.

“Our elections are very heavily regulated, and income or monies from other countries can’t be spent in our elections in any way shape or form,” she continued. “So I would have thought that you would have that information. Because your company will be aware of what our electoral law is.”

“Again I don’t have that information on me,” Schroepfer said — repeating the line that Facebook would “follow up with the relevant information”.

The Facebook CTO was also asked if the company could provide it with an archive of adverts that were run on its platform around the time of the Brexit referendum by Aggregate IQ — a Canadian data company that’s been linked to Cambridge Analytica/SCL, and which received £3.5M from leave campaign groups in the run-up to the 2016 referendum (and has also been described by leave campaigners as instrumental to securing their win). It’s also under joint investigation by Canadian data watchdogs, along with Facebook.

In written evidence provided to the committee today Facebook says it has been helping ongoing investigations into “the Cambridge Analytica issue” that are being undertaken by the UK’s Electoral Commission and its data protection watchdog, the ICO. Here it writes that its records show AIQ spent “approximately $2M USD on ads from pages that appear to be associated with the 2016 Referendum”.

Schroepfer’s responses on several requests by the committee for historical samples of the referendum ads AIQ had run amounted to ‘we’ll see what we can do’ — with the exec cautioning that he wasn’t entirely sure how much data might have been retained.

“I think specifically in Aggregate IQ and Cambridge Analytica related to the UK referendum I believe we are producing more extensive information for both the Electoral Commission and the Information Commissioner,” he said at one point, adding it would also provide the committee with the same information if it’s legally able to. “I think we are trying to do — give them all the data we have on the ads and what they spent and what they’re like.”

Collins asked what would happen if an organization or an individual had used a Facebook ad account to target dark ads during the referendum and then taken down the page as soon as the campaign was over. “How would you be able to identify that activity had ever taken place?” he asked.

“I do believe, uh, we have — I would have to confirm, but there is a possibility that we have a separate system — a log of the ads that were run,” said Schroepfer, displaying some of the fuzziness that irritated the committee. “I know we would have the page itself if the page was still active. If they’d run prior campaigns and deleted the page we may retain some information about those ads — I don’t know the specifics, for example how detailed that information is, and how long retention is for that particular set of data.”

Dark ads a “major threat to democracy”

Collins pointed out that a big part of UK (and indeed US) election law relates to “declaration of spend”, before making the conjoined point that if someone is “hiding that spend” — i.e. by placing dark ads that only the recipient sees, and which can be taken offline immediately after the campaign — that looks like a major risk to the democratic process.

“If no one’s got the ability to audit that, that is a major threat to democracy,” warned Collins. “And would be a license for a major breach of election law.”

“Okay,” responded Schroepfer as if the risk had never crossed his mind before. “We can come back on the details on that.”

On the wider app audit that Facebook has committed to carrying out in the wake of the scandal, Schroepfer was also asked how it can audit apps or entities that are no longer on the platform — and he admitted this is “a challenge” and said Facebook won’t have “perfect information or detail”.

“This is going to be a challenge again because we’re dealing with historic events so we’re not going to have perfect information or detail on any of these things,” he said. “I think where we start is — it very well may be that this company is defunct but we can look at how they used the platform. Maybe there’s two people who used the app and they asked for relatively innocuous data — so the chance that that is a big issue is a lot lower than an app that was widely in circulation. So I think we can at least look at that sort of information. And try to chase down the trail.

“If we have concerns about it even if the company is defunct it’s possible we can find former employees of the company who might have more information about it. This starts with trying to identify where the issues might be and then run the trail down as much as we can. As you highlight, though, there are going to be limits to what we can find. But our goal is to understand this as best as we can.”

The committee also wanted to know if Facebook had set a deadline for completing the audit — but Schroepfer would only say it’s going “as fast as we can”.

He claimed Facebook is sharing “a tremendous amount of information” with the UK’s data protection watchdog — as it continues its (now) year-long investigation into the use of digital data for political purposes.

“I would guess we’re sharing information on this too,” he said in reference to app audit data. “I know that I personally shared a bunch of details on a variety of things we’re doing. And same with the Electoral Commission [which is investigating whether use of digital data and social media platforms broke campaign spending rules].”

In Schroepfer’s written evidence to the committee Facebook says it has unearthed some suggestive links between Cambridge Analytica/SCL and Aggregate IQ: “In the course of our ongoing review, we also found certain billing and administration connections between SCL/Cambridge Analytica and AIQ”, it notes.

Both entities continue to deny any link exists between them, claiming they are entirely separate entities — though the former Cambridge Analytica employee turned whistleblower, Chris Wylie, has described AIQ as essentially the Canadian arm of SCL.

“The collaboration we saw was some billing and administrative contacts between the two of them, so you’d see similar people show up in each of the accounts,” said Schroepfer, when asked for more detail about what it had found, before declining to say anything else in a public setting on account of ongoing investigations — despite the committee pointing out other witnesses it has heard from have not held back on that front.

Another piece of information Facebook has included in the written evidence is the claim that it does not believe AIQ used Facebook data obtained via Kogan’s apps for targeting referendum ads — saying it used email address uploads for “many” of its ad campaigns during the referendum.

“The data gathered through the TIYDL [Kogan’s thisisyourdigitallife] app did not include the email addresses of app installers or their friends. This means that AIQ could not have obtained these email addresses from the data TIYDL gathered from Facebook,” Facebook asserts.

Schroepfer was questioned on this during the session and said that while there was some overlap in terms of individuals who had downloaded Kogan’s app and who had been in the audiences targeted by AIQ this was only 3-4% — which he claimed was statistically insignificant, based on comparing with other Facebook apps of similar popularity to Kogan’s.

“AIQ must have obtained these email addresses for British voters targeted in these campaigns from a different source,” is the company’s conclusion.

“We are investigating Mr Chancellor’s role right now”

The committee also asked several questions about Joseph Chancellor, the co-director of Kogan’s app company, GSR, who became an employee of Facebook in 2015 after he had left GSR. Its questions included what Chancellor’s exact role at Facebook is and why Kogan has been heavily criticized by the company yet his GSR co-director apparently remains gainfully employed by it.

Schroepfer initially claimed Facebook hadn’t known Chancellor was a director of GSR prior to employing him, in November 2015 — saying it had only become aware of that specific piece of his employment history in 2017.

But after a break in the hearing he ‘clarified’ this answer — adding: “In the recruiting process, people hiring him probably saw a CV and may have known he was part of GSR. Had someone known that — had we connected all the dots to when this thing happened with Mr Kogan, later on had he been mentioned in the documents that we signed with the Kogan party — no. Is it possible that someone knew about this and the right other people in the organization didn’t know about it, that is possible.”

A committee member then pressed him further. “We have evidence that shows that Facebook knew in November 2016 that Joseph Chancellor had formed the company, GSR, with Aleksandr Kogan which obviously then went on to provide the information to Cambridge Analytica. I’m very unclear as to why Facebook have taken such a very direct and critical line… with Kogan but have completely ignored Joseph Chancellor.”

At that point Schroepfer revealed Facebook is currently investigating Chancellor as a result of the data scandal.

“I understand your concern. We are investigating Mr Chancellor’s role right now,” he said. “There’s an employment investigation going on right now.”

As for the work Chancellor is doing for Facebook, Schroepfer said he thought Chancellor had worked on VR for the company — but emphasized he has not been involved with “the platform”.

The issue of the NDA Kogan claimed Facebook had made him sign also came up. But Schroepfer countered that this was not an NDA, just a “standard confidentiality clause” in the agreement certifying that Kogan had deleted the Facebook data and its derivatives.

“We want him to be able to be open. We’re waiving any confidentiality there if that’s not clear from a legal standpoint,” he said later, clarifying it does not consider Kogan legally gagged.

Schroepfer also confirmed this agreement was signed with Kogan in June 2016, and said the “core commitments” were to confirm the deletion of the data by Kogan himself and by three others he had passed it to: former Cambridge Analytica CEO Alexander Nix; Wylie, for a company he had set up after leaving Cambridge Analytica; and Dr Michael Inzlicht from the Toronto Laboratory for Social Neuroscience (Kogan mentioned to the committee earlier this week that he had also passed some of the Facebook data to a fellow academic in Canada).

Asked whether any payments had been made between Facebook and Kogan as part of the contract, Schroepfer said: “I believe there was no payment involved in this at all.”

‘Radical’ transparency is its fix for regulation

Other issues raised by the committee included why Facebook does not provide an overall control or opt-out for political advertising; why it does not offer a separate feed for ads but chooses to embed them into the Newsfeed; how and why it gathers data on non-users; the addictiveness engineered into its product; what it does about fake accounts; why it hasn’t recruited more humans to help with the “challenges” of managing content on a platform that’s scaled so large; and aspects of its approach to GDPR compliance.

On the latter, Schroepfer was queried specifically on why Facebook had decided to shift the data controller of ~1.5BN non-EU international users from Ireland to the US. On this he claimed the GDPR’s stipulation that there be a “lead regulator” conflicts with Facebook’s desire to be more responsive to local concerns in its non-EU international markets.

“US law does not have a notion of a lead regulator so the US does not become the lead regulator — it opens up the opportunity for us to have local markets have them, regions, be the lead and final regulator for the users in that area,” he claimed.

Asked whether he thinks the time has come for “robust regulation and empowerment of consumers over their information”, Schroepfer demurred on the need for new laws to control data flowing over consumer platforms. “Whether, through regulation or not, making sure consumers have visibility, control and can access and take their information with you, I agree 100%,” he said, agreeing only to further self-regulation, not to the need for new legislation.

“In terms of regulation there are multiple laws and regulatory bodies that we are under the guise of right now. Obviously the GDPR is coming into effect just next month. We have been regulated in Europe by the Irish DPC who’s done extensive audits of our systems over multiple years. In the US we’re regulated by the FTC, Privacy Commissioner in Canada and others. So I think the question isn’t ‘if’, the question is honestly how do we ensure the regulations and the practices achieve the goals you want. Which is consumers have safety, they have transparency, they understand how this stuff works, and they have control.

“And the details of implementing that is where all the really hard work is.”

His stock response to the committee’s concerns about divisive political ads was that Facebook believes “radical transparency” is the fix — also dropping one tidbit of extra news on that front in his written testimony by saying Facebook will roll out an authentication process for political advertisers in the UK in time for the local elections in May 2019.

Ads will also be required to be labeled as “political” and disclose who paid for the ad. And there will be a searchable archive — available for seven years — which will include the ads themselves plus some associated data (such as how many times an ad may have been seen, how much money was spent, and the kinds of people who saw it).
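For illustration only (Facebook hasn’t published a schema, so every field name below is invented), an entry in the archive described above might be shaped something like this TypeScript sketch:

    // Hypothetical shape of a political-ad archive entry, inferred only
    // from the disclosures described above; all field names are invented.
    interface PoliticalAdArchiveEntry {
      adCreative: string;                         // the ad itself (text or media reference)
      labeledPolitical: true;                     // ads must carry a "political" label
      paidForBy: string;                          // required funding disclosure
      impressions: number;                        // how many times the ad may have been seen
      spendUSD: number;                           // how much money was spent
      audienceBreakdown: Record<string, number>;  // the kinds of people who saw it
      retainedUntil: Date;                        // entries stay searchable for seven years
    }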

Collins asked Schroepfer whether Facebook’s ad transparency measures will also include “targeting data” — i.e. “will I understand not just who the advertiser was and what other adverts they’d run but why they’d chosen to advertise to me”?

“I believe among the things you’ll see is spend (how much was spent on this ad); you will see who they were trying to advertise to (what is the audience they were trying to reach); and I believe you will also be able to see some basic information on how much it was viewed,” Schroepfer replied — avoiding yet another straight answer.

Facebook points finger at Google and Twitter for data collection

“Other companies suck in your data too,” Facebook explained in many, many words today with a blog post detailing how it gathers information about you from around the web.

Facebook product management director David Baser wrote, “Twitter, Pinterest and LinkedIn all have similar Like and Share buttons to help people share things on their services. Google has a popular analytics service. And Amazon, Google and Twitter all offer login features. These companies — and many others — also offer advertising services. In fact, most websites and apps send the same information to multiple companies each time you visit them.” Describing how Facebook receives cookies, IP address, and browser info about users from other sites, he noted, “when you see a YouTube video on a site that’s not YouTube, it tells your browser to request the video from YouTube. YouTube then sends it to you.”
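To make the mechanics Baser describes concrete, here is a minimal sketch of what any third-party widget server sees when a browser requests an embedded button or video. This is hypothetical code, not Facebook’s: it’s written in TypeScript using the Express framework, and the route, logging and response are invented purely for illustration.

    // Hypothetical widget endpoint: when a page embeds a button or video,
    // the visitor's browser requests it from this third-party server,
    // automatically sending cookies, IP address and browser info along.
    import express from "express";

    const app = express();

    app.get("/widget/like-button", (req, res) => {
      const cookies = req.headers.cookie;          // cookies set on earlier visits ride along
      const ip = req.ip;                           // the connection itself reveals the visitor's IP
      const browser = req.headers["user-agent"];   // browser and device details
      const embeddingPage = req.headers.referer;   // which page embedded the widget

      console.log({ cookies, ip, browser, embeddingPage }); // what the widget host can record
      res.send("<button>Like</button>");           // then serve the widget as normal
    });

    app.listen(3000);

Note that the visitor never clicked anything: simply loading a page that embeds the widget triggers the request, which is why this data arrives whether or not the visitor is logged in or even has an account.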

It seems Facebook is tired of being singled out. The tacked-on “them too!” statements at the end of its descriptions of opaque data collection practices may have been intended to normalize the behavior, but they come off as a bit petty.

The blog post also fails to answer one of the biggest lines of questioning from CEO Mark Zuckerberg’s testimonies before Congress last week. Zuckerberg was asked by Representative Ben Lujan about whether Facebook constructs “shadow profiles” of ad targeting data about non-users.

Today’s blog post merely notes that “When you visit a site or app that uses our services, we receive information even if you’re logged out or don’t have a Facebook account. This is because other apps and sites don’t know who is using Facebook. Many companies offer these types of services and, like Facebook, they also get information from the apps and sites that use them.”

Facebook has a lot more questions to answer about this practice, since most of its privacy and data controls are only accessible to users who’ve signed up.

The data privacy double-standard

That said, other tech companies have gotten off light. Whether it’s because Apple and Google aren’t CEO’d by their founders any more, or we’ve grown to see iOS and Android as such underlying platforms that they aren’t responsible for what third-party developers do, scrutiny has focused on Zuckerberg and Facebook.

The Cambridge Analytica scandal emerged from Facebook being unable to enforce its policies that prohibit developers from sharing or selling data they pull from Facebook users. Yet it’s unclear whether Apple and Google do a better job at this policing. And while Facebook let users give their friends’ names and interests to Dr. Aleksandr Kogan, who sold that data to Cambridge Analytica, iOS and Android apps routinely ask you to give them your friends’ phone numbers, and we don’t see mass backlash about that.

At least not yet.

How Facebook gives an asymmetric advantage to negative messaging

Few Facebook critics are as credible as Roger McNamee, the managing partner at Elevation Partners. As an early investor in Facebook, McNamee was not only a mentor to Mark Zuckerberg but also introduced him to Sheryl Sandberg.

So it’s hard to overstate the significance of McNamee’s increasingly public criticism of Facebook over the last couple of years, particularly in light of the growing Cambridge Analytica storm.

According to McNamee, Facebook pioneered the building of a tech company on “human emotions”. Given that the social network knows all of our “emotional hot buttons”, McNamee believes, there is “something systemic” about the way that third parties can “destabilize” our democracies and economies. McNamee saw this in 2016 with both the Brexit referendum in the UK and the American Presidential election and concluded that Facebook does, indeed, give “asymmetric advantage” to negative messages.

McNamee still believes that Facebook can be fixed. But Zuckerberg and Sandberg, he insists, both have to be “honest” about what’s happened and recognize the company’s “civic responsibility” to strengthen democracy. And tech can do its part too, McNamee believes, by acknowledging and confronting what he calls its “dark side”.

McNamee is certainly doing this. He has now teamed up with ex-Google ethicist Tristan Harris in the creation of the Center for Humane Technology — an alliance of Silicon Valley notables dedicated to “realigning technology with humanity’s best interests.”

Zuckerberg urges privacy carve-outs to compete with China

Facebook’s founder said last month that the company is open to being regulated. But today he was asked by the US Senate what sort of legislative changes he would (and wouldn’t) like to see as a fix for the problems that the Cambridge Analytica data scandal has revealed.

Zuckerberg’s response on this — and on another question about his view on European privacy regulations — showed in the greatest detail yet how he’s hoping data handling and privacy rules evolve in the US, including a direct call for regulatory carve-outs to — as he couched it — avoid the US falling behind Chinese competitors.

Laying out “a few principles” that he said he believes would be “useful to discuss and potentially codify into law”, Zuckerberg first advocated for having “a simple and practical set of ways that you explain what you’re doing with data”, revealing an appetite to offload the problem of tricky privacy disclosures via a handy universal standard that can apply to all players.

“It’s hard to say that people fully understand something when it’s only written out in a long legal document,” he added. “This stuff needs to be implemented in a way where people can actually understand it.”

He then talked up the notion of “giving people complete control” over the content they share — claiming this is “the most important principle for Facebook”.

“Every piece of content that you share on Facebook, you own and you have complete control over who sees it and how you share it — and you can remove it at any time,” he said, without mentioning how far from that principle the company has been at earlier times in its history.

“I think that that control is something that’s important — and I think should apply to every service,” he continued, making a not-so-subtle plea for no other platforms to be able to leak data like Facebook’s platform historically has (and thus to close any competitive loopholes that might open up as a result of Facebook tightening the screw on developer access to data now in the face of a major scandal).

His final and most controversial point in response to the legislative changes question was about what he dubbed “enabling innovation”.

“Some of these use cases that are very sensitive, like face recognition for example,” he said carefully. “And I think that there’s a balance that’s extremely important to strike here where you obtain special consent for sensitive features like facial recognition. But don’t — but that we still need to make it so that American companies can innovate in those areas.

“Or else we’re going to fall behind Chinese competitors and others around the world who have different regimes for different, new features like that.”

Zuckerberg did not say which Chinese competitors he was thinking of specifically. But earlier this week ecommerce giant Alibaba announced another major investment in a facial recognition software business, leading a $600M Series C round in Hong Kong-based SenseTime — as one possible rival example.

A little later in the session, Zuckerberg was also directly asked whether European privacy regulations should be applied in the US. And here again he showed more of his hand — once again refusing to confirm whether Facebook will implement “the exact same regulation” for North American users, as some consumer groups have called for.

“Regardless of whether we implement the exact same regulation — I would guess it would be somewhat different because we have somewhat different sensibilities in the US, as do other countries — we’re committed to rolling out the controls and the affirmative consent, and the special controls around sensitive types of technologies like face recognition that are required in GDPR, we’re doing that around the world,” he reiterated.

“So I think it’s certainly worth discussing whether we should have something similar in the US but what I would like to say today is that we’re going to go forward and implement that [the same controls and affirmative consent] regardless of what the regulatory outcome is.”

Given that’s now the third refusal by Facebook to confirm GDPR will apply universally, it looks pretty clear that users in North America will get some degree of second-tier privacy vs international users — unless or until US lawmakers forcibly raise standards on the company and the industry as a whole.

That is perhaps to be expected. But it’s still a tricky PR message for Facebook to be having to deliver in the midst of a major data scandal — hence Zuckerberg’s attempt to reframe it as a matter of domestic vs foreign “sensibilities”.

Whether North American Facebook users buy into his repackaging of coach class privacy standards vs the rest of the world as just a little local flavor remains to be seen.

What Zuckerberg’s Congressional testimony doesn’t say

There’s a lot of keen analytical hindsight on display in Facebook chief executive Mark Zuckerberg’s written testimony to Congress ahead of his appearance at hearings on Wednesday, but nothing that indicates Facebook is ready to come to terms with the problems rotting the core of the social network.

The bulk of Zuckerberg’s opening statement is an historical analysis of the events of the past two years that have bruised the company’s reputation and share price.

Zuckerberg is defending his company on two fronts as he faces down the members of Congress who could regulate his company out of existence: user privacy and platform integrity.

In the testimony, Zuckerberg highlights the initial steps that Facebook has taken to close down access for third parties and to do more to combat fake accounts and the spread of misinformation.

These steps constitute what are now Zuckerberg’s usual assurances… Facebook is sacrificing its own profits to develop new tools and hire new personnel to combat bad actors that would leverage Facebook’s user information for their own fun and profit. Facebook has taken steps before the U.S. election to root out bad actors and will take even more steps now — since those initial efforts weren’t enough.

Near the close of his written testimony, Zuckerberg writes: “I want to be clear about what our priority is: protecting our community is more important than maximizing our profits.”

What Zuckerberg’s testimony fails to mention, as ever, is whether users themselves will ever be protected from Facebook.

Ultimately Facebook’s scandal is about how much the company knows about its users and how much power those users then have to control how Facebook applies (or shares) its knowledge.

As Wired columnist Zeynep Tufekci pointed out in a column this weekend, that’s been Facebook’s problem since the company’s inception.

“By now, it ought to be plain to them, and to everyone, that Facebook’s 2 billion-plus users are surveilled and profiled, that their attention is then sold to advertisers and, it seems, practically anyone else who will pay Facebook—including unsavory dictators like the Philippines’ Rodrigo Duterte. That is Facebook’s business model. That is why the company has an almost half-a-trillion-dollar market capitalization, along with billions in spare cash to buy competitors.”

All of the steps that Facebook is taking now to “make sure what happened with Kogan and Cambridge Analytica doesn’t happen again” only achieve one thing — consolidating Facebook’s control over the user data that it can make available to its customers.

The policies just reduce the funnel of information that application developers, advertisers and others can freely access (the emphasis here is on free). For those willing to pay the company for that information, there’s no guarantee it won’t be used in some way.

As Tufekci writes, Facebook is a surveillance engine — that’s the core of its business and the sale of that surveillance to bidders is the way that it functions to connect its “community”. And protecting that community is a good way to also protect Facebook’s profits.

The problem for Facebook begins with the platform itself — and Zuckerberg’s designs for it. And it won’t be solved with a single congressional hearing.

To pre-empt Congressional questioning and change the conversation, Zuckerberg could have offered solutions for Facebook to proactively address the problems that bedevil it. One scenario that could free Facebook from the advertising chains that ostensibly bind it to being a digital surveillance state is the introduction of a subscription service (as my colleague Josh Constine suggested earlier this year).

For regulators looking at potential legal solutions, the application of GDPR standards across the entire Facebook platform would be a step in the right direction. Zuckerberg has committed to it, but his company has a history of failing to live up to its promises to users. Perhaps Congress will find a way to convince Facebook’s chief to help the company keep its word… and avoid another apology tour.