Mobile social network Path, once a challenger to Facebook, is closing down

It’s that time again, folks, time to say goodbye to a social media service from days past.

Following the shuttering of Klout earlier this year, now Path, the one-time rival to Facebook, is closing its doors, according to an announcement made today. (Yes, you may be surprised to learn that Path was still alive.)

The eight-year-old service will close down in one month — October 18 — but it will be removed from the App Store and Google Play on October 1. Any remaining users have until October 18 to download a copy of their data, which can be done here.

Path was founded by former Facebook product manager Dave Morin and ex-Napster duo Dustin Mierau and Shawn Fanning. The company burst onto the scene in 2010 with a mobile social networking app that was visually pleasing and — importantly — limited to just 50 friends per user. That positioned it as a more private alternative to Facebook with some additional design bells and whistles, although the friend limit was later raised and eventually removed altogether.

At its peak, the service had around 15 million users and it was once raising money at a valuation of $500 million. Indeed, Google tried to buy it for $100 million when it was just months old. All in all, the startup raised $55 million from investors that included top Silicon Valley names like Index, Kleiner Perkins and Redpoint.

Facebook ultimately defeated Path, but it stole a number of features from its smaller rival

But looks fade, and social media is a tough place when you’re not Facebook, which today has over 1.5 billion active users and aggressively ‘borrowed’ elements from Path’s design back in the day.

Path’s road took a turn for the worse and the much-hyped startup lost staff, users and momentum (and user data). The company tried to launch a separate app to connect businesses and users — Path Talk — but that didn’t work, and ultimately Path was sold to Korea’s Kakao — a messaging and internet giant — in an undisclosed deal in 2015. Kakao bought the app because it was popular in Indonesia, the world’s fourth-most-populous country and Southeast Asia’s largest economy, where Path had four million users; the Korean firm was making a major play for that growing internet market.

However, Path hasn’t gained traction in the three years since, and now Kakao is discarding it altogether.

“It is with deep regret that we announce that we will stop providing our beloved service, Path. We started Path in 2010 as a small team of passionate and experienced designers and engineers. Over the years we have tried to lay out our mission: through technology and design we aim to be a source of happiness, meaning, and connection to our users,” the company said in a statement.

Thanks Aulia

Facebook is hiring a director of human rights policy to work on “conflict prevention” and “peace-building”

Facebook is advertising for a human rights policy director to join its business, located either at its Menlo Park HQ or in Washington DC — with “conflict prevention” and “peace-building” among the listed responsibilities.

In the job ad, Facebook writes that as the reach and impact of its various products continues to grow “so does the responsibility we have to respect the individual and human rights of the members of our diverse global community”, saying it’s:

… looking for a Director of Human Rights Policy to coordinate our company-wide effort to address human rights abuses, including by both state and non-state actors. This role will be responsible for: (1) Working with product teams to ensure that Facebook is a positive force for human rights and apply the lessons we learn from our investigations, (2) representing Facebook with key stakeholders in civil society, government, international institutions, and industry, (3) driving our investigations into and disruptions of human rights abusers on our platforms, and (4) crafting policies to counteract bad actors and help us ensure that we continue to operate our platforms consistent with human rights principles.

Among the minimum requirements for the role, Facebook lists experience “working in developing nations and with governments and civil society organizations around the world”.

It adds that “global travel to support our international teams is expected”.

The company has faced fierce criticism in recent years over its failure to take greater responsibility for the spread of disinformation and hate speech on its platform — especially in international markets it has targeted for business growth via its Internet.org initiative, which seeks to get more people ‘connected’ to the Internet (and thus to Facebook).

More connections mean more users for Facebook’s business and growth for its shareholders. But the costs of that growth have been cast into sharp relief over the past several years, as the human impact of handing very powerful social sharing tools to millions of people lacking in digital literacy — without a commensurately large investment in local education programs (or even in moderating and policing Facebook’s own platform) — has become all too clear.

In Myanmar, Facebook’s tools have been used to spread hate, accelerate ethnic cleansing and target political critics of authoritarian governments — earning the company widespread condemnation, including a rebuke from the UN earlier this year, which blamed the platform for accelerating ethnic violence against Myanmar’s Muslim minority.

In the Philippines Facebook also played a pivotal role in the election of president Rodrigo Duterte — who now stands accused of plunging the country into its worst human rights crisis since the dictatorship of Ferdinand Marcos in the 1970s and 80s.

And in India the popularity of the Facebook-owned WhatsApp messaging platform has been blamed for accelerating the spread of misinformation — leading to mob violence and the deaths of several people.

Facebook famously failed even to spot mass manipulation campaigns going on in its own backyard — when in 2016 Kremlin-backed disinformation agents injected masses of anti-Clinton, pro-Trump propaganda into its platform and garnered hundreds of millions of American voters’ eyeballs at a bargain basement price.

So it’s hardly surprising the company has been equally naive in markets it understands far less. Though also hardly excusable — given all the signals it has access to.

In Myanmar, for example, local organizations that are sensitive to the cultural context repeatedly complained to Facebook that it lacked Burmese-speaking staff — complaints that apparently fell on deaf ears for the longest time.

The cost to American society of social media-enabled political manipulation and increased social division is certainly very high. The cost of the weaponization of digital information in markets such as Myanmar looks incalculable.

In the Philippines Facebook also indirectly has blood on its hands — having provided services to the Duterte government to help it make more effective use of its tools. This same government is now waging a bloody ‘war on drugs’ that Human Rights Watch says has claimed the lives of around 12,000 people, including children.

Facebook’s job ad for a human rights policy director includes the pledge that “we’re just getting started” — referring to its stated mission of helping people “build stronger communities”.

But when you consider the impact its business decisions have already had in certain corners of the world it’s hard not to read that line with a shudder.

Citing the UN Guiding Principles on Business and Human Rights (and “our commitments as a member of the Global Network Initiative”), Facebook writes that its product policy team is dedicated to “understanding the human rights impacts of our platform and to crafting policies that allow us both to act against those who would use Facebook to enable harm, stifle expression, and undermine human rights, and to support those who seek to advance rights, promote peace, and build strong communities”.

Clearly it has an awful lot of “understanding” to do on this front. And hopefully it will now move fast to understand the impact of its own platform, circa fifteen years into its great ‘society reshaping experiment’, and prevent Facebook from being repeatedly used to trash human rights.

As well as representing the company in meetings with politicians, policymakers, NGOs and civil society groups, Facebook says the new human rights director will work on formulating internal policies governing user, advertiser, and developer behavior on Facebook. “This includes policies to encourage responsible online activity as well as policies that deter or mitigate the risk of human rights violations or the escalation of targeted violence,” it notes. 

The director will also work with internal public policy, community ops and security teams to try to spot and disrupt “actors that seek to misuse our platforms and target our users” — while also working to support “those using our platforms to foster peace-building and enable transitional justice”.

So you have to wonder how, for example, Holocaust denial continuing to be protected speech on Facebook will square with that stated mission for the human rights policy director.

At the same time, Facebook is currently hiring for a public policy manager in Francophone Africa — who it writes can “combine a passion for technology’s potential to create opportunity and to make Africa more open and connected, with deep knowledge of the political and regulatory dynamics across key Francophone countries in Africa”.

That job ad does not explicitly reference human rights — talking only about “interesting public policy challenges… including privacy, safety and security, freedom of expression, Internet shutdowns, the impact of the Internet on economic growth, and new opportunities for democratic engagement”.

As well as “new opportunities for democratic engagement”, among the role’s other listed responsibilities is working with Facebook’s Politics & Government team to “promote the use of Facebook as a platform for citizen and voter engagement to policymakers and NGOs and other political influencers”.

So here, in a second policy job, Facebook looks to be continuing its ‘business as usual’ strategy of pushing for more political activity to take place on Facebook.

And if Facebook wants an accelerated understanding of human rights issues around the world, it might be better advised to take a more joined-up approach to human rights across its policy staff, and at least include it among the listed responsibilities of all the policy shapers it’s looking to hire.

Alibaba goes big on Russia with joint venture focused on gaming, shopping and more

Alibaba is doubling down on Russia after the Chinese e-commerce giant launched a joint venture with one of the country’s leading internet companies.

Russia is said to have over 70 million internet users — around half of its population — with many more in Russian-speaking neighboring countries. The numbers are projected to rise as, like in many parts of the world, the growth of smartphones brings more people online. Now Alibaba is moving in to ensure it is well placed to take advantage.

Mail.ru, the Russian firm that offers a range of internet services — including social media, email and food delivery — to 100 million registered users, has teamed up with Alibaba to launch AliExpress Russia, a JV that they hope will function as a “one-stop destination” for communication, social media, shopping and games. Mail.ru backer MegaFon, a telecom firm, and the country’s sovereign wealth fund RDIF (the Russian Direct Investment Fund) have also invested undisclosed amounts into the newly formed organization.

To recap: Alibaba — which launched its AliExpress service in Russia some years ago — will hold 48 percent of the business, with 24 percent for MegaFon, 15 percent for Mail.ru and the remaining 13 percent taken by RDIF. In addition, MegaFon has agreed to trade its 10 percent stake in Mail.ru to Alibaba in a transaction that (alone) is likely to be worth north of $500 million.

That figure doesn’t include other investments in the venture.
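As a quick sanity check on those figures, the arithmetic works out (a back-of-the-envelope sketch using only the numbers reported above — neither company has published a valuation):

```python
# Back-of-the-envelope check of the AliExpress Russia ownership split and
# the Mail.ru valuation implied by the MegaFon stake trade, using only
# figures reported in the article above.

stakes = {"Alibaba": 48, "MegaFon": 24, "Mail.ru": 15, "RDIF": 13}
assert sum(stakes.values()) == 100  # the four holdings cover the whole JV

# MegaFon's 10 percent Mail.ru stake is said to be worth north of $500M,
# which would imply a valuation for Mail.ru as a whole of roughly $5B+.
stake_value_usd = 500e6
implied_mailru_value = stake_value_usd / 0.10
print(f"Implied Mail.ru valuation: ${implied_mailru_value / 1e9:.0f}B+")
```

Treat these as illustrative arithmetic from the deal terms as reported, not figures disclosed by either side.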

“The parties will inject capital, strategic assets, leadership, resources and expertise into a joint venture that leverages AliExpress’ existing businesses in Russia,” Alibaba explained on its Alizila blog.

Alibaba looks to have picked its horse in Russia’s internet race: Mail.ru [Image via KIRILL KUDRYAVTSEV/AFP/Getty Images]

The strategy, it seems, is to pair Mail.ru’s consumer services with AliExpress, Alibaba’s international e-commerce marketplace. That’ll allow Russian consumers to buy from AliExpress merchants in China, as well as in overseas markets like Southeast Asia, India, Turkey (where Alibaba recently backed an e-commerce firm) and other parts of Europe where it has a presence. Likewise, Russian online sellers will gain access to consumers in those markets. Alibaba’s ‘branded mall’ — TMall — is also a part of the AliExpress Russia offering.

This deal suggests that Alibaba has picked its ‘horse’ in Russia’s internet race, much the same way that it has repeatedly backed Paytm — the company offering payments, e-commerce and digital banking — in India with funding and integrations.

Alibaba has already said that Russia has been a “vital market for the growth” of its Alipay mobile payment service. It didn’t provide any raw figures to back that up, but you can bet it will push Alipay hard as it runs AliExpress Russia, alongside Mail.ru’s own offering, Money.Mail.Ru.

“Most Russian consumers are already our users, and this partnership will enable us to significantly increase the access to various segments of the e-commerce offering, including both cross-border and local merchants. The combination of our ecosystems allows us to leverage our distribution through our merchant base and goods as well as product integrations,” said Mail.Ru Group CEO Boris Dobrodeev in a statement.

This is the second strategic alliance that MegaFon has struck this year. It formed a joint venture with Gazprombank in May through a deal that saw it offload five percent of its stake in Mail.ru. MegaFon acquired 15.2 percent of Mail.ru for $740 million in February 2017.

The Russia deal comes a day after Alibaba co-founder and executive chairman Jack Ma — the public face of the company — announced plans to step down over the next year. Current CEO Daniel Zhang will replace him as chairman, meaning that the company will also need to appoint a new CEO.

Facebook is opening its first data center in Asia

Facebook is opening its first data center in Asia. The company announced today that it is planning an 11-story building in Singapore that will help its services run faster and more efficiently. The development will cost SG$1.4 billion, or around US$1 billion, the company confirmed.

The social networking firm said that it anticipates that the building will be powered 100 percent by renewable energy. It also said that it will utilize a new ‘StatePoint Liquid Cooling’ technology, which the firm claims minimizes the consumption of water and power.

Facebook said that the project will create hundreds of jobs and “form part of our growing presence in Singapore and across Asia.”

A render of what Facebook anticipates that its data center in Singapore will look like

Asia Pacific accounts for 894 million monthly users — 40 percent of Facebook’s total user base, and more than any other region. However, when it comes to actually making money, the region is lagging. Asia Pacific brought in total sales of $2.3 billion in Facebook’s most recent quarter of business — just 18 percent of total revenue, and less than half of the revenue made from the U.S. during the same period. Enabling more efficient services is one step toward helping to close that revenue gap.
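Taken at face value, the regional figures above can be cross-checked with some quick arithmetic (an illustrative sketch; all inputs are the rounded numbers reported in this article):

```python
# Cross-check of the Asia Pacific figures cited above (rounded inputs).
apac_users = 894e6            # monthly users in Asia Pacific
apac_share_of_users = 0.40    # stated share of Facebook's total user base
apac_revenue = 2.3e9          # the region's sales in the most recent quarter
apac_share_of_revenue = 0.18  # stated share of total quarterly revenue

implied_total_users = apac_users / apac_share_of_users        # ~2.2 billion
implied_total_revenue = apac_revenue / apac_share_of_revenue  # ~$12.8 billion

# Revenue per user in the region for the quarter, illustrating the gap the
# article describes between user share and revenue share:
apac_arpu = apac_revenue / apac_users  # ~$2.57
print(f"Implied total users: {implied_total_users / 1e9:.2f}B")
print(f"Implied total quarterly revenue: ${implied_total_revenue / 1e9:.1f}B")
print(f"Asia Pacific revenue per user: ${apac_arpu:.2f}")
```

The point is only that the article’s percentages and absolute figures are internally consistent; the implied totals are derived, not reported.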

Facebook isn’t the only global tech firm investing in data centers in Asia lately. Google recently revealed that it plans to develop a third data center in Singapore, and it also serves Asia from data centers located in Taiwan.

Highlights from the Senate Intelligence hearing with Facebook and Twitter

Another day, another political grilling for social media platform giants.

The Senate Intelligence Committee’s fourth hearing took place this morning, with Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey present to take questions as U.S. lawmakers continue to probe how foreign influence operations are playing out on Internet platforms — and eye up potential future policy interventions. 

During the session, US lawmakers voiced concerns about “who owns” data they couched as “rapidly becoming me” — an uncomfortable conflation for platforms whose business is human surveillance.

They also flagged the risk of more episodes of data manipulation intended to incite violence, such as has been seen in Myanmar — and Facebook especially was pressed to commit to having both a legal and moral obligation towards its users.

The value of consumer data was also raised, with committee vice chair, Sen. Mark Warner, suggesting platforms should actively convey that value to their users, rather than trying to obfuscate the extent and utility of their data holdings. A level of transparency that will clearly require regulatory intervention.

Here’s our round-up of some of the other highlights from this morning’s session.

Google not showing up

Today’s hearing was a high profile event largely on account of two senior bums sitting on the seats before lawmakers — and one empty chair.

Facebook sent its COO Sheryl Sandberg. Twitter sent its bearded wiseman CEO Jack Dorsey (whose experimental word of the month appears to be “cadence” — as in he frequently said he would like a greater “cadence” of meetings with intelligence tips from law enforcement).

But Google offered only its legal chief in place of Larry Page or Sundar Pichai, whom the committee had actually asked for.

Which meant the company instantly became the politicians’ favored punchbag, with senator after senator laying into Alphabet for empty chairing them at the top exec level.

Whatever Page and Pichai were too busy doing, skipping awkward questions about the company’s business activity and ambitions in China looks like a major own goal for Alphabet: it was open season for senators to slam it.

Page staying away also made Facebook and Twitter look the very model of besuited civic responsibility and patriotism just for bothering to show up.

We got “Jack” and “Sheryl” first name terms from some of the senators, and plenty of “thanks for turning up” heaped on them from all corners — with some very particular barbs reserved for Google.

“I want to commend both of you for your appearance here today for what was no doubt going to be some uncomfortable questions. And I want to commend your companies for making you available. I wish I could say the same about Google,” said Senator Tom Cotton, addressing those in the room. “Both of you should wear it as a badge of honor that the Chinese Communist Party has blocked you from operating in their country.”

“Perhaps Google didn’t send a senior executive today because they’ve recently taken actions such as terminating a co-operation they had with the American military on programs like artificial intelligence that are designed not just to protect our troops and help them fight in our country’s wars but to protect civilians as well,” he continued, warming to his theme. “This is at the very same time that they continue to co-operate with the Chinese Communist Party on matters like artificial intelligence or partner with Huawei and other Chinese telecom companies who are effectively arms of the Chinese Communist Party.

“And credible reports suggest that they are working to develop a new search engine that would satisfy the Chinese Communist Party’s censorship standards after having disclaimed any intent to do so eight years ago. Perhaps they did not send a witness to answer these questions because there is no answer to these questions. And the silence we would hear right now from the Google chair would be reminiscent of the silence that that witness would provide.”

Even Sandberg seemed to cringe when offered the home-run opportunity to stick the knife into Google — when Cotton asked both witnesses whether their companies would consider taking these kinds of actions.

But after a split second’s hesitation her media training kicked in — and she found a way of diplomatically giving Google the asked-for kicking. “I’m not familiar with the specifics of this at all but based on how you’re asking the question I don’t believe so,” was her reply.

After his own small pause, Dorsey, the man of fewer words, added: “Also no.”

 

Dorsey repeat apologizing 

‘We haven’t done a good job of that’ was the most common refrain falling from Dorsey’s bearded lips this morning, as senators asked why the company hasn’t managed to suck less from all sorts of angles — whether that’s by failing to provide external researchers with better access to data to help them help it with malicious interference; failing to inform individual users who’ve been the targeted victims of Twitter fakery that the abuse has been happening to them; or just failing to offer any kind of contextual signal to its users that some piece of content they’re seeing is (or might be) maliciously fake.

But then this is the man who has defended providing a platform to people who make a living selling lies, so…

“We haven’t done a good job of that in the past,” was certainly phrase of the morning for a contrite Dorsey. And while admitting failure is at least better than denying you’re failing, it’s still just that: failure.

And continued failure has been a Twitter theme for so long now, when it comes to things like harassment and abuse, that it’s starting to feel intentional. (As if, were you able to cut Twitter open, you’d find the words ‘feed the trolls’ running all the way through its business.)

Sadly, the committee seemed placated by Dorsey’s repeat confessions of inadequacy, and he really wasn’t pressed enough. We’d have liked to see a lot more grilling of him over the short-term business incentives that tie his hands on fighting abuse.

Amusingly, one senator rechristened Dorsey “Mr Darcey”, after somehow tripping over the two syllables of his name. But actually, thinking about it, ‘Pride and Prejudice’ might be a good theme for the Twitter CEO to explore during one of his regular meditation sessions.

Y’know, as he ploughs through a second turgid decade of journeying towards self-awareness — while continuing to be paralyzed, on the business, civic and, well, human fronts, by rank indecision about which people and points of view to listen to (pro-tip: if someone makes money selling lies and/or spreading hate, you really shouldn’t be letting them yank your operational chain) — leaving his platform (the would-be “digital public square”, as he kept referring to it today) incapable of upholding the healthy standards it claims to want. (Or daubed with all manner of filthy graffiti, if you want a visual metaphor.)

The problem is that Twitter’s stated position/mission, per Dorsey’s prepared statements to the committee, of keeping “all voices on the platform” is hubris. It’s a flawed ideology that results in massive damage to the free speech and healthy conversation he professes to want to champion — because Nazis are great at silencing people they hate and harass.

Unfortunately Dorsey still hasn’t had that eureka moment yet. And there was no sign of any imminent awakening judging by this morning’s performance.

 

Sandberg’s oh-so-smooth operation — but also an exchange that rattled her

The Facebook COO isn’t chief operating officer for nothing. She’s the queen of the polished, non-committal soundbite. And today she almost always had one to hand — smoothly projecting the impression that the company is always doing something. Whether that’s on combating hate speech, hoaxes and “inauthentic” content, or IDing and blocking state-level disinformation campaigns — thereby shifting attention off the deeper question of whether Facebook is doing enough. (Or even whether its platform might not be the problem itself.)

Albeit the bar looks very low indeed when your efforts are being set against Twitter and an empty chair. (Aka the “invisible witness”, as one senator sniped at Google.)

Very many of her answers courteously informed senators that Facebook would ‘follow up’ with answers and/or some hazily non-specific ‘collaborative work’ at an undated future time — which is the most professional way to kick awkward questions into the long grass.

Though do it long enough and the grass can turn on you and start to bite back because it’s got so long and unkempt it now contains some very angry snakes.

Senator Kamala Harris — very clearly seething at this point, having had her questions to Facebook knocked back since November 2017, when its general counsel first testified to the committee on the disinformation topic — was determined to get under Sandberg’s skin. And she did.

The exchange that rattled the Facebook COO started off around how much money it makes off of ads run by fake accounts — such as the Kremlin-backed Internet Research Agency.

Sandberg slickly reframed “inauthentic content” as the even more boring-sounding “inorganic content” — now several psychological steps removed from the shockingly outrageous Kremlin propaganda that the company eventually disclosed.

She added it was equivalent to .004% of content in News Feed (hence Facebook’s earlier contention to Harris that it’s “immaterial to earnings”).

It’s not so much the specific substance of the question that’s the problem here for Facebook — with Sandberg also smoothly reiterating that the IRA had spent about $100k (which is petty cash in ad terms) — it’s the implication that Facebook’s business model profits off of fakes and hate, and is therefore amorously entwined in bed with fakes and hate.

“From our point of view, Senator Harris, any amount is too much,” continued Sandberg after she rolled out the $100k figure, and now beginning to thickly layer on the emulsion.

Harris cut her off, interjecting: “So are you saying that the revenue generated was .004% of your annual revenue?”, before adding the pointed observation: “Because of course that would not be immaterial” — which drew a rare stuttered double “so” from Sandberg.

“So what metric are you using to calculate the revenue that was generated associated with those ads, and what is the dollar amount that is associated then with that metric?” pressed Harris.

Sandberg couldn’t provide the straight answer being sought, she said, because “ads don’t run with inorganic content on our service” — claiming: “There is actually no way to firmly ascertain how much ads are attached to how much organic content; it’s not how it works.”

“But what percentage of the content on Facebook is organic?” rejoined Harris.

That elicited a micro-pause from Sandberg, before she fell back on the usual: “I don’t have that specific answer but we can come back to you with that.”

Harris pushed her again, wondering if it’s “the majority of content”?

“No, no,” said Sandberg, sounding almost flustered.

“Your company’s business model is complex but it benefits from increased user engagement… so, simply put, the more people that use your platform, the more they are exposed to third party ads, the more revenue you generate — would you agree with that?” continued Harris, deliberately flattening her delivery, the better to reel Sandberg in.

After another pause Sandberg asked her to repeat this hardly complex question — before affirming “yes, yes” and then hastily qualifying it with: “But only I think when they see really authentic content because I think in the short run and over the long run it doesn’t benefit us to have anything inauthentic on our platform.”

Harris continued to hammer on how Facebook’s business model benefits from greater user engagement as more ads are viewed via its platform — linking it to “a concern that many have is how you can reconcile an incentive to create and increase your user engagement with the content that generates a lot of engagement is often inflammatory and hateful”.

She then skewered Sandberg with a specific example of Facebook’s hate speech moderation failure — and by suggestive implication a financially incentivized policy and moral failure — referencing a ProPublica report from June 2017 which revealed the company had told moderators to delete hate speech targeting white men but not black children — because the latter were not considered a “protected class”.

Sandberg, sounding uncomfortable now, said this was “a bad policy that has been changed”. “We fixed it,” she added.

“But isn’t that a concern with hate, period — that not everyone is looked at the same way?” wondered Harris.

Facebook “cares tremendously about civil rights” said Sandberg, trying to regain the PR initiative. But she was again interrupted by Harris — wondering when exactly Facebook had “addressed” that specific policy failure.

Sandberg was unable to put a date on when the policy change had been made. Which obviously now looked bad.

“Was the policy changed after that report? Or before that report from ProPublica?” pressed Harris.

“I can get back to you on the specifics of when that would have happened,” said Sandberg.

“You’re not aware of when it happened?”

“I don’t remember the exact date.”

“Do you remember the year?”

“Well you just said it was 2017.”

“So do you believe it was 2017 when the policy changed?”

“Sounds like it was.”

The awkward exchange ended with Sandberg being asked whether or not Facebook had changed its hate speech policies to protect more than just legally designated protected classes.

“I know that our hate speech policies go beyond the legal classifications, and they are all public, and we can get back to you on that,” she said, falling back on yet another pledge to follow up.

Twitter agreeing to bot labelling in principle  

We flagged this earlier but Senator Warner managed to extract from Dorsey a quasi-agreement to labelling automation on the platform in future — or at least providing more context to help users navigate what they’re being exposed to in tweet form.

He said Twitter has been “definitely” considering doing this — “especially this past year”.

Although, as we noted earlier, he had plenty of caveats about the limits of its powers of bot detection.

“It’s really up to the implementation at this point,” he added.

How exactly ‘bot or not’ labelling will come to Twitter isn’t clear. Nor was there any timeframe.

But it’s at least possible to imagine the company could add some sort of suggestive percentage of automated content to accounts in future — assuming Dorsey can find his first, second and third gears.

Lawmakers worried about the impact of deepfakes

Deepfakes — aka AI-powered manipulation of video to create fake footage of people doing things they never did — are, perhaps unsurprisingly, already on the radar of reputation-sensitive U.S. lawmakers, even though the technology itself is hardly in widespread use yet.

Several senators asked whether (and how comprehensively) the social media companies archive suspended or deleted accounts.

Clearly politicians are concerned. No senator wants to be ‘filmed in bed with an intern’ — especially one they never actually went to bed with.

The response they got back was a qualified yes — with both Sandberg and Dorsey saying they keep such content if they have any suspicions.

Which is perhaps rather cold comfort when you consider that Facebook had — apparently — zero suspicions about all the Kremlin propaganda violently coursing across its platform in 2016 and generating hundreds of millions of views.

Since that massive fuck-up the company has certainly seemed more proactive on the state-sponsored fakes front  — recently removing a swathe of accounts linked to Iran which were pushing fake content, for example.

Although unless lawmakers regulate for transparency and audits of platforms, there’s no real way for anyone outside these commercially walled gardens to be entirely sure.

Sandberg’s clumsy affirmation of WhatsApp encryption 

Since the WhatsApp founders left Facebook (one last fall, the other earlier this year), there have been rumors that the company might be considering dropping the flagship end-to-end encryption the messaging platform boasts, specifically to help with its monetization plans around linking businesses with users.

And Sandberg was today asked directly if WhatsApp still uses e2e encryption. She replied by affirming Facebook’s commitment to encryption generally — saying it’s good for user security.

“We are strong believers in encryption,” she told lawmakers. “Encryption helps keep people safe, it’s what secures our banking system, it’s what secures the security of private messages, and consumers rely on it and depend on it.”

Yet on the specific substance of the question, which had asked whether WhatsApp is still using end-to-end encryption, she pulled out another of her professionally caveated responses — telling the senator who had asked: “We’ll get back to you on any technical details but to my knowledge it is.”

Most probably this was just her habit of professional caveating kicking in. But it was an odd way to reaffirm something as fundamental as the e2e encrypted architecture of a product used by billions of people on a daily basis. And whose e2e encryption has caused plenty of political headaches for Facebook — which in turn is something Sandberg has been personally involved in trying to fix.

Should we be worried that the Facebook COO couldn’t swear under oath that WhatsApp is still e2e encrypted? Let’s hope not. Presumably the day job has just become so fettered with fixes she just momentarily forgot what she could swear she knows to be true and what she couldn’t.

Facebook, Twitter: US intelligence could help us more in fighting election interference

Facebook’s chief operating officer Sheryl Sandberg has admitted that the social networking giant could have done more to prevent foreign interference on its platforms, but said that the government also needs to step up its intelligence sharing efforts.

The remarks come ahead of an open hearing at the Senate Intelligence Committee on Wednesday, where Sandberg and Twitter chief executive Jack Dorsey will testify on foreign interference and election meddling on social media platforms. Google’s Larry Page was invited, but declined to attend.

“We were too slow to spot this and too slow to act,” said Sandberg in prepared remarks. “That’s on us.”

The hearing comes in the aftermath of Russian interference in the 2016 presidential election. Social media companies have been increasingly under the spotlight after foreign actors, believed to be working for or closely with the Russian government, used disinformation-spreading tactics to try to influence the outcome of that election, and again in the run-up to the midterm elections later this year.

Both Facebook and Twitter have removed accounts and bots from their sites believed to be involved in spreading disinformation and false news. Google said last year that it found Russian meddling efforts on its platforms.

“We’re getting better at finding and combating our adversaries, from financially motivated troll farms to sophisticated military intelligence operations,” said Sandberg.

But Facebook’s second-in-command also said that the US government could do more to help companies understand the wider picture from Russian interference.

“We continue to monitor our service for abuse and share information with law enforcement and others in our industry about these threats,” she said. “Our understanding of overall Russian activity in 2016 is limited because we do not have access to the information or investigative tools that the U.S. government and this Committee have.”

Later, Twitter’s Dorsey also said in his own statement: “The threat we face requires extensive partnership and collaboration with our government partners and industry peers,” adding: “We each possess information the other does not have, and the combined information is more powerful in combating these threats.”

Both Sandberg and Dorsey are subtly referring to classified information that the government has but private companies don’t get to see — information that is considered a state secret.

Tech companies have in recent years pushed for more access to the knowledge that federal agencies have, not least to help protect against increasing cybersecurity threats and hostile nation-state actors. The theory goes that sharing intelligence can help companies defend against the best-resourced hackers. But efforts to introduce legislation have proven controversial, because critics argue that, in sharing threat information with the government, private user data would also be collected and sent to US intelligence agencies for further investigation.

Instead, tech companies are now pushing for information from Homeland Security to better understand the threats they face — to independently fend off future attacks.

As reported, tech companies last month met in secret to discuss preparations to counter foreign manipulation on their platforms. But attendees, including Facebook, Twitter, Google and Microsoft, are said to have “left the meeting discouraged,” having received little insight from the government.

Political anonymity may help us see both sides of a divisive issue online

Some topics are so politically charged that even to attempt a discussion online is to invite toxicity and rigid disagreement among participants. But a new study finds that exposure to the views of others, minus their political affiliation, could help us overcome our own biases.

Researchers from the University of Pennsylvania, led by sociologist Damon Centola, examined how people’s interpretations of some commonly misunderstood climate change data changed after seeing those of people in opposing political parties.

The theory is that by exposing people to information sans partisan affiliation, we might be able to break the “motivated reasoning” that leads us to interpret data in a preconceived way.

The data in this case came from a NASA study indicating that sea ice levels will decrease, but it is frequently misinterpreted as suggesting the opposite. The misunderstanding isn’t entirely partisan in nature: 40 percent of self-identified Republicans and 26 percent of Democrats polled in the study adopted the mistaken view.

Looking at the NASA graph used in the study, it’s not crazy to think that sea ice levels would increase, though that reading is incorrect.

Thousands of people from both parties, recruited via Mechanical Turk, were asked to indicate whether sea ice levels were rising or falling, and by how much. After their initial guess, they were shown how others had answered and given the chance to adjust their own answer. Some were shown their peers’ answers alongside those peers’ political affiliations, and some were shown the answers alone.

When political party was not attached to the answers, there was a considerable effect: Republicans jumped from about 65 percent getting it right to around 90 percent, and Democrats went from 75 to 85 percent. When party was shown, improvements were much smaller, and when people were only exposed to answers from their own party, there was practically no improvement at all.

Obviously this isn’t going to fix the problem of viral misinformation or the near-constant flame wars raging across every major online service. But it’s remarkable that something as simple as stripping the political context from a communication may lead to it being taken more seriously.

Perhaps something along these lines could help put the brakes on runaway articles: showing highly cited views from people with no indication of their political beliefs. Will you be so quick to dismiss or accept someone’s argument if you can’t be sure of their agenda? At worst it may force people to take a second and evaluate those ideas on their merits, and that’s hardly a bad thing.

The study was published today in the Proceedings of the National Academy of Sciences.

Google, Facebook, Twitter chiefs called back to Senate Intelligence committee

Twitter chief executive Jack Dorsey and Facebook chief operating officer Sheryl Sandberg will testify in an open hearing at the Senate Intelligence Committee next week, the committee’s chairman has confirmed.

Larry Page, chief executive of Google parent company Alphabet, was also invited but has not confirmed his attendance, a committee spokesperson confirmed to TechCrunch.

Sen. Richard Burr (R-NC) said in a release that the social media giants will be asked about their responses to foreign influence operations on their platforms in an open hearing on September 5.

It will be the second time the Senate Intelligence Committee, which oversees the government’s intelligence and surveillance efforts, has called the companies to testify, but the first time that senior leadership will attend. Facebook chief executive Mark Zuckerberg did, however, appear at a House Energy and Commerce Committee hearing in April.

It comes in the wake of Twitter and Facebook recently announcing the suspension of accounts from their platforms that they believe to be linked to Iranian and Russian political meddling. Social media companies have been increasingly under the spotlight in recent years following Russian efforts to influence the 2016 presidential election with disinformation.

Twitter suspends more accounts for “engaging in coordinated manipulation”

Following last week’s suspension of 284 accounts for “engaging in coordinated manipulation,” Twitter announced today that it’s kicked an additional 486 accounts off the platform for the same reason, bringing the total to 770 accounts.

While many of the accounts removed last week appeared to originate from Iran, Twitter said this time that about 100 of the latest batch to be suspended claimed to be in the United States. Many of these were less than a year old and shared “divisive commentary.” These 100 accounts tweeted a total of 867 times and had 1,268 followers between them.

As examples of the “divisive commentary” tweeted, Twitter shared screenshots from several suspended accounts that showed anti-Trump rhetoric, counter to the conservative narrative that the platform unfairly targets Republican accounts.

Twitter also said that the suspended accounts included one advertiser that spent $30 on Twitter ads last year, but added those ads did not target the U.S. and that the billing address was outside of Iran.

“As with prior investigations, we are committed to engaging with other companies and relevant law enforcement entities. Our goal is to assist investigations into these activities and where possible, we will provide the public with transparency and context on our efforts,” Twitter said on its Safety account.

After years of accusations that it doesn’t enforce its own policies about bullying, bots and other abuses, Twitter has taken a much harder line on problematic accounts in the past few months. Despite stalling user growth, especially in the United States, Twitter has been aggressively suspending accounts, including ones that were created by users to evade prior suspensions.

Twitter announced a drop of one million monthly users in the second quarter, causing investors to panic even though it posted a $100 million profit. In its earnings call, Twitter said that its efforts don’t impact user numbers because many of the “tens of millions” of removed accounts were too new or had been inactive for more than a month and were therefore not counted in active user numbers. The company did admit, however, that its anti-spam measures had caused it to lose three million monthly active users.

Whatever its impact on user numbers, Twitter’s anti-abuse measures may help it save face during a Senate Intelligence Committee hearing on September 5. Executives from Twitter, Facebook and Google are expected to be grilled by Sen. Mark Warner and other politicians about the use of their platforms by other countries to influence U.S. politics.

Ipsy’s new subscription delivers full-size beauty products, not samples

Ipsy, the beauty box subscription service and e-commerce site founded in 2011 by YouTube creator Michelle Phan, is expanding its business beyond sample-sized products. The company today is debuting a more expensive “Glam Bag Plus” subscription, which will ship customers five full-sized products for $25 per month.

The move aims to capitalize on Ipsy’s established customer base, who now trust Ipsy’s beauty product recommendations enough to pay upfront for full-sized products, rather than receiving only samples with the option to shop online later for the products they liked.

It may also help attract a new customer who doesn’t find value in samples – which are sometimes one-use products, or packaged poorly compared with their full-size counterparts, making them difficult to travel with or throw in a purse.

So far, Ipsy’s curation has been succeeding: it touts over 3 million subscribers, compared with more than 2.5 million for rival Birchbox.

Ipsy, for what it’s worth, tends to offer better samples than Birchbox, which, after some financial struggles, is now majority owned by hedge fund Viking Global Investors.

Birchbox shipments are often too reliant on less valuable items, like single-use makeup wipes, tiny eyeshadows without a reliable protective case, or totally hit-or-miss perfume samples, for example. Ipsy, meanwhile, sends out full-sized makeup brushes and other full-sized items along with samples on a regular basis. It also prioritizes makeup products over hair and skin care items in its curation.

Plus, it ships products in a reusable makeup travel bag (which, frankly, is great for when you need to unload some of your less-loved samples on friends).

With the new Glam Bag Plus, customers will have the option of paying a little more – $25 per month instead of $10 – for a selection of full-sized products, which would normally retail for $120.

The company says it will work with brands like Sunday Riley, Ciaté London, Purlisse, Morphe, Tarte Cosmetics, Buxom, and others.

As before, the exact mix of products shipped will be based on subscribers’ beauty profile. Today, Ipsy creates over 10,000 different makeup combinations in its monthly Glam Bag memberships, it says, because of this personalization.

The Plus service will also ship out a deluxe (read: larger) makeup bag on the first delivery, then every third delivery afterwards, as part of its subscription.

The new service will better cater to skin and hair care companies, and especially to newer brands that may not offer a wide range of samples at this time but still want to be able to reach Ipsy’s millennial subscriber base.

Initially, existing Glam Bag subscribers will be able to switch over to the Plus tier of service, which will ship its first bag in October.

However, the company is advising customers that it has limited quantities of Glam Bag Plus products, so those who downgrade back to the sample-sized Glam Bag may end up on a waitlist if they later want to rejoin Plus.

Ipsy also says it’s not set up right now to handle customers who want both memberships, so those who do should create a second account as a workaround.

Ipsy’s co-founder Michelle Phan left the startup last fall to run online makeup site EM Cosmetics, but Ipsy itself remains profitable – and has been for several years. The company’s real value is not the money to be made on the subscription business itself, but rather in helping beauty brands reach social media influencers and YouTube stars, whose makeup tutorials and recommendations help those brands gain exposure.

While Ipsy, like many in the subscription business, won’t talk about its critical business metrics like churn or margins, the company believes the Plus subscription will do well because it’s something members have been requesting for some time. It also surveyed the user community and ran focus groups ahead of this product’s launch, it told Glossy.

The subscription will become available to more customers in the future, says Ipsy.