NYU and Facebook team up to supercharge MRI scans with AI

Magnetic resonance imaging is an invaluable tool in the medical field, but it’s also a slow and cumbersome process. A scan may take anywhere from fifteen minutes to an hour, during which time the patient, perhaps a child or someone in serious pain, must sit perfectly still. NYU has been working on a way to accelerate this process, and is now collaborating with Facebook with the goal of cutting MRI durations by 90 percent by applying AI-based imaging tools.

It’s important at the outset to distinguish this effort from other common uses of AI in the medical imaging field. An X-ray, or indeed an MRI scan, once completed, could be inspected by an object recognition system watching for abnormalities, saving time for doctors and maybe even catching something they might have missed. This project isn’t about analyzing imagery that’s already been created, but rather expediting its creation in the first place.

The reason MRIs take so long is that the machine must create a series of 2D images, or slices, many of which must be stacked up to make a 3D image. Sometimes only a handful are needed, but for full fidelity and depth — for something like a scan for a brain tumor — lots of slices are required.

The FastMRI project, begun in 2015 by NYU researchers, investigates the possibility of creating imagery of a similar quality to a traditional scan, but by collecting only a fraction of the data normally needed.

Think of it like scanning an ordinary photo. You could scan the whole thing… but if you only scanned every other line (this is called “undersampling”) and then intelligently filled in the missing pixels, it would take half as long. And machine learning systems are getting quite good at tasks like that. Our own brains do it all the time: you have blind spots with stuff in them right now that you don’t notice because your vision system is filling in the gaps — intelligently.
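To make the photo analogy concrete, here is a minimal sketch of that idea: keep every other row of an image and fill the gaps by averaging the neighboring rows. (This is only the article's scanning analogy in code. Real MRI undersampling happens in k-space, the frequency domain, and FastMRI's reconstruction uses learned models rather than simple averaging.)

```python
import numpy as np

def undersample_rows(image: np.ndarray) -> np.ndarray:
    """Keep every other row and zero the rest, mimicking skipped scan lines."""
    sampled = np.zeros_like(image, dtype=float)
    sampled[::2] = image[::2]
    return sampled

def fill_gaps(sampled: np.ndarray) -> np.ndarray:
    """Naively fill each missing row by averaging the rows above and below."""
    filled = sampled.copy()
    for r in range(1, filled.shape[0] - 1, 2):
        filled[r] = (filled[r - 1] + filled[r + 1]) / 2
    return filled

image = np.arange(16, dtype=float).reshape(4, 4)
reconstructed = fill_gaps(undersample_rows(image))
```

Only half the rows were "scanned," yet a smooth image comes back whole; the bet behind FastMRI is that a trained model can do this gap-filling far more intelligently than an average.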

The data collected at left could be “undersampled” as at right, with the missing data filled in later

If an AI system could be trained to fill in the gaps from MRI scans where only the most critical data is collected, the actual time during which a patient would have to sit in the imaging tube could be reduced considerably. It’s easier on the patient, and one machine could handle far more people than it does doing a full scan every time, making scans cheaper and more easily obtainable.

The NYU School of Medicine researchers began work on this three years ago and published some early results showing that the approach was at least feasible. But like an MRI scan, this kind of work takes time.

“We and other institutions have taken some baby steps in using AI for this type of problem,” explained NYU’s Dan Sodickson, director of the Center for Advanced Imaging Innovation and Research there. “The sense is that already in the first attempts, with relatively simple methods, we can do better than other current acceleration techniques — get better image quality and maybe accelerate further by some percentage, but not by large multiples yet.”

So to give the project a boost, Sodickson and the radiologists at NYU are combining forces with the AI wonks at Facebook and its Artificial Intelligence Research group (FAIR).

NYU School of Medicine’s Department of Radiology chair Michael Recht, MD, Daniel Sodickson, MD, vice chair for research and director of the Center for Advanced Imaging Innovation and Yvonne Lui, MD, director of artificial intelligence, examine an MRI

“We have some great physicists here and even some hot-stuff mathematicians, but Facebook and FAIR have some of the leading AI scientists in the world. So it’s complementary expertise,” Sodickson said.

And while Facebook isn’t planning on starting a medical imaging arm, FAIR has a pretty broad mandate.

“We’re looking for impactful but also scientifically interesting problems,” said FAIR’s Larry Zitnick. AI-based creation or re-creation of realistic imagery (often called “hallucination”) is a major area of research, but this would be a unique application of it — not to mention one that could help some people.

With a patient’s MRI data, he explained, the generated imagery “doesn’t need to be just plausible, but it needs to retain the same flaws.” So the computer vision agent that fills in the gaps needs to be able to recognize more than just overall patterns and structure, and to be able to retain and even intelligently extend abnormalities within the image. To not do so would be a massive modification of the original data.

Fortunately it turns out that MRI machines are pretty flexible when it comes to how they produce images. If you would normally take scans from 200 different positions, for instance, it’s not hard to tell the machine to do half that, but with a higher density in one area or another. Other imagers like CT and PET scanners aren’t so docile.

Even after a couple years of work the research is still at an early stage. These things can’t be rushed, after all, and with medical data there are ethical considerations and a difficulty in procuring enough data. But the NYU researchers’ groundwork has paid off with initial results and a powerful data set.

Zitnick noted that because AI agents require lots of data to train up to effective levels, it’s a major change going from a set of, say, 500 MRI scans to a set of 10,000. With the former data set you might be able to do a proof of concept, but with the latter you can make something accurate enough to actually use.

The partnership announced today is between NYU and Facebook, but both hope that others will join up.

“We’re working on this out in the open. We’re going to be open-sourcing it all,” said Zitnick. One might expect no less of academic research, but of course a great deal of AI work in particular goes on behind closed doors these days.

So the first steps as a joint venture will be to define the problem, document the data set and release it, create baselines and metrics by which to measure their success, and so on. Meanwhile, the two organizations will be meeting and swapping data regularly and running results past actual clinicians.

“We don’t know how to solve this problem,” Zitnick said. “We don’t know if we’ll succeed or not. But that’s kind of the fun of it.”

Keeping artificial intelligence accountable to humans

As a teenager in Nigeria, I tried to build an artificial intelligence system. I was inspired by the same dream that motivated the pioneers in the field: that we could create an intelligence of pure logic and objectivity that would free humanity from human error and human foibles.

I was working with weak computer systems and intermittent electricity, and needless to say my AI project failed. Eighteen years later — as an engineer researching artificial intelligence, privacy and machine-learning algorithms — I’m seeing that so far, the premise that AI can free us from subjectivity or bias is also disappointing. We are creating intelligence in our own image. And that’s not a compliment.

Researchers have known for a while that purportedly neutral algorithms can mirror or even accentuate racial, gender and other biases lurking in the data they are fed. Internet searches on names that are more often identified as belonging to black people were found to prompt search engines to generate ads for bail bondsmen. Algorithms used for job-searching were more likely to suggest higher-paying jobs to male searchers than female. Algorithms used in criminal justice also displayed bias.

Five years later, expunging algorithmic bias is turning out to be a tough problem. It takes careful work to comb through millions of sub-decisions to figure out why the algorithm reached the conclusion it did. And even when that is possible, it is not always clear which sub-decisions are the culprits.

Yet applications of these powerful technologies are advancing faster than the flaws can be addressed.

Recent research underscores this machine bias, showing that commercial facial-recognition systems excel at identifying light-skinned males, with an error rate of less than 1 percent. But if you’re a dark-skinned female, the chance you’ll be misidentified rises to almost 35 percent.

AI systems are often only as intelligent — and as fair — as the data used to train them. They use the patterns in the data they have been fed and apply them consistently to make future decisions. Consider an AI tasked with sorting the best nurses for a hospital to hire. If the AI has been fed historical data — profiles of excellent nurses who have mostly been female — it will tend to judge female candidates to be better fits. Algorithms need to be carefully designed to account for historical biases.
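The mechanism is easy to see in miniature. This hypothetical scoring function (not any real hiring system) simply counts how often a candidate's profile appears among past hires, so a skewed history produces skewed scores:

```python
# Toy "hiring model": score a candidate by how often similar profiles
# appear among historical hires. The 90/10 split is illustrative only.
past_hires = ["female"] * 90 + ["male"] * 10

def score(candidate_profile: str) -> float:
    """Fraction of past hires that match the candidate's profile."""
    return past_hires.count(candidate_profile) / len(past_hires)

# The model has learned nothing about nursing skill; it has learned
# the historical skew and now applies it consistently to new candidates.
female_score = score("female")  # 0.9
male_score = score("male")      # 0.1
```

Nothing in the code is malicious; the bias lives entirely in the training data, which is why careful design and curation matter.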

Occasionally, AI systems get food poisoning. The most famous case was Watson, the AI that first defeated humans in 2011 on the television game show Jeopardy. Watson’s masters at IBM needed to teach it language, including American slang, so they fed it the contents of the online Urban Dictionary. But after ingesting that colorful linguistic meal, Watson developed a swearing habit. It began to punctuate its responses with four-letter words.

We have to be careful what we feed our algorithms. Belatedly, companies now understand that they can’t train facial-recognition technology by mainly using photos of white men. But better training data alone won’t solve the underlying problem of making algorithms achieve fairness.

Algorithms can already tell you what you might want to read, who you might want to date and where you might find work. When they are able to advise on who gets hired, who receives a loan or the length of a prison sentence, AI will have to be made more transparent — and more accountable and respectful of society’s values and norms.

Accountability begins with human oversight when AI is making sensitive decisions. In an unusual move, Microsoft president Brad Smith recently called for the U.S. government to consider requiring human oversight of facial-recognition technologies.

The next step is to disclose when humans are subject to decisions made by AI. Top-down government regulation may not be a feasible or desirable fix for algorithmic bias. But processes can be created that would allow people to appeal machine-made decisions — by appealing to humans. The EU’s new General Data Protection Regulation establishes the right for individuals to know and challenge automated decisions.

Today people who have been misidentified — whether in an airport or an employment database — have no recourse. They might have been knowingly photographed for a driver’s license, or covertly filmed by a surveillance camera (which has a higher error rate). They cannot know where their image is stored, whether it has been sold or who can access it. They have no way of knowing whether they have been harmed by erroneous data or unfair decisions.

Minorities are already disadvantaged by such immature technologies, and the burden they bear for the improved security of society at large is both inequitable and uncompensated. Engineers alone will not be able to address this. An AI system is like a very smart child just beginning to understand the complexities of discrimination.

To realize the dream I had as a teenager, of an AI that can free humans from bias instead of reinforcing bias, will require a range of experts and regulators to think more deeply not only about what AI can do, but what it should do — and then teach it how. 

Robotics-as-a-service is on the way and inVia Robotics is leading the charge

The team at inVia Robotics didn’t start out looking to build a business that would create a new kind of model for selling robotics to the masses, but that may be exactly what they’ve done.

After their graduation from the University of Southern California’s robotics program, Lior Alazary, Dan Parks and Randolph Voorhies were casting around for ideas that could get traction quickly.

“Our goal was to get something up and running that could make economic sense immediately,” Voorhies, the company’s chief technology officer, said in an interview.

The key was to learn from the lessons of what the team had seen as the missteps of past robotics manufacturers.

Despite the early success of iRobot, consumer-facing or collaborative robots that could operate alongside people had yet to gain traction in wider markets.

Willow Garage, the legendary company formed by some of the top names in the robotics industry, had shuttered just as Voorhies and his compatriots were graduating, and Boston Dynamics, another of the biggest names in robotics research, was bought by Google around the same time — capping a six-month buying spree that saw the search giant acquire eight robotics companies.

“In the midst of all this we were looking around and we said, ‘God, there were a lot of failed robotics companies!’ and we asked ourselves why that happened,” Voorhies recalled. “A lot of the hardware companies that we’d seen, their plan was: step one, build a really cool robot, and step three, an app ecosystem will evolve and people will write apps and the robot will sell like crazy. And nobody had figured out step two, which was commercializing the robot.”

So the three co-founders looked for ideas they could take to market quickly.

Their first thought was a robot that could help with mobility and with reaching for objects. “We built a six-degree-of-freedom arm with a mobile base,” Voorhies said.

However, the arm was tricky to build, components were expensive and the environment had too many variables that could derail the robot’s operations. Ultimately the team at inVia realized that the big successes in robotics were happening in controlled environments.

“We very quickly realized that the environment is too unpredictable and there were too many different kinds of things that we needed to do,” he said. 

Parks then put together a white paper analyzing the different controlled environments where collaborative robots could be most easily deployed. The warehouse was the obvious choice.

Back in March of 2012 Amazon had come to the same conclusion and acquired Kiva Systems in a $775 million deal that brought Kiva’s army of robots to Amazon warehouses and distribution centers around the world.

“Dan put a white paper together for Lior and me,” Voorhies said, “and the thing that really stuck out was eCommerce logistics. Floors tend to be concrete slabs; they’re very flat with very little grade, and in general people are picking things off a shelf and putting them somewhere else.”

With the idea in place, the team, which included technologists Voorhies and Parks, and Alazary, a serial entrepreneur who had already exited from two businesses, just needed to get a working prototype together.

Most warehouses and shipping facilities that weren’t Amazon were using automated storage and retrieval systems (ASRS), Voorhies said: big, automated systems that looked and worked like massive vending machines. But those systems, he said, involved a lot of sunk costs, and weren’t flexible or adaptable.

And those old systems weren’t built for the random access patterns and multi-use orders that comprise most of the shipping and packing done as eCommerce takes off.

With those sunk costs, though, warehouses are reluctant to change the model. The innovation that Voorhies and his team came up with was that the logistics providers wouldn’t have to.

“We didn’t like the upfront investment, not just to install one but just to start a company to build those things,” said Voorhies. “We wanted something we could bootstrap ourselves and grow very organically and just see wins very very quickly. So we looked at those ASRS systems and said why don’t we build mobile robots to do this.”

In the beginning, the team at inVia played with different ways to build the robot. First there was a robot that could carry several different objects, and another that would be responsible for picking.

The form factor the company eventually decided on was a movable, puck-shaped base with a scissor lift that moves a platform up and down. Attached to the back of the platform is a robotic arm that can extend forward and backward, with a suction pump at its end. The suction pump drags boxes onto the platform, and they are then taken to a pick-and-pack employee.

“We were originally going to grab individual products. Once we started talking to real warehouses more and more, we realized that everyone stores everything in these boxes anyway,” said Voorhies. “And we said, why don’t we make our lives way easier? Why don’t we just grab those totes?”

Since bootstrapping that initial robot, inVia has gone on to raise $29 million in financing to support its vision, most recently a $20 million round that closed in July.

“E-commerce industry growth is driving the need for more warehouse automation to fulfill demand, and AI-driven robots can deliver that automation with the flexibility to scale across varied workflows. Our investment in inVia Robotics reflects our conviction in AI as a key enabler for the supply chain industry,” said Daniel Gwak, Co-Head, AI Investments at Point72 Ventures, the early stage investment firm formed by the famed hedge fund manager, Steven Cohen.

Given the pressures on shipping and logistics companies, it’s no surprise that robotics and automation are becoming critically important strategic investments, or that venture capital is flooding into the market. In the past two months alone, robotics companies targeting warehouse and retail automation have raised nearly $70 million in new financing, including the $17.7 million recently raised by the French startup Exotec Solutions and Bossa Nova’s $29 million round for its grocery store robots.

Then there are warehouse-focused robotics companies like Fetch Robotics, which traces its lineage back to Willow Garage, and Locus Robotics, which is linked to the logistics services company Quiet Logistics.

“Funding in robotics has been incredible over the past several years, and for good reason,” said John Santagate, research director for commercial service robotics at the research and analysis firm IDC, in a statement. “The growth in funding is a function of a market that has become accepting of the technology, a technology area that has matured to meet market demands, and a vision of the future that must include flexible automation technology. Products must move faster and more efficiently through the warehouse today to keep up with consumer demand, and autonomous mobile robots offer a cost-effective way to deploy automation to enable speed, efficiency, and flexibility.”

The team at inVia realized it wasn’t enough to sell the robots. To give warehouses a full sense of the potential cost savings they could have with inVia’s robots, they’d need to take a page from the software playbook. Rather than selling the equipment, they’d sell the work the robots were doing as a service.

“Customers will ask us how much the robots cost and that’s sort of irrelevant,” says Voorhies. “We don’t want customers to think about those things at all.”

Contracts between inVia and logistics companies are based on the unit of work done, Voorhies said. “We charge on the order line,” says Voorhies. “An order line is a single [stock keeping unit] that somebody would order regardless of quantity… We’re essentially charging them every time a robot has to bring a tote and present it in front of a person. The faster we’re able to do that and the less robots we can use to present an item the better our margins are.”
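A back-of-the-envelope version of that pricing model, with a made-up per-line rate (inVia's actual rates aren't public), shows why quantity doesn't matter but distinct SKUs do:

```python
def order_lines(order: dict) -> int:
    """One order line per distinct SKU, regardless of the quantity ordered."""
    return len(order)

def invoice(orders: list, rate_per_line: float) -> float:
    """Total charge under per-order-line pricing."""
    return sum(order_lines(o) for o in orders) * rate_per_line

# Two orders: 3 SKUs + 1 SKU = 4 order lines, whatever the quantities.
orders = [{"SKU-A": 10, "SKU-B": 1, "SKU-C": 2}, {"SKU-A": 5}]
total = invoice(orders, rate_per_line=0.25)  # hypothetical rate
```

Under this model, revenue tracks the number of tote presentations, so inVia's margin improves with every trip it can avoid, exactly the incentive Voorhies describes.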

It may not sound like a huge change, but those kinds of efficiencies matter in warehouses, Voorhies said. “If you’re a person pushing a cart in a warehouse that cart can have 35 pallets on it. With us, that person is standing still, and they’re really not limited to a single cart. They are able to fill 70 orders at the same time rather than 55,” he said.

At Rakuten Super Logistics, the deployment of inVia’s robots is already yielding returns, according to Michael Manzione, the company’s chief executive officer.

“Really [robotics] being used in a fulfillment center is pretty new,” said Manzione in an interview. “We started looking at the product in late February and went live in late March.”

For Manzione, the big selling point was scaling the robots quickly, with no upfront cost. “The bottom line is going to be effective when we see planning around the holiday season,” said Manzione. “We’re not planning on bringing in additional people, versus last year when we doubled our labor.”

As Voorhies notes, training a team to work effectively in a warehouse environment isn’t easy.

“The big problem is that it’s really hard to hire extra people to do this. In a warehouse there’s a dedicated core team that really kicks ass, and they’re really happy with those pickers and they will be happy with what they get from whatever those people can sweat out in a shift,” Voorhies said. “Once you need to push your throughput beyond what your core team can do, it’s hard to find people who can do that job well.”

Google gives its AI the reins over its data center cooling systems

The inside of data centers is loud and hot — and keeping servers from overheating is a major factor in the cost of running them. It’s no surprise then that the big players in this space, including Facebook, Microsoft and Google, all look for different ways of saving cooling costs. Facebook uses cool outside air when possible, Microsoft is experimenting with underwater data centers and Google is being Google and looking to its AI models for some extra savings.

A few years ago, Google, through its DeepMind affiliate, started looking into how it could use machine learning to provide its operators some additional guidance on how to best cool its data centers. At the time, though, the system only made recommendations and the human operators decided whether to implement them. Those humans can now take longer naps during the afternoon, because the team has decided the models are now good enough to give the AI-powered system full control over the cooling system. Operators can still intervene, of course, but as long as the AI doesn’t decide to burn the place down, the system runs autonomously.

The new cooling system is now in place in a number of Google’s data centers. Every five minutes, the system polls thousands of sensors inside the data center and chooses the optimal actions based on this information. There are all kinds of checks and balances here, of course, so the chances of one of Google’s data centers going up in flames because of this are low.
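DeepMind hasn't published the controller's code, but the loop as described (poll sensors, pick an action, apply it only if it passes safety checks, otherwise fall back) can be sketched roughly like this; every name, number and bound here is hypothetical:

```python
import random  # stand-in for real sensor and model I/O

SAFE_RANGE = (15.0, 30.0)  # hypothetical verified-safe setpoint bounds (Celsius)

def read_sensors(n: int = 5) -> list:
    """Stand-in for polling the facility's thousands of temperature sensors."""
    return [random.uniform(18.0, 27.0) for _ in range(n)]

def choose_setpoint(readings: list) -> float:
    """Stand-in for the learned model's recommended cooling setpoint."""
    return sum(readings) / len(readings) - 2.0

def control_step():
    """One five-minute cycle: poll, decide, apply only if the guardrail passes."""
    setpoint = choose_setpoint(read_sensors())
    if SAFE_RANGE[0] <= setpoint <= SAFE_RANGE[1]:
        return setpoint  # hand the vetted action to the cooling actuators
    return None          # reject it and fall back to the rule-based controller
```

The important structural point is that the safety check sits between the model and the hardware, which is what lets operators hand over autonomy without handing over the ability to do damage.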

Like most machine learning models, this one also became better as it gathered more data. It’s now delivering energy savings of 30 percent on average, compared to the data centers’ historical energy usage.

One thing that’s worth noting here is that Google is obviously trying to save a few bucks, but in many ways, the company is also looking at this as a way of promoting its own machine learning services. What works in a data center, after all, should also work in a large office building. “In the long term, we think there’s potential to apply this technology in other industrial settings and help tackle climate change on an even grander scale,” DeepMind writes in today’s announcement.

Autonomous retail startup Inokyo’s first store feels like stealing

Inokyo wants to be the indie Amazon Go. It’s just launched its prototype cashierless autonomous retail store. Cameras track what you grab from shelves, and with a single QR scan of its app on your way in and out of the store, you’re charged for what you got.

Inokyo’s first store is now open on Mountain View’s Castro Street selling an array of bougie kombuchas, snacks, protein powders, and bath products. It’s sparse and a bit confusing, but offers a glimpse of what might be a commonplace shopping experience five years from now. You can get a glimpse yourself in our demo video below:

“Cashierless stores will have the same level of impact on retail as self-driving cars will have on transportation,” Inokyo co-founder Tony Francis tells me. “This is the future of retail. It’s inevitable that stores will become increasingly autonomous.”

Inokyo (rhymes with Tokyo) is now accepting signups for beta customers who want early access to its Mountain View store. The goal is to collect enough data to dictate the future product array and business model. Inokyo is deciding whether it wants to sell its technology as a service to other retail stores, run its own stores, or work with brands to improve their products’ positioning based on in-store sensor data on customer behavior.

“We knew that building this technology in a lab somewhere wouldn’t yield a successful product,” says Francis. “Our hypothesis here is that whoever ships first, learns in the real world, and iterates the fastest on this technology will be the ones to make these stores ubiquitous.” Inokyo might never grow into a retail giant ready to compete with Amazon and Whole Foods. But its tech could level the playing field, equipping smaller businesses with the tools to keep tech giants from having a monopoly on autonomous shopping experiences.

It’s About What Cashiers Do Instead

“Amazon isn’t as far ahead as we assumed,” Francis remarks. He and his co-founder Rameez Remsudeen took a trip to Seattle to see the Amazon Go store that first traded cashiers for cameras in the US. Still, they realized, “This experience can be magical.” The two had met at Carnegie Mellon through machine learning classes before they went on to apply that knowledge at Instagram and Uber. They decided that if they jumped into autonomous retail soon enough, they could still have a say in shaping its direction.

Next week, Inokyo will graduate from Y Combinator’s accelerator, which provided its initial seed funding. In six weeks during the program, the founders found a retail space on Mountain View’s main drag, studied customer behavior in traditional stores, built an initial product line, and developed the technology to track what users are taking off the shelves.

Here’s how the Inokyo store works. You download its app and connect a payment method, and you get a QR code that you wave in front of a little sensor as you stroll into the shop. Overhead cameras will scan your body shape and clothing without facial recognition in order to track you as you move around the store. Meanwhile, on-shelf cameras track when products are picked up or put back. Combined, knowing who’s where and what’s grabbed lets it assign the items to your cart. You scan again on your way out, and later you get a receipt detailing the charges.
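That flow maps naturally onto a per-session cart keyed by the QR token. This toy model (my sketch, not Inokyo's actual software) shows the bookkeeping: pick-ups increment, put-backs decrement, and scan-out closes the session with a receipt of what was kept:

```python
from collections import defaultdict

class Store:
    """Toy model of the scan-in / track / scan-out session flow."""

    def __init__(self):
        self.carts = {}  # maps QR token -> {sku: count}

    def scan_in(self, token: str) -> None:
        """Open a session when the shopper scans their QR code at the door."""
        self.carts[token] = defaultdict(int)

    def pick_up(self, token: str, sku: str) -> None:
        """Shelf cameras saw this shopper take an item."""
        self.carts[token][sku] += 1

    def put_back(self, token: str, sku: str) -> None:
        """Shelf cameras saw the item returned to the shelf."""
        if self.carts[token][sku] > 0:
            self.carts[token][sku] -= 1

    def scan_out(self, token: str) -> dict:
        """Close the session and return a receipt of items actually kept."""
        cart = self.carts.pop(token)
        return {sku: n for sku, n in cart.items() if n > 0}
```

The hard part, of course, is not this bookkeeping but the computer vision that reliably attributes each pick-up and put-back to the right shopper.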

Originally, Inokyo actually didn’t make you scan on the way out, but it got the feedback that customers were scared they were actually stealing. The scan-out is more about peace of mind than engineering necessity. There is a subversive pleasure to feeling like “well, if Inokyo didn’t catch all the stuff I chose, that’s not my problem.” And if you’re overcharged, there’s an in-app support button for getting a refund.

Inokyo co-founders (from left): Tony Francis and Rameez Remsudeen

Inokyo was accurate in what it charged me, despite a few switcheroos I pulled with the products I nabbed. But there were only about three people in the room with me at the time. The real test for these kinds of systems is when a rush of customers floods in and the cameras have to differentiate between multiple similar-looking people. Inokyo will likely need to be over 99 percent accurate to be more of a help than a headache. An autonomous store that constantly over- or undercharges would be more trouble than it’s worth, and patrons would just go to the nearest classic shop.

Just because autonomous retail stores will be cashier-less doesn’t mean they’ll have no staff. To maximize cost-cutting, stores could simply trust that people won’t loot them. However, Inokyo plans to have someone minding the shop to make sure people scan in the first place and to answer questions about the process. But there’s also an opportunity in reassigning labor from cashiers to concierges who can recommend the best products or find the right fit for the customer. These stores will be judged by the convenience of the holistic experience, not just the tech. At the very least, a single employee might be able to handle restocking, customer support, and store maintenance once freed from cashier duties.

The Amazon Go autonomous retail store in Seattle is equipped with tons of overhead cameras.

While Amazon Go uses cameras in a similar way to Inokyo, it also relies on weight sensors to track items. There are plenty of other companies chasing the cashierless dream. China’s BingoBox has nearly $100 million in funding and has over 300 stores, though they use less sophisticated RFID tags. Fellow Y Combinator startup Standard Cognition has raised $5 million to equip old school stores with autonomous camera-tech. AiFi does the same, but touts that its cameras can detect abnormal behavior that might signal someone is a shoplifter.

The store of the future seems like more and more of a sure thing. The race’s winner will be determined by who builds the most accurate tracking software, easy-to-install hardware, and pleasant overall shopping flow. If this modular technology can cut costs and lines without alienating customers, we could see our local brick-and-mortars adapt quickly. The bigger question than if or even when this future arrives is what it will mean for the millions of workers who make their living running the checkout lane.

Y Combinator is launching a startup program in China

U.S. accelerator Y Combinator is expanding to China after announcing the hiring of former Microsoft and Baidu executive Qi Lu, who will develop a standalone startup program that runs on Chinese soil.

Shanghai-born Lu spent 11 years with Yahoo and eight years with Microsoft before a short spell with Baidu, where he was COO and head of the firm’s AI research division. Now he becomes founding CEO of YC China while he’s also stepping into the role of Head of YC Research. YC will also expand its research team with an office in Seattle, where Lu has plenty of links.

There’s no immediate timeframe for when YC will launch its China program, which represents its first global expansion, but YC President Sam Altman told TechCrunch in an interview that the program will be based in Beijing once it is up and running. Altman said Lu will use his network and YC’s growing presence in China — it ran its first ‘Startup School’ event in Beijing earlier this year — to recruit prospects who will be put into the upcoming winter program in the U.S.

Following that, YC will work to launch the China-based program as soon as possible. It appears that the details are still being sketched out, although Altman did confirm it will run independently but may lean on local partners for help. The YC president said he envisages the U.S. and China batch programs overlapping to a point, with visitors, shared mentors and potentially other interaction between the two.

China’s startup scene has grown massively in recent years; numerous reports peg it close to that of the U.S. So it makes sense that YC, as an ‘ecosystem builder,’ wants in. But Altman believes the benefits extend beyond YC: the move will strengthen its network of founders, which spans more than 1,700 startups.

“The number one asset YC has is a very special founder community,” he told TechCrunch. “The opportunity to include a lot more Chinese founders seems super valuable to everyone. Over the next decade, a significant portion of the tech companies started will be from the U.S. or China [so operating a] network across both is a huge deal.”

Altman said he’s also banking on Lu being the man to make YC China happen. He revealed that he’s spent a decade trying to hire Lu, who he described as “one of the most impressive technologists I know.”

Y Combinator President Sam Altman has often spoken of his desire to get into the Chinese market

Entering China as a foreign entity is never easy, and in the venture world it is particularly tricky because China already has an advanced ecosystem of firms with their own networks for founders, particularly in the early-stage space. But Altman is confident that YC’s global reach and roster of founders and mentors will appeal to startups in China.

YC has been working to add Chinese startups to its U.S.-based programs for some time. Altman has long been keen on an expansion to China, as he discussed at our Disrupt event last year, and partner Eric Migicovsky — who co-founded Pebble — has been busy developing networks and arranging events like the Beijing one to raise its profile.

That’s seen some progress with more teams from China — and other parts of the world — taking part in YC batches, which have never been more diverse. But YC is still missing out on global talent.

According to its own data, fewer than 10 Chinese companies have passed through its corridors, but that list appears to be missing some names, so the number may be higher. Clearly, though, admissions are skewed towards the U.S. — the question is whether Qi Lu and the creation of YC China can significantly alter that.

Openbook is the latest dream of a digital life beyond Facebook

As tech’s social giants wrestle with antisocial demons that appear to be both an emergent property of their platform power, and a consequence of specific leadership and values failures (evident as they publicly fail to enforce even the standards they claim to have), there are still people dreaming of a better way. Of social networking beyond outrage-fuelled adtech giants like Facebook and Twitter.

There have been many such attempts to build a ‘better’ social network of course. Most have ended in the deadpool. A few are still around with varying degrees of success/usage (Snapchat, Ello and Mastodon are three that spring to mind). None has usurped Zuckerberg’s throne of course.

This is principally because Facebook acquired Instagram and WhatsApp. It has also bought and closed down smaller potential future rivals (tbh). So by hogging network power, and the resources that flow from that, Facebook the company continues to dominate the social space. But that doesn’t stop people imagining something better — a platform that could win friends and influence the mainstream by being better ethically and in terms of functionality.

And so meet the latest dreamer with a double-sided social mission: Openbook.

The idea (currently it’s just that; a small self-funded team; a manifesto; a prototype; a nearly spent Kickstarter campaign; and, well, a lot of hopeful ambition) is to build an open source platform that rethinks social networking to make it friendly and customizable, rather than sticky and creepy.

Their vision for protecting privacy as a for-profit platform involves a business model based on honest fees — and an on-platform digital currency — rather than ever-watchful ads and trackers.

There’s nothing exactly new in any of their core ideas. But in the face of massive and flagrant data misuse by platform giants, these ideas increasingly sound like sense. So the element of timing is perhaps the most notable thing here — with Facebook facing greater scrutiny than ever before, and even taking some hits to user growth and to its perceived valuation as a result of ongoing failures of leadership and a management philosophy that’s been attacked by at least one of its outgoing senior execs as manipulative and ethically out of touch.

The Openbook vision of a better way belongs to Joel Hernández, who has been dreaming it up for a couple of years, brainstorming ideas on the side of other projects, and gathering similarly minded people around him to collectively come up with an alternative social network manifesto — whose primary pledge is a commitment to be honest.

“And then the data scandals started happening and every time they would, they would give me hope. Hope that existing social networks were not a given and immutable thing, that they could be changed, improved, replaced,” he tells TechCrunch.

Rather ironically Hernández says it was overhearing the lunchtime conversation of a group of people sitting near him — complaining about a laundry list of social networking ills; “creepy ads, being spammed with messages and notifications all the time, constantly seeing the same kind of content in their newsfeed” — that gave him the final push to pick up the paper manifesto and have a go at actually building (or, well, trying to fund building… ) an alternative platform. 

At the time of writing Openbook’s Kickstarter crowdfunding campaign has a handful of days to go and is only around a third of the way to reaching its (modest) target of $115k, with just over 1,000 backers chipping in. So the funding challenge is looking tough.

The team behind Openbook includes crypto(graphy) royalty: Phil Zimmermann — aka the father of PGP — is on board as an advisor initially, but is billed as the project’s “chief cryptographer”, since cryptography is what he’d be building for the platform if/when the time came.

Hernández worked with Zimmermann at the Dutch telecom KPN, building security and privacy tools for internal use — so he called him up and invited him for a coffee to get his thoughts on the idea.

“As soon as I opened the website with the name Openbook, his face lit up like I had never seen before,” says Hernández. “You see, he wanted to use Facebook. He lives far away from his family and Facebook was the way to stay in the loop with his family. But using it would also mean giving away his privacy and therefore accepting defeat on his life-long fight for it, so he never did. He was thrilled at the possibility of an actual alternative.”

On the Kickstarter page there’s a video of Zimmermann explaining the ills of the current landscape of for-profit social platforms, as he views it. “If you go back a century, Coca Cola had cocaine in it and we were giving it to children,” he says here. “It’s crazy what we were doing a century ago. I think there will come a time, some years in the future, when we’re going to look back on social networks today, and what we were doing to ourselves, the harm we were doing to ourselves with social networks.”

“We need an alternative to the social network revenue model that we have today,” he adds. “The problem with having these deep machine learning neural nets that are monitoring our behaviour and pulling us into deeper and deeper engagement is they already seem to know that nothing drives engagement as much as outrage.

“And this outrage deepens the political divides in our culture, it creates attack vectors against democratic institutions, it undermines our elections, it makes people angry at each other and provides opportunities to divide us. And that’s in addition to the destruction of our privacy by revenue models that are all about exploiting our personal information. So we need some alternative to this.”

Hernández actually pinged TechCrunch’s tips line back in April — soon after the Cambridge Analytica Facebook scandal went global — saying “we’re building the first ever privacy and security first, open-source, social network”.

We’ve heard plenty of similar pitches before, of course. Yet Facebook has continued to harvest global eyeballs by the billions. And even now, after a string of massive data and ethics scandals, it’s all but impossible to imagine users leaving the site en masse. Such is the powerful lock-in of The Social Network effect.

Regulation could present a greater threat to Facebook, though others argue more rules will simply cement its current dominance.

Openbook’s challenger idea is to apply product innovation to try to unstick Zuckerberg. Aka “building functionality that could stand for itself”, as Hernández puts it.

“We openly recognise that privacy will never be enough to get any significant user share from existing social networks,” he says. “That’s why we want to create a more customisable, fun and overall social experience. We won’t follow the footsteps of existing social networks.”

Data portability is an important ingredient to even being able to dream this dream — getting people to switch from a dominant network is hard enough without having to ask them to leave all their stuff behind as well as their friends. Which means that “making the transition process as smooth as possible” is another project focus.

Hernández says they’re building data importers that can parse the archive users are able to request from their existing social networks — to “tell you what’s in there and allow you to select what you want to import into Openbook”.
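As a rough illustration of what such an importer involves (a hypothetical sketch, not Openbook’s actual code; the zip-of-JSON archive layout and every name below are assumptions), a first pass might simply inventory the downloaded export so the user can choose which categories to bring over:

```python
import json
import zipfile


def list_archive_contents(archive_path):
    """Walk a downloaded social-network export (assumed here to be a zip
    of JSON files) and summarize what it contains, so the user can decide
    what to import."""
    summary = {}
    with zipfile.ZipFile(archive_path) as archive:
        for name in archive.namelist():
            if not name.endswith(".json"):
                continue  # skip media and other non-JSON entries
            with archive.open(name) as f:
                data = json.load(f)
            # Count top-level records per file, e.g. "posts.json" -> 120
            summary[name] = len(data) if isinstance(data, list) else 1
    return summary


def select_for_import(summary, wanted):
    """Keep only the categories the user explicitly opted into."""
    return {name: n for name, n in summary.items() if name in wanted}
```

The selective step matters as much as the parsing: the pitch is that users see what is in the archive and opt in per category, rather than having everything hoovered across wholesale.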

These sorts of efforts are aided by updated regulations in Europe — which bolster portability requirements on controllers of personal data. “I wouldn’t say it made the project possible but… it provided us with a unique opportunity no other initiative had before,” says Hernández of the EU’s GDPR.

“Whether it will play a significant role in the mass adoption of the network, we can’t tell for sure but it’s simply an opportunity too good to ignore.”

On the product front, he says they have lots of ideas — reeling off a list that includes the likes of “a topic-roulette for chats, embracing Internet challenges as another kind of content, widgets, profile avatars, AR chatrooms…” for starters.

“Some of these might sound silly but the idea is to break the status quo when it comes to the definition of what a social network can do,” he adds.

Asked why he believes other efforts to build ‘ethical’ alternatives to Facebook have failed he argues it’s usually because they’ve focused on technology rather than product.

“This is still the most predominant [reason for failure],” he suggests. “A project comes up offering a radical new way to do social networking behind the scenes. They focus all their efforts in building the brand new tech needed to do the very basic things a social network can already do. Next thing you know, years have passed. They’re still thousands of miles away from anything similar to the functionality of existing social networks and their core supporters have moved into yet another initiative making the same promises. And the cycle goes on.”

He also reckons disruptive efforts have fizzled out because they were too tightly focused on being just a solution to an existing platform problem and nothing more.

So, in other words, people were trying to build an ‘anti-Facebook’, rather than a distinctly interesting service in its own right. (The latter innovation, you could argue, is how Snap managed to carve out a space for itself in spite of Facebook sitting alongside it — even as Facebook has since sought to crush Snap’s creative market opportunity by cloning its products.)

“This one applies not only to social network initiatives but privacy-friendly products too,” argues Hernández. “The problem with that approach is that the problems they solve or claim to solve are most of the time not mainstream. Such as the lack of privacy.

“While these products might do okay with the people that understand the problems, at the end of the day that’s a very tiny percentage of the market. The solution these products often present to this issue is educating the population about the problems. This process takes too long. And in topics like privacy and security, it’s not easy to educate people. They are topics that require a knowledge level beyond the one required to use the technology and are hard to explain with examples without entering into the conspiracy theorist spectrum.”

So the Openbook team’s philosophy is to shake things up by getting people excited for alternative social networking features and opportunities, with merely the added benefit of not being hostile to privacy nor algorithmically chain-linked to stoking fires of human outrage.

The reliance on digital currency for the business model does present another challenge, though, as getting people to buy into this could be tricky. After all payments equal friction.

To begin with, Hernández says the digital currency component of the platform would be used to let users list secondhand items for sale. Down the line, the vision extends to being able to support a community of creators getting a sustainable income — thanks to the same baked in coin mechanism enabling other users to pay to access content or just appreciate it (via a tip).

So, the idea is, that creators on Openbook would be able to benefit from the social network effect via direct financial payments derived from the platform (instead of merely ad-based payments, such as are available to YouTube creators) — albeit, that’s assuming reaching the necessary critical usage mass. Which of course is the really, really tough bit.

“Lower cuts than any existing solution, great content creation tools, great administration and overview panels, fine-grained control over the view-ability of their content and more possibilities for making a stable and predictable income such as creating extra rewards for people that accept to donate for a fixed period of time such as five months instead of a month to month basis,” says Hernández, listing some of the ideas they have to stand out from existing creator platforms.

“Once we have such a platform and people start using tips for this purpose (which is not such a strange use of a digital token), we will start expanding on its capabilities,” he adds. (He’s also written the requisite Medium article discussing some other potential use cases for the digital currency portion of the plan.)

At this nascent, still-not-actually-funded prototype stage they haven’t made any firm technical decisions on this front either. Nor do they want to end up accidentally getting into bed with an unethical technology.

“Digital currency wise, we’re really concerned about the environmental impact and scalability of the blockchain,” he says — which could risk Openbook contradicting stated green aims in its manifesto and looking hypocritical, given its plan is to plough 30% of its revenues into ‘give-back’ projects, such as environmental and sustainability efforts and also education.

“We want a decentralised currency but we don’t want to rush into decisions without some in-depth research. Currently, we’re going through IOTA’s whitepapers,” he adds.

They do also believe in decentralizing the platform — or at least parts of it — though that would not be their first focus on account of the strategic decision to prioritize product. So they’re not going to win fans from the (other) crypto community. Though that’s hardly a big deal given their target user-base is far more mainstream.

“Initially it will be built in a centralised manner. This will allow us to focus on innovating in the user experience and product functionality rather than coming up with a brand new behind-the-scenes technology,” he says. “In the future, we’re looking into decentralisation from very specific angles and for different things. Application-wise, resiliency and data ownership.”

“A project we’re keeping an eye on and that shares some of our vision on this is Tim Berners-Lee’s MIT Solid project. It’s all about decoupling applications from the data they use,” he adds.

So that’s the dream. And the dream sounds good and right. The problem is finding enough funding and wider support — call it ‘belief equity’ — in a market so denuded of competitive possibility as a result of monopolistic platform power that few can even dream an alternative digital reality is possible.

In early April, Hernández posted a link to a basic website with details of Openbook to a few online privacy and tech communities asking for feedback. The response was predictably discouraging. “Some 90% of the replies were a mix between critiques and plain discouraging responses such as ‘keep dreaming’, ‘it will never happen’, ‘don’t you have anything better to do’,” he says.

(Asked this April by US lawmakers whether he thinks he has a monopoly, Zuckerberg paused and then quipped: “It certainly doesn’t feel like that to me!”)

Still, Hernández stuck with it, working on a prototype and launching the Kickstarter. He’s got that far — and wants to build so much more — but getting enough people to believe that a better, fairer social network is even possible might be the biggest challenge of all. 

For now, though, Hernández doesn’t want to stop dreaming.

“We are committed to make Openbook happen,” he says. “Our back-up plan involves grants and impact investment capital. Nothing will be as good as getting our first version through Kickstarter though. Kickstarter funding translates to absolute freedom for innovation, no strings attached.”

You can check out the Openbook crowdfunding pitch here.

NASA’s Parker Solar Probe launches tonight to ‘touch the sun’

NASA’s ambitious mission to go closer to the Sun than ever before is set to launch in the small hours between Friday and Saturday — at 3:33 AM Eastern from Kennedy Space Center in Florida, to be precise. The Parker Solar Probe, after a handful of gravity assists and preliminary orbits, will enter a stable orbit around the enormous nuclear fireball that gives us all life and sample its radiation from less than 4 million miles away. Believe me, you don’t want to get much closer than that.

If you’re up late tonight (technically tomorrow morning), you can watch the launch live on NASA’s stream.

This is the first mission named after a living researcher, in this case Eugene Parker, who in the ’50s made a number of proposals and theories about the way that stars give off energy. He’s the guy who gave us solar wind, and his research was hugely influential in the study of the sun and other stars — but it’s only now that some of his hypotheses can be tested directly. (Parker himself visited the craft during its construction, and will be at the launch. No doubt he is immensely proud and excited about this whole situation.)

“Directly” means going as close to the sun as technology allows — which leads us to the PSP’s first major innovation: its heat shield, or thermal protection system.

There’s one good thing to be said for the heat near the sun: it’s a dry heat. Because there’s no water vapor or gases in space to heat up, find some shade and you’ll be quite comfortable. So the probe is essentially carrying the most heavy-duty parasol ever created.

It’s a sort of carbon sandwich, with superheated carbon composite on the outside and a carbon foam core. All together it’s less than a foot thick, but it reduces the temperature the probe’s instruments are subjected to from 2,500 degrees Fahrenheit to 85 — actually cooler than it is in much of the U.S. right now.

Go on – it’s quite cool.

The car-sized Parker will orbit the sun and constantly rotate itself so the heat shield is facing inward and blocking the brunt of the solar radiation. The instruments mostly sit behind it in a big insulated bundle.

And such instruments! There are three major experiments or instrument sets on the probe.

WISPR (Wide-Field Imager for Parker Solar Probe) is a pair of wide-field telescopes that will watch and image the structure of the corona and solar wind. This is the kind of observation we’ve made before — but never from up close. We generally are seeing these phenomena from the neighborhood of the Earth, nearly 100 million miles away. You can imagine that cutting out 90 million miles of cosmic dust, interfering radiation and other nuisances will produce an amazingly clear picture.

SWEAP (Solar Wind Electrons Alphas and Protons investigation) looks out to the side of the craft to watch the flows of electrons as they are affected by solar wind and other factors. And on the front is the Solar Probe Cup (I suspect this is a reference to the Ray Bradbury story, “Golden Apples of the Sun”), which is exposed to the full strength of the sun’s radiation; a tiny opening allows charged particles in, and by tracking how they pass through a series of charged windows, they can sort them by type and energy.

FIELDS is another that gets the full heat of the sun. Its antennas are the ones sticking out from the sides — they need to in order to directly sample the electric field surrounding the craft. A set of “fluxgate magnetometers,” clearly a made-up name, measures the magnetic field at an incredibly high rate: two million samples per second.

They’re all powered by solar panels, which seems obvious, but actually it’s a difficult proposition to keep the panels from overloading that close to the sun. They hide behind the shield and just peek out at an oblique angle, so only a fraction of the radiation hits them.

Even then, they’ll get so hot that the team needed to implement the first-ever active water cooling system on a spacecraft. Water is pumped through the cells and back behind the shield, where it is cooled by, well, space.

The probe’s mission profile is a complicated one. After escaping the clutches of the Earth, it will swing by Venus, not to get a gravity boost, but “almost like doing a little handbrake turn,” as one official described it. It slows it down and sends it closer to the sun — and it’ll do that seven more times, each time bringing it closer and closer to the sun’s surface, ultimately arriving in a stable orbit 3.83 million miles above the surface — that’s 95 percent of the way from the Earth to the sun.

On the way it will hit a top speed of 430,000 miles per hour, which will make it the fastest spacecraft ever launched.

Parker will make 24 total passes through the corona, and during these times communication with Earth may be interrupted or impractical. If a solar cell is overheating, do you want to wait 20 minutes for a decision from NASA on whether to pull it back? No. This close to the sun even a slight miscalculation results in the reduction of the probe to a cinder, so the team has imbued it with more than the usual autonomy.
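That 20-minute figure holds up on the back of an envelope: with the probe near the sun and Earth roughly 93 million miles out, a radio signal needs about eight minutes each way, so a question-and-answer round trip consumes roughly 16 minutes before anyone on the ground has even made a decision. A quick sketch of the arithmetic (a rough estimate, not a NASA figure):

```python
SPEED_OF_LIGHT_MPS = 186_282  # miles per second, in vacuum


def one_way_light_delay_minutes(distance_miles):
    """Minutes for a radio signal to cover the given distance."""
    return distance_miles / SPEED_OF_LIGHT_MPS / 60


# Probe at ~3.83M miles from the sun, Earth at ~93M miles:
# the signal path is roughly the difference.
one_way = one_way_light_delay_minutes(93_000_000 - 3_830_000)
round_trip = 2 * one_way  # ask Earth, then hear the answer
```

With `one_way` landing near eight minutes and the round trip near sixteen, any overheating solar cell would long since be a problem (or a cinder) by the time ground control weighed in — hence the onboard autonomy.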

It’s covered in sensors in addition to its instruments, and an onboard AI will be empowered to make decisions to rectify anomalies. That sounds worryingly like a HAL 9000 situation, but there are no humans on board to kill, so it’s probably okay.

The mission is scheduled to last seven years, after which time the fuel used to correct the craft’s orbit and orientation is expected to run out. At that point it will continue as long as it can before drift causes it to break apart and, one rather hopes, become part of the sun’s corona itself.

The Parker Solar Probe is scheduled for launch early Saturday morning, and we’ll update this post when it takes off successfully or, as is possible, is delayed until a later date in the launch window.

AI training and social network content moderation services bring TaskUs a $250 million windfall

TaskUs, the business process outsourcing service that moderates content, annotates information and handles back office customer support for some of the world’s largest tech companies, has raised $250 million in an investment from funds managed by the New York-based private equity giant, Blackstone Group.

It’s been ten years since TaskUs was founded with a $20,000 investment from its two co-founders, and the new deal, which values the decade-old company at $500 million before the money even comes in, is proof of how much has changed for the service in the intervening years.

The Santa Monica-based company, which began as a browser-based virtual assistant company — “You send us a task and we get the task done,” recalled TaskUs chief executive Bryce Maddock — is now one of the main providers in the growing field of content moderation for social networks and content annotation for training the algorithms that power artificial intelligence services around the world.

“What I can tell you is we do content moderation for almost every major social network and it’s the fastest growing part of our business today,” Maddock said.

From a network of offices spanning the globe, from Mexico to Taiwan and the Philippines to the U.S., the thirty-two-year-old co-founders Maddock and Jaspar Weir have created a business whose largest growth stems from snuffing out the distribution of snuff films, child pornography, inappropriate political content and the trails of human trafficking from the user- and advertiser-generated content on some of the world’s largest social networks.

(For a glimpse into how horrific that process can be, take a look at this article from Wired, which looked at content moderation for the anonymous messaging service Whisper.)

Maddock estimates that while the vast majority of the business in the company’s early days was outsourced business process services (whether that was transcribing voicemails to text for the messaging service PhoneTag, or providing customer service and support for companies like HotelTonight), now about 40% of the business comes from content moderation.

Image courtesy of Getty Images

Indeed, it was the growth in new technology services that attracted Blackstone to the business, according to Amit Dixit, Senior Managing Director at Blackstone.

“The growth in ride sharing, social media, online food delivery, e-commerce and autonomous driving is creating an enormous need for enabling business services,” said Dixit in a statement. “TaskUs has established a leadership position in this domain with its base of marquee customers, unique culture, and relentless focus on customer delivery.”

While the back office business processing services remain the majority of the company’s revenue, Maddock knows that the future belongs to an increasing automation of the company’s core services. That’s why part of the money is going to be invested in a new technology integration and consulting business that advises tech companies on which new automation tools to deploy, along with shoring up the company’s position as perhaps the best employer to work for in the world of content moderation and algorithm training services.

It’s been a long five-year journey to get to the place it’s in now, with glowing reviews from employees on Glassdoor and social networks like Facebook, Maddock said. The company pays well above minimum wage in the markets it operates in (Maddock estimates at least a 50% premium) and provides a generous package of benefits for what Maddock calls the “frontline” teammates. That includes perks like educational scholarships for one child of employees who have been with the company longer than one year; healthcare plans for the employee and three beneficiaries in the Philippines; and 120 days of maternity leave.

And, as content moderation becomes more automated, TaskUs employees are spending less time in the cesspool of content that floods social networks every day.

“Increasingly the work that we’re doing is more nuanced. Does this advertisement have political intent? That type of work is far more engaging and could be seen to be a little bit less taxing,” Maddock said.

But he doesn’t deny that the bulk of the hard work his employees are tasked with is identifying and filtering the excremental trash that people would post online.

“I do think that the work is absolutely necessary. The alternative is that everybody has to look at this stuff. It has to be done in a way that’s thoughtful and puts the interests of the people who are on the frontlines at the forefront of that effort,” says Maddock. “There have been multiple people who have been involved in sex trafficking, human trafficking and pedophilia that have been arrested directly because of the work that TaskUs is doing. And the consequence of someone not doing that is a far, far worse world.”

Maddock also said that TaskUs now shields its employees from having to perform content moderation for an entire shift. “What we have tried to do universally is that there is a subject matter rotation so that you are not just sitting and doing that work all day.”

And the company’s executive knows how taxing the work can be because he said he does it himself. “I try to spend a day a quarter doing the work of our frontline teammates. I spend half my time in our offices,” Maddock said.

Now, with the new investment, TaskUs is looking to expand into additional markets in the UK, Europe, India, and Latin America, Maddock said.

“So far all we’ve been doing is hiring as fast as we possibly can,” said Maddock. “At some point in the future, there’s going to be a point when companies like ours will see the effects of automation,” he added, but that’s why the company is investing in the consulting business… so it can stay ahead of the trends in automation.

Even with the threat that automation could pose to the company’s business, TaskUs had no shortage of other suitors for the massive growth equity round, according to one person familiar with the company. Indeed, Goldman Sachs and Softbank were among the other bidders for a piece of TaskUs, the source said.

Currently, the company has over 11,000 employees (including 2,000 in the U.S.) and is looking to expand.

“We chose to partner with Blackstone because they have a track record of building category defining businesses. Our goal is to build TaskUs into the world’s number one provider of tech enabled business services.  This partnership will help us dramatically increase our investment in consulting, technology and innovation to support our customer’s efforts to streamline and refine their customer experience,” said Maddock in a statement.

The transaction is expected to close in the fourth quarter of 2018, subject to regulatory approvals and customary closing conditions.

AI giant SenseTime leads $199M investment in Chinese video tech startup

SenseTime may be best known as the world’s highest-valued AI company — having raised $620 million at a valuation of over $4.5 billion — but it is also an investor, too. The Chinese firm this week led a 1.36 billion RMB ($199 million) Series D funding round for Moviebook, a Beijing-based startup that develops technology to support online video services.

Moviebook previously raised a 500 million RMB Series C in 2017, worth around $75 million. SB China Venture Capital (SBCVC) also took part in this new round alongside Qianhai Wutong, PAC Partners, Oriental Pearl, and Lang Sheng Investment.

With the investment, SenseTime said it also inked a partnership with Moviebook which will see the two companies collaborate on a range of AI technologies, including augmented reality, with a view to increasing the use of AI in the entertainment industry.

The object detection and tracking technology developed by SenseTime Group Ltd. on display at the Artificial Intelligence Exhibition & Conference in Tokyo, April 2018. (Photo: Kiyoshi Ota/Bloomberg)

In a statement in Chinese, SenseTime co-founder Xu Bing said the companies plan to use the vast amounts of video data from broadcasting, TV and internet streams to help unlock commercial opportunities in the future. He also stressed the potential to bring AI and new technologies to the entertainment industry.

This isn’t SenseTime’s first strategic investment, but it is likely to be its most significant to date. The company has previously backed startups that include 51VR, Helian Health and Suning Sports, the spinout from retail giant Suning.

SenseTime itself has raised over $1.6 billion from investors, which include Alibaba, Tiger Global, Qualcomm, IDG Capital, Temasek and Silver Lake Partners.