Amazon starts shipping its $249 DeepLens AI camera for developers

Back at its re:Invent conference in November, AWS announced its $249 DeepLens, a camera that’s specifically geared toward developers who want to build and prototype vision-centric machine learning models. The company started taking pre-orders for DeepLens a few months ago, but now the camera is actually shipping to developers.

Ahead of today’s launch, I had a chance to attend a workshop in Seattle with DeepLens senior product manager Jyothi Nookula and Amazon’s VP for AI Swami Sivasubramanian to get some hands-on time with the hardware and the software services that make it tick.

DeepLens is essentially a small Ubuntu- and Intel Atom-based computer with a built-in camera that’s powerful enough to easily run and evaluate visual machine learning models. In total, DeepLens offers about 106 GFLOPS of performance.

The hardware has all of the usual I/O ports (think Micro HDMI, USB 2.0, audio out, etc.) to let you create prototype applications, whether that’s a simple toy app that sends you an alert when the camera detects a bear in your backyard or an industrial application that keeps an eye on a conveyor belt in your factory. The 4-megapixel camera isn’t going to win any prizes, but it’s perfectly adequate for most use cases. Unsurprisingly, DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, and SageMaker, Amazon’s newest tool for building machine learning models.

These integrations also make getting started with the camera pretty easy. Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up your DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs), a style transfer example that renders the camera image in the style of van Gogh, a face detection model, a model that can distinguish between cats and dogs, and one that can recognize about 30 different actions (playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.

But that’s obviously just the beginning. As the DeepLens team stressed during our workshop, even developers who have never worked with machine learning can take the existing templates and easily extend them. That’s largely because a DeepLens project consists of two parts: the model and a Lambda function that runs instances of the model and lets you perform actions based on its output. And with SageMaker, AWS now offers a tool that makes it easy to build models without having to manage the underlying infrastructure.
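To make that two-part structure concrete, here’s a rough sketch of what the Lambda half of a project can look like on the device. It leans on the awscam module used by the DeepLens samples, but the model path, labels, threshold, input size and exact call signatures below are assumptions for illustration, not AWS’s reference code.

import awscam   # on-device inference helpers pre-installed on DeepLens
import cv2      # OpenCV, used here only to resize frames to the model's input size

# Hypothetical values -- the real ones depend on the model your project deploys.
MODEL_PATH = "/opt/awscam/artifacts/my-object-detection.xml"
MODEL_TYPE = "ssd"
INPUT_SIZE = (300, 300)
THRESHOLD = 0.5
LABELS = {3: "bear", 8: "dog", 15: "person"}

def infinite_infer_run():
    # Load the device-optimized model onto the DeepLens accelerator.
    model = awscam.Model(MODEL_PATH, {"GPU": 1})
    while True:
        ret, frame = awscam.getLastFrame()       # grab the most recent camera frame
        if not ret:
            continue
        resized = cv2.resize(frame, INPUT_SIZE)  # match the model's expected input shape
        raw = model.doInference(resized)
        detections = model.parseResult(MODEL_TYPE, raw)[MODEL_TYPE]
        for det in detections:
            if det["prob"] >= THRESHOLD and det["label"] in LABELS:
                # This is the "perform an action" half: publish to an AWS IoT topic
                # via Greengrass, invoke another Lambda, send an alert, and so on.
                print(LABELS[det["label"]], det["prob"])

infinite_infer_run()

In a real project the loop would typically publish its results to an AWS IoT topic through Greengrass rather than just printing them; that’s where the bear alert or the conveyor-belt monitor would hook in.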

You could do a lot of the development on the DeepLens hardware itself, given that it is essentially a small computer, though you’re probably better off using a more powerful machine and then deploying to DeepLens using the AWS Console. If you really wanted to, you could use DeepLens as a low-powered desktop machine as it comes with Ubuntu 16.04 pre-installed.
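If SageMaker is where that heavier lifting happens, kicking off a training job from the SageMaker Python SDK can be as short as the sketch below. The entry-point script, S3 paths, instance type and hyperparameters are placeholders, and parameter names have shifted between SDK versions, so treat it as an outline of the workflow rather than copy-paste code.

import sagemaker
from sagemaker.mxnet import MXNet

# Hypothetical training script and S3 locations -- substitute your own.
estimator = MXNet(
    entry_point="train.py",                   # your MXNet training script
    role=sagemaker.get_execution_role(),      # IAM role SageMaker assumes
    train_instance_count=1,
    train_instance_type="ml.p2.xlarge",       # a GPU instance for training
    framework_version="1.1.0",
    hyperparameters={"epochs": 10, "learning_rate": 0.01},
)

# Launches a managed training job; the resulting model artifact lands in S3.
estimator.fit("s3://my-bucket/deeplens-training-data")

From there, the AWS Console can pick up the trained artifact and push it to the camera as part of a DeepLens project.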

For developers who know their way around machine learning frameworks, DeepLens makes it easy to import models from virtually all the popular tools, including Caffe, TensorFlow, MXNet and others. It’s worth noting that the AWS team also built a model optimizer for MXNet models that allows them to run more efficiently on the DeepLens device.
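For the MXNet route specifically, importing a model largely comes down to handing DeepLens the network’s exported symbol and parameter files, which the optimizer then tunes for the device. A minimal Gluon export sketch, using a model-zoo network as a stand-in for your own and a hypothetical file prefix, might look like this:

import mxnet as mx
from mxnet.gluon.model_zoo import vision

# Stand-in network: a pretrained SqueezeNet downloaded from the Gluon model zoo.
net = vision.squeezenet1_1(pretrained=True)
net.hybridize()                              # switch to a serializable symbolic graph

# One forward pass traces the graph so it can be exported.
net(mx.nd.zeros((1, 3, 224, 224)))

# Writes my-model-symbol.json and my-model-0000.params -- the two artifacts
# you would upload (e.g. to S3) and import as a DeepLens model.
net.export("my-model", epoch=0)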

So why did AWS build DeepLens? “The whole rationale behind DeepLens came from a simple question that we asked ourselves: How do we put machine learning in the hands of every developer,” Sivasubramanian said. “To that end, we brainstormed a number of ideas and the most promising idea was actually that developers love to build solutions as hands-on fashion on devices.” And why did AWS decide to build its own hardware instead of simply working with a partner? “We had a specific customer experience in mind and wanted to make sure that the end-to-end experience is really easy,” he said. “So instead of telling somebody to go download this toolkit and then go buy this toolkit from Amazon and then wire all of these together. […] So you have to do like 20 different things, which typically takes two or three days and then you have to put the entire infrastructure together. It takes too long for somebody who’s excited about learning deep learning and building something fun.”

So if you want to get started with deep learning and build some hands-on projects, DeepLens is now available on Amazon. At $249, it’s not cheap, but if you are already using AWS — and maybe even use Lambda already — it’s probably the easiest way to get started with building these kinds of machine learning-powered applications.

Under a millimeter wide and powered by light, these tiny cameras could hide almost anywhere

As if there weren’t already enough cameras in this world, researchers have created a new type that is both microscopic and self-powered, making it possible to embed one just about anywhere and have it work perpetually. It’s undoubtedly cool technology, but it’s probably also going to cause a spike in tinfoil sales.

Engineers have previously investigated the possibility of having a camera sensor power itself with the same light that falls on it. After all, it’s basically just two different functions of a photovoltaic cell — one stores the energy that falls on it while the other records how much energy fell on it.

The problem is that if you have a cell doing one thing, it can’t do the other. So if you want to have a sensor of a certain size, you have to dedicate a certain amount of that real estate to collecting power, or else swap the cells rapidly between performing the two tasks.

Euisik Yoon and post-doc Sung-Yun Park at the University of Michigan came up with a solution that avoids both these problems. It turns out that photosensitive diodes aren’t totally opaque — in fact, quite a lot of light passes right through them. So putting the solar cell under the image sensor doesn’t actually deprive it of light.

That breakthrough led to the creation of this “simultaneous imaging and energy harvesting” sensor, which does what it says on the tin.

The prototype sensor they built is less than a square millimeter and fully self-powered in sunlight. It captured images of pretty reasonable quality at up to 15 frames per second; the sample captures of a Benjamin were shot at 7 and 15 frames per second.

In the paper, the researchers point out that they could easily produce better images with a few tweaks to the sensor, and Park tells IEEE Spectrum that the power consumption of the chip is also not optimized — so it could also operate at higher framerates or lower lighting levels.

Ultimately the sensor could be essentially a nearly invisible camera that operates forever with no need for a battery or even wireless power. Sounds great!

In order for this to be a successful spy camera, of course, it needs more than just an imaging component: storage and transmission are necessary for any camera to be useful. But microscopic versions of those are also in development, so putting them together is just a matter of time and effort.

The team published their work this week in the journal IEEE Electron Device Letters.

A system to tell good fake bokeh from bad

The pixel-peepers at DxOMark have shared some of the interesting metrics and techniques they use to judge the quality of a smartphone’s artificial bokeh, or background blur, in photos. Not only is it difficult to do in the first place, but they have to systematize it! Their guide should provide even seasoned shooters with some insight into the many ways computational bokeh varies in quality.