Hollywood gets its own open-source foundation

Open source is everywhere now, so maybe it’s no surprise that the Academy of Motion Picture Arts and Sciences (yes, the organization behind the Oscars) today announced that it has partnered with the Linux Foundation to launch the Academy Software Foundation, a new open-source foundation for developers in the motion picture and media space.

The founding members include a number of high-powered media and tech companies, including Animal Logic, Blue Sky Studios, Cisco, DreamWorks, Epic Games, Google, Intel, SideFX, Walt Disney Studios and Weta Digital.

“Open Source Software has enabled developers and engineers to create the amazing effects and animation that we see every day in the movies, on television and in video games,” said Linux Foundation CEO Jim Zemlin. “With the Academy Software Foundation, we are providing a home for this community of open source developers to collaborate and drive the next wave of innovation across the motion picture and broader media industries.”

The Academy Software Foundation’s mission statement notes that it wants to be a neutral forum “to coordinate cross-project efforts; to provide a common build and test infrastructure; and to provide individuals and organizations a clear path to participation in advancing our open source ecosystem.”

According to a survey by the Academy, 84 percent of the industry uses open-source software already, mostly for animation and visual effects. The group also found that what’s holding back open-source development in the media industry is the siloed nature of the development teams across the different companies in this ecosystem.

“The creation of the Academy Software Foundation is an important and exciting step for the motion picture industry,” said Nick Cannon, the chief technology officer of Walt Disney Animation Studios. “By increasing collaboration within our industry, it allows all of us to pool our efforts on common foundation technologies, drive new standards for interoperability and increase the pace of innovation.”

The fact that even Hollywood is now embracing open source and its collaborative nature is yet another sign of how the world of software development has changed in recent years. Over the last few years, traditional enterprises realized that whatever technology they developed to run their software infrastructure isn’t what actually delivers value to their customers, so it made sense to collaborate in this area, even with their fiercest competitors — and the same, it seems, now holds true for the Hollywood studios, too (or at least for those that have now joined the new foundation).

Disney tech smooths out bad CG hair days

Disney is unequivocally the world’s leader in 3D simulations of hair — something of a niche talent in a way, but useful if you make movies like Tangled, where hair is basically the main character. A new bit of research from the company makes it easier for animators to have hair follow their artistic intent while also moving realistically.

The problem Disney Research aimed to solve was a compromise that animators have had to make when making the hair on characters do what the scene requires. While the hair will ultimately be rendered in glorious high-definition and with detailed physics, it’s too computationally expensive to do that while composing the scene.

Should a young warrior in her tent be wearing her hair up or down? Should it fly out when she turns her head quickly to draw attention to the movement, or stay weighed down so the audience isn’t distracted? Trying various combinations of these things can eat up hours of rendering time. So, like any smart artist, they rough it out first:

“Artists typically resort to lower-resolution simulations, where iterations are faster and manual edits possible,” reads the paper describing the new system. “But unfortunately, the parameter values determined in this way can only serve as an initial guess for the full-resolution simulation, which often behaves very different from its coarse counterpart when the same parameters are used.”

The solution proposed by the researchers is basically to use that “initial guess” to inform a high-resolution simulation of just a handful of hairs. These “guide” hairs act as feedback for the original simulation, bringing a much better idea of how the rest will act when fully rendered.
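To make the feedback loop concrete: the real system simulates guide hairs with full elastic-rod physics, but the basic idea, run everything cheaply, run a few strands expensively, and feed the discrepancy back into the cheap result, can be sketched in a few lines. Everything here (the function names and the trivial "damping" stand-ins for the two simulators) is invented purely for illustration and is not the paper's actual method.

```python
# Toy sketch of the guide-hair feedback loop described above.
# The "physics" here is a placeholder: each strand is a list of 1-D
# point positions, and simulation just damps points toward the root.

def coarse_sim(strand, stiffness):
    # Cheap low-resolution pass an artist iterates on.
    root = strand[0]
    return [root + (p - root) * stiffness for p in strand]

def highres_sim(strand, stiffness):
    # Stand-in for the expensive full simulation; it deliberately
    # behaves a little differently from the coarse pass.
    root = strand[0]
    return [root + (p - root) * (stiffness * 0.9 + 0.1) for p in strand]

def simulate_with_guides(strands, guide_indices, stiffness):
    # 1) Coarse simulation on every strand (fast).
    coarse = [coarse_sim(s, stiffness) for s in strands]
    # 2) Expensive simulation on only a handful of guide strands.
    guides = {i: highres_sim(strands[i], stiffness) for i in guide_indices}
    # 3) Feedback: average the guide-vs-coarse correction and apply
    #    it to every coarse strand, so the bulk hair anticipates how
    #    the full-resolution result will actually behave.
    corrections = [
        [g - c for g, c in zip(guides[i], coarse[i])] for i in guide_indices
    ]
    n = len(corrections)
    avg = [sum(col) / n for col in zip(*corrections)]
    return [[p + d for p, d in zip(s, avg)] for s in coarse]

hair = [[0.0, 1.0, 2.0], [0.0, 1.1, 2.2], [0.0, 0.9, 1.8]]
result = simulate_with_guides(hair, guide_indices=[0], stiffness=0.5)
```

Because only the guide strands pay the high-resolution cost, an artist can re-run this loop after every parameter tweak instead of waiting hours for a full render.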

The guide hairs will cause hair to clump as in the upper right, while faded affinities or an outline-based guide (below, left and right) would allow for more natural motion if desired.

And because there are only a couple of them, their finer simulated characteristics can be tweaked and re-tweaked with minimal time. So an artist can fine-tune a flick of the ponytail or a puff of air on the bangs to create the desired effect, and not have to trust to chance that it’ll look like that in the final product.

This isn’t a trivial thing to engineer, of course, and much of the paper describes the schemes the team created to make sure that no weirdness occurs because of the interactions of the high-def and low-def hair systems.

It’s still very early: it isn’t meant to simulate more complex hair motions like twisting, and they want to add better ways of spreading out the affinity of the bulk hair with the special guide hairs (as seen at right). But no doubt there are animators out there who can’t wait to get their hands on this once it gets where it’s going.

Big tech companies are looking at Hollywood as the next stage in their play for the cloud

This week, both Microsoft and Google made moves to woo Hollywood to their cloud computing platforms in the latest act of the unfolding drama over who will win the multi-billion dollar business of the entertainment industry as it moves to the cloud.

Google raised the curtain with a splashy announcement that it would be setting up its fifth U.S. cloud region in Los Angeles. Keeping the focus squarely on artists and designers, the company talked up tools like Zync Render, which Google acquired back in 2014, and Anvato, a video streaming and monetization platform it acquired in 2016.

While Google just launched its LA hub, Microsoft has operated a cloud region in Southern California for a while, and started wooing Hollywood last year at the National Association of Broadcasters conference, according to Tad Brockway, a general manager for Azure’s storage and media business.

Now Microsoft has responded with a play of its own, partnering with Nimble Collective, the provider of a suite of hosted graphic design and animation software tools.

Founded by a former Pixar and DreamWorks animator, Rex Grignon, Nimble launched in 2014 and has raised just under $10 million from investors including the UCLA VC Fund and New Enterprise Associates, according to Crunchbase.

“Microsoft is committed to helping content creators achieve more using the cloud with a partner-focused approach to this industry’s transformation,” said Tad Brockway, General Manager, Azure Storage, Media and Edge at Microsoft, in a statement. “We’re excited to work with innovators like Nimble Collective to help them transform how animated content is produced, managed and delivered.”

There’s a lot at stake for Microsoft, Google and Amazon as entertainment companies look to migrate to managed computing services. Tech firms like IBM have been pitching the advantages of cloud computing for Hollywood since 2010, but it’s only recently that companies have begun courting the entertainment industry in earnest.

While leaders like Netflix migrated to cloud services in 2012 and 21st Century Fox worked with HP to get its infrastructure on cloud computing, other companies have lagged. Now companies like Microsoft, Google, and Amazon are competing for their business as more companies wake up to the pressures and demands for more flexible technology architectures.

As broadcasters face more demanding consumers, fragmented audiences, and greater time pressures to produce and distribute more content more quickly, cloud architectures for technology infrastructure can provide a solution, tech vendors argue.

Stepping into the breach, cloud computing and technology service providers like Google, Amazon, and Microsoft are trying to buy up startups servicing the entertainment market specifically, or lock in vendors like Nimble through exclusive partnerships that they can leverage to win new customers. For instance, Microsoft bought Avere Systems in January, and Google picked up Anvato in 2016 to woo entertainment companies.

The result should be lower-cost tools for a broader swath of the market and more cross-pollination across different geographies, according to Grignon, Nimble’s chief executive.

“That worldwide reach is very important,” Grignon said. “In media and entertainment there are lots of isolated studios around the world. We afford this pathway between the studio in LA and the studio in Bangalore. We open these doorways.”

There are other, more obvious advantages as well. Streaming, exemplified by the relationship between Amazon and Netflix, is well understood, but the possibility of bringing costs down by moving to cloud architectures holds several other distribution advantages, as well as simplifying processes across pre- and post-production, insiders said.


These five trends are rocking the animation industry

I’ve been very lucky to have been in the animation industry since the mid-1980s, and I have lived through my share of big disruptions — most of them having to do with new technologies. What’s going on today is as significant as anything I’ve seen before, but it’s being driven by a whole new set of forces.

Here’s a quick survey of trends in the animation landscape that have me pretty optimistic about the future.

Technology is vanishing

By “vanishing,” I don’t mean going away; I mean disappearing from view. I’ve always said, “When technology can disappear, that’s when creativity can really begin.”

For the past 20 years, feature-film animation in particular has been an arms race of studios like Pixar and DreamWorks trying to out-engineer each other to deliver high-end character performances and visual effects that no one had ever seen. As a result, studios spent tens of millions of dollars becoming, essentially, IT companies with teams of creators producing stories to demonstrate their latest breakthroughs. Now, thanks to simpler, more intuitive tools, technology is becoming less intrusive in the creative process and animators can finally get back to doing what they love: telling stories.

New distribution platforms are driving demand

The explosion of new outlets on cable, over-the-top and online, is creating unprecedented opportunities for animated content in a wide range of styles, genres and formats. Netflix has found an audience for all kinds of quirky original animated programs, and its competitors are following suit. Cable networks are pushing the envelope in all kinds of ways, as well.

These new channels are opening up the world to artists and studios. Now anyone can create a compelling story in their basement and the world will have a chance to see it. And if it finds its audience, it can be as big as any studio release. Never before has that been possible.

Digital imagery is invading the physical world

If you love animation and digital imagery, you’re no longer limited to watching it on a screen. It’s spilling out into the world around us on mobile devices, augmented and virtual reality headsets, immersive smart spaces, holograms, giant flat panels and who knows what’s next?

The thing is, most of those hardware innovations are still waiting for their “killer app” — that must-have content or experience that pulls audiences to the new ways of experiencing stories. I firmly believe that animation and visual storytelling is going to drive those killer apps, particularly in VR.

Globally distributed workflow is transforming teamwork

The idea of a global distributed workforce isn’t new to most businesses, but it’s somewhat new for animation production. Sure, offshore outsourcing has been happening for years, but animation, at its best, is massively collaborative. Teams have to collaborate and share their ideas to help a story reach its full potential. Until recently, the connective technology hasn’t been up to the task. Now, at last, the cloud makes a lot of those limitations obsolete. It doesn’t matter if your teammate is sitting at the next desk or in Seoul, Dublin or Mexico City. And that means…

New voices are joining the conversation

All these trends are lowering the barriers and cost to entry for upstarts around the world. That means we’ll be hearing from lots of people and perspectives that haven’t been part of the animation industry before. In a world where creativity is the coin of the realm, that means we’re all going to be a whole lot richer.

Barnes & Noble teeters in a post-text world

Barnes & Noble, that once proud anchor to many a suburban mall, is waning. It is not failing all at once, dropping like the savaged corpse of Toys “R” Us, but it is also clear that its cultural moment has passed and only drastic measures can save it from joining Waldenbooks and Borders in the great, paper-smelling ark of our book-buying memory. I’m thinking about this because David Leonhardt at the New York Times calls for B&N to be saved. I doubt it can be.

First, there is the sheer weight of real estate and the inexorable slide away from print. B&N is no longer a place to buy books. It is a toy store with a bathroom and a cafe (and now a restaurant?), a spot where you’re more likely to find Han Solo bobbleheads than a Star Wars novel. The old joy of visiting a bookstore and finding a few magical books to drag home is fast being replicated by smaller bookstores where curation and provenance are still important, while B&N pulls more and more titles.

But does all of this matter? Will the written word – what you’re reading right now – survive the next century? Is there any value in a book when VR and AR and other interfaces can recreate what amounts to the implicit value of writing? Why save B&N if writing is doomed?

Indulge me for a moment and then argue in comments. I’m positing that B&N’s failure is indicative of a move towards a post-text society, that AI and new media will redefine how we consume the world and the fact that we see more videos than text on our Facebook feed – ostensibly the world’s social nervous system – is indicative of this change.

First, some thoughts on writing vs. film. In his book of essays, Distrust That Particular Flavor, William Gibson writes about the complexity and education and experience needed to consume various forms of media:

The book has been largely unchanged for centuries. Working in language expressed as a system of marks on a surface, I can induce extremely complex experiences, but only in an audience elaborately educated to experience this. This platform still possesses certain inherent advantages. I can, for instance, render interiority of character with an ease and specificity denied to a screenwriter.

But my audience must be literate, must know what prose fiction is and understand how one accesses it. This requires a complexly cultural education, and a certain socioeconomic basis. Not everyone is afforded the luxury of such an education.

But I remember being taken to my first film, either a Disney animation or a Disney nature documentary (I can’t recall which I saw first), and being overwhelmed by the steep yet almost instantaneous learning curve: In that hour, I learned to watch film.

This is a deeply important idea. First, we must appreciate that writing and film offer various value adds beyond linear storytelling. In the book, the writer can explore the inner space of the character, giving you an imagined world in which people are thinking, not just acting. Film – also a linear medium – offers a visual representation of a story and thoughts are inferred by dint of their humanity. We know a character’s inner life thanks to the emotion we infer from their face and body.

This is why, to a degree, the CGI human was so hard to make. Thanks to books, comics, and film we, as humans, were used to giving animals and enchanted things agency. Steamboat Willie mostly thought like us, we imagined, even though he was a mouse with big round ears. Fast forward to the dawn of CGI humans – think Sid from Toy Story and his grotesque face – and then fly even further into the future to CGI Leia looking out over a space battle and mumbling “Hope,” and you see the scope of achievement in CGI humans as well as the deep problems with representing humans digitally. A CGI car named Lightning McQueen acts and thinks like us while a CGI Leia looks slightly off. We cannot associate agency with fake humans, and that’s a problem.

Thus we needed books to give us that inner look, that frisson of discovery that we are missing in real life.

But soon – and we can argue that films like Infinity War prove this – there will be no uncanny valley. We will be unable to tell if a human on screen or in VR is real or fake and this allows for an interesting set of possibilities.

First, with VR and other tricks, we could see through a character’s eyes and even hear her thoughts. This interiority, as Gibson writes, is no longer found in the realm of text and is instead an added attraction to an already rich medium. Imagine hopping from character to character, the reactions and thoughts coming hot and heavy as they move through the action. Maybe the story isn’t linear. Maybe we make it up as we go along. Imagine the remix, the rebuild, the restructuring.

Gibson again:

This spreading, melting, flowing together of what once were distinct and separate media, that’s where I imagine we’re headed. Any linear narrative film, for instance, can serve as the armature for what we would think of as a virtual reality, but which Johnny X, eight-year-old end-point consumer, up the line, thinks of as how he looks at stuff. If he discovers, say, Steve McQueen in The Great Escape, he might idly pause to allow his avatar a freestyle Hong Kong kick-fest with the German guards in the prison camp. Just because he can. Because he’s always been able to. He doesn’t think about these things. He probably doesn’t fully understand that that hasn’t always been possible.

In this case B&N and the bookstore don’t need to exist at all. We get the depth of books with the vitality of film melded with the immersion of gaming. What about artisanal book lovers, you argue? They’ll keep things alive because they love the feel of books.

When that feel – the scent, the heft, the old book smell – can be simulated do we need to visit a bookstore? When Amazon and Netflix spend millions to explore new media and are sure to branch out into more immersive forms do you need to immerse yourself in To The Lighthouse? Do we really need the education we once had to gain in order to read a book?

We know that Amazon doesn’t care about books. They used books as a starting point for taking over ecommerce and, while the Kindle is the best system for ebooks in existence, it is an afterthought compared to the rest of the business. In short, the champions of text barely support it.

Ultimately what I posit here depends on a number of changes coming all at once. We must all agree to fall headfirst into some shared hallucination that replaces all other media. We must feel that that world is real enough for us to abandon our books.

It’s up to book lovers, then, to decide what they want. They have to support and pay for novels, non-fiction, and news. They have to visit small booksellers and keep demand for books alive. And they have to make it possible to exist as a writer. “Publishers are focusing on big-name writers. The number of professional authors has declined. The disappearance of Borders deprived dozens of communities of their only physical bookstore and led to a drop in book sales that looks permanent,” writes Leonhardt and he’s right. There is no upside for text slingers.

In the end perhaps we can’t save B&N. Maybe we let it collapse into a heap like so many before it. Or maybe we fight for a medium that is quickly losing cachet. Maybe we fight for books and ensure that just because the big guys on the block can’t make a bookstore work the rest of us don’t care. Maybe we tell the world that we just want to read.

I shudder to think what will happen if we don’t.

Taiwanese startup Kdan Mobile raises $5M Series A for its cloud-based content creation tools

Kdan Mobile founder and CEO Kenny Su

Kdan Mobile, a Taiwanese startup that makes cloud-based software for content creators, announced a $5 million Series A today, raised from investors including W.I. Harper Group, Darwin Venture Management and Accord Ventures. Founded in 2009, the Tainan City startup says its products have been downloaded more than 120 million times, with about 40% of its customers located in the United States.

Its Series A takes Kdan Mobile’s total funding so far to $6.5 million. The capital will be used for product development, including blockchain-based encryption for documents and real-time collaboration features, to appeal to enterprise and education users. The company also plans to spend more on user acquisition in the U.S. and China, two of its growth markets.

Kdan Mobile’s products include Creativity 365, a software suite with a mobile animation creator and video editor, and Document 365, launched last year to attract enterprise users. The company also recently began offering new subscription plans for businesses and educational organizations and claims that its cloud platform, called Kdan Cloud, now counts over 3.5 million members.

Founder and chief executive officer Kenny Su says Kdan Mobile is seeking new partners that will allow it to establish a bigger presence in markets like Japan. One of its Series A investors, Accord Ventures, is based in Tokyo, and Kdan Mobile may start marketing to the country’s animation industry, Su tells TechCrunch. The company already has partnerships with Taiwanese mobile services provider GMobi, Jot Stylus maker Adonit and Ningbo, China-based design sharing platform LKKER.

Su says one of the ways Kdan’s products differentiate from cloud-based software by Google, Microsoft, Adobe and other major competitors is their focus on artists, designers and other creative professionals. Kdan’s products were also created to allow users to start projects on mobile devices before moving onto desktop apps. As many users of Google Docs, Office 365 or Adobe Creative Cloud have discovered, accessing them on mobile devices feels much more awkward than on desktop. Kdan Mobile, however, was founded just as smartphone and tablet usage was becoming widespread, and its products were created specifically for mobile.

“We are trying to fill the gap, helping users create content on mobile and then allowing them to finish it in a desktop environment, not only with our own tools, but also by exporting to other places including Adobe,” says Su.

Part of Kdan Mobile’s Series A financing will also be used to figure out how the company can increase the use of artificial intelligence in its products. Kdan Mobile already uses machine learning algorithms to improve its software by analyzing what users upload and recommend on its content-sharing platform.

In a press statement, W.I. Harper Group managing director Y.K. Chu said, “We are stunned by Kdan’s leading development technology and global vision. We are glad to be part of their development plan and expect to grow with them.”