Categories
Apple Interesting Technology

QuickTime X???

Boy, it’s dusty in here! I’ve been busy with lots of things, including just this week releasing The HD Survival Handbook, but there was one thing from WWDC that caught my eye.

Using media technology pioneered in OS X iPhone™, Snow Leopard introduces QuickTime X, which optimizes support for modern audio and video formats resulting in extremely efficient media playback. Snow Leopard also includes Safari® with the fastest implementation of JavaScript ever, increasing performance by 53 percent, making Web 2.0 applications feel more responsive.*

Now, I’m surprised at myself for even bothering to second-guess Apple by hypothesizing wildly, but that doesn’t stop my friend James Gardiner, so it won’t stop me!

There are few clues and most of my usual sources are cold. There’s the rub: anyone who has Snow Leopard is under NDA and won’t talk, and anyone who is talking is guessing – we should keep that in mind.

Tim Robertson hopes that “modern codec support” would include .AVI, which is funny because .AVI has not been developed since Microsoft abandoned it in 1996 – 12 years ago, and only a few years after QuickTime was introduced! James Gardiner thinks it might be a Flash/Silverlight competitor. Who knows, they could be right, as we’re all guessing wildly.

In Apple’s world, “modern codec support” means H.264 in .mp4 wrappers, and just maybe H.264 in .mov wrappers, but that’s deprecated, as they say. (You can still use it but it’s not the recommended method.) Apple have totally moved away from all the rich interactive features that attracted me to the technology in the first place. (Much of what was added to Flash 9 was available in QT3 but never pushed by Apple.)

Then there’s this on Apple’s Snow Leopard page:

Using media technology pioneered in OS X iPhone…

The media playback support on iPhone is very basic: H.264 video, AAC audio, MPEG-4 Simple Profile video and MP3, in .mp4 containers, with limited support for .mov playback of those same codecs. That’s it. A simplified media player that drops the older codecs outside the MPEG-4 family: none of the wired sprite features, no QuickTime VR objects or panoramas. A simple, lightweight media player that developers can draw on. (I should note that Flash Player and Adobe Media Player now support those exact same codecs.)

Looking also at what Apple have been doing with JavaScript, and knowing there’s already limited JavaScript support in QuickTime, my further guess is that QT X will be very open to JavaScript, Apple’s new favorite browser language thanks to SproutCore and the new WebKit JavaScript engine, SquirrelFish. It’s interesting that Apple announced QuickTime X and the new JavaScript engine for Safari in Snow Leopard in the same paragraph. Other features went into separate paragraphs.

So my guess is that QuickTime X is a newly optimized media player engine with hooks into JavaScript for interactive programming. Perhaps even into Ruby/Ruby on Rails, since Apple are also adopting that.

But who knows for sure? Only those who can’t tell.

Categories
Distribution Interesting Technology Video Technology

Little boxes, on the set top, little boxes full of ticky tacky!

So, Netflix and LG announce yet another set-top box. Well, actually they announced that LG would include the Netflix service on “selected devices”. Best guesses are that the service will be added to a dual-mode (HD DVD and Blu-ray) player, or even to the television itself.

Here’s the problem with this: it’s Netflix on LG devices and only Netflix. No slight on Netflix, the service is good and the company needs to provide for a non-disc future. However, the industry should be gathering together around a single standard for delivering from the Internet to the lounge room, not proprietary deals with single suppliers. If we want movies from Apple then it’s another set-top box (Apple TV). Vudu have their own movie service and their own proprietary box. TiVo is a little more open – it has an API for programmers – but it’s still another box on top of the cable or satellite box you’ll probably still have.

We need a single standard, or interoperable standards, for delivery of “Internet TV” to televisions. It has to be simple. TV would not have caught on if we’d needed separate television sets for CBS, NBC and ABC – but that’s exactly where these companies think we’re heading.

It won’t work. Any device that links Internet sources with a TV is good, but it needs an open standard – perhaps that’s what Google are working on, but even then it won’t help integrate with cable or satellite boxes unless those providers have a significant change of heart.

Apple TV, for all the people who claim it to be a “failure”, is the leading device to connect computers and televisions. While its sales are disappointing by Apple’s mega-hit standard, it’s estimated to have sold about 800,000 units in 10 months (it took TiVo 4 years to get that far) and it’s way ahead of the competitors, other than the Xbox or PS3, which both act as media extenders.

My mantra for 2008 – proprietary bad, open standards (even from one company) good.

Categories
Apple Pro Apps Interesting Technology

XML Article at kenstone.net

Ken Stone has published an article I wrote on XML in Final Cut Studio. The article is What is XML and what does it mean for Final Cut Studio users?

Also, Steve Douglas reviewed my company’s Pro Apps Tips, also at KenStone.net in the Pro Apps Tips Review.

Just thought you’d like to know. Oh, and in 2007, I’ll be posting more regularly as I evolve some of the thoughts around a book I’m working on, tentatively titled “Television 3.0”.

Categories
Business & Marketing Interesting Technology

Why Revver Gets it

For those who don’t know Revver.com, at the simplest level it’s “yet another video sharing site” except it has two distinct differences: it has a revenue model based on advertising and it’s entirely driven by an API. Why are these distinctions important? They’re important because they essentially mean that Revver.com itself is irrelevant to their business.

Most “Web 2.0” websites are built on advertising support: Google AdSense at the simplest level, display advertising if they have an advertising sales force, or sponsorship. YouTube tried the latter – sponsorship of channels by the large content providers, or even “The Britney Channel” – and is working on recognizing content and sharing revenue from advertising on the same page with the large conglomerates that own the content. Neither approach is innovative and both require the visitor to actually be on YouTube.com to see the advertising. Trouble is, one of YouTube’s greatest appeals is the embedded player, which puts the content on another site (where the site owner could display ads and collect the revenue).

The use of embedded players, or more commonly RSS-driven technology, is a problem for site owners relying on advertising-on-the-website models. As RSS becomes more widely adopted (because of the huge value-add to subscribers) that tension increases. A site like creativecow.net or 2-pop.com requires visitors to be at the site to read their forums, tutorials or other content, because that’s what pays the sites’ expenses and provides a return to the owners. This is a huge problem for content creators if podcasts/video podcasts, which are RSS driven, take off.

If RSS/embedded players become successful, as they inevitably will because they provide the biggest payoff to the user/viewer, then the website becomes irrelevant, even dead. That’s why Revver’s model shows they understand the direction the web is taking. Revver provides a very comprehensive API so anyone can set up a full Revver.com clone, or customize content out of Revver’s collection to a subject-specific site. Revver.com is built on the same API and (with few exceptions) anything Revver can do on their own site, can be done on any site, without any “permissions” required from Revver.

This works because Revver serves up ads at the end of the video. The revenue from the ads is shared with affiliates (anyone using the API to drive traffic), who get 20%, and the balance is split between Revver and the content provider. Ads are short and unobtrusive and pay on click-through, not on ad impression.
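As a rough sketch of the economics: the 20% affiliate cut is from Revver’s published model, but the 50/50 split of the balance between Revver and the content provider is my assumption, so treat the function below as illustrative only.

```python
def split_ad_revenue(revenue, affiliate_share=0.20, creator_share_of_balance=0.50):
    """Sketch of a Revver-style revenue share.

    The 20% affiliate share is documented; the 50/50 split of the
    remaining balance is an assumption for illustration.
    """
    affiliate = revenue * affiliate_share
    balance = revenue - affiliate
    creator = balance * creator_share_of_balance
    revver = balance - creator
    return {"affiliate": affiliate, "creator": creator, "revver": revver}

# On $1.00 of click-through revenue: 20 cents to the affiliate,
# with the remaining 80 cents split between creator and Revver
# (40/40 under the assumed 50/50 split).
shares = split_ad_revenue(1.00)
```

The point is that the affiliate gets paid wherever the player is embedded, so there is no incentive to force traffic back to Revver.com.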

So, it doesn’t matter how people use the content or where they use it – through an embeddable Flash player, through downloads, or even if the content is aggregated into an RSS feed – all parties still benefit and there’s no “must drive traffic to website” model involved.

In my (probably not so humble) opinion Revver is one model that will sustain. The other would be direct payment between viewers and content owners in an RSS-driven (Podcast/Video Podcast like) feed. But that model doesn’t exist until klickTab.com launches.

Categories
Distribution Interesting Technology Random Thought

Yahoo and TiVo hook up, world shakes a little

Although it doesn’t seem like a big announcement, Yahoo users who own a TiVo can now program their TiVo from the Yahoo TV guide. No big thing because TiVo owners have been able to do that for years, so it’s only a minor additional convenience until we read some of the fine print and plans for the near future.

And in the coming months, possibly before the end of the year, Yahoo’s traffic and weather content, as well as its users’ photos will be viewable on televisions via TiVo’s broadband service and easy-to-use screen menu.

So now Yahoo content (pictures, traffic and weather for now) is available on the television with television/PVR/DVR ease of use. Remember, TiVo already have many components of the digital home experience, with TiVoToGo taking digital material from the TiVo drive to PCs, DVD-Rs and portable media players.

Where it gets interesting is to project forward. Yahoo consider themselves a media company, as evidenced by the formation of the Yahoo Media Group back in January 2005 and an orgy of hiring media executives for the division since then. The purpose of the Yahoo Media Group is to be “significantly strengthening our content pillar”. In April they hired Shawn Hardin, who, according to ZDNet, “has previously worked in television and the Internet, holding executive positions at NBC and Snap.com.”

Although Yahoo has been vague about its plans for the Media Group, it’s built a team with a strong balance of “Hollywood” and “Web” backgrounds and it’s perfectly reasonable to expect that it has some big plans…

Oh, let’s not forget Yahoo Video Search. In competition with Google Video, Yahoo are building connections to a library of independent content. I’m sure the Yahoo Media Group are building a library of mainstream content: Santa Monica is not that far from Hollywood!

Now to my conjecture. TiVo has about 3 million subscribers. TiVo is linking with Yahoo to display Yahoo content via the TiVo device, direct to the living room. Aren’t there many people planning/desiring to bring Internet-delivered “TV” into the living room, right where the TiVo box is already sitting? Akimbo, DaveTV and Brightcove all seem to be going down that path with a dedicated box and service. Apple with its Front Row and Microsoft with its Media Center PC are approaching it from the other direction.

But none have the penetration in the living room that TiVo does right now.

If the relationship with Yahoo goes just a little further, then what’s to stop Yahoo delivering video content direct to the TiVo under some form of pay-per-view or subscription model? It seems self-evident to me that this is really about a much bigger play for the living room, part-and-parcel of the Yahoo Media Group’s efforts. Yahoo could launch with 3 million subscriber boxes already in place – a number that would bring much cheer to Akimbo, DaveTV or Brightcove’s investors if they ever achieved anything close to it. Apple and Microsoft (and its partners) have to get the computer moved into the living room, although don’t be surprised to find Apple taking a page from TiVo’s book and integrating a tuner, program guide and simplified iTunes into a future “Mac mini” with video output to sit under the TV. When they’re ready, of course.

Given that TiVo are somewhat struggling right now, is it so far-fetched to consider Yahoo buying TiVo? With a market capitalization of around $300 million, TiVo wouldn’t be hard for Yahoo to swallow, or even Apple (as rumored earlier in 2005). Only time will tell, but when Yahoo buys TiVo, remember, you heard it here first!

Categories
Business & Marketing Distribution Interesting Technology

The power of disruptive technologies

A disruptive technology is one that most people do not see coming and yet, within a very short period it changes everything. A disruptive technology will become the dominant technology. Rarely are they accurately predicted because predictions are generally extrapolated from the existing understanding. For example, there’s no doubt that the invention of the motor car was a disruptive technology, but Henry Ford is often quoted as saying “If we had asked the public what they wanted, they would have said faster horses.”

It’s almost impossible to predict what will become a disruptive technology (although the likelihood of being wrong isn’t going to stop me) but they are very easily recognized in hindsight. Living in Los Angeles, it’s obvious the effect that Mr Ford’s invention has had on this society. Although some would argue that it wasn’t so much the invention of the motor car that made the difference as the assembly-line technique that made the motor vehicle (relatively) affordable.

In fact I think it’s reasonable to believe that a disruptive technology will have a democratizing component to it, or a lowering (or removal) of price barriers.

Non-linear editing on the computer – Avid’s innovation – was a disruptive technology but initially only within a relatively small community of high end film and episodic television editors. The truly disruptive technology was DV. DV over FireWire starting with Sony’s VX-1000 and Charles McConathy/Promax’s efforts to make it work with the Adobe Premiere of the day, paved the way for what we now call “The DV Revolution”.

Apple were able to capitalize on their serendipitous purchase of Final Cut from Macromedia, drop the work that had been done to make Final Cut work with Targa real-time cards, and concentrate on FireWire/DV support. (It was two further releases before we saw the same level of real-time effects support as was present in the Macromedia NAB 98 alpha preview.) I think, at the time, Apple saw Final Cut Pro as another way of selling new G3s with built-in FireWire. The grand plan of Pro Apps came about when the initial success of Final Cut Pro showed the potential. But that’s another blog post.

DV/FireWire was good enough at a much lower price, with all the convenience of single-wire connection. We’ve grown from an industry of under 100,000 people worldwide involved in professional production and post-production to one almost certainly over 1 million worldwide.

Disruptive technologies usually involve a confluence of technologies at the right time. Lower cost editing software wouldn’t have been that disruptive without lower cost acquisition to feed to it. Both would have been pointless without sufficient computer power to run the software adequately. Final Cut Pro on an Apple IIe wouldn’t have been that productive!

In a larger sense DV/FireWire was part of a larger disruption affecting the computer industry – the transition from hardware-based to software-based. We are, in fact, already through this transition with digital video, although the success of AJA and Blackmagic Design might suggest otherwise. However, the big difference now is that the software is designed to do its job with hardware as the accessory. Back in the days of Media 100’s success, Media 100 would not run without the hardware installed; in fact, without the hardware it was pretty useless, as everything went through the card. Then, when they rebuilt the application for OS X, they developed it as (essentially) software-only. This paved the way to the current HD version (based on a completely different card) and a software-only version.

Ultimately, all tasks will be done in software, other than the hardware needed to convert from format to format. In fact, much of the role of today’s hardware is that of format interface rather than the basis for the NLE, as it was in the days of Media 100, Avid ABVB/Meridien and even Cinéwave. Today’s hardware takes some load off the CPU, but as an adjunct to the software, not because the task couldn’t be done without the hardware. This has contributed to the “DV Revolution” by dramatically dropping prices on hardware.

Disruptive technologies are hard to predict because they are disruptive. Any attempt to predict disruptive technologies is almost certainly doomed to failure, but like I said, that’s not going to stop me now!

We are headed for a disruptive change in distribution of media, both audio and video content. I wish I could see clearly how this is going to shake out, so I could invest “wisely” but it’s still too early to predict exactly what will be the outcome 5-7 years down the track. I feel strongly that it will include RSS with enclosures, in some form. It will have aspects of Tivo/DVR/PVR where the content’s delivery and consumption will be asynchronous. Apart from news and weather, is there any need for real-time delivery as long as the content is available when it’s ready to be consumed? Delivery will, for the most part, be via broadband connections using the Internet Protocol.

There is a growing trend to want to merge the “computer” and “the TV”, either by creating media center computers, by adding Internet connected set-top boxes (like cable boxes) or by delivering video content direct to regular computers. Microsoft’s Media Center PCs haven’t exactly set the world on fire outside the college dorm where they fit a real niche; Apple are clearly moving slowly toward some media-centric functions in the iLife suite where it will be interesting to see what’s announced at MacWorld San Francisco in January; and there are developments like Participatory Culture’s DTV and Ant is not TV’s FireANT for delivering channels of content directly to the computer screen. Both DTV and FireANT are based on RSS, with enclosures, for their channels, just like audio podcasting does.

On the hardware box front, companies like Akimbo, Brightcove and DaveTV are putting Internet-connected boxes under the TV, although DaveTV is having a bet both ways with computer software or set-top box.

Whether or not any of these nascent technologies turns out to be “the future of media” their developers tout, however this shakes out it has important implications for our industry. No-one foresaw that the Master Antenna and Community Antenna systems of the 1950s would evolve into today’s dominant distribution networks – the cable channels, which have now (collectively) moved ahead of the four major networks in total viewership. The advent of cable distribution opened up hundreds, or thousands, of new production opportunities for content creators. This time many people foresee (or hope) that using the Internet for program distribution will take down the last wall separating content creators from their audience.

In the days of four networks, any program idea had better aim to appeal to 20-30% of the available audience – young, middle-aged or old – to have any chance of success. In an age where the “family” sat down to watch TV together (and even ate meals together) that was a reasonable thing to attempt. As society fragmented we discovered that there were viable niches in the expanded cable networks. Programs have been artistically and/or financially successful that would never have been made for network TV because of the (relatively) small audiences or because the content was not acceptable under the regulations governing the networks. The development of niche channels for niche markets parallels the fragmentation of society as a whole into smaller demographic units.

Will we see, or do we need, more channels? Is 500 channels, and nothing on, going to be better when it’s 5,000 channels? Probably, because in among the 5,000 (or 50,000) channels will be content that I care enough about to watch. It won’t be current news, that’s still best done with real-time broadcasting, but for other content, why not have it delivered to my “box” (whatever takes this role) ready to be watched (on whatever device I choose to watch it)? (Some devices will be better suited to certain types of content: a “video iPod” device would be better suited to short video pieces than multi-hour movies, for example.)

If the example of audio podcasting is anything to go by – and with just one year of history to date it’s probably a little hard to be definitive – then yes, subscriber-chosen content delivered “whenever” for consumption on my own schedule is compelling. I’ve replaced the car radio news-talk station with podcasts from my iPod mini: content I want to listen to, available when I’m ready to listen. Podcasts have replaced my consumption of radio almost completely.

Ultimately it will come down to content. Will the 5,000 or 50,000 channels be filled with something I want to watch? Sure, subscribing to the “Final Cut Pro User Group” channel is probably more appealing than (for me) many of the channels available on my satellite system. Right now, video podcasts tend to be of the “don’t criticize what the dog has to say, marvel that the dog can talk” variety. Like a lot of podcasts. Not every one of the more-than-10,000 podcasts now listed in the iTunes directory is compelling content or competently produced.

But before we can start taking advantage of new distribution channels for more than niche applications, we need to see some common standards come to the various platforms, so that channels will be discovered on Akimbo, DaveTV, DTV and on a Media Center PC. About the only part of this prediction I feel relatively sure of is that it will involve RSS with audio and video enclosures, or a related technology like Atom (although RSS 2 seems to have the edge right now).
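The “RSS with enclosures” mechanism I keep coming back to is, at bottom, just an `<enclosure>` element on a feed item pointing at a media file. A minimal sketch (the URL, file size and titles below are made-up illustration values) of how an aggregator such as DTV or FireANT reads one:

```python
# Sketch: parse a minimal RSS 2.0 feed with a video enclosure and
# collect the media URLs an aggregator would queue for download.
# Feed contents here are invented for illustration.
import xml.etree.ElementTree as ET

feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Video Channel</title>
    <item>
      <title>Episode 1</title>
      <enclosure url="http://example.com/ep1.mp4"
                 length="12345678" type="video/mp4"/>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed)
# Each <item> carries its media as an <enclosure url="..."/> attribute.
downloads = [
    (item.findtext("title"), item.find("enclosure").get("url"))
    for item in root.iter("item")
]
print(downloads)  # → [('Episode 1', 'http://example.com/ep1.mp4')]
```

Because the enclosure is fetched in the background, delivery and consumption are naturally asynchronous – exactly the TiVo-like behavior described above.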

In a best-case scenario, we’ll have many more distribution channels, aggregating niche markets into a big-enough audience for profitable content (particularly with lower cost production tools now in place). Direct producer-customer connection, without the intermediation of network or cable-channel aggregators, improves profit potential on popular content and possibly moves content into different distribution paths. The worst-case scenario is that nothing much changes and Akimbo, DaveTV, Brightcove, and Apple or Microsoft’s media-centric computers go the way of the Apple Lisa – paving the way for the real “next big thing”.

Categories
Apple Apple Pro Apps Business & Marketing Interesting Technology Random Thought

Don’t panic! Apple adopts Intel processors

The confusion and furor surrounding Apple CEO Steve Jobs’ announcement at the Worldwide Developers Conference, that future Macs (after June 2006) will use Intel processors, are totally unfounded. Nothing changes now, very little changes in the next year, and longer term the future for the Mac got a little brighter. Although the decision caught me by surprise, as I thought about it, and listened to what was said in the keynote, I could see why it made sense.

If we look short term, the decision makes little sense. Right now a G5 (PowerPC, aka PPC) PowerMac has very similar performance to the best workstations on the PC/Intel platform running Windows, and the G5 will cost less than a similarly performing PC workstation. At the low end the Mac mini is competitively priced against a cheap Dell or other name brand. (Macs are not price competitive with off-brand PCs, the so-called “white box”.) So, why put the developer community, and developers within Apple, through the pain of a processor shift?

For the future (“we have to do it for the children”) and because it’s really not that painful for most developers.

Right now a G5 PowerMac is very performance competitive with the best offerings from Intel. What Apple have been privy to, that the rest of us haven’t, is the future of both Intel processors and PPC processors. Based on that future, Apple decided they had no choice but to make the change. In the future, the performance-per-watt of a PPC chip will be “15 units of processing” according to Mr Jobs; the same watt of energy would give 70 units of performance on an Intel processor. Without knowing exactly how those figures were derived, and what they mean for real-world processing power, it seems like a significant difference. It was enough to push Apple to make the change.
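Taking Mr Jobs’ figures at face value, the arithmetic is simple. A quick sketch – the “units” are Apple’s unexplained measure, and the 25 W power budget is my own illustrative assumption:

```python
# Performance-per-watt figures quoted in the keynote. What a "unit
# of processing" measures was never specified, so treat these as
# relative numbers only.
ppc_per_watt = 15
intel_per_watt = 70

# For a fixed power budget (an assumed 25 W mobile-class chip),
# the projected processing available on each architecture:
power_budget_watts = 25
ppc_units = ppc_per_watt * power_budget_watts      # 375
intel_units = intel_per_watt * power_budget_watts  # 1750

ratio = intel_per_watt / ppc_per_watt
print(round(ratio, 2))  # roughly 4.67x the work per watt
```

That ratio matters most exactly where Apple is weakest: laptops, where the power budget, not the chip catalog, sets the ceiling.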

Not that there’s anything wrong with the PPC architecture: IBM continue to develop and use it at the high end, and PPC chips (triple-core “G5” chips) will power the Microsoft Xbox 360. The sales of chips to Microsoft will well and truly outweigh the loss of business from Apple. It is, however, a crazy world: next year will see a Microsoft product powered by PPC and Macintoshes powered by Intel!

Steve Jobs demonstrated how easy it will be for developers to port applications to OS X Intel. In fact, he confirmed long-term rumors that Apple have kept OS X running on Intel processors with every development on OS X – Mr Jobs demonstrated and ran his keynote from an Intel Macintosh. For most applications a simple recompile in the Xcode developer environment will suffice – a matter of a few hours work at most. Moreover, even if the developer does not recompile, Apple have a compatibility layer, called Rosetta, that will run pure PPC code on an Intel Mac. Both platforms are to be supported “well into the future”.

During the keynote Mathematica was demonstrated (huge application, 12 lines of code from 20 million needed changing, 2 hours work) as were office applications. Commitments to port Adobe’s creative suite and Microsoft’s Mac Business Unit software were presented. Apple have been working on Intel-compatible versions of all their internal applications, according to Mr Jobs. [Added] Luxology’s president has since noted that their 3D modelling tool modo took just 20 minutes to port, because it was already Xcode-based and built on modern Mach-O code.

Remember, these applications are for an Intel-powered OS X Macintosh. No applications are being developed for Windows. In fact, after the keynote Senior Vice President Phil Schiller addressed the issue of Windows. Although it would be theoretically possible to run Windows on an Intel Macintosh it will not be possible to run OS X on anything but Apple Macintosh.

Apple’s Professional Video and Audio applications might not be as trivial to port, although most of the modern suite should have no problem. LiveType, Soundtrack Pro, DVD Studio Pro and Motion are all new applications built in the Cocoa development environment and will port easily. Final Cut Pro may be less straightforward. It has a heritage as a Carbon application, although the code has been tweaked for OS X over recent releases. More than most applications, Final Cut Pro relies on the AltiVec vector processing of the PPC chip for its performance. But even there, the improvement in processor speeds on the Intel line by the time Intel Macs are released is likely to compensate for the loss of vector processing. At worst there will be a short-term dip in performance. However, with Intel Macintoshes rolling out from June 2006, it’s likely we’ll see an optimized version of Final Cut Pro ready by the time it’s needed.

[Added] Another consideration is the move to using the GPU over the CPU. While the move to Intel chips makes no specific change to that migration – graphics card drivers for OS X still need to be written for the workstation-class cards – Final Cut Pro could migrate to OS X technologies like Core Video to compensate for the lack of AltiVec optimizations for certain functions, like compositing. Perhaps then, finally, we could have real-time composite modes!

Will the announcement kill Apple’s hardware sales in the next year? Some certainly think so, but consider this: if you need the fastest Macintosh you can get, buy now. There will always be a faster computer out in a year, whatever you buy now. If your business does not need the fastest Mac now (and many don’t) then do what you’d always do: wait until it makes sense. The G5 you buy now will still be viable long after its speed stops being competitive in a professional post-production environment. It’s likely there will be speed-bumps in the current G5 line over the next year, as IBM gets better performance out of its chips. Any real speed improvement on the Intel side waits on a new generation of chips: if Apple magically converted their current G5 line to the best chips Intel has to offer now, there would be little speed improvement. This change is for the future, not the present.

So, I don’t think it will affect hardware sales significantly. As a laptop user I’m not likely to upgrade to a new G4 laptop, but then there will be few speed boosts available there in the next year anyway. I am keen to get a faster PowerBook, though, and an Intel chip will make that possible.

I have to say I initially discounted the reports late last week because, based on current chip developments, there seemed little advantage in a difficult architecture change. With the full picture revealed in the keynote as to the long-term advantages and the minimal discomfort for developers, it seems like a reasonable move that will change very little, except give us faster Macs in the future.

How could we have any problem with that?

[Added] Good FAQ from Giles Turnbull at O’Reilly’s Developer Weblog

Categories
Interesting Technology Random Thought

E3 and what it means for the future of production

This week was my first visit ever to an Electronic Entertainment Expo, usually written as E3. Entertainment in this context means gaming – computer games. My desire to visit E3 was driven by a deep-seated feeling that I was ignoring an important cultural trend because it doesn’t intersect my life direction – I don’t play “video games” and I don’t have children, nieces or nephews nearby. But it’s hard to ignore an industry that reportedly eclipsed motion picture distribution in gross revenue last year. It hasn’t – Grumpy Gamer puts things into perspective.

That doesn’t mean that the gaming industry statistics are anything but impressive: Halo 2’s first-day sales of US$125 million eclipse the record first-weekend box office (Spider-Man, 2002) of $114 million – a statistic the gaming industry likes to promote, but one that isn’t as impressive when you consider whole-of-life revenue. Grumpy Gamer again. Gaming is a huge business and E3 had many booths that represent a multi-million dollar investment in the show by the companies exhibiting. E3, like NAB, is an industry-only show (18+ only and industry affiliation required) and this year attracted 70,000 attendees (vs NAB 2005 with about 95-97,000) in a much smaller show area. This is big business.

But what has this got to do with “the present and future of production and post production”? There are three “game” developments that will ultimately impact video production, post production and distribution. This is quite aside from the fact that, right now, video production for games is a big part of the expense of each game. Most games have video/film production budgets way above a typical independent film’s total budget. This presents an opportunity for savvy producers to team up with game producers for “serious games” – games aimed at education or corporate training. Video production alone is rapidly becoming a commodity service, so working with a games company for their acquisition is a value-add and an opportunity in the short term.

Microsoft’s Steve Ballmer

Take today’s passive video content, add a little interactivity to it. Take today’s interactive content, games, and add a little bit more video sequencing to it. It gets harder and harder to tell what’s what…

In the longer term three trends will impact “video” production: graphics quality and rendering, “convergence” (yes, I hate the term too, but don’t have a good alternative), and interactive storytelling.

Graphic Powerhouse

We all owe the gaming community a debt of gratitude for constantly pushing the performance of real-time graphics, and thus the power of graphics cards and GPUs. The post-production industry benefits from real-time performance in applications like Boris Blue, Apple’s Motion, Core Video in OS X 10.4 Tiger and others. Without the mass market for these cards they’d be much more expensive and would not have advanced as quickly as they have.

The quality of real-time graphics coming from companies like Activision – with their current standard-definition release F.E.A.R. and their upcoming HD releases for next-generation gaming consoles – is outstanding. In one demonstration (actual game play) of a 2006 release, the high-definition graphics quality, including human face close-ups, was remarkable. Extrapolate just a few more years and you have to wonder how much shooting will be required. If we can recreate, or create, anything in computer simulation in close to real time (or in real time) at high definition, what’s the role of location shooting and sets? Sky Captain and the World of Tomorrow and Sin City have shown that it’s possible to create movies without ‘real’ sets, although both movies seemed to need extreme color treatments to disguise the lack of reality (or was that purely a creative choice?).

Actors could be safe for a little while because of the ‘Uncanny Valley’ effect, although the soldiers in Activision’s preview were close to lifelike in close-up, as long as you didn’t look at the eyes – the dead giveaway for the moment. Longer term (five-plus years), realistic humans are almost certain to come down the line. At that point, where is the difference between a fixed path in a game and a video production?

Convergence

NAB has, until this year, long been “The Convergence Marketplace” without a lot of convergence happening. The world of gaming, however, converged with the world of movies a long time ago. It is standard practice for a blockbuster movie release to have a themed game available – Star Wars III: Revenge of the Sith and Madagascar had simultaneous movie and game releases. Activision’s game development team were on the Madagascar set and developed 20 hours of game play following the theme of a 105-minute movie!

Similarly, Toronto-based Spaceworks Entertainment, Inc. announced at E3 a Canada/UK co-production, Ice Planet, in which the TV series and game are to be developed together – again with the game developers on the set of the TV series shoot. Although the game and TV series can be enjoyed independently, the plan is to enhance the game via the TV show and the TV show via the game. Game-player-relevant information can be found throughout each of the 22 episodes of the series’ first season – the first of five seasons planned in the story arc.

Whether or not Spaceworks Entertainment are the folks to bring this off, eventually there will be interplay between television and related game play. Television will need something to bring gamers back to the networks (cable or broadcast) if there’s a future to be had there. (Microsoft, on the other hand, wants to bring the networks to the gamers via the Xbox 360.)

Interactive Storytelling

The logical outcome of all this is an advanced form of interactive storytelling that could supplant “television” as we know it. Or not. Traditionally television has been a lean-back, turn-my-mind-off medium, and I imagine there will continue to be demand for this type of non-involved media consumption – it won’t be supplanted by a more active, lean-forward medium. However, the lean-forward medium will be there to supplement and, for many people, replace the non-involved one.

Steven Johnson’s Everything Bad Is Good for You makes some valid points suggesting that the act of gaming might be more important than imagined (and less bad for you). From one review:

The thesis of Everything Bad is Good for You is this: people who deride popular culture do so because so much of popcult’s subject matter is banal or offensive. But the beneficial elements of videogames and TV arise not from their subject matter, but from their format, which require that players and viewers winkle out complex storylines and puzzles, getting a “cognitive workout” that teaches the same kind of skills that math problems and chess games impart. As Johnson points out, no one evaluates the benefit of chess based on its storyline or monotonically militaristic subject matter.

In the same vein, and a little aside, I was amused by this comment posted in Kevin Briody’s Seattle Duck blog that hypothesizes how we would have contemplated books had they been invented after the video game.

“Reading books chronically understimulates the senses…
Books are also tragically isolating…
But perhaps the most dangerous property of these books is the fact that they follow a fixed linear path. You can’t control their narratives in any fashion: you simply sit back and have the story dictated to you…”

How far will we go with non-linear paths? Most games today have fairly limited, linear paths to a single destination, with a lot of flexibility in the journey. I’m no fan of first-person shooter games but can imagine becoming more involved with another type of story. Don your 3D immersion headset, or relax in your lounge with the 60″ wall-mounted flat panel, and join me in this week’s episode of (say) Star Trek TNG. Choose your character and participate in the story appropriately. Clues as to your behavior would be an optional “cheat” track (not dissimilar to the podcasts accompanying this year’s Battlestar Galactica episodes). The rest of the characters would guide the story and respond to your participation, to whatever outcome. Whatever character role you took would control the perspective of the show that you saw (when involved in a scene).

Is this a game? Is it “television”? Is it something else? Storytellers have adapted their stories for specific audiences from the first day there were stories. Roaming storytellers would adapt details for kings or commoners, for this geographic region over that (often for their own self-preservation), so adaptive (interactive) storytelling isn’t new, just new to modern electronic media. Do a search for “interactive storytelling” at google.com and you’ll find many links. I just found that my hypothesis above has a name: Mixed Reality Interactive Storytelling! The marketing people will have to massage that into something that would capture the popular imagination.

None of this will have much impact on typical production this week, next year, or the five years following that, but some time in the future at least some elements will have crossed over. In a very real sense, I went to E3 last week to get a glimpse of “the future of video production”.

Categories
Apple Interesting Technology Random Thought

iTunes becomes a movie management tool

iTunes has been doing movies for some time now – trailers from the Apple Movie Trailers website have been passed through iTunes for full-screen playback, leading many to believe that Apple were grooming iTunes for eventual movie distribution.

Well, iTunes 4.8 will do nothing to dispel the rumor mongers – in version 4.8 iTunes gains more movie management and playback features, including movie playlists and full-screen playback. Simply drag a movie or folders of movies (any .mov or .mp4, whatever the size) into the iTunes interface and they become playlists.

Playback can be in the main interface (in the area otherwise occupied by album artwork); in a separate movie window (non-floating, so it will go behind the iTunes main interface); or at full screen. Visuals can be of individual movies or of playlists – audio always plays the playlist regardless of the setting controlling the visuals.

If one had to speculate (and one does, really, in the face of Apple’s enticement), it certainly seems that Apple are evolving iTunes toward movie management. The primary driver of this development in version 4.8 is the inclusion of “behind the scenes/making of” videos with some albums. For example, the Dave Matthews Band “Stand Up” album in the iTunes Music Store features “an (sic) fascinating behind-the-scenes look at band’s (sic) creative process with the bonus video download.” The additional movie content gives Apple the excuse to charge an extra $2 for the album ($11.99 while most albums are $9.99).

There is a lot of “chatter in the channel” about delivery of movies to computers or to a lounge-room viewing device (derived from a computer but simplified). Robert Cringely, among others, seems to think the Mac Mini is well positioned for the role of lounge-room device. Perhaps; others, like Dave TV, think a dedicated box or network will be the way to go. Ultimately it will come down to two things: content and convenience.

Recreational television and movie watching is a “lean-back” experience – performed relaxed in a comfortable chair at the end of a busy day with little active involvement of the mind. Even home-theater consumption of movies is not quite the same experience as a cinema (although close enough to it for many people). It will take a major shift in thinking for the “TV” to become a “Media Center” outside of the college dorm room. We’re still many years from digital television broadcasting being the norm, let alone HD delivery to in-home screens big enough to actually display it at useful viewing distances. (If you want the HD experience right now on a computer screen, Apple have some gorgeous examples in their H.264 HD Gallery. QuickTime 7, a big screen and a beefy system are prerequisites, but the quality is stunning.)

Apple do not have to move fast, nor be first, with the “Home Media Center” to ultimately be successful. Look at what happened with the iPod and iTunes in the first place. The iPod was neither the first “MP3 player” nor, some would argue, “the best”, but it had a superior overall experience, aided by a huge ‘coolness’ factor. So, even if Apple are planning an ‘iTunes Music Store for Movies’ some time down the path, it’s not something I’d expect to be announced at MacWorld in January 2006 or even 2007!

In the meantime, the new movie management features in iTunes are great. This is not a professional video asset management tool – we’ll have to look elsewhere for that (something I hope the Pro Apps group is working on) – but it is a tool for organizing and playing videos. I have collected show reels and other design pieces I look to for creative inspiration, but until now there was no easy way of organizing them. Now I can import them all to iTunes and create playlists for “titles”, “3D”, “design”, “action” and so on for when I need inspiration. Movies can be in multiple playlists, just like music.

I can’t wait to see what Apple have planned for the future; in the meantime, I’m happy with a new tool in my toolbox.

Categories
Business & Marketing Interesting Technology

Can a computer replace an editor?

Before we determine whether or not a computer is likely to replace an editor, we need to discuss exactly what the role of an editor is – the human being who drives the software (or hardware) that edits pictures and sound. What do they bring to the production process? Having determined that, perhaps we can consider what a piece of computer software might be capable of, now or in the future.

First off, I think we need to rid ourselves of any notion that there is just one “editor” role, even though there is only one term to cover a vast range of roles in post production. Editing an event video does not use the same skills and techniques as editing a major motion picture; documentary editing is different from episodic television; despite the expectation of similarity, documentary editing and reality television require very different approaches. There is a huge difference between the skills of an off-line editor (story) and an on-line editor (technical accuracy), even if the two roles are filled by the same person.

So let’s start with what I think will take a long time for any computer algorithm to do. Nothing in current technology – in use or in the lab – would lead to an expectation that an algorithm could find the story in 40 hours of source and make an emotionally compelling, or even vaguely interesting, 45-minute program. That’s almost certainly not going to happen in my lifetime. There’s a higher chance of an interactive storytelling environment à la Star Trek’s Holodeck (sans solid projection). Conceptually that type of environment is probably less than 30 years away, but that’s another story.

If a computer algorithm can’t find the story or make an emotionally compelling program, what can it do? Well, as we discovered earlier, not all editing is the same. There is a lot of fairly repetitive, assembly-line work labeled as editing: news, corporate video and event videography are all quite routine and could conceivably be automated, if not completely then at least in part. Then there is the possibility of new forms of media consumption that could be edited by software based on metadata.

In fact, all use of computer algorithms to edit relies on metadata – descriptions of the content that the software can understand, analogous to human logging and log notes in traditional editing. The more metadata software has about media, the more able it is to create some sort of edit. For now that metadata will come from the logging process. (The editor may be out of a job, but the assistant remains employed!) That is the current situation, but there’s reason to believe it could change – more on that later in the piece.
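To make that concrete, here’s a minimal sketch of what machine-readable log notes might look like. Every field name here is my own invention for illustration, not taken from any real logging tool:

```python
# A hypothetical machine-readable "log note" for one clip.
# All field names are illustrative, not from any real logging system.
clip = {
    "id": "tape03_shot12",
    "person": "interviewee_A",   # who is on screen
    "shot_size": "medium",       # wide / medium / close-up
    "topic": "childhood",        # what the audio content is about
    "usefulness": 7,             # editor-assigned 1-10 rating
    "in_point": 142.5,           # seconds from start of tape
    "out_point": 158.0,
}

# The more of these fields software has, the more editing
# decisions it can attempt without a human in the loop.
duration = clip["out_point"] - clip["in_point"]
```

The point is simply that once log notes are structured rather than scribbled, an algorithm can reason over them.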

If we really think about what we do as editors on these more routine jobs, we realize that there is a series of thought processes we go through, and underlying “algorithms” that determine why one shot goes into this context rather than another.

To put it at the most basic level, consider editing content from an interview. Two shots of the same person have audio content we want in sequence, but the effect is a jump cut. [If two shots in sequence feature the same person, same shot…] At this point we choose between putting another shot in there – say from another interview – or laying in b-roll to cover the jump cut. […then swap with an alternate shot on the same topic. If no shot with the same topic is available, then choose b-roll.]
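The bracketed rule above can be sketched directly in code. This is a toy illustration only, assuming clips are represented as metadata records (person, shot size, topic) of my own invention:

```python
def resolve_jump_cut(prev, nxt, alternates, b_roll):
    """Apply the rule from the text: if two adjacent shots share the
    same person and shot size (a jump cut), swap in an alternate shot
    on the same topic; failing that, cover the cut with b-roll."""
    is_jump_cut = (prev["person"] == nxt["person"]
                   and prev["shot_size"] == nxt["shot_size"])
    if not is_jump_cut:
        return nxt, None  # no change needed, no cover shot

    # Prefer an alternate shot on the same topic from someone else.
    for alt in alternates:
        if alt["topic"] == nxt["topic"] and alt["person"] != nxt["person"]:
            return alt, None

    # Otherwise keep the shot but lay b-roll over the cut.
    cover = b_roll[0] if b_roll else None
    return nxt, cover
```

What this sketch can’t capture, of course, is the value judgment about whether the alternate shot is any good – which is exactly the point that follows.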

That’s a rudimentary example and doesn’t take into account the value judgment the human editor brings as to whether another interview conveys the story or emotion as well. Most editors are unaware of their underlying thought processes and not analytical about why any given edit “works” – they just know it does. But ultimately that judgment is based on something: some learned skill, some thought process. With enough effort that process can be analyzed and, in some far distant time and place, reproduced in software. Or it could be, except for that tricky emotional element – the thing that makes our storytelling interesting and worth watching.

The more emotion is involved in your storytelling output, the safer your job – or the longer it might be before it can be replaced. 🙂

The examples of computerized editing available right now – Magic iMovie and Muvee Auto Producer – use relatively unsophisticated techniques to build “edited” movies. Magic iMovie essentially adds transitions to avoid jump-cut problems and builds to a template; Muvee Auto Producer requires you to vet shots (thumbs up or down), then uses a style template and cues derived from the audio to “edit” the program. This is no threat to any professional or semi-professional editor with even the smallest amount of skill.

However, it is only a matter of time before some editing functions are automated. Event videography and corporate presentations are very adaptable to a slightly more sophisticated version of these baby-step products. OK, a seriously more sophisticated version, but the difference between slightly and seriously is about three years of development!

In the meantime, there are other uses for “automated” editing. For example, I developed a “proof of concept” piece for QuickTime Live! in February 2002 that used automated editing as a means of exploring the bulk of material shot for a documentary but not included in the edited piece. It was not intended as direct competition for the editor (particularly as that was me); it was intended as a means of creating short edited videos, customized in answer to a plain-language query of a database. The database contained metadata about the clips – extended logging information, really. In addition to who, where and when, there were fields for keywords, a numeric value for the relative usefulness of the clip, and a field of keywords to search for b-roll. [If b-roll matches this, search for more than one clip in the search result, edit them together and lay b-roll over all the clips that use this b-roll.]
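A hedged sketch of how such a query might select and cover clips follows. The field names mirror the description above but are invented for illustration; the real 2002 piece is not available to me, so this is my reconstruction of the idea, not its actual code:

```python
def assemble(clips, query_keywords, max_clips=3):
    """Toy version of the database lookup described in the text:
    pick the most 'useful' clips matching a query, then find b-roll
    keywords shared by all chosen clips so one b-roll sequence can
    be laid over the whole group."""
    matches = [c for c in clips
               if set(query_keywords) & set(c["keywords"])]
    # Rank by the editor-assigned usefulness score, best first.
    matches.sort(key=lambda c: c["usefulness"], reverse=True)
    chosen = matches[:max_clips]

    # B-roll keywords common to every chosen clip.
    shared_b_roll = (set.intersection(
        *(set(c["b_roll_keywords"]) for c in chosen))
        if chosen else set())
    return chosen, shared_b_roll
```

Even this crude ranking-plus-overlap logic is enough to produce a watchable assembly from well-logged material, which is all the proof of concept set out to show.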

So, right now, computer editing can be taught rudimentary skills. This particular piece of software knows how to avoid jump cuts and cut to length based on the quality criteria. It is, in fact, a better editor than many people who don’t know the basic grammar of video editing. Teaching the basic grammar is relatively easy. Teaching software to take some basic clips and cut them into a news item, or even a basic template-based corporate video, is only a matter of putting in some energy and effort.

But making something that is emotionally compelling – not any time soon.

Here’s how I see it panning out over the next couple of years. Basic editing skills from human-entered metadata – easy. Generating that metadata by having the computer recognize the images – possible now but extremely expensive. Having a computer edit an emotionally compelling piece – priceless.

It’s not unrealistic to expect, probably before the end of the decade, that a field tape could be fed into some future software system that recognizes shots as wide, medium, close-up and so on; identifies shots in specific locations and with specific people (based on having been shown examples of each); and transcribes the voice content and the text on signs and elsewhere in the image. Software will recognize poor exposure, loss of contrast and loss of focus, eliminating shots that do not stand up technically. Nothing here is that difficult – it’s already being done to some degree in high-end systems costing more than $300,000 right now. From there it’s only a matter of time before the price comes down and the quality goes up.
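The technical-rejection step is the easiest part to imagine. A minimal sketch, assuming per-clip frame statistics have already been computed (the stat names and thresholds are invented, and real systems analyze actual pixels rather than summary numbers):

```python
def passes_technical_qc(stats,
                        min_luma=0.1, max_luma=0.9,
                        min_contrast=0.05, min_sharpness=0.2):
    """Reject clips that are badly exposed, flat, or soft, based on
    precomputed per-clip statistics (all values normalized 0-1).
    Thresholds here are illustrative only."""
    if not (min_luma <= stats["mean_luma"] <= max_luma):
        return False  # under- or over-exposed
    if stats["contrast"] < min_contrast:
        return False  # loss of contrast
    if stats["sharpness"] < min_sharpness:
        return False  # out of focus
    return True
```

Filtering out the technically unusable shots before any creative decision is made is exactly where automation earns its keep first.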

Tie that together with a template base for common editing formats and variations, and an editing algorithm not much further on than where we are now, and it’s reasonable to expect to feed one or more source tapes into the system in the afternoon and come back next morning to review several edited variations. A couple of mouse clicks to choose the best of each variation and the project’s done – output to a DVD (or next-generation optical disc), to a template-based website, or uploaded to the play-out server.

Nothing here is far-fetched. Developing the basic algorithm was way too easy, and it works for its design goals. Going another step is only a matter of time and investment. Such is the case with anything repetitive in nature: ultimately it can be reproduced in a “good enough” manner. It’s part of a trend I call the “templatorization” of the industry – but that’s another blog discussion. For now, editors who do truly creative, original work need not worry, but if you’re hacking together video in assembly-line fashion, start thinking about that fall-back career.