There certainly wasn’t much new in NLE at NAB. Avid had made their announcements before the show, Apple are keeping to their own schedule that apparently doesn’t include NAB (although Apple folk were in town), and Adobe have a 4.1 update coming for Premiere Pro CS4. The only new NLE version was Sony’s Vegas, which moves up to version 9. With, of course, RED support. Can’t forget the RED support – it was everywhere (again).
Lenses for RED, native support, non-native support: everyone has something for RED, or for the upcoming Scarlet and Epic. Lenses are already appearing for those not-yet-shipping cameras.
Even camera technology seemed to take a year off. I did become convinced of the format superiority (leaving aside lenses and convenience factors) of AVCCAM, the pro version of consumer AVCHD with higher bit rates. What I saw supports the expectation that AVCCAM’s H.264 compression at 21 or 24 Mbit/sec produces a much higher quality image than MPEG-2 at the same bit rate; before this NAB I was only convinced “in theory”. Of course, choose the AVCCAM path and you’ll be transcoding on import to FCP or Avid to much larger ProRes or DNxHD files, a step that is merely optional (though recommended) for HDV or XDCAM HD/EX.
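To put those numbers in perspective, here’s a rough back-of-the-envelope calculation (a sketch in Python; the ProRes and DNxHD bitrates are approximate figures I’m assuming purely for illustration) of how much storage an hour of footage takes as acquired versus after the transcode:

```python
# Rough storage math for an hour of HD footage: acquisition codec vs. edit codec.
# All bitrates here are approximate, illustrative assumptions, not vendor specs.

BITRATES_MBPS = {
    "AVCCAM (high quality)": 24,   # H.264-based acquisition
    "HDV / MPEG-2": 25,            # similar bitrate, older codec
    "ProRes 422 (assumed)": 145,   # typical 1080 figure, assumed for illustration
    "DNxHD 145 (assumed)": 145,
}

def gb_per_hour(mbps: float) -> float:
    """Convert a bitrate in megabits per second to gigabytes per hour."""
    return mbps / 8 * 3600 / 1000  # Mbit/s -> MB/s -> MB/hour -> GB/hour

for name, mbps in BITRATES_MBPS.items():
    print(f"{name:22s} {mbps:3d} Mbit/s  ~{gb_per_hour(mbps):5.1f} GB/hour")
```

The acquisition files are tiny by comparison; the real cost of the AVCCAM path is the transcode time and the post-transcode storage.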
Everyone has a 3D story to tell. Panasonic promise 3D-all-the-way workflows “coming” and there were all sorts of tools on the floor for working with 3D, projecting 3D, viewing 3D… As one of my friends quipped “The presentations were amazing. What’s more I took off my glasses and the 3D experience continued around me!”
I confess to being a little torn on 3D (and Twitter, but that’s another post). I’ve seen some really amazing footage, and some that simply tries too hard to be 3D. I also worry how we’ll adapt to sudden jumps in perspective as the 3D camera cuts to a different shot. I noticed a little of this when viewing an excerpt from the U2 3D concert film. There are natural analogs to cutting in 2D – in effect we build our view of the world from many quick closeups, so cutting in film and TV parallels that.
I can’t think of an analog for the sudden jumps in position and perspective in 3D space that would help our brains adapt. Maybe we’ll just adjust and I’m jumping at shadows? Who knows. I have no plans for 3D any time soon.
Nor do I expect to see Flash supported on a TV in my home for at least a couple of years. That’s the problem Adobe faces in getting support for Flash onto TVs and set-top boxes. For a start it will require a lot more horsepower than those boxes have today, but Moore’s Law will take care of that without a blink. The bigger problem is the slow turnover cycle of televisions. Say it’s six months before the first sets come out (and none are yet announced); it’s probably ten years before any particular provider can rely on the majority of sets being Flash-enabled. Assuming it catches on.
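To sketch that math: assume (purely for illustration) that some fraction of the installed base of TVs is replaced each year, and that every new set ships Flash-enabled from day one. Even then it takes most of a decade to reach a majority of living rooms:

```python
# Toy model of installed-base turnover: if a feature ships in every new TV from
# year one, how long until a majority of the sets in homes have it?
# The replacement rates are illustrative assumptions, not industry data.

def years_to_majority(annual_replacement_rate: float, target: float = 0.5) -> int:
    penetration = 0.0
    years = 0
    while penetration < target:
        # Each year a fraction of the remaining older sets is replaced by new ones.
        penetration += (1 - penetration) * annual_replacement_rate
        years += 1
    return years

for rate in (0.08, 0.10, 0.15):
    print(f"{rate:.0%} of sets replaced per year -> majority in ~{years_to_majority(rate)} years")
```

Anywhere under ten percent annual turnover puts a majority of homes seven to nine years out, before you even add the six-plus months for the first sets to ship.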
So I rather see that as a non-announcement. Remember the cable industry already has its own Tru2way technology for interactivity on set-top boxes.
I am much more interested in Adobe’s new Strobe framework, even though it could take some business away from my own OpenTVNetwork.
For the geeky, my favorite new piece of technology of the show would have to be Blackmagic Design’s Ultrascope: an HD scope package where you just add a PC and monitor to the $695 hardware and software bundle and get a true HD scope at an affordable price.
I’ve already given my opinions on the Blackmagic Design announcements, AJA announcements and Panasonic announcements during the show.
Two more trends this year: storage keeps getting cheaper and better, and voice and facial recognition technologies are becoming more widespread.
I am amazed at how far hardware-RAID protected systems have fallen in cost. Not only the drives themselves but also the enclosures have reached the point where it’s no longer cost-effective to build your own, certainly not if you want RAID 5 or 6.
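If you’re weighing build-versus-buy, remember that parity overhead eats into the raw capacity as well. A quick sketch of the usual RAID arithmetic (the drive count and size are purely illustrative):

```python
# Usable capacity for common RAID levels with N identical drives.
# Drive count and size below are purely illustrative.

def usable_tb(num_drives: int, drive_tb: float, level: str) -> float:
    """Approximate usable capacity, ignoring filesystem and formatting overhead."""
    if level == "RAID 0":
        return num_drives * drive_tb
    if level == "RAID 5":              # one drive's worth of capacity goes to parity
        return (num_drives - 1) * drive_tb
    if level == "RAID 6":              # two drives' worth goes to parity
        return (num_drives - 2) * drive_tb
    raise ValueError(f"unhandled RAID level: {level}")

drives, size_tb = 8, 1.0
for level in ("RAID 0", "RAID 5", "RAID 6"):
    usable = usable_tb(drives, size_tb, level)
    print(f"{level}: {usable:.1f} TB usable of {drives * size_tb:.1f} TB raw")
```

RAID 6 gives up a second drive’s worth of capacity in exchange for surviving two simultaneous drive failures, which matters more as arrays get bigger.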
Five years ago the only company demonstrating facial and speech recognition was Virage, which I didn’t see this year. But there is an increasing number of companies with speech recognition that seems to be, overall, about the same quality as that bundled with Adobe’s Premiere Pro and Soundbooth CS4: it can get reasonably high accuracy with well-paced, clean audio and no accent. Good enough for locating footage.
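“Locating footage” is really just a matter of turning a timed transcript into searchable metadata. A minimal sketch, assuming the speech engine hands back words with their start times (real tools each have their own output format, so this structure is an assumption):

```python
# Minimal sketch: locating footage by searching a timed transcript.
# The (word, start-time-in-seconds) format is an assumption for illustration;
# real speech-to-text tools each have their own output format.

from typing import List, Tuple

Transcript = List[Tuple[str, float]]   # (word, start time in seconds)

def find_term(transcript: Transcript, term: str) -> List[str]:
    """Return rough HH:MM:SS timecodes where the search term was spoken."""
    hits = []
    for word, start in transcript:
        if word.lower().strip(".,!?") == term.lower():
            hours, remainder = divmod(int(start), 3600)
            minutes, seconds = divmod(remainder, 60)
            hits.append(f"{hours:02d}:{minutes:02d}:{seconds:02d}")
    return hits

sample: Transcript = [("The", 12.0), ("sunset", 12.4), ("over", 12.9),
                      ("the", 13.1), ("harbor", 13.3), ("was", 13.8),
                      ("stunning", 14.0)]
print(find_term(sample, "harbor"))     # -> ['00:00:13']
```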
Facial recognition seems to be everywhere, from Google’s Picasa to news transcription services. These tools not only detect cuts but also recognize the people in the shots, prompting when a new face appears.
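That “prompt when a new face appears” behaviour is easy to picture as a simple distance check against the faces already tagged. A toy sketch with invented face embeddings and an arbitrary threshold; real systems obviously use trained recognition models:

```python
# Toy sketch of "prompt when a new face appears": compare each detected face
# embedding against the people already tagged and flag anything unfamiliar.
# The embeddings, names and threshold are all invented for illustration.

import math
from typing import Dict, List

def distance(a: List[float], b: List[float]) -> float:
    """Euclidean distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def tag_or_prompt(face: List[float], known: Dict[str, List[float]],
                  threshold: float = 0.6) -> str:
    """Return the closest known person, or a prompt if nobody is close enough."""
    if known:
        name, dist = min(((n, distance(face, emb)) for n, emb in known.items()),
                         key=lambda pair: pair[1])
        if dist < threshold:
            return name
    return "NEW FACE: prompt the editor for a name"

known_people = {"Anne": [0.1, 0.9, 0.3], "Bono": [0.8, 0.2, 0.5]}
print(tag_or_prompt([0.12, 0.88, 0.31], known_people))   # close to Anne
print(tag_or_prompt([0.5, 0.6, 0.9], known_people))      # unfamiliar, so prompt
```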
How long before the metadata that powers First Cuts no longer has to be entered by a person at all? That’s what really excited me about NAB 2009.