Categories
Interesting Technology Item of Interest Metadata Video Technology

The Future of Picture Editing

The Future of Picture Editing http://bit.ly/aNRLVA

I had the pleasure of meeting Zak Ray when I travelled to Boston. I like people who have an original take on things, and Zak’s approach to picture editing – and his tying it to existing technologies (that may need improvement) – is an interesting one.

And yet, despite such modern wonders as Avid Media Access and the Mercury Playback Engine, modern NLEs remain fundamentally unchanged from their decades-old origins. You find your clip in a browser, trim it to the desired length, and edit it into a timeline, all with a combination of keys and mouse (or, if you prefer, a pen tablet). But is this process really as physically intuitive as it could be? Is it really an integrable body part in the mind’s eye, allowing the editor to work the way he thinks? Though I can only speak for myself, with my limited years of editing experience, I believe the answer is a resounding “no”.

In his now famous lecture-turned-essay In the Blink of an Eye, Walter Murch postulates that in a far-flung future, filmmakers might have the ability to “think” their movies into existence: a “black box” that reads one’s brainwaves and generates the resulting photo-realistic film. I think the science community agrees that such a technology is a long way off. But what until then?

What I intend to outline here is my thoughts on just that; a delineation of my own ideal picture-editing tools, based on technologies that either currently exist, or are on the drawing board, and which could be implemented in the manner I’d like them to be. Of course, the industry didn’t get from the one-task, one-purpose Moviola to the 2,000 page user manual for Final Cut Pro for no reason. What I’m proposing is not a replacement for these applications as a whole, just the basic cutting process; a chance for the editor to work with the simplicity and natural intuitiveness that film editors once knew, and with the efficiency and potential that modern technology offers.

It’s a good article and a good read. Raises the question though – if Apple (or Adobe/Avid) really innovated the interface would people “hate it” because it was “different”?

Categories
Assisted Editing Interesting Technology Item of Interest Metadata Video Technology

Powerful new transcript workflow tool

Powerful new transcript workflow tool – paper cuts without the pain – from Intelligent Assistance (my day job). http://bit.ly/9nQv07

We just launched prEdit, our pre-editing tool for developing paper cuts (a.k.a. radio cuts) from transcripts. prEdit:

  • Lets producers or editors cut transcripts into selects in seconds
  • Adds and updates log notes with auto-complete logging fields
  • Previews the video for any clip, subclip, paper cut or section of paper cut
  • Exports to Excel spreadsheets and Final Cut Pro, or Premiere Pro Sequences
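prEdit’s internals obviously aren’t something I can paste here, but the core idea – cutting a time-stamped transcript into selects you can search by text – can be sketched in a few lines of Python. All the names and the data model below are hypothetical, purely to illustrate the concept:

```python
# Sketch: building a "paper cut" from a time-stamped transcript.
# A transcript is a list of segments; a select is any segment whose
# text matches the producer's search phrase. (Hypothetical data model,
# not prEdit's actual implementation.)

def build_paper_cut(transcript, phrase):
    """Return the segments (in transcript order) whose text contains the phrase."""
    phrase = phrase.lower()
    return [seg for seg in transcript if phrase in seg["text"].lower()]

transcript = [
    {"start": 0.0,  "end": 4.5,  "text": "We moved to Boston in 1998."},
    {"start": 4.5,  "end": 9.0,  "text": "The winters were brutal."},
    {"start": 9.0,  "end": 14.2, "text": "Boston felt like home anyway."},
]

selects = build_paper_cut(transcript, "boston")
for seg in selects:
    print(f'{seg["start"]:.1f}-{seg["end"]:.1f}: {seg["text"]}')
```

Because every segment carries In and Out times, the selected segments translate directly into subclips in a sequence – which is what makes “editing by text” possible.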

“prEdit marks a new generation of postproduction tools,”  say I. “Video editing by text is a whole new way of working that will take weeks out of developing a paper cut.”

prEdit is available now from AssistedEditing.com and carries an MSRP of $395, discounted for an introductory special to $295 until August 31st. The prEdit workflow is described at http://assistedediting.com/prEdit/workflow.html and a video overview is available at http://assistedediting.com/prEdit. The first 80 seconds provide an overview.

The video is now available at YouTube  http://youtu.be/3fV388QsVVA?a

Categories
Metadata Video Technology

How Adobe ‘gets’ metadata workflows

Thanks to an upcoming piece of software we’re working on I’ve been spending a lot of time within the CS5 workflow environment, particularly looking at metadata and the Story workflow, and I’ve come to the conclusion that we’ve been so blinded by the Mercury Engine’s performance that we might not have seen where they’re heading. And I like what I see.

Most readers will likely be aware of the Transcription ability introduced with CS4 and updated in CS5. Either in Soundbooth, or in Adobe Media Encoder (AME) via Premiere Pro for batches, the technology Adobe builds on from Autonomy will transcribe the spoken word into text. Our initial testing wasn’t that promising, but we’ve realized we weren’t sending it any sort of fair test. With good quality audio the results are pretty good: not perfect but close, depending on the source, of course.

We first explored this early in the year when we built and released Transcriptize, to port that transcription metadata from the Adobe world across to Apple’s. That’s what set us down our current path to the unreleased software, but more of that in a later post.

Now we’re back in that world, it’s a pretty amazing “story”. There are three ways they get it, as I see it:

  1. Good metadata tracking at the file level
  2. Flexible metadata handling
  3. Metadata-based workflows built into the CS applications (and beyond).

Balancing that is the serious miss of not showing source metadata from non-tape media that doesn’t fit into the pre-defined schemas. At least that seems to be the case: I can’t find a Metadata Panel that shows the Source Metadata from P2, AVCCAM/AVCHD, or RED. Some of the source metadata is displayed in existing fields, but only in the fields that Adobe has built into Premiere Pro, which miss a lot of information from the source. For example, none of the exposure metadata from RED footage is displayed, nor Latitude and Longitude from P2 and AVCCAM footage.

That’s the downside. To be fair, Final Cut Pro doesn’t display any of the Source Metadata either (although you can access it via the XML file). Media Composer can show all the Source Metadata if desired.

Good Metadata Tracking at the file level

Apple added QuickTime Metadata to Final Cut Pro 5.1.2 where they retain and track any Source Metadata from files imported via Log and Transfer. This is a flexible schema but definitely under supported. Adobe’s alternative is XMP metadata. (Both XMP and QuickTime metadata can co-exist in most media file formats.)

XMP metadata is file based, meaning it is stored in, and read from, the file. There are seven built-in categories, plus Speech Analysis, which is XMP metadata stored in the file (for most formats) but considered as a different category in the Premiere Pro CS5 interface. I believe that the Source metadata should show in the XMP category because it is file-based, even if it’s not XMP.

On the plus side, XMP metadata is very flexible. You don’t need third party applications to write to the XMP metadata. Inside Premiere Pro CS5 you simply set up the schema you want and the data is written to the file transparently. If the data is in a file when it’s added to a project, it’s read into the project and immediately accessible.

This metadata travels with the file to any and all projects. This provides a great way of sending custom metadata between applications. Speech Analysis metadata is also carried in the file, so it can be read by any Adobe application (and an upcoming one from us, see intro paragraph) direct from the file.
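Part of what makes this “travel with the file” behavior possible is that XMP is just an XML packet embedded in the media file, delimited by well-known markers, so any application that can scan the bytes can find it. A minimal sketch of locating that packet (the toy “media file” here is fabricated for illustration):

```python
# Sketch: finding the embedded XMP packet in a media file.
# XMP is stored as an XML "packet" delimited by <?xpacket begin ... ?>
# and <?xpacket end ... ?> markers, which is why it can be read by any
# application without a proprietary API.

def extract_xmp(data: bytes):
    """Return the XMP packet as text, or None if the file has none."""
    start = data.find(b"<?xpacket begin")
    if start < 0:
        return None
    end_marker = data.find(b"<?xpacket end", start)
    if end_marker < 0:
        return None
    close = data.find(b"?>", end_marker)
    return data[start:close + 2].decode("utf-8", errors="replace")

# A toy "media file": an XMP packet surrounded by binary data.
fake_file = (b"\x00\x01BINARY"
             b'<?xpacket begin="\xef\xbb\xbf" id="W5M0MpCehiHzreSzNTczkc9d"?>'
             b"<x:xmpmeta xmlns:x='adobe:ns:meta/'>...</x:xmpmeta>"
             b'<?xpacket end="w"?>'
             b"\x00TRAILER")
print(extract_xmp(fake_file)[:15])
```

Real tools use Adobe’s XMP Toolkit rather than a raw byte scan (some formats store the packet in sidecar files or format-specific boxes), but the principle is the same.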

Flexible Metadata Handling

Not only is the XMP file-based metadata incredibly flexible, but you can also apply any metadata scheme to a clip within a Premiere Pro project, right into Clip metadata. For an example of how this is useful, let’s consider what we had to do in Final Cut Pro for First Cuts. Since Final Cut Pro doesn’t have a flexible metadata format, we had to co-opt Master Comments 1-4 and Comment A to carry our metadata. In Premiere Pro CS5 we could simply create new Clip-based fields for Story Keywords, Name, Place, Event or Theme and B-roll search keywords.

(Unfortunately this level of customization in Premiere Pro CS5 does not extend to Final Cut Pro XML import or export.)

An infinitely flexible metadata scheme for clips and for media files (independently) is exactly what I’d want an application to do.

Metadata-based Workflows in the CS5 Applications

To my chagrin I only recently discovered how deeply metadata-based workflows have become embedded in the Adobe workflow. (Thanks to Jacob Rosenberg’s demonstration at the June Editor’s Lounge for turning me on to this.) Adobe have crafted a great workflow for scripted productions that goes like this:

  1. Collaboratively write your script in Adobe Story, or import a script from most formats, including Final Draft. (Story is a web application.)
    • Adobe Story parses the script into Scenes and Shots automatically.
  2. Export from Adobe Story to a desktop file that is imported into OnLocation during shooting.
    • In OnLocation you have access to all the clips generated out of the Adobe Story file. Clips can be duplicated for multiple takes.
    • Clips are named after Scene/Shot/Take.
  3. During shooting you do not need to have a connection to the camera because some genius at Adobe realized that metadata could solve that problem. All that needs to be done during shooting of any given shot/take is for a time stamp to be marked against the Clip:
  • i.e. this clip was being taken “now”.
    • Marking a time stamp is a simple button press with the clip selected.
  4. After footage has been shot, the OnLocation project is “pointed” to the media, where it automatically matches each shot with the appropriate media file by comparing the time stamp metadata in the media file with the time mark in the OnLocation Clip.
    • The media file is renamed to match the clip. Ready for import to Premiere Pro CS5.
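The matching step above is conceptually simple: each clip carries the moment its button was pressed, each media file knows when recording started and stopped, and a clip belongs to whichever file was recording at that moment. A sketch (all names and data hypothetical, not Adobe’s actual implementation):

```python
# Sketch: OnLocation-style clip/media matching by time stamp.
# A clip matches the media file whose recording span contains the
# clip's time mark.

def match_clips_to_media(clips, media_files):
    """Map each clip name to the path of the media file recording at its mark."""
    matches = {}
    for clip in clips:
        for mf in media_files:
            if mf["rec_start"] <= clip["mark"] <= mf["rec_end"]:
                matches[clip["name"]] = mf["path"]
                break
    return matches

clips = [
    {"name": "Sc12_ShA_Tk1", "mark": 1005.0},
    {"name": "Sc12_ShA_Tk2", "mark": 1100.0},
]
media_files = [
    {"path": "CARD01/0001.MXF", "rec_start": 990.0,  "rec_end": 1050.0},
    {"path": "CARD01/0002.MXF", "rec_start": 1080.0, "rec_end": 1140.0},
]
print(match_clips_to_media(clips, media_files))
```

Once a match is made, renaming the media file after the clip’s Scene/Shot/Take name is trivial – which is exactly why no camera connection is needed during the shoot.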

Now here’s the genius part in my opinion (other than using the time stamp to link clips). The script from Adobe Story has been embedded in those OnLocation clips, and is now in the clip. Once Speech Analysis is complete for each clip, the script is laid-up against the analyzed media file so each word is time stamped. The advantage of this workflow over using a guide script directly imported is that the original formatting is used when the script comes via Story.
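The “laying up” of the script against the analyzed media boils down to an alignment problem: the recognized words come with times, the script words don’t, so you walk both lists in order and copy times across where they match. Here’s a deliberately crude greedy sketch of the idea (real speech-alignment engines are far more robust):

```python
# Sketch: aligning a script against speech-analysis output so each
# script word picks up a time stamp. Greedy, in-order matching;
# words the recognizer missed get no time (None).

def align(script_words, recognized):
    """Pair each script word with the time of the next matching recognized word."""
    aligned, i = [], 0
    for word in script_words:
        t = None
        for j in range(i, len(recognized)):
            if recognized[j][0].lower() == word.lower().strip(".,!?"):
                t = recognized[j][1]
                i = j + 1
                break
        aligned.append((word, t))
    return aligned

script = ["Hello", "and", "welcome", "back."]
recognized = [("hello", 0.4), ("welcome", 1.2), ("back", 1.6)]
print(align(script, recognized))
```

The payoff is exactly what the Story/OnLocation workflow delivers: every word of the original, properly formatted script becomes a searchable, time-addressable point in the media.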

All that needs to be done is to build the sequence based on the script, with the advantage that every clip is now searchable by word. Close to, but not quite, Avid’s ScriptSync, which is based on an entirely different technology (Nexidia).

It’s a great use of script and Speech Analysis and a great use of time-stamp metadata to reduce clip naming, linking and script embedding. A hint of the future of metadata-based workflows.

All we need now, Adobe, is access to all the Source Metadata.

Categories
Apple Metadata Video Technology

How serious is Apple about metadata?

During a recent thread here where I “infamously” suggested Apple should drop Log and Capture for the next version of FCP, one of the topics that came up was the use of metadata. Most commenters (all?) appeared – to my interpretation – to feel that reel name and TC were the “essence” of metadata.

And yet, if we look at the most recent work of the Chief Video Architect (apparently for both pro and consumer applications), Randy Ubillos, we see that Location metadata is a requirement for the application. According to Apple’s FAQ for iMovie for iPhone, if you don’t allow iMovie for iPhone to access your location metadata:

Because photos and videos recorded on iPhone 4 include location information, you must tap OK to enable iMovie to access photos and videos in the Media Library.

If you do not allow iMovie to use your location data, then the app is unable to access photos and videos in the Media Browser.

You can still record media directly from the camera to the timeline but, without the Location metadata, you’re pretty much locked out of iMovie for iPhone for all practical purposes.

There is no location metadata from tape capture! There’s not much from non-tape media right now, although some high end Panasonic cameras have an optional GPS board. However P2 media (both DVCPRO HD and AVC-I) as well as AVCCAM all have metadata slots for latitude and longitude.

Now, I’m NOT saying that Apple should force people to use metadata – particularly if it’s non existent – and this type of restriction in a Pro app would be unconscionable. I merely point out that this shows the type of thinking within Apple. In iMovie for iPhone they can create a better user (consumer) experience because they use Location metadata for automatic lower third locations in the themes.

Where I think it’s a little relevant is in counterpoint to some of my commenters: building an app that’s reliant on metadata is a different app than one relying on simple reel name and TC numbers.

Categories
Apple Metadata Video Technology

How is Apple using metadata in iMovie for iPhone?

I was finally watching the Steve Jobs Keynote from WWDC on June 7. (I know, but this was our second try – we get talking about stuff, what can I say?) I got to the iMovie for iPhone 4 demo and was blown away by the creative use of source metadata.

At 58 minutes into the keynote, Randy Ubillos is demonstrating adding a title to the video he’s editing in iMovie, and iMovie automatically adds the location into the title. It’s not magic: iMovie is simply reading the location metadata stored with images and videos shot with an iPhone and using that to generate part of the title. This is exactly how metadata should be used: to make life easier and to automate as much of the process as possible.

Likewise the same metadata draws a location pin on the map in one of the different themes. Exactly like the same metadata does in iPhoto.

In a professional application, that GPS data – which is coming to more and more professional and consumer video camcorders – could not only be used to add locations, but also to read what businesses are at the address. From that source and derived metadata (address and business derived from location information) we can infer a lot.
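The derivation step – going from raw latitude/longitude to a human-readable location for a lower third – is essentially a reverse-geocoding lookup. A toy sketch of the idea, using a tiny hand-built place table as a stand-in for a real geocoding service (the table and function names are purely illustrative):

```python
import math

# Sketch: turning GPS source metadata into a lower-third location
# title, roughly the kind of thing iMovie for iPhone appears to do.
# PLACES stands in for a real reverse-geocoding database/service.

PLACES = {
    "Boston, MA": (42.3601, -71.0589),
    "Las Vegas, NV": (36.1699, -115.1398),
    "Burbank, CA": (34.1808, -118.3090),
}

def location_title(lat, lon):
    """Return the name of the nearest known place to the given coordinates."""
    def dist(place):
        plat, plon = PLACES[place]
        return math.hypot(lat - plat, lon - plon)
    return min(PLACES, key=dist)

print(location_title(36.12, -115.17))
```

A real implementation would query a geocoding service and could go a step further, as suggested above, returning the business at that address as derived metadata.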

Check out my original article on metadata use in post production; for a more detailed version, with some pie-in-the-sky predictions of where this is going to lead us, download the free Supermeet Magazine number 4 and look for the article (featured on the cover) The Mundane and Magic Future of Metadata.

Categories
Apple Distribution Media Consumption Video Technology

What is it with Flash?

I’ve just been reading my daily round of news, and there’s still more on the whole “Flash v HTML5” or “Flash v H.264” thing and I’m just arrogant enough to believe I can contribute something here.

Flash is an interactive player that produces a consistent result across browsers and platforms. That’s why publishers like it. But most Flash use is at a very basic level: a simple video player. That is also why early QuickTime interactive programmers liked to use Flash (yes, as a QT media type) for controls and text, as QT text did not display consistently across platforms.

Flash is a player and not a codec or file format. The current iteration of the Flash player plays:

  • the original “Flash video” format, which is sequential JPG files, up to 15,000 a movie
  • Sorenson Spark, the first real video codec for Flash; based on the very ancient H.263 videoconferencing codec it did not produce good video quality.
  • On2 VP6, a good, high quality codec now owned by Google with their purchase of On2. Still not a bad choice for Flash playback if you need to use an alpha channel for real-time compositing in Flash.
  • H.264 in MP4 or MOV (with limitations) format. Licensed from Main Concept (now owned by DivX).

Note that those same H.264/MP4 files can be played on Apple’s iDevices using the built-in player; or using the <video> tag supported by HTML5 in Safari or Chrome (and IE9 coming sometime).

Flash as a simple video player is probably dead in the water. Flash for complex interactivity and rich media experiences probably will continue for a while, at least until there are better authoring environments for the more complex interactivity provided in “HTML5”.

That brings me to HTML5, which is not a simple player but a revision of the whole set of HTML tags supported by browsers, allowing native video playback by the browser without a plug-in (the <video> tag); local storage (similar to Google’s temporary Gears offering, now replaced by HTML5 support); and a whole bunch of other goodies. Add to this CSS for complex display (and I mean complex – mapping video to 3D objects in the browser, for example); Javascript for interactivity and connectivity to remote servers/databases; and SVG (Scalable Vector Graphics) for creating graphic elements in a browser (useful for interface elements in rich media).

Javascript used to be very slow and not even comparable to the speed of interactivity possible in Flash, but over the last three years all Javascript interpreters have become massively faster, making complex software possible in the browser. (Check out Apple’s implementation of iPhoto-like tools in their Gallery – online version.)

Summing up: HTML5/CSS/Javascript is already very powerful. Check out Ajaxian for examples of what is already being done. For simple video playback, Flash is probably not the best choice; MPEG-4 with H.264 video and AAC audio probably is. For rich interactivity targeted at anything Apple, build it with HTML5/CSS/Javascript – it’s the only choice. It is also a powerful one: Apple’s iTunes Albums are essentially HTML5-based mini-sites; iAds are all HTML5/CSS/Javascript based and not lacking in rich interactivity or experience.

If you’re building a rich media application to connect with a web backend targeting mostly desktop computers, then Flash could still be the best choice.

For building Apps for iPhone and iPad: use the Xcode tools Apple provides free. While Adobe might be complaining to the Feds looking for “anti-trust” sympathy, they won’t get it, as Apple is nowhere near a monopoly in any market, which has to be proven before the question of whether they have abused a monopoly position even arises. Apple are not the dominant smartphone manufacturer, nor the dominant MP3 player maker, nor the dominant tablet manufacturer. (OK, they probably are dominant in MP3 players and tablets, but they are not, by definition, a monopoly, and Apple will work very hard to ensure they never are.)

Categories
Apple Video Technology

There’s no QuickTime on Apple’s Mobile Devices Either!

In the discussion about Flash-on-iDevices following yesterday’s post it occurred to me that not only was there no Flash on the iPhone, et al., but there was no QuickTime either!

Not what QT was, at least. The iDevices support H.264 video and AAC audio, primarily in an MPEG-4 file wrapper (although some devices will play H.264/AAC in a MOV wrapper), which is really not what QuickTime has been (more below). Try playing a Sorenson video file on an iPad. What about QuickTime interactivity (Wired Sprites)? Ever seen a QT VR play on an iPhone?

Of course not. QuickTime is not supported on any Apple device other than desktop and laptop computers. I also believe that the QT I loved and evangelized heavily late last century is destined for the scrapheap. It’s been increasingly obvious since around 2001/2002 that Apple decided the future of web video was MP4: open standards. Initially they supported the MPEG-4 Simple Profile (just “MPEG-4” in Apple’s world) in QuickTime 6, and then H.264, the Advanced Video Codec from MPEG-4 Part 10.

Now, a lot of MPEG-4 is adopted from QuickTime. Apple donated the QT container to the MPEG group for consideration as their container format. Because of that MPEG-4 can do pretty much anything that QT could do, except there are very few implementations of anything beyond basic video playback. So with the QT container at the center of MPEG-4 it was easy for Apple to adopt and support this evolving (at the time) technology.

So QuickTime became the pre-eminent MPEG-4 player. When it came to the Apple TV, iPhone, iPod touch and now iPad, the decision was made to only support simple MP4 playback. When QuickTime X was announced it referenced “the experience of the iPhone video”, suggesting that QuickTime X was a different approach. Now that it has been released, it’s clear that QuickTime X will be the next generation of consumer-facing video playback.

So I expect that QuickTime X will never get the advanced features that QuickTime currently has. There’s no business model for it within Apple, which was always the problem with QuickTime. Frankly, the fact that Apple never provided a development environment is why Flash was able to so quickly “take over”. Remember that in QuickTime 6, Flash 5 was a supported media type. (Support was dropped because of security concerns with that version of Flash.) It took Flash until version 8 to equal all the features of QuickTime 3! (Seriously.)

Few people made use of the advanced features of QuickTime. Our Australian company was one of them, making all the movies for the DV Companion for Final Cut Pro, and most of the other Intelligent Assistants, with QuickTime wired sprite animations so the file size was acceptable. We were in the era of small hard drives, after all. There was never a development environment from Apple: Totally Hip stepped up with the development environment we used (LiveStage Pro). Had there been a business model within Apple for QuickTime, then the story of the web would have been different.

The advanced features in QuickTime have had no development since, well, QuickTime 4. I believe, without proof, that there was a fundamental shift within Apple around that time to abandon the features they could get no return on, and to make QuickTime the best MPEG-4 player: a great architecture for creating media and the foundation of their total media strategy, but without the advanced features, because by this time Flash had “won” the interactivity war.

Now we can have better interactivity using features from HTML5, Javascript and CSS, which are all web standards overseen by a body outside of one company. It’s not just Flash that won’t see the iDevices, but any resemblance to the old QuickTime won’t make it either.

And I’m OK with that. QuickTime – MOV distribution – served Apple well and continues to power their iLife applications and Professional Video and Audio applications, but without the features that it had and no longer needs. Apple are always “good” at dumping technology that no longer meets their needs. I think it’s one of Jobs’ strengths.

I also believe Apple are being consistent by not allowing Flash: it’s on a par with their own technology also not getting on the platform.

Categories
Apple Pro Apps Interesting Technology Video Technology

What are my thoughts on NAB 2010?

By now you’ve likely been exposed to news from NAB – at least I hope so. If not head over to Oliver Peter’s blog and read up on what you missed. Rather than rehash the news I’d like to put a little perspective on it.

Digital Production BuZZ

The little show that I co-created nearly five years ago, after a successful five years with DV Guys (although I was only managing editor for the last 3 years of that show), has been the official NAB Podcast for 2009 and 2010. Big props to Larry Jordan, Cirina Catania, Debbie Price and the amazing team they put together for NAB 2010. I filed some special reports, which you can hear among the more than 70 shows the team pulled together in the six days of NAB.

3D Everywhere

Whether it was Panasonic’s “Lens to Lounge” or Sony’s “Camera to Couch”, 3D was everywhere. Everywhere except actually being able to do something with all the 3D content we’re being pushed to produce. I’m aware that the top grossing movies last year were 3D and that 3D movies perform better than 2D. I just don’t see that as being relevant to my universe, where I don’t distribute my work through a major studio to 2,000 cinemas.

So short of that, where’s the outlet for all the 3D? YouTube plays 3D (but is incredibly hard to monetize). The Blu-ray 3D spec is finalized, but no shipping players, burners or encoders are available.

While I have no real quibble with the cinema experience – although films need to be designed for 3D, and shot with 3D in mind, to be successful 3D experiences (and few are) – I am very skeptical about 3D in the home, at least for the next couple of years. The problems of the glasses (I multitask a lot of the time while watching TV; what about visitors, or preparing dinner?) and the very different nature of 3D viewing seem to limit the future of 3D in the home to those who have dedicated home theaters and dedicated, monotasking viewing time.

The missing Apple

Of course, if you’re a regular reader you’ll know it came as no surprise that Apple wasn’t at NAB. They don’t do trade shows any more so it was highly unrealistic to expect anything at NAB this year, next year, or any year. When they have something to announce, they’ll announce it.

You’ll also be aware that I believe Apple is doing a lot of what they need to do with Final Cut Pro to make it the “awesome” release that Steve Jobs tells us it will be. Maybe 2011 some time, but more likely early 2012 for the next awesome Final Cut Studio release. Or whenever Apple is ready!

Avid Media Composer 5 and editing in the cloud

The new management (current management) at Avid certainly appear to be spot on track. Media Composer 3.5, 4 and now 5 have all been great releases. As more of the work this management team are pushing comes to the public, the more I see the company back on track.

In fact, hearing “interoperable” and “openness” sprinkled regularly into the press event and marketing materials seems slightly out of character for the old Avid, but is very welcome. Direct editing of QuickTime media, HDSLR or RED media via AMA for quick turn-around content is a huge advancement. Improvements to audio filters (and eventual round-tripping to a future version of ProTools) are long-standing requests from Avid’s customers. Even the “expensive” monitoring (output only) requirement has gone thanks to support for an MXO Mini for monitoring. (I wish that had been an option back in January – it would have saved a client of mine about $18K!)

While only a “technology demonstration” at this point, Avid’s “edit in the cloud” (i.e. over the Internet or from a Local Server) looks like the real deal. Scott Simmons has a review of the demo over at Pro Video Coalition. Avid is back and we like it.

Adobe CS5

I doubt there’s much to add to Adobe’s CS5 announcements. The Mercury Engine is a major step forward in performance and it will take the others a while to catch up. To be competitive Apple would have to rewrite FCP to 64 bit and then implement Grand Central Dispatch and OpenCL to deliver that level of performance (and that’s what I expect they’re doing). Adobe’s platform-agnostic code (at the core) has made it easier for them to move to 64 bit, and tight integration with Nvidia’s CUDA engine, on top of some mighty software optimizations, gives the performance boost.

The whole Master Collection is a must-have for post production for After Effects, Encore, Photoshop and Illustrator alone. Premiere Pro is a bonus and could well become the Swiss Army Knife of editing tools as it supports pretty much any format natively.

Pick of the show

The pick of the show for me is, without a doubt, Get: phonetic search for Final Cut Pro. Search your clips for specific words wherever they occur. The exact opposite of Adobe’s Transcription (although that can be boosted by feeding it a script in CS5), Get does not attempt to derive meaning from the waveforms that make up the audio. Instead it predicts what the waveform for your search terms should look like, then tries to match it in your media.
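Get’s actual engine (Nexidia’s) is proprietary, but the phonetic-search idea can be sketched simply: turn the search term into a phoneme sequence and look for that sequence in a phoneme stream decoded from the media, never converting anything to words. The tiny pronunciation dictionary below is purely illustrative:

```python
# Sketch of phonetic search in the spirit of Get: match phoneme
# sequences, not words. A real system matches against acoustic
# models of the audio; here the "decoded" stream is just a list.

PRONUNCIATIONS = {
    "metadata": ["M", "EH", "T", "AH", "D", "AE", "T", "AH"],
    "media":    ["M", "IY", "D", "IY", "AH"],
}

def find_phonetic(term, phoneme_stream):
    """Return start indices where the term's phoneme sequence occurs."""
    target = PRONUNCIATIONS[term]
    n = len(target)
    return [i for i in range(len(phoneme_stream) - n + 1)
            if phoneme_stream[i:i + n] == target]

# A toy phoneme stream, as if decoded from a clip's audio:
stream = ["DH", "AH", "M", "EH", "T", "AH", "D", "AE", "T", "AH", "Z"]
print(find_phonetic("metadata", stream))
```

Because nothing is ever forced into dictionary words, names, jargon and mumbled speech are all findable – which is why this approach suits documentary rushes so well.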

It has certainly set my thinking cap buzzing. What we could do at Assisted Editing with this technology would be amazing – almost delivering my “magic future” for metadata I spoke of at my two presentations. But for now, Get is an amazingly powerful tool that every documentary filmmaker will want to be using.

Hardware trends you might have missed

Not many of the main news streams picked up on the trend to multiple cards, or multi-channel cards, at this NAB. Obviously 3D-capable cards were announced (by AJA and Blackmagic Design), but AJA also announced that multiple Kona cards can co-exist in the one host computer; Blackmagic Design announced a dual-channel card; and Matrox promised a four-channel I/O card.

What we’ll be using this multi-channel capability for, I’m not quite sure, as no software supports it yet. Except that Blackmagic Design used to have a two-channel software switcher in their product range (although it seems to be missing from their website right now). A dual-channel Decklink card, with software switcher, makes a very powerful and inexpensive studio or location tool with a Mac Pro. It seriously undercuts dedicated switchers from Focus Enhancements or Panasonic.

$999 daVinci

Blackmagic Design almost deserve a post of their own for the NAB announcement (that you no doubt followed here) of the $999 software-only daVinci. Scott Simmons reminded me in a Tweet that I had accurately predicted a dramatic price drop for the daVinci system. What I didn’t predict was how far, and how fast, Grant Petty would drop the price. What I expected to come in at $60K was announced as a turnkey system for $30,000! I didn’t expect the software-only version, although in reality, with hardware, monitors, scopes and storage, that’s still likely a $20,000 investment, for what used to be $300,000 or more.

This is, of course, consistent with everything that Grant Petty has done with Blackmagic Design. I remember the first Decklink announcement (on the DV Guys show) at under $1,000 and everyone wondered how the industry would cope. Those cards are now much more powerful, and even cheaper, and now we’re going down the same path with daVinci.

Friends, fun and the Future

For me, NAB is as much about friends as it is about the technology. It’s a time when my virtual communities intrude into real space. Once again, NAB proved to be two days too long and four nights too short. With about 20 parties happening Monday night and a similar number Tuesday, we need more nights to spread them over, and fewer days. I was done with the show floor by Tuesday afternoon and there were two days to run.

This year’s MediaMotion Ball was a great social event, as it always is; running into the Adobe party following. Tuesday’s Supermeet broke new ground with the “Three A’s” on stage together for the first time.

I made my contribution to the show via my Supermeet Magazine article, The Mundane and Magic Future of Metadata, which I also delivered as a presentation at the ProMax event and in the Post Pit on the show floor. The Supermeet Magazine should be available soon from Supermeet.com.

The future of post production automation is metadata. Check out the article and tell me what you think.

And that’s my NAB wrap for 2010. Other than to say, worst WiFi experience ever at the Sahara. Expensive and slow. It’s time for broadband to be included in the price of a room, like air conditioning (didn’t use); the Television (only to get the sign up details for the Internet connection); etc.

Categories
Item of Interest Video Technology

What else did Avid announce at NAB 2010? [Corrected]

From my NAB BuZZ special report:

Two words I never thought I’d associate with Avid are Interoperability and Openness, and yet this has been the theme of Avid’s 2010 announcements.

Since the recent rebranding of the five Avid-owned companies as one Avid, the company is using the tag line “We’re Avid. Are you?” and has constantly pushed the idea that Avid is open and stands for interoperability.

Heading off their event announcements was the concept of an Integrated Media Enterprise with an open media catalog for metadata management. Metadata management means that media can be found when it’s needed using search functions. New is the ability to view metadata in a Media Composer timeline.

Integrated Media Enterprise is at the core of Avid’s “any media, any platform” editing solution across diverse locations. Also important is a new revision to Interplay: Interplay 2, which comes in two forms. Interplay Media Asset Management is based on Avid’s recent purchase of Blue Order, featuring an open, modular architecture and support for unified media management.

The Original Interplay is now known as Interplay Production to pair with the Interplay Media Asset Manager.

In the editorial field, Media Composer 5 has been announced for a mid-July ship date. This new version expands the Avid Media Architecture to support RED, Canon XF and QT media natively, with instant access and no transcoding required. My personal favorite new feature for Media Composer is the end of Segment mode: I can drag and drop clips in a sequence just like I’d expect. Also new is support for AVC-50 and 100 throughout the entire Avid product line, and real time audio filters in the timeline. [Updated] Although it was stated at the press conference that these audio filters carry through to a ProTools session, and that filter-setting changes made in ProTools are carried back to Media Composer, this is Avid’s intention only: until ProTools gets an update to support it, the workflow is not complete.

Rounding out the day’s announcements was the purchase of Euphonix by Avid to enhance their range of control surfaces so they can support more market needs.

Check out the Avid announcements at Avid.com

Categories
Item of Interest Video Technology

What did Sony announce at NAB 2010?

Sony’s press meeting was long and tedious, with nothing of interest to me (or, presumably, my audience). They showed a lot of 3D (please put your glasses on, take them off for the next bit, put them on again, repeat endlessly) even though they only have one 3D-capable camera (and you need two to get stereoscopy).

Everything had been previously announced, but still, apparently it’s important to waste the time of a couple of hundred journalists during their busy NAB schedule just to talk about what their customers are doing.