Categories: Item of Interest, Video Technology

What else did Avid announce at NAB 2010? [Corrected]

From my NAB BuZZ special report:

Two words I never thought I’d associate with Avid are Interoperability and Openness, and yet this has been the theme of Avid’s 2010 announcements.

Since the recent rebranding of the five Avid-owned companies as one Avid, the company has been using the tag line “We’re Avid, are you?” and constantly pushing the idea that Avid is open and stands for interoperability.

Leading off their event announcements was the concept of an Integrated Media Enterprise with an open media catalog for metadata management. Metadata management means that media can be found when it’s needed using search functions. New is the ability to view metadata in a Media Composer timeline.

Integrated Media Enterprise is at the core of Avid’s “any media, any platform” editing solution across diverse locations. Also important is a new revision to Interplay: Interplay 2, which comes in two forms. Interplay Media Asset Management is based on Avid’s recent purchase of Blue Order, featuring an open, modular architecture and support for unified media management.

The Original Interplay is now known as Interplay Production to pair with the Interplay Media Asset Manager.

In the editorial field, Media Composer 5 has been announced for a mid-July ship date. This new version expands the Avid Media Architecture to support RED, Canon XF and QuickTime media natively, with instant access and no transcoding required. My personal favorite new feature for Media Composer is the end of Segment mode: I can drag and drop clips in a sequence just like I’d expect. Also new is support for AVC-Intra 50 and 100 throughout the entire Avid product line, and real-time audio filters in the timeline. [Updated] Although it was stated at the press conference that these audio filters carry through to a ProTools session and that any changes made to filter settings in ProTools are carried back to Media Composer, this is Avid’s intention; until ProTools gets an update to support it, the workflow is not complete.

Rounding out the day’s announcements was Avid’s purchase of Euphonix, which expands their range of control surfaces to support more market needs.

Check out the Avid announcements at Avid.com

Categories: Item of Interest, Video Technology

What did Sony announce at NAB 2010?

Although it was a long and tedious press meeting, Sony had nothing of interest to me (or, presumably, my audience). They showed a lot of 3D (please put your glasses on, take them off for the next bit, put them on again, repeat endlessly) even though they only have one 3D-capable camera (and you need two to get stereoscopy).

Everything had been previously announced, but still, apparently it’s important to waste the time of a couple of hundred journalists during their busy NAB schedules just to talk about what their customers are doing.

Categories: Interesting Technology, Item of Interest, Video Technology

What did Panasonic announce at NAB 2010?

Panasonic’s 2010 announcements centered around the adoption of AVC-Intra and their affordable 3D camera, the AG-3DA1, with the most interesting announcement being their 4/3″-sensor AG-AF100.

Naturally Panasonic reminded us of their leadership in IT-based workflows and I was surprised to be reminded that it’s been 7 NABs since the release of the first P2-based camcorder. This year they focused on the adoption of AVC-Intra with new partners and their affordable AVCCAM format recording to SD media.

A new AVCCAM shoulder-mounted camera has been announced for delivery later this year: the AG-HM80. Featuring full-raster 3 Mpixel sensors with recording to AVCCAM or SD DV, the camera has interchangeable lenses, optical image stabilization, a manual focus ring and user-assignable controls. Naturally, 24P native recording at 1080 and 720 image sizes is supported along with typical video frame rates, and it features Dynamic Range Stretch for better control of mixed-light situations and Cine-like gamma. With a full range of inputs and outputs including HDMI, USB 2, composite, component and 1394 for the DV signal, the HM80 includes a 2-year warranty in its recommended price of just $2895.

However, the most interesting announcement was the large-sensor AG-AF100, expected to ship later this year. Featuring a 4/3″, 12.1 Mpixel sensor and support for professional audio I/O, the AF100 records in all four AVCCAM modes to SD/SDHC or the new SDXC cards in dual slots. While the price was not announced, it is expected to carry an MSRP of around $6000 and a street price a bit lower.

Along with the camera announcements were a new P2 studio deck, the AJ-HPD2500, and a 3D production monitor due in September 2010.


Categories: Item of Interest, Video Technology

What are Avid doing with Media Composer 5?

Well, it turns out I was right yesterday – the Supermeet Agenda did reveal a little of Avid’s Media Composer 5 announcement: native QuickTime/H.264 workflows for Canon 5D and 7D cameras, but that was only a fraction of what was announced.

Native QuickTime media support

Anything QuickTime Player can play, Media Composer can use as a source. This includes ProRes, Cineform and the H.264 mov files that the Canon cameras produce with no transcoding, rewrapping, or logging and transferring required! That is great news for the post industry because it will simplify workflows between platforms. It will also seriously simplify moving between Final Cut Pro and Media Composer (along with Automatic Duck to convert the project).

Native RED .r3d support via AMA

The Avid Media Architecture, introduced a few years back for native access to source media, has been enhanced to give native support to RED .r3d media, without needing to process through Metafuze. Now, all media is scaled to HD size, but since few people are really working at higher than HD sizes, this won’t be a significant limitation in a Media Composer workflow. In one jump, Media Composer leaps from being the platform with the least-native RED support to one of the best.

Drag and Drop editing

So one of my (purely personal) primary objections to the Media Composer interface – the need to go into segment mode to simply drag a clip – has been eliminated. Now drag and drop audio and video files wherever you want to rearrange sequences. Ditto on grabbing the in or out point of a clip to trim it. What can I say: “At last?” But still good.

External monitoring with Matrox MXO2 Mini

No need to buy a (relatively) expensive Mojo if all you want to do is monitor! (Where was this two months ago when I spec’d some Media Composer systems for a client? They would have saved some $18K.) Now, it’s a high-quality, monitor-only solution (no capture with the MXO2) but it is the first time that Avid have overtly used third-party hardware for anything. (Avid OEMs hardware from AJA for their DS systems but it’s not promoted widely.)

Full 4:4:4 HD-RGB processing

Internal processing now has support for full colorspace through color correction, keying and effects work. This will particularly appeal to those finishing in house with Media Composer. Add in a Nitris DX system and you can digitize, process, monitor and output in HD-RGB using the two HD-SDI connectors needed to handle the high-bandwidth resolutions. (So, while you can work and process in Media Composer without loss due to colorspace decimation and output to file-based workflows, output to tape will, apparently, require a Nitris DX-based system to access the dual HD-SDI connectors required for bandwidth this high.)

I’m liking this new management at Avid. Media Composer 4 was a great release and this follow-up – just a year later – has really set the bar higher for the industry as a whole. These are the kind of features an editor will really appreciate.

That’s just Media Composer. I’ll be at the Avid Press event later today to learn more about Media Composer 5 and the new “cloud-based” editing I wrote about a couple of days ago.

Read more about the editorial features and more at http://www.avid.com/US/products/Media-Composer-Software/features

Categories: New Media, Studio 2.0, Video Technology

What are the four major trends in production?

Having just got back from a North East trip – New York, Boston and Meriden/North Haven, CT – I’ve had a good opportunity to think and observe trends outside my own environment. I see four major trends happening across production and, despite the publicity and inevitable NAB push, I don’t think 3D stereoscopy is among them (at least not yet).

Stereoscopy is indeed a trend in feature film production, with an impressive percentage of last year’s box office attributable to 3D movies, but it will be a long time before it’s more than a niche for non-feature production. In fact, the supply of 3D content versus the number of theaters equipped to display it is probably going to limit 3D distribution to the major studios and their tentpole releases.

That said, this year’s NAB is likely to be full of 3D-capable rigs, cameras and workflows. For what display? Until the viewing end is more established, production in 3D won’t be that important.

Right now the trends I’m observing are: more multicamera production; extensive use of green screen even for “normal” shots; 3D sets, objects and even characters; and a definite trend toward larger sensor cameras (both DSLR and RED).

Multicamera Production

The appeal is simple: acquire two angles on any “good” take. Of course, reality television takes this to almost-ridiculous levels, with 68+ hours recorded for every day of the show’s shoot. Among more reasonable shows, Friday Night Lights shoots multicamera in real-world locations for a very efficient production schedule.

While it no doubt saves production time, and therefore cost, it can limit shot availability (as one camera ‘sees’ another) or force blander lighting (to make sure each camera angle is well lit). Multicamera studio shoots – the staple of the sitcom – tend to be lit very flat, but Friday Night Lights doesn’t suffer for the multicamera acquisition.

All major editing software packages support multicamera editing. We’ve also seen an increase in requests for multicamera support in our double-system synchronizing tool Sync-N-Link.

Part of the reason that multicamera acquisition is becoming more practical is that the cost of buying or renting camera equipment has dropped dramatically, so that three cameras on a shoot are not necessarily a budget buster.

Green Screen (Virtual sets)

If you haven’t already seen Stargate Studios’ Virtual Back Lot reel, do it now. Before seeing it I had the sense that there was a lot more green screen used out there, but I had no idea that shows I’d watched and enjoyed employed green screen. The Times Square shot from Heroes, for example, did not feel at all composited. When simple street scenes are being shot green screen – things that could easily be shot in the real world – then you know it has to be for budgetary reasons.

Green screen (and blue screen for film) technologies are well proven. There are good and inexpensive tools that fit within common workflows to build the green screen composite. In other words, the barriers to entry are simply the skill of the Director of Photography on the shoot, and that of the editors/compositors in post.
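The core idea behind a keyer is conceptually simple, even if good results aren’t. As a toy illustration only – assuming float RGB frames in the 0–1 range held in NumPy arrays, and definitely not how DV Matte Pro or any shipping keyer works – a basic green-difference matte and composite look something like this:

```python
import numpy as np

def green_difference_matte(fg_rgb, softness=0.25):
    """Rough matte: alpha falls toward 0 wherever green dominates red and blue."""
    r, g, b = fg_rgb[..., 0], fg_rgb[..., 1], fg_rgb[..., 2]
    dominance = g - np.maximum(r, b)                     # how "green screen-like" each pixel is
    return np.clip(1.0 - dominance / softness, 0.0, 1.0)

def composite(fg_rgb, bg_rgb, alpha):
    """Blend the foreground over a background plate using the matte."""
    return fg_rgb * alpha[..., None] + bg_rgb * (1.0 - alpha[..., None])
```

Everything that separates a usable key from a sketch like that – edge softness, spill suppression, coping with hair and motion blur – is exactly the DP and compositor skill being talked about here.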

When 70% of a show like Sanctuary uses virtual sets, the necessity for anything beyond a good green screen studio, a good lighting kit and some smarts seems less important.

3D sets and enhancements

The third major trend goes hand-in-hand with the use of Virtual Sets: sets that are created in the mind of a designer and rendered “real” with 3D software. There are literally hundreds of thousands of object models available for sale (or free) online. You can hardly read a production story now that doesn’t feature 3D set or character extensions.

I should probably add motion tracking as another technology coming into its own, because it’s an essential part of the incorporation of actors into 3D sets, or the enhancement of character with 3D character extensions.

Larger Sensors

Fairly obvious to all, I would think, but the trend toward larger sensors includes the DSLR trend as well as RED Digital Cinema and the new Arri Alexa. Wherever you look the trend is toward larger sensors with their sensitivity improvements, greater control of depth of field and drop-dead gorgeous pictures. Among other uses they make perfect plates for backgrounds in green screen work!

All four trends (plus motion tracking) contribute to reducing production cost, making more shows viable with ever-fragmenting audiences.

Categories: Apple Pro Apps, Video Technology

What is Apple doing with Final Cut Pro?

We were talking a couple of nights ago, discussing how various NLE developers are dealing with the Carbon/Cocoa transition. But before I get to that, a disclaimer and some background.

Disclaimer: I have no inside knowledge. If I did it would almost certainly be under NDA and therefore would not be shared. Everything I post here is based on publicly available knowledge, if not that commonly known. (Depends on how much you care about code, and most people don’t.) So, I present data points, interpretation and extrapolation: in plain English, guesswork, but intelligent guesswork! PLEASE DO NOT CONSIDER THESE RAMBLINGS “RUMORS” OR BASED ON ANY INSIDE KNOWLEDGE. THEY ARE NOT.

Final Cut Pro, like most software developed originally for OS 9, is built on a type of code called (for short) Carbon. The Carbon APIs (Application Programming Interface – the building blocks of code) allowed Final Cut Pro, Photoshop, Illustrator, Media Composer and thousands of other applications to make the transition from OS 9 to OS X “reasonably painlessly”. (Meaning, hard but not impossible). Carbon code runs just fine on OS X and is no less efficient in and of itself than the more modern Cocoa code.

So-called Cocoa code is the native language for OS X. It is built almost completely on NeXTstep, which Apple acquired when they acquired NeXT (and Steve Jobs) as a replacement for OS 9. Now, originally Apple said (at the 2006 WWDC) that there would be 64-bit versions of the Carbon APIs. This would have meant that Final Cut Pro, Media Composer, After Effects, Photoshop, Illustrator, et al. would be able to move to 64-bit versions without major rewrite. And so it was good.

Until a year later. At WWDC in July 2007 Apple reversed that decision and said that any application that wanted to go to 64-bit would have to be rewritten in Cocoa. Much gnashing of teeth in the ProApps camp and at Avid and Adobe. Not only can’t they go to 64-bit without rewriting, but Final Cut Pro cannot start to use OS X technologies like Grand Central Dispatch and OpenCL until that rewrite is done.

I’m much less familiar with Adobe’s code but the current version of Premiere Pro was ported to OS X in the modern code era and is almost certainly Cocoa where it hits the OS X road. It is highly likely that the majority of the code is in a format that is common to both platforms with mostly interface-specific code for each platform.

That’s also likely with the Media Composer code but I have reason to believe that Avid have been progressively rewriting functional blocks of code from Carbon to Cocoa over the last several releases (since mid 2007 probably)!

Most of the ProApps are already written in Cocoa: Soundtrack Pro, Motion, DVD Studio Pro (since it was rebuilt on Spruce code rather than Astarte’s) and Compressor. These are already in a form that makes it relatively easy to move forward to take advantage of modern OS X technologies.

Not so Final Cut Pro. Now, we do know that most of what has been added to Final Cut Pro in recent versions has been written in Cocoa. Apple’s Xcode development tool allows a mixture of code types in the one application. I’m uncertain whether Multicam is written in Cocoa but I’d expect it to be. HDV Log and Capture, Log and Transfer, Share, Master Templates, etc. are clearly also written in Cocoa. (The main evidence is that they use the “ProApps Kit” interface seen in Motion, Soundtrack Pro et al.)

So to the question that started the post: “Are Apple rewriting parts of the code as they go?”  I think the answer is yes.

One really strong piece of evidence is the new Speed Change tools in Final Cut Pro 7. The new interface is ProApps kit, not Final Cut Pro’s interface elements, which by itself suggests new Cocoa code. What is stronger evidence is that speed changes in XML files give different results when imported to Final Cut Pro 6 than they do when imported to Final Cut Pro 7. This is very strong evidence that new code is involved. (The old code would give the same result even with a new interface.)

One would have to extrapolate that the new Marker functions (with their new interface) have also been given new code, but that’s much less certain as the Marker interface still shows the original Final Cut Pro interface style with new elements added. (Compare the Speed Change dialog and Marker dialog to see the difference.)

The rewrite to Cocoa, even assuming they don’t make fundamental changes,* is very time-consuming and a lot of hard work to complete and test. That there is evidence in the current release of work already complete strongly suggests that the team is hard at work doing what’s necessary to bring Final Cut Pro into the modern Cocoa OS X code era. But don’t expect to see a converted release any time soon. There’s a lot of work that the QuickTime team has to do to add functionality to the underlying QTKit API (the modern QuickTime API for programmers) that an updated Final Cut Pro needs. Right now there’s no support for QuickTime metadata in QTKit, for example.

* Fundamental Changes. We’ve argued this instead of watching TV as well. Most of Final Cut Pro’s functionality is just fine. There’s not a lot to be gained by totally redesigning and redeveloping the Transition Editor, for example. There are, however, two areas that I think it would behove Apple to rethink: Media Management and metadata views. Media Management in Final Cut Pro is now reliable, except for a few edge cases (largely to do with dual-system and merged clips). Whether or not to spend a lot of effort (and dollars) to improve it that last little bit for the relatively small customer base that would benefit is a management decision I’m glad I don’t have to make!

I do think they need to do something more flexible with metadata support. Non-tape sources come complete with comprehensive metadata that Apple capture and insert into the media file. This support was added in my all-time favorite Final Cut Pro release – 5.1.2! Unfortunately, the Bin interface has limited flexibility. While there are a lot of viewable columns, there’s no way to add a column unless you’re the Apple engineer that does that! Other NLEs are much more flexible: Media Composer will add as many columns as you ask it to, for whatever metadata you want displayed. Not so Final Cut Pro, where the only simple way to view QuickTime metadata in a QuickTime file is with my company’s miniME. (Go on, download it. The free demo lets you view all QuickTime metadata in a file and export it to an Excel spreadsheet. Buy it and you can remap the metadata into Final Cut Pro’s columns.)

It takes years to make major transitions in software. QuickTime metadata support came in late 2006. With the advent of Log and Transfer, that support became valuable. So my informed guess is that a future release of Final Cut Pro will allow that metadata to be viewed and used in Final Cut Pro. It’s all there in the file and in any XML export. To me that suggests a foundation for some future construction.
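To make “it’s all there in the file and in any XML export” concrete, here’s a minimal sketch that walks a Final Cut Pro XML export and dumps whatever per-clip metadata it finds into a spreadsheet-friendly CSV. It assumes an FCP 7-style xmeml export with <clip> elements; the element names and file names are illustrative assumptions, not a documented schema.

```python
import csv
import xml.etree.ElementTree as ET

def dump_clip_metadata(xml_path, csv_path):
    """Collect every leaf name/value pair found under each <clip> and write one CSV row per clip."""
    tree = ET.parse(xml_path)
    rows = []
    for clip in tree.iter("clip"):                        # assumes xmeml-style <clip> elements
        fields = {}
        for elem in clip.iter():
            if len(elem) == 0 and elem.text and elem.text.strip():
                fields[elem.tag] = elem.text.strip()      # name, scene, lognote, reel, etc.
        rows.append(fields)

    columns = sorted({key for row in rows for key in row})
    with open(csv_path, "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=columns)
        writer.writeheader()
        writer.writerows(rows)

dump_clip_metadata("project_export.xml", "clip_metadata.csv")   # placeholder file names
```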

Like I said, nothing more than intelligent guesses. NOT RUMORS. NOT INSIDE INFORMATION. Just me joining dots and I’m bound to be wrong about half of it. I just don’t know which half!

Categories: Interesting Technology, Metadata, Video Technology

What is Transcriptize and what will it do for me?

Occasionally I do some work for my day job at Intelligent Assistance!, where we’re actively adding to our metadata-based workflow tools. This time we’re taking the speech transcription metadata from the Adobe suite and making it accessible to producers who want text or Excel versions, or even bringing it into FCP with the transcription placed into colored markers (one color per speaker). With Transcriptize you can also name the speakers, something not possible in the Adobe tools.
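For the curious, the shape of the transformation is easy to picture. This is not Transcriptize’s code, just a minimal sketch of the idea – it assumes a simplified transcript structure of timed, per-speaker words, whereas the real Adobe transcript XML and FCP marker XML are more involved:

```python
from dataclasses import dataclass
from itertools import groupby

@dataclass
class Word:
    text: str
    start: float      # seconds from the start of the clip
    speaker: str      # e.g. "Speaker 1", as tagged by the transcription engine

def words_to_markers(words, speaker_names, fps=25):
    """Group consecutive words by speaker and emit one marker per change of speaker."""
    markers = []
    for speaker, run in groupby(words, key=lambda w: w.speaker):
        run = list(run)
        markers.append({
            "name": speaker_names.get(speaker, speaker),    # user-supplied speaker naming
            "frame": int(run[0].start * fps),               # marker position on the clip
            "comment": " ".join(w.text for w in run),       # the transcription text itself
        })
    return markers

# e.g. words_to_markers(words, {"Speaker 1": "Larry", "Speaker 2": "Philip"})
```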

Here’s my interview with Larry Jordan, where we announced it; the press release is below.

“Announcing Transcriptize on the Digital Production BuZZ”

Transcriptize expands the usefulness of Adobe Speech Transcription

Take transcriptions from Adobe Production Bundle to Media Composer, Excel and Final Cut Pro.

Burbank, CA (February 12, 2010) – Intelligent Assistance, Inc. has introduced a new software tool that takes Transcription XML from Adobe Premiere Pro CS4 or Soundbooth CS4 and converts it to text, an Excel spreadsheet or Final Cut Pro clip markers.

“Late last year, Larry Jordan asked if we could create something to make the Adobe Speech Transcription more available,” says Intelligent Assistance CEO Philip Hodgetts. “We thought that was a great idea and Transcriptize is the result, less than two months later.”

Transcriptize imports the transcription XML from the Adobe Production Bundle and allows editors and producers to name the speakers – something not possible in the Production Bundle. From there users have the option to:

  • Export a plain text file, suitable for the needs of a producer or to import into Media Composer’s Script Sync engine.

  • Export an Excel spreadsheet with a variable number of words per row – perfect for a producer.

  • Open the XML from a Final Cut Pro clip and add the transcription to Markers, where:

      • there are a variable number of words per Marker (including one Marker per speaker);

      • the speaker name is placed in the Marker name;

      • the transcription appears in the clip Marker comment;

      • Marker colors are used to identify each speaker (FCP 7 onward);

      • the transcription can then be searched within Final Cut Pro;

      • Markers can be easily subclipped based on transcription content.

Transcriptize is available now from www.assistedediting.com/Transcriptize/. MSRP is US$149, with an introductory offer of $99 until the end of February 2010. NFR versions for review are available; contact Philip Hodgetts, details below.

Categories: Distribution, Media Consumption, Studio 2.0, The Technology of Production

What about the iPad and Media Production?

On October 31 last year Edo Segal wrote an article on TechCrunch with the title For The Future Of The Media Industry, Look In The App Store. The article is definitely worth a read but this jumped out at me:

But the entertainment industry has a vested interest in the success of this new type of convergence, as within it lies the secret to its continuing prosperity. The only way to block the incredible ease of pirating any content a media company can generate is to couple said experiences with extensions that live in the cloud and enhance that experience for consumers. Not just for some fancy DRM but for real value creation. They must begin to create a product that is not simply a static digital file that can be easily copied and distributed, but rather view media as a dynamic “application” with extensions via the web. This howl is the future evolution of the media industry.

It brings together some of the thinking I’ve been doing on how to challenge the loss of revenue from direct consumption or from advertising revenue when digital files of programming and music are so easily shared and copied. Techdirt.com like to summarize their approach as CwF + RtB = financial success: Connect with Fans and give them a Reason to Buy some scarce goods. Many musicians are already doing this and the results are summarized in the article The Future of Music Business Models (and those who are already there).

I agree that CwF + RtB is part of the future: we can’t charge for infinitely distributable digital goods but we can charge for scarce goods (or services) promoted by the music.

But I’m not as sure that will work in the same way for the “television” business, which I define as “television-style programming, professionally produced”, even if it’s never broadcast on a network or cable. Certainly it will be possible to sell merchandising around programming, and everyone is encouraged to do that.

I’ve also written and presented – as long ago as my Nov 2006 keynote presentation for the Academy of Television Arts & Sciences – that producers and viewers have to be more connected, even to the extent of allowing fan contributions.

Well, last night I had something of an epiphany that brought together Edo Segal’s thoughts and my own as I contemplated the implications of the recently announced Apple iPad.

As a brief aside, I find the iPad to be pretty much exactly what I was expecting (although I thought there might be a webcam for video chat) and interesting. Although I don’t see where it would fit in an iPhone/laptop world, I can see plenty of uses, particularly for media consumption. (For example, a family shares an iMac but each of the older children has their own iPad for general computing, only using the iMac for essays etc.)

But the iPad doesn’t really lend itself to static media consumption as it has been: the producer sends stories fully finished and complete to viewers who passively consume. That’s when the import of Edo’s comment struck me: there is more of a future in media consumption for those producers who create the whole environment. This has definitely been done by many movies and shows, but usually as more of a consumption of information about the show rather than a rich interactive experience where fans of the show are as important as the producers.

The future of independent production and media consumption is an immersive environment (a website, or better yet an iPad app) with:

  • Content
  • Community (forums, competitions)
  • Access to the wider story, side stories or “back story” in various media formats
  • Character blogs
  • Cast and crew blogs
  • Fan contributions and remixes.

Such an experience would be almost a cross between a typical television program and a video game environment. Sure, programming is part of what can be consumed on the site, but there are also competitions, games and back stories, and additional visual material edited out of the program source, with additional shooting, using technologies like Assisted Editing.

Any unauthorized distribution of content will only be distributing the content, not the experience of the program in its full glory.

Now, there’s no particular reason why this couldn’t be largely done on a website, but it is as an immersive iPad app that I think it will really be fantastic. The iPad is very immersive and tactile. It presents no “border” (i.e. browser window and other computer screen elements) to distract from the programming. It begs to be interacted with, because holding it in place to watch a 22- or 44-minute show doesn’t look like it will be all that great.

There’s one more selling point for the iPad: it allows in-app sales, so some of the “reasons to buy” can be sold very transparently without even leaving the app’s environment. Avatars, screen savers, certain games or activities might carry a small charge. Yes, even the media itself (or some of it) could carry a small transaction charge. Smooth, frictionless sales in an environment optimized to engage people in the story of the show.

Apple’s iTunes LP format is a very small start in this direction, building a micro-site for the album artwork. This is very powerful because it supports most modern web technologies and interactive features in a tight package (all, b.t.w., without Flash but looking a lot like Flash).

Edo has some further good ideas and I recommend reading the article at the top of this post.

Categories: Random Thought, Studio 2.0, The Business of Production, Video Technology

Why are most production workflows inefficient?

In my experience few productions – be they film or television – are well planned from a workflow perspective. It seems that people do what’s apparently cheapest, or what they have done in the past. This is both dangerous – because the production workflow hasn’t been tested – and inefficient.

In a perfect world (oh *that* again!) the workflow would be thoroughly tested: shoot with the proposed camera; test the digital lab, if involved; test the edit all the way through to the actual output of the project. Once the proposed workflow is tested it can be checked for improved efficiency at every step. Perhaps there are software solutions for automating parts of the process that need only small changes to the process to be extremely valuable. Perhaps there are alternatives that would save a lot of time and money if they were known about.

Instead of tested and efficient workflows, people tend to do “what they’ve done before”. When there are large amounts of money at stake on a film or TV series it’s understandable that people opt for the tried and true, even if it’s not particularly efficient because “it will work”.

Part of the problem is that people simply do not test their workflows. I’ve been involved with “film projects” (both direct to DVD and back out to cinematic release) where the workflow for post was not set until shooting had started. In one example the shoot format wasn’t known until less than a week before shooting started.

Maybe there was a time when you could simply rely on “what went before” for a workflow, but with the proliferation of formats and distribution outputs, there are more choices than ever to be made.

Which brings me to the other part of the problem. Most people making workflow decisions are producers, with input from their chosen editor. Chances are, unfortunately, that neither group are very likely to truly understand the technology that underpins the workflow – or even why the workflow “works”. They know enough of what they need to know to get by but my experience has been that most working producers and editors do not actively invest time into learning the technology and improving their own value.

And when they’re not working, they’re working on getting more work. Again, not surprising.

But somewhere along the way, we need producers to research and listen to advisors (like myself) who do understand the workflow and do have a working knowledge of changing technology that can make a particular project much more efficient to produce. I just have no idea how to connect those producers with the people who can help.

We’ve seen, in just a little under two years, how technology can improve workflows, just with our relatively minor contributions:

  • Rent a couple of LockIt boxes (or equivalent) on set and save days and days synchronizing audio and video from dual-system shoots (the sketch after this list illustrates the timecode-matching idea);

  • Log your documentary material in a specific way, and take weeks off post production finding the stories in the material (producers can even do a pre-edit);

  • Understand how to build a spreadsheet of your titles and how to make a Motion Template, and automate the production of titles (and changes to same);

  • If you know you can recut a self-contained file into its scene components, how does that change color correction for your project?

  • Import music with full metadata.
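As a toy example of the first item, here’s the timecode-matching idea behind dual-system sync. This is not Sync-N-Link; it’s just a sketch that assumes both recorders carry the same free-run timecode (courtesy of the LockIt boxes) and that clip metadata has already been read into simple dictionaries:

```python
def timecode_to_frames(tc, fps=24):
    """Convert "HH:MM:SS:FF" into an absolute frame count (non-drop-frame)."""
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def match_audio_to_video(video_clips, audio_clips, fps=24):
    """Pair each audio clip with every video clip whose timecode range overlaps it."""
    pairs = []
    for audio in audio_clips:
        a_start = timecode_to_frames(audio["start_tc"], fps)
        a_end = a_start + audio["frames"]
        for video in video_clips:
            v_start = timecode_to_frames(video["start_tc"], fps)
            v_end = v_start + video["frames"]
            if a_start < v_end and v_start < a_end:       # the timecode ranges overlap
                offset = a_start - v_start                # where the audio lines up within the video
                pairs.append((video["name"], audio["name"], offset))
    return pairs
```

With matching timecode on both recorders, sync stops being a manual clap-matching job and becomes a lookup – which is why a couple of rented boxes can save days in post.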

These are all examples of currently available software tools, from my company and others, that are working to make post production more efficient. I wrote more about this in my Filling in the NLE Gaps article for DV Magazine.

My question, though, is how we encourage producers to “look around and see what’s available” and open up their workflows to a little modern technology. To this end, Intelligent Assistance is looking to work closely with a limited group of producers in 2010 to find ways to streamline, automate and make-more-robust postproduction workflows. So, if you’re a producer and want to save time and money in post, email me or post in the comments.

If you’ve got ideas on how to encourage producers to move toward more metadata-based workflows, how do we get the message out?

Categories: The Technology of Production, Video Technology

What can some kids do with a “green screen” kit for Christmas?

On the Yahoo-based Final Cut Pro list MarkB posted this just a few minutes ago:

Gave my kids (14 & 16) a green screen kit from Cowboy Studio for Xmas. The older one does a sports video blog, the younger one shoots and edits it (Canon HV20 camera, Final Cut Express, 4-year old iMac).
They used their new chroma key trickery today. I helped them set up the green screen, gave the younger one a 5-minute lesson in how to use my DV Garage plugin, then stayed out of it except for a tip or two. This is what a 14-year old kid can do first time:
http://www.youtube.com/user/JLBsportsTV

Watch the video, or at least the first couple of minutes. (I’m not that into football/soccer, so it doesn’t mean much to me.) But look at the work. Not only is the 16-year-old good talent, but the way it’s put together is damned nice too. (In this style of presentation I’ll overlook my long-standing distrust of jump cuts and live with the fact that it’s become an acceptable style: heck, in this example I think it works fine.)

Mark mentions that the keyer they used was DV Matte Pro, which I’ve also had a lot of success with: I used it on A Musical Journey with Richard Sherman, on the 40th and 45th Anniversary Edition Mary Poppins DVDs. After testing everything that was available, that’s what gave us the best results (although it does have a different approach to fine-tuning edges than most keyers, which threw me at first).

I’ve long argued that we have to constantly be improving our skills, because those coming up behind us are starting with a whole lot better craft/technical skills than we did. In fact, we have to keep learning to keep up and make sure our experience and people skills are a whole lot better.

But it does make you wonder what these guys will do with their Christmas “green screen kit” if ever they discover 3D. (I suspect that would be the 14-year-old’s realm.) Does easy, accessible keying technology really change production forever?