Some thoughts on Metadata

If it’s so important, why isn’t metadata easier? How do we make it easier?

There are many kinds of metadata, and previously I’ve noted six that are important to production. There’s a whole second category of metadata that’s more related to asset management, but both share the common goal of being able to find a specific shot as quickly as possible. In this post I’m talking about the Added Metadata category of the six.

For scripted material this is relatively easy, particularly with tools like QR Slate, Movie Slate, and the like now available. With scripted material you have known fields – scene, shot, take, character, etc. – as well as one or two free-form fields like Comment. It is with unscripted material that we run into problems not solvable with those solutions (at least in their current forms).

For reality and documentary production, and the many corporate and educational parallels, there is no advance script. No shot breakdown, because it’s unknown what might happen. Typically, this information is added inside the NLE during the logging phase. It’s this problem that Adobe are addressing with Prelude, and that Randy Ubillos addressed for Final Cut Pro before his producer’s logging tool became iMovie ’08! It’s the problem I’m trying to address while working on the Solar Odyssey show.

Now, I set out to shoot a show about the first part of Ra’s journey around the Great Loop waterway – an adventure of up to five months. It turns out I shot “how the boat was finished, very much delayed, and its first journeys” instead. Lots of good material, but no clear story, particularly early on.

Logging this material was killing me. So we started working on a metadata entry tool, and I kept logging stuff the old-fashioned way – after ingest into Final Cut Pro X. Some days I used the metadata logging tools, and some I did not. There were days before the first iteration of the tool, days when the action happened remote from where it could be logged, and days when I had to shoot and couldn’t devote any time to logging as we went.

That is the goal: to get basic log notes by tagging what’s happening on camera in real time, and to translate that into useful metadata inside the NLE. Working both ways in parallel forced me to think about logging more consistently (the way the machine would) and about what logging I actually wanted. That’s where I discovered that I can’t have my perfect tool just yet.

We track and log Person – who’s on camera; what they are doing; about what; and where. Person, action, noun, location. Clip names are built from Person(s) + Action(s) + Noun. That’s perfectly fine for B-roll, or material that has obvious action. It’s also a huge benefit to get that within seconds of ingesting, transferring time from the edit bay to the production day. The assumption is that someone has time to log on set – to an iPad app rather than on paper, as it probably is being done now. Note too that we’re not logging the specifics of a shot, just what’s happening during the time frame; that information is applied to all media shot in that time frame.
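To make that naming scheme concrete, here’s a minimal sketch of how a clip name might be assembled from Person(s) + Action(s) + Noun tags. This is not the actual logging tool’s code (none is shown in this post); the function name and sample values are purely illustrative.

```python
# Minimal sketch: assembling a clip name from Person(s) + Action(s) + Noun tags.
# Function name and sample values are hypothetical, not the real tool's code.

def build_clip_name(people, actions, noun):
    """Join the tagged people and actions, then append the noun."""
    people_part = "+".join(people)    # e.g. "PersonA+PersonB"
    actions_part = "+".join(actions)  # e.g. "Docking"
    return f"{people_part} {actions_part} {noun}".strip()

print(build_clip_name(["PersonA", "PersonB"], ["Docking"], "Boat"))
# -> "PersonA+PersonB Docking Boat"
```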

What I found is that I still need to do a lot of fairly manual logging, even with this organizational structure in place, to get detail from the dialog. Those long takes where someone is talking, that help drive story and character, are still the logging bottleneck until some transcription technology automates the process. (I’d love file access to the Dictation function coming in OS X Mountain Lion, but I don’t think it will be open to developers this time round.) [Update, for those who are really observant: as of the start of Solar Odyssey, we are using Dictation for a “comment” keyword. The words dictated become the Note for that Keyword.]

The use of these snippets, which tend to become selects, leads to the question of how best to integrate the information. I’m working in Final Cut Pro X, and the first, obvious response would be “Keyword Collections” – except each quote snippet would be a unique Keyword Collection, and that would be clumsy. I quickly moved to using Favorites for the job, placing the transcript or summary of what was said into the Favorite name. Not ideal, but the text is searchable. Then I was reminded that Favorites (and particularly their names) do not translate to the XML, so even if we captured this information in the field, or generated it programmatically, there would be no way to send it to Final Cut Pro X. Oops!

My current strategy is an evolution. I use relatively few Keyword Collections – mostly to gather the main themes of the day, or by character, and other information. We now categorize this source-added metadata into People, Places, Actions and Keywords internally, and that’s important metadata that we don’t want to lose. Unfortunately Final Cut Pro X doesn’t support this type of key-value pair, so we add prefixes like p:, k: and a: to the keywords. This also makes it easy to drag them into an optional People, Places or Actions folder (as we cannot do that via XML yet).
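As an illustration of that prefix convention: the p:/k:/a: prefixes come from the workflow above, but the exact category mapping and the function names in this sketch are my own assumptions.

```python
# Sketch: encoding a category + value as a prefixed FCP X keyword, since
# FCP X keywords aren't true key-value pairs. The p:/k:/a: prefixes are from
# the post; the precise mapping and names here are assumptions.

CATEGORY_PREFIXES = {
    "person": "p:",
    "action": "a:",
    "keyword": "k:",
}

def encode_keyword(category, value):
    """Turn ("person", "Philip") into the keyword string "p:Philip"."""
    return CATEGORY_PREFIXES[category] + value

def decode_keyword(keyword):
    """Recover (category, value); unprefixed keywords stay plain keywords."""
    for category, prefix in CATEGORY_PREFIXES.items():
        if keyword.startswith(prefix):
            return category, keyword[len(prefix):]
    return "keyword", keyword

print(encode_keyword("person", "Philip"))  # -> "p:Philip"
print(decode_keyword("a:Docking"))         # -> ("action", "Docking")
```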

To those Keyword Collections I added some generics: Action, Quote, Problem. I apply these as Keyword Ranges, but put the specifics into the associated Notes field (which is supported in the XML, so I could export to a different type of database for asset management if I wanted). The Notes field can apparently be any length, and is great for transcript, semi-transcript or summary details.
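For illustration only, here’s a rough sketch of how those keyword ranges and their Notes text could be pulled out of an exported XML file into simple rows for an asset-management database. The element and attribute names are modelled on FCPXML’s keyword markers, but treat them as an assumption here rather than a spec.

```python
# Sketch: reading keyword ranges and their Notes from exported XML into rows
# suitable for an asset-management database. Element/attribute names are
# modelled on FCPXML keyword markers but are assumptions, not a spec.

import xml.etree.ElementTree as ET

sample = """
<clip name="Day 12 dockside">
  <keyword start="120s" duration="45s" value="Quote"
           note="Summary or semi-transcript of what was said"/>
  <keyword start="300s" duration="20s" value="Problem"
           note="Detail of the problem goes here"/>
</clip>
"""

clip = ET.fromstring(sample)
rows = [
    {
        "clip": clip.get("name"),
        "keyword": kw.get("value"),
        "start": kw.get("start"),
        "duration": kw.get("duration"),
        "note": kw.get("note", ""),
    }
    for kw in clip.iter("keyword")
]

for row in rows:
    print(row)
```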

An advantage of this approach is that it lets me use a Favorite to highlight, say, a particular Quote. Action is used to break up a long clip that has many different identifiable actions happening; again, the detail goes into the Notes field. Each entry has its own Notes option, so it’s very versatile. (I now have to decide whether or not to manually swap over my Favorites to Quotes in the earlier logging!) We’re still working on how best to add these notes from the field, when there’s generally not enough time for typing.

In a perfect world, I’d be able to mark certain time periods as “transcribe for me” but I don’t see that coming for at least another year or two. Even when it does, I’d probably still want to listen through large sections to get tone, but if I can get to that stage faster, then my goal is achieved.

As to why entering metadata isn’t easier: like most worthwhile things in life, it comes with some work attached. That doesn’t mean we can’t work to make it easier.


23 replies on “Some thoughts on Metadata”

Interesting post Mr. H.,

Maybe leveraging Dictation to enter logging detail into the Notes field – either in real time (on-set / on-location logging) or after the fact, depending on the shooting circumstances – will also help streamline the more unavoidably manual aspects of the logging process.

Thanks for the food for thought.
Andy

Well, if only FCPX would do something simple like… oh, maybe SUBCLIP, this might be easier.

I’ve certainly not missed subclips because functionally I can do more than simple subclips.

Yes, but you admit in that article that it would be helpful. Adding true subclips would be very helpful.

I certainly did not admit that subclips would be helpful. Not at all. In fact, just the opposite. We’ve been discussing subclips in detail today (in the context of their translation from 7toX) and I really, really don’t want anything like hard-ended subclips back in FCP X. They would be unhelpful and get in the way.

You admit all the Keyword Collections would be clumsy, and you say your Favorites method is not ideal. That raises the question of what would work. A subclip would be perfect to organize long takes where someone is talking. That’s what subclips are for. It’s silly to say they would get in the way when they would be a good answer to your problem. There is a reason they are in NLE applications: they work. FCPX could implement them too and then use the other organization options on top. It would be the best of both worlds.

You seem to like being argumentative, Alan. Despite the way I’ve got it now not being “perfect”, it is much better than any previous NLE I’ve worked with, and far better than FCP 7, for example. Subclips would NOT be any help to me. They would get in the way. Having hard-ended pieces means I have to work much harder at being precise on the trim/select, and I can perceive no advantage that hard-ended subclips would give me over what Keyword ranges give me now. I can also see how subclips would get in the way of how I’m now working.

You may not like that, or agree, but please don’t keep telling me that I’d be better off with Subclips when I’ve already told you categorically that there is no benefit for me in Subclips.

End of my contribution to this discussion.

You’re funny. You can’t ever and won’t ever admit that if it’s not in FCPX then it is no good for you or not needed. I hate to break it to you Philip that the more you react like that the more you look like you’re in the pocket of Apple. You can choose not to publish this comment if you want but that’s quite the perception. You often can’t admit benefits to other things since you aren’t an editor. You’re a computer guy who “edits” some. Quite the difference.

Really? I’m a computer guy who “edits some”? Funny how I’ve done nothing much else for the last six weeks. Try adding some facts to your posts, Alan, and we’ll have some more respect for you.

Apparently, to you, if I do not think like you – that every feature in FCP 7 has to be exactly the same in FCP X to be “acceptable” – then I’m in the pocket of Apple? You’re funny.

I’ve got lots of things I’d like to see changed or added to FCP X. I do not think it’s perfect but at the same time, I recognize the value of the new over the old and appreciate that.

Oh, I’m also the guy that will change editing forever by taking out the boring parts using metadata. Let’s check back in 10 years to see who’s made the biggest contribution to the industry. In fact, want to compare right now who’s actually made the biggest contribution? How about you start with your achievements? The software you’ve made to make workflow easier? The techniques you’ve developed? The people you’ve trained?

Oh and yes, let’s also hear about your editing history so we can judge whether or not you’ve made a lasting contribution to the industry, or have just edited a few pieces. Quite the difference.

I can tell I’ve edited a lot more than you, as I’ve never seen a single thing you’ve cut. I see no links to a reel. I have nothing online as that’s not what I do. I’m lucky to make email work, but I can make the NLE sing. I’m also not on the blogs and the twitters spouting that I know everything better than everyone else.

Just enough time and ability to get on blogs and tell people what they’re supposed to like and how they’re supposed to work. Interesting.

You are a funny fellow. I guess I’ve never seen anything you’ve cut either.

All I want is to see some editing work from this guy. Don’t think that’s too much to ask

Like you, Alan, I don’t have any stuff online, but I have stuff in every college in Australia. And you can check IMDB if you care, as you have my full name, unlike yourself.

But what’s the point? Whether or not I’m a good, or average, editor does not change the fact that metadata-based automation is going to be a large part of the future of media production (at many levels), and that’s my focus. As for the claims I made – I think my track record already shows I’ve made a fairly substantial, and lasting, contribution to the industry: 7toX, Xto7, prEdit, First Cuts, Sync-N-Link (used by pretty much every double-system “Hollywood” production that works with FCP), etc. Plus the most comprehensive interactive training for FCP 1-5… I feel I can stand on my track record, without exaggerating.

Just so you don’t get the ridiculous impression that it’s just Philip: I don’t want traditional subclips back either. They don’t fit into the flexible, metadata-oriented Event structure.

I think the real question here is how much time have you spent editing with FCPX to come to your conclusion?

I think you could be falling into a trap of over-logging. At the end of the day you still need to review your footage, and FCPX makes logging on the fly so, so easy. I’m an advocate of logging as little as possible – sorting your data into bite-size chunks – in the edit.

When I edit with X I generally use 4 or 5 keyword collections:

1) Interview
2) Actuality
3) B-Roll
4) GVs
5) Good Shots (stuff that stands out as “must go in somewhere”)

Then I progressively add “tags” into the Notes fields as I edit – e.g. people’s names, locations, scenes – and build up Smart Collections.

I used to be a meticulous logger (only ever used list view) until X. What changed is that the Event library lets me see the shots. All I need to do now is roughly filter things in. I’ve even found myself using that ‘amateur’ iMovie-like feature of expanding the shots out. It’s great; I can quickly see whole shots over time.

I think your iPad app sounds cool, but you don’t need to log obvious things you can see (People, Places, Actions – call it visual metadata 😉) because you still have to review the footage. I would be more interested in an app that records where the good actuality or sound bites are, or the themes that are developing – anything that can give me non-visual guidance. Or, after an event happens, the director could do a piece to camera into the iPad and dictate instructions while they are fresh in their head. That then syncs up with the media.

just some thoughts…

If you’re leaving all the editing and logging to a human, I agree. Given that my goal is to automate as much of the process as possible, I disagree 🙂 Right up to the point of the first string-out, as already happens in a lot of reality (where a story editor does the first string-out and then passes it to an editor to polish and cut to duration).

The computer cannot see “the obvious”, so it has to be in there as metadata.

The input of voice-based metadata has come up before, but I’d rather derive it directly from what was being said.

Hey Philip,
Thanks for thinking about this stuff. We are all looking forward to solving this.
Logging is a big part of editing, and making it smarter is essential.
As for the kiddy discussion, I am happy not to use subclips ever again.

I am sorry this whole discussion fell into subclip vs non-subclip. Our problem is we shoot things and have to keep them in perpetuity. Logging is critical if we are ever to find stuff we shot four years ago on a shoot for something we are editing today. The ability to log in depth, and have multiple folks be able to search by keyword etc., is vitally important.

I wish there was an easy, almost graphical way to make it happen, FAST, in post as we ingest or soon after. The problem is consistency. One student aide might see a Mountain Lion, the next calls it a Cougar, and the third calls it a PUMA or a Catamount or whatever. You get the point. It is a major problem for us and many agencies.

“One student aide might see a Mountain Lion, the next calls it a Cougar, and the third calls it a PUMA or a Catamount or whatever. You get the point. It is a major problem for us and many agencies.”

Indeed, this is one fairly simple point that is huge and not addressed in many workflows. When you have one logger logging the footage and one editor editing it, many of these issues are minimal; but when you have shared storage with many hours of footage from several seasons of reality TV or documentary that need to be searchable by editors/producers/writers/etc., it becomes very important.

I would love to get some feedback from anyone in this thread about Squarebox’s CatDV software – especially Philip. What is it that’s important that you can’t do with CatDV that you are doing with FCPX? This isn’t a challenge; we are looking at different systems, and at this point building a database with specialized logging/cataloging software that is readable by different NLEs, and even Excel etc., seems to make sense – especially since the NLE-of-the-year keeps changing, and none of them are very focused on logging the way CatDV is, with its scene detection and standardized keyword spelling.

Thanks for your input, really enjoy this blog!!
-Chris Lawes

You make some very valid points about standardization of terms. That’s one thing that our “Logger” (working title only) system – in development – will help with, as it’s done with buttons, not free-form text, so it forces standardization.
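To illustrate the standardization point (this is just a sketch, not our Logger code): a button-driven tool effectively maps every synonym a logger might reach for onto one canonical term, something like this:

```python
# Sketch: normalizing free-form terms to a controlled vocabulary – the effect
# a button-driven logging tool enforces by construction. The synonym table
# is illustrative only.

CANONICAL_TERMS = {
    "mountain lion": "Mountain Lion",
    "cougar": "Mountain Lion",
    "puma": "Mountain Lion",
    "catamount": "Mountain Lion",
}

def normalize(term):
    """Map a logged term to its canonical form; unknown terms pass through."""
    return CANONICAL_TERMS.get(term.strip().lower(), term.strip())

for logged in ["Cougar", "PUMA", "Mountain Lion", "catamount"]:
    print(logged, "->", normalize(logged))
```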

As for CatDV, I’ve been a fan for a while now. I tend to think of the difference between building a permanent library and building a project, but in reality there’s no conflict, as the work you do in FCP X translates to CatDV and vice versa. I would disagree with the “focus on logging” comment – I think FCP X does focus a lot on logging, insofar as it has the tools to log extensively and comprehensively.
But just having the tools doesn’t mean they will be used (you can lead a horse to water…). And being so different, perhaps there’s still some exploration of how best they can be used (if indeed there is one “best” way).
