Categories
Lumberjack Machine Learning Metadata

Speech-to-Text: Recent Example

For a book project I recorded a 46-minute interview and had it transcribed by Speechmatics.com (as part of our testing for Lumberjack Builder). The interview ran about 8600 words raw.

The good news is that it was about 99.8% accurate: I corrected 15 words out of a final 8100. The interview had good audio. I’m sure an audio perfectionist would have made it better, as would recording in a perfect environment, but this was pretty typical of most interview setups. It was recorded to a Zoom H1n as a WAV file. No compression.
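That accuracy figure is just the correction count over the final word count. A minimal sketch of the arithmetic (a rough proxy only; a formal Word Error Rate calculation would also distinguish insertions, deletions and substitutions):

```python
def word_accuracy(corrections: int, total_words: int) -> float:
    """Percentage of words that did not need correcting."""
    return (1 - corrections / total_words) * 100

# 15 corrected words out of a final 8100:
print(f"{word_accuracy(15, 8100):.2f}%")  # → 99.81%
```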

Naturally, my off-mic questions and commentary were not transcribed accurately, but they were never expected or intended to be. Although, to be fair, they were clear enough that a human transcriber would probably have got closer.

The less good news: my one female speaker was identified as about 15 different people! If I wanted a perfect transcript I probably would also have cleaned up the punctuation, as it wasn’t completely clean. But the reality is that people do not speak in nice, neat sentences.

But neither the speaker identification nor the punctuation matters for the uses I’m going to make of it. I recognize that accurate punctuation would be needed for closed (or open) captioning of an output, but for production purposes accurate reproduction of the words is enough.

Multiple speakers will be handled in Builder’s Keyword Manager and reduced to one there. SpeedScriber has a feature to eliminate speaker IDs entirely, which I would have used had a perfect output been my goal. For this project I simply eliminated any speaker ID.
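Reducing many machine-assigned speaker IDs to one person amounts to a simple label-mapping pass over the transcript segments. An illustrative sketch only (not actual Builder code; the segment structure and names here are assumptions):

```python
# Transcript segments as a speech-to-text service might label them,
# with one person split across several speaker IDs:
segments = [
    {"speaker": "S3", "text": "We started shooting in March."},
    {"speaker": "S7", "text": "The light was perfect."},
    {"speaker": "S12", "text": "So we kept going."},
]

# Map every stray ID back to the single real speaker:
alias = {"S3": "Interviewee", "S7": "Interviewee", "S12": "Interviewee"}
for seg in segments:
    seg["speaker"] = alias.get(seg["speaker"], seg["speaker"])
```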

The punctuation would also not be an issue in Builder, where we break on periods, but you can combine and break paragraphs with simple keystrokes. It’s not a problem for the book project as it will mostly be rewritten from spoken form to a more formal written style.

Most importantly for our needs, near perfect text is the perfect input for keyword, concept and emotion extraction.

Categories
Machine Learning Metadata

IBM Watson is a Sports Guru?

Two recent announcements place IBM’s Artificial Intelligence play, Watson, right in the sports spotlight.

Watson is being used for tagging World Cup coverage, and its relationship with Wimbledon has grown from picking highlights and enhancing the user experience to, this year, designing the poster!

Categories
Artificial Intelligence Lumberjack Metadata The Business of Production

Modern Logging and Pre-Editing Approaches: “House Hunters” Style Reality TV

This is the first time I’ve taken a deep look at a TV show and worked out what I think would be the perfect metadata workflow from shoot to edit bay. I chose to look at Pie Town’s House Hunters franchise because it is so built on an (obviously winning) formula, and I thought that might make it easier for automation or Artificial Intelligence approaches.

But first a disclaimer. I am in no way associated with Pie Town Productions. I know for certain they are not a Lumberjack System customer and am also pretty sure they – like the rest of Hollywood – build their post on Avid Media Composer (and apparently Media Central as well). This is purely a thought exercise built around a readily available example and our Lumberjack System’s capabilities.

Categories
Adobe Apple Augmented Reality Business Interesting Technology Lumberjack Machine Learning Metadata The Business of Production The Technology of Production

2017 – 2018 Introspection

If I were to summarize 2017 it would be: AI, HDR, VR, AR and Resolve. If you missed any of those trends, they are Artificial Intelligence (really Machine Learning); High Dynamic Range; Virtual Reality (i.e. 360 or 180 degree video); Augmented Reality; and Blackmagic Design’s Resolve 14.

As Augmented Reality is composited in at the viewer’s device, I doubt it will have any direct effect on production and post production.

Virtual Reality has had a good year with direct support appearing in Premiere Pro CC and Final Cut Pro X. In both cases the NLE’s parent purchased third party technology and integrated it. Combined with the ready availability of 360 cameras, there’s no barrier to VR production.

Except the lack of demand. I expect VR will become a valuable tool for a range of projects like installations, telepresence and travel, and particularly in gaming, although that’s outside my purview.

What I don’t expect is a large scale uptake for narrative or general entertainment functions. Nor in most of the vast range of video production. It’s not a fad, like 3D, but will likely remain a niche in the production world. I should point out it’s very possible to make good money in niches!

Conversely, I would not buy a new screen unless it was HDR compatible – at least with one or two of the major HDR formats. High Dynamic Range video is as big a step forward as color. I believe it provides a fundamentally better viewing experience than simply upping the pixel count.

High Dynamic Range is supported across the most important editing software but suffers from two challenges: the proliferation of competing standards and studio monitoring.

The industry needs to consolidate to one standard, or sets will have to be programmed for all standards. None currently are. Ultimately this will change because it has to, but some earlier set purchasers will probably be screwed over!

HDR studio monitors remain extremely expensive, and hard to find. There’s also the problem of grading for both regular and high dynamic range screens.

I have no doubt that HDR is fundamental to the future of the “television” screen. It will further erode attendance in movie theaters, as the home experience now offers a better image than the theater – and you get to control who arrives in your media room!

In 2017 Resolve fulfilled its long-growing promise of integrating a fully featured NLE into an excellent grading and DIT tool. One with a decent Digital Audio Workstation also integrated. Blackmagic Design are definitely fulfilling their vision of providing professional tools for lens-to-viewer workflows, while continuing to reduce the cost of entry.

When I hear that editors in major reality TV production companies don’t balk at Resolve, despite being Media Composer traditionalists, I do worry that Avid may be challenged in its core market. Not that any big ProdCo has switched yet, but I wouldn’t be surprised to see significant uptake of Resolve as an editing tool in 2018.

My only disappointment with Resolve is that, as of 14.1, there is no way to bring timed metadata into Resolve. Not only does that mean we cannot provide Lumberjack support, but there’s no transcript (or AI-derived metadata) import either. It’s frustrating because version 14 included Smart Collections that could function like Keyword Collections.

In another direct attack on Avid’s core markets, both Resolve and Premiere Pro CC added support for bin locking and shared projects. Implemented slightly differently by each app, they both mimic the way Media Composer collaborates. Resolve adds a nice refinement: in-app team messaging.

The technology that will have the greatest effect on the future of production has only just begun to appear. While generally referred to as Artificial Intelligence, what most people mean, and experience, is some variation on Machine Learning. These types of systems can learn (by example or challenge) to expertly do one or two tasks. They have been applied to a wide range of tasks, as I’ve written about previously.

The “low hanging fruit” for AI integration into production apps is Cognitive Services: programming interfaces that help interpret the world. Speech-to-text, facial recognition, image content recognition, emotion detection, et al. are going to appear in more and more software.

In 2017 we saw several apps that use these speech-to-text technologies to get transcripts into Premiere Pro CC, Media Composer and Final Cut Pro X. Naturally that’s an area that Greg and I are very interested in: after all, we were first to bring transcripts into FCP X (via Lumberjack Lumberyard). What our experience taught us is that getting transcripts into an NLE without Script Sync wasn’t a great experience. Useful, but not great.

Which is why we spent the year creating a better solution: Lumberjack Builder. Builder is still a work in progress, but it’s a new NLE. An NLE that edits video by editing text. While Builder is definitely an improvement on purely transcription apps, it won’t be the only application of Cognitive Services.

I expect we at Lumberjack System will have more to show later in the year, once Builder is complete. I also expect this is the year we’ll see visual search integrated into Premiere Pro CC. Imagine being able to search b-roll by having computer vision recognize the content. No keywording or indexing.

Beyond Cognitive Services we will see Machine Learning driving marketing – and even production – decisions. In 2018, the terms Artificial Intelligence, Machine Learning, Deep Learning, Neural Networks will start appearing in the most unexpected places. (While they describe slightly different things, all those terms fall under the Artificial Intelligence umbrella.)

I’m excited about 2018, particularly as we do more with our new intelligent assistants.

Categories
Interesting Technology Lumberjack Machine Learning Metadata

Lumberjack, IBC and FCPX World

If you’re not going to be at IBC then move on, but if you’re going you’ll probably want to be at the FCP X World Event, particularly on Saturday at 12:15 and Sunday at 2:15 when Lumberjack System will be previewing the latest addition to the Lumberjack family.

Categories
Artificial Intelligence Item of Interest Metadata Nature of Work

Wimbledon use IBM Watson AI to “Serve up Highlights”

The report isn’t clear on exactly how Watson’s “AI” is being used but the article says that they are “now curating the biggest sights and sounds from matches to create “Cognitive Highlights,” which will be seen on Wimbledon’s digital channels.”

Apparently Watson’s cognitive services recognize a significant moment and pull it together with cheers and social media comments to make a two-minute video.

The AI platform will literally take key points from the tennis matches (like a player serving an ace at 100 mph), fans’ cheers and social media content to help create up to two-minute videos. The two-week tourney at the All England Lawn Tennis and Croquet Club, complete with a Google Doodle to celebrate Wimbledon’s 140th anniversary, began Monday.


Categories
Machine Learning Metadata

Apple buys (another) Facial Recognition Company – AI Based

In September 2010 Apple purchased Swedish facial recognition company Polar Rose, and today we learn they’ve purchased Israeli startup RealFace: “a cybersecurity and machine learning firm specializing in facial recognition technology”.

What is different between the two purchases is that this latest is based on machine learning.

…the startup had developed a unique facial recognition technology that integrates artificial intelligence and “brings back human perception to digital processes”. RealFace’s software is said to use proprietary IP in the field of “frictionless face recognition” that allows for rapid learning from facial features.

Another step towards our software identifying and labelling people in our media.

Categories
Lumberjack Metadata

Lumberjack Story Mode in Action

With Lumberjack System we don’t focus enough on Story Mode. Of late, Transcript Mode and Magic Keywords have taken the main focus, and of course the primary real-time logging and pre-editing tools are well known by now.

But Story Mode is ultimately more valuable if the project runs more than a one- or two-day shoot. Story Mode lets us send Lumberjack-logged Final Cut Pro X Events or Libraries back to the Lumberyard app to create string-outs from all the footage.

Story Mode proved very valuable for a recent project: extracting the conversations about Final Cut Pro X from nearly 20 episodes of Lunch with Philip and Greg for an upcoming documentary.

Categories
Artificial Intelligence Family History Video Project Metadata Random Thought

2016 in Review

2016 was a year of consolidation and growth for Greg and me: citizenship, a green card, artificial intelligence, and a house and yard dominated the year. 2017 looks like being another interesting and exciting year.

Philip and Greg 2017. Thanks to Kay Stammers and Tristan Parry.

Categories
Apple Pro Apps Intelligent Assistance Software Metadata

Introducing FindrCat

FindrCat is happy because his Keywords will travel with the media files.

I’d like to introduce you to our first new piece of software in about two years: FindrCat. FindrCat is an easy-to-use app that converts your Final Cut Pro X Keywords into Finder Tags, so you can then filter and search for your media via the Finder. In a world of Media Asset Management (MAM) and Digital Asset Management (DAM), this is a ‘no M’am’ asset organization tool.

The biggest advantage is that the FCP X keywords now travel with the media files, and will return to FCP X as keywords when re-imported, on any system.
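Under the hood, macOS Finder tags live in the `com.apple.metadata:_kMDItemUserTags` extended attribute, stored as a binary property list of strings. A minimal sketch of encoding keywords in that format (illustrative only – FindrCat’s actual implementation is not public):

```python
import plistlib

# Finder tags are a binary plist array of strings stored in the
# com.apple.metadata:_kMDItemUserTags extended attribute.
keywords = ["Interview", "B-Roll", "Day 1"]
payload = plistlib.dumps(keywords, fmt=plistlib.FMT_BINARY)

# On macOS you would then attach the payload to a file, e.g. with the
# xattr command-line tool:
#   xattr -wx com.apple.metadata:_kMDItemUserTags <hex bytes> clip.mov
print(payload[:8])  # binary plists begin with the b"bplist00" magic
```

Because the tags live with the file itself, they survive copies and moves – which is exactly why the keywords “travel with the media files.”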