The latest episode of The Terence and Philip Show features Zack Arnold, keeping fit while working in post production, and achieving full potential. It’s also our longest show ever.
One of the smart algorithms that developers can call on is Sentiment Analysis (by that or another name). Sentiment Analysis simply reads the sentiment – positive, neutral or negative – from a body of messages. It can also provide the same information on single ‘documents’, which could be transcripts.
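As a rough illustration, the simplest form of sentiment analysis can be sketched as a lexicon lookup. This is a toy example only; real services like MonkeyLearn use trained models, and these word lists are invented for illustration:

```python
# Toy lexicon-based sentiment scorer (illustrative only; the word lists
# below are invented, not from any real sentiment service).
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "hate", "terrible", "awful", "poor"}

def sentiment(document: str) -> str:
    """Classify a document as positive, neutral, or negative."""
    words = document.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this edit, the pacing is great"))  # -> positive
print(sentiment("The audio mix was terrible"))             # -> negative
```

A commercial service would replace the hand-built word lists with a model trained on labelled examples, but the output is the same three-way classification described above.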
MonkeyLearn – one of the providers of these smart algorithms – has an example of sentiment analysis from the current electoral cycle.
My question is: does this sort of metadata about the content of media provide any benefit for post-production processes, such as sorting or organizing footage? Is it something you'd ever want to search for?
I’ve been (along with many other people) beta testing SpeedScriber, an unreleased app that combines the power of a speech-to-text API with a well thought out interface for correcting the machine transcription. Feed the SpeedScriber output to Lumberyard (part of Lumberjack System), extract Magic Keywords, and in a very short period of time (dependent largely on FCP X’s import speed for the XML) you have an organized, keyworded Event with a fully searchable transcript in the Notes field.
Microsoft claims a milestone with its Cortana speech-to-text transcription service: a word error rate of 5.9% (an accuracy of 94.1%), which is reportedly the same accuracy you’re paying $1 or $2 a minute for right now.
No human transcriber is completely accurate. There are generally some words that are unclear, or technical terms not known to the human transcriber that need correcting in a transcript.
I’ve also been one of the beta testers on SpeedScriber, which is built around an automatic transcription engine, and have been very impressed with the accuracy, particularly with American accents. Accuracy dropped a bit when it had to deal with my still-mostly-Australian accent.
One of the powerful ways Artificial Intelligence ‘learns’ is by using neural networks. A neural network is trained with a large number of examples where the result is known, and adjusts itself until it gives the same results as the human ‘teacher’.
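That training loop can be sketched with the simplest possible ‘network’: a single neuron learning the logical AND function from a teacher’s labelled examples. This is a toy illustration of the adjust-until-it-matches idea, not any production framework:

```python
# A toy illustration of supervised training: a single neuron learns the
# logical AND function from labelled examples (the "teacher's" answers).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0
lr = 0.1  # learning rate: how much to adjust on each mistake

for epoch in range(50):
    for (x1, x2), target in examples:
        output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - output   # how far from the teacher's answer
        w1 += lr * error * x1     # nudge the weights toward the answer
        w2 += lr * error * x2
        bias += lr * error

# After training, the neuron reproduces the teacher's results.
for (x1, x2), target in examples:
    prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
    print((x1, x2), "->", prediction)
```

Real networks have millions of weights and learn from far messier data, but the principle is the same: the network only ever knows what its training examples taught it, which is exactly why biased examples produce a biased model.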
However, there’s a trap. If that source material contains biases – such as modeling Police ‘stop and frisk’ – then whatever biases are in the learning material will be carried into the subsequent AI modeling. This is the subject of an article in Nature, There is a blind spot in AI research, and of Cathy O’Neil’s book Weapons of Math Destruction, which raises not only that issue but also the problem of “proxies”.
Proxies, in this context, are data sources used in AI programs that are not the actual data, but something that approximates it: like using zip code as a proxy for income or ethnicity.
Based on O’Neil’s book, I’d say the authors of the Nature article are too late. There are already institutionalized biases in very commonly used algorithms in finance, housing, policing and criminal policy.
In just two years, the Artificial Intelligence technologies available through IBM Watson have exploded. Artificial Intelligence is a rapidly growing field.
The New York Times has an interesting article that provides a background to the business behind Watson.
Also new for today is an article at the Mac Observer about Apple’s AI intentions.
Rather than take up more screen real estate with a new button, we repurposed an existing function in Lumberyard. Previously, any logged Keyword Range less than 5 seconds long was ignored. We figured anything that short was a mistake. Now it creates a Marker.
The Marker takes the Keyword as its name, and is applied at the range’s starting point as a single-frame Marker.
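The rule is essentially a threshold test on the range’s duration. Here is a sketch of that logic using hypothetical data structures (this is not Lumberyard’s actual code):

```python
# Sketch of the short-range rule: logged Keyword Ranges shorter than the
# threshold become single-frame Markers instead. Hypothetical structures,
# not Lumberyard's actual implementation.
MIN_RANGE_SECONDS = 5.0

def ranges_to_markers(keyword_ranges):
    """Split logged ranges: short ones become single-frame markers."""
    keywords, markers = [], []
    for name, start, end in keyword_ranges:
        if end - start < MIN_RANGE_SECONDS:
            # Marker named after the keyword, placed at the range's start.
            markers.append((name, start))
        else:
            keywords.append((name, start, end))
    return keywords, markers

logged = [("Interview", 10.0, 95.0), ("Great quote", 42.0, 44.5)]
keywords, markers = ranges_to_markers(logged)
# keywords -> [("Interview", 10.0, 95.0)]
# markers  -> [("Great quote", 42.0)]
```

The design choice here is to reinterpret input that was previously discarded as a mistake, rather than adding another button to the interface.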
Tim Cook – Apple’s CEO – has said in a new interview with BuzzFeed that Augmented Reality (AR) will be more important than Virtual Reality (VR). Virtual Reality creates a new environment that is immersive for the viewer. Augmented Reality overlays computer-generated data on the real world (as captured by a camera).
While VR is undoubtedly going to be a significant technology in the future, I think it will mostly enhance games, exhibits and remote presence rather than everyday activities. AR can overlay translated text on foreign signage, or create geotagged games like the recent Pokémon Go.
I can see how AR will become part of everyday life. I’m not sure I see the same for VR.
As you probably all know, I have two day jobs heading Intelligent Assistance Software and Lumberjack System. We’re very proud of the work we’ve done through both companies. We make a decent income from them for sure, but what makes us particularly happy is when our tools get people’s work done faster: they get to go home to their families earlier, and production has less drudgery.
So it pleases us greatly when that gets recognized, as it did this trip.
In this episode we discuss the future of Avid and how AI will affect post production. Only one subject has a positive looking future!