The New iPhone’s Face Recognition Capabilities Could Redefine Privacy http://t.co/WayE1Abv
Following on the heels of yesterday’s post about facial recognition in the cloud, here’s information on how Apple is applying the technology it gained when it acquired Polar Rose last September, at least within iOS frameworks.
When coders dug through Apple’s beta versions of iOS 5, they found what were deemed “highly sophisticated” APIs that let an iPhone automatically track eye and mouth positions (so the angle to the user, and possibly where their attention is directed, could be calculated) and pass key data on to a face recognition algorithm accessible to all apps, not just Apple’s own.
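The eye- and mouth-position piece of this surfaced publicly as Core Image’s CIDetector face-detection API (the who-is-this recognition step never did, as far as the public frameworks go). Here’s a minimal sketch, written in current Swift syntax rather than 2011-era Objective-C, with a placeholder image path:

```swift
import CoreImage
import Foundation

// Minimal sketch of the CIDetector face-detection API that shipped with iOS 5.
// "frame.jpg" is a placeholder; error handling is trimmed for brevity.
guard let image = CIImage(contentsOf: URL(fileURLWithPath: "frame.jpg")) else {
    fatalError("Could not load image")
}

let detector = CIDetector(ofType: CIDetectorTypeFace,
                          context: nil,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])!

for case let face as CIFaceFeature in detector.features(in: image) {
    print("Face bounds: \(face.bounds)")
    if face.hasLeftEyePosition  { print("Left eye:  \(face.leftEyePosition)") }
    if face.hasRightEyePosition { print("Right eye: \(face.rightEyePosition)") }
    if face.hasMouthPosition    { print("Mouth:     \(face.mouthPosition)") }
}
```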
Combine this with the Nuance-licensed voice recognition technology in Siri – also new with iOS 5 and iPhone 4S – and we have the foundation of a very powerful metadata generation system that would automate naming people in clips and form the basis of speech transcription and then keyword extraction.
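To make that pipeline concrete, here is a hedged sketch of transcription followed by crude keyword extraction, using Apple’s much later Speech and NaturalLanguage frameworks (neither existed in 2011; this illustrates the idea rather than anything Apple shipped for video):

```swift
import Speech
import NaturalLanguage

// Hypothetical sketch: transcribe an audio file, then pull out nouns as "keywords".
// Requires speech-recognition permission; the noun-only heuristic is a placeholder.
func keywords(fromAudioAt url: URL, completion: @escaping ([String]) -> Void) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized, let recognizer = SFSpeechRecognizer() else {
            completion([])
            return
        }
        let request = SFSpeechURLRecognitionRequest(url: url)
        recognizer.recognitionTask(with: request) { result, _ in
            guard let result = result, result.isFinal else { return }
            let transcript = result.bestTranscription.formattedString

            // Crude keyword extraction: keep the nouns from the transcript.
            let tagger = NLTagger(tagSchemes: [.lexicalClass])
            tagger.string = transcript
            var nouns: [String] = []
            tagger.enumerateTags(in: transcript.startIndex..<transcript.endIndex,
                                 unit: .word,
                                 scheme: .lexicalClass,
                                 options: [.omitPunctuation, .omitWhitespace]) { tag, range in
                if tag == .noun { nouns.append(String(transcript[range])) }
                return true
            }
            completion(nouns)
        }
    }
}
```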
In my dreams, these are technologies that will come to Final Cut Pro X 10.2 or 10.3 in future years.
2 replies on “The New iPhone’s Face Recognition Capabilities.”
I’m not sure it will be that long, Philip. This would essentially be an adjunct to the “face detection” that already exists.
If I were to guess at Apple’s short-term roadmap for FCPX… the next year will see Apple continuing with its 10.0.x releases, with 10.0.2 likely coming before March, bringing the multicam and broadcast monitoring support they’ve mentioned, as well as yet-unannounced features. My sense is that the majority of next year’s updates will go towards achieving feature parity with FCP 7. This will probably wrap up with the release of FCP X 10.1, at which point they can venture into new territory. I’m sure there will be some “new” stuff thrown in, but stability and making FCPX work with a broader range of workflows is likely their priority at this point.
But I could easily see facial recognition as a 10.1 feature. Apple already has so much invested in it with iPhoto, and now iCloud. The other natural carryover from iPhone is Siri’s voice recognition, which could be added to FCPX as auto-transcription of interviews.
It’s hard to believe, but with the technology already in place, I could see simply being able to ask FCPX, “Is there a clip of Fred talking about short-term loans?” and having a list of clips matching those parameters auto-sort into the Event Library.
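Purely to make that query concrete, here is an entirely hypothetical sketch of filtering clips by face- and transcript-derived metadata (none of these types exist in any FCPX or Apple API):

```swift
// Entirely hypothetical: a toy clip store with face- and transcript-derived metadata.
struct Clip {
    let name: String
    let peopleDetected: Set<String>      // from face recognition
    let transcriptKeywords: Set<String>  // from speech-to-text plus keyword extraction
}

let eventLibrary = [
    Clip(name: "Interview A",
         peopleDetected: ["Fred"],
         transcriptKeywords: ["short-term loans", "interest rates"]),
    Clip(name: "B-roll",
         peopleDetected: [],
         transcriptKeywords: [])
]

// "Is there a clip of Fred talking about short-term loans?"
func clips(_ library: [Clip], featuring person: String, discussing topic: String) -> [Clip] {
    library.filter { $0.peopleDetected.contains(person) &&
                     $0.transcriptKeywords.contains(topic) }
}

let matches = clips(eventLibrary, featuring: "Fred", discussing: "short-term loans")
print(matches.map { $0.name })   // ["Interview A"]
```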
Overall I agree. This first release will bring parity with FCP 7 (for the features they plan to implement) and then it gets exciting.
Siri’s voice recognition is, as I’m sure you know, Nuance’s technology – one of the two best speech-to-text technologies in existence (the other is Google’s). Hopefully Apple’s license with Nuance goes beyond Siri on the iPhone.
Your last paragraph beautifully sums up where all this is going.