Facial recognition in the cloud http://t.co/kznweJhC
At one level this is kind of scary: these are the folks who showed that, far too often, a Social Security number can be predicted starting from nothing more than a casual photograph taken in the street. At the level of production automation, though, it points to where we're heading for automatically generating metadata for postproduction.
In their most recent round of facial recognition studies, researchers at Carnegie Mellon were able to not only match unidentified profile photos from a dating website (where the vast majority of users operate pseudonymously) with positively identified Facebook photos, but also match pedestrians on a North American college campus with their online identities.
and
In our third experiment, as a proof-of-concept, we predicted the interests and Social Security numbers of some of the participants in the second experiment. We did so by combining face recognition with the algorithms we developed in 2009 to predict SSNs from public data. SSNs were nothing more than one example of what is possible to predict about a person: conceptually, the goal of Experiment 3 was to show that it is possible to start from an anonymous face in the street, and end up with very sensitive information about that person, in a process of data “accretion.” In the context of our experiment, it is this blending of online and offline data – made possible by the convergence of face recognition, social networks, data mining, and cloud computing – that we refer to as augmented reality.
The benefit for documentary/reality postproduction would be to have every person in a shot identified automatically and tagged with their name (and perhaps other relevant details).
And then apply Affectiva's emotion-detecting algorithms…
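As a rough illustration of that identify-and-tag pass, here is a minimal sketch using the open-source face_recognition Python library as a stand-in for whatever cloud recognition service a real pipeline would call; the gallery photos, names, and frame paths are purely illustrative assumptions, not anything from the CMU work.

```python
# Sketch: auto-tagging people in extracted frames against a gallery of
# known faces, using the open-source face_recognition library as a
# stand-in for a cloud recognition service.
import face_recognition

# Hypothetical gallery: one reference photo per known person.
GALLERY = {
    "Alice Example": "gallery/alice.jpg",
    "Bob Example": "gallery/bob.jpg",
}

# Encode each reference face once, up front.
known_names = []
known_encodings = []
for name, path in GALLERY.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:
        known_names.append(name)
        known_encodings.append(encodings[0])

def tag_frame(frame_path):
    """Return the names of gallery faces found in one extracted frame."""
    frame = face_recognition.load_image_file(frame_path)
    tags = []
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        tags.extend(name for name, hit in zip(known_names, matches) if hit)
    return tags

# e.g. ['Alice Example'] if she appears in that frame
print(tag_frame("frames/shot_042.jpg"))
```

In a real pipeline the per-frame tags would be written back into the footage's metadata track rather than printed, and the emotion-analysis step would run as a separate pass over the same frames.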