
Google adds to smart APIs

Google adds another smart API to join the many that already exist.

Google today launched a new API to help parse natural language. An API, or Application Programming Interface, is a service that developers can send data to and get a structured response back from. Natural language parsing is used to understand language that is available in computer-readable form (text). Google’s API joins an increasingly long list of very smart APIs that will understand language, recognize images and much more.
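
To make that concrete, here’s the general shape of an API exchange in Python. The endpoint and payload below are made up purely for illustration; the send-data, get-response pattern is the point:

    import requests

    # Illustrative only: the endpoint and payload here are placeholders,
    # not a real service. The pattern is what matters: send data in, get
    # a structured (usually JSON) response back.
    response = requests.post(
        "https://api.example.com/v1/analyze",  # hypothetical endpoint
        json={"text": "Some natural language to parse."},
    )
    print(response.json())  # the service's structured reply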

A lot has changed since I last wrote about Advances in Content Recognition late last year.

These APIs are developer tools, meant to be integrated into apps to make them smarter. The current barrier to automating basic editing tasks is the generation of metadata. These APIs generate the metadata that tools like Intelligent Assistant’s Serendipity engine (used in First Cuts, now off the market) can process to build stories. The better the metadata, the more assistance can be provided to editing workflows.

Google’s new API gives developers:

access to Google-powered sentiment analysis, entity recognition, and syntax analysis.

Entity recognition is the ability to:

automatically identify and label the people, organizations, locations, and events mentioned in text
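
To give a feel for it, here’s a rough sketch of an entity recognition call over REST using Python’s requests module. The key is a placeholder, and I’m showing the current v1 endpoint (the API actually launched in beta as v1beta1):

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder: a Google Cloud key with the API enabled
    URL = "https://language.googleapis.com/v1/documents:analyzeEntities?key=" + API_KEY

    body = {
        "document": {
            "type": "PLAIN_TEXT",
            "content": "Larry Page founded Google in Menlo Park, California.",
        },
        "encodingType": "UTF8",
    }

    entities = requests.post(URL, json=body).json().get("entities", [])
    for entity in entities:
        # Each entity comes back with a name, a type (PERSON, ORGANIZATION,
        # LOCATION, EVENT...) and a salience score for its importance.
        print(entity["name"], entity["type"], entity.get("salience"))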

Syntax analysis is the first step to understanding the content and context of text, which can then be fed into whatever app is in development.
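
Here’s a similar sketch for the analyzeSyntax method, under the same assumptions (placeholder key, current v1 endpoint); the response tokenizes the text and tags each token with its part of speech and dependency information:

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder, as above
    URL = "https://language.googleapis.com/v1/documents:analyzeSyntax?key=" + API_KEY

    body = {
        "document": {"type": "PLAIN_TEXT", "content": "The editor assembled a rough cut."},
        "encodingType": "UTF8",
    }

    tokens = requests.post(URL, json=body).json().get("tokens", [])
    for token in tokens:
        # Each token is tagged with its part of speech and a dependency
        # edge linking it to its syntactic head.
        print(token["text"]["content"], token["partOfSpeech"]["tag"])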

Sentiment analysis is what you’d expect:

the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer’s attitude towards a particular topic, product, etc., is positive, negative, or neutral

That’s not quite emotion detection, but then other people have that in hand.
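
Again, a sketch of the analyzeSentiment call under the same assumptions. In the current v1 response, documentSentiment carries a score from -1.0 (negative) to 1.0 (positive) plus a magnitude for overall emotional strength:

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder, as above
    URL = "https://language.googleapis.com/v1/documents:analyzeSentiment?key=" + API_KEY

    body = {"document": {"type": "PLAIN_TEXT", "content": "I love how fast this cut came together."}}

    sentiment = requests.post(URL, json=body).json()["documentSentiment"]
    # score runs from -1.0 (negative) to 1.0 (positive); magnitude
    # reflects the overall strength of emotion in the text.
    print(sentiment["score"], sentiment["magnitude"])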

Google’s new API joins others, like the Cloud Speech API, which is now also available in public beta but unfortunately limited to 2-minute chunks; the Vision API (understanding the content of an image); and the Translate API (translating between languages). Google uses these APIs in its own products.
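
For the Speech API, a synchronous recognition request might look like this sketch. The key is again a placeholder, and I’m showing the current v1 speech:recognize endpoint (the public beta shipped as v1beta1 with a syncrecognize method); the chunk limit mentioned above applies to audio submitted this way:

    import base64
    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder, as above
    URL = "https://speech.googleapis.com/v1/speech:recognize?key=" + API_KEY

    # A short mono LINEAR16 WAV file, kept under the length limit.
    with open("clip.wav", "rb") as f:
        audio_content = base64.b64encode(f.read()).decode("utf-8")

    body = {
        "config": {"encoding": "LINEAR16", "sampleRateHertz": 16000, "languageCode": "en-US"},
        "audio": {"content": audio_content},
    }

    for result in requests.post(URL, json=body).json().get("results", []):
        print(result["alternatives"][0]["transcript"])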

Google is also not the only company offering similar APIs. Lumberjack System uses a Keyword Extraction API from Monkey Learn, which has other APIs available as well.
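
I don’t know exactly how Lumberjack wires it up, but a keyword extraction request to MonkeyLearn’s v3 REST API looks roughly like this sketch; the token and model ID are placeholders you’d get from a MonkeyLearn account:

    import requests

    API_TOKEN = "YOUR_MONKEYLEARN_TOKEN"  # placeholder credentials
    MODEL_ID = "ex_XXXXXXXX"  # placeholder: a keyword-extractor model ID from your account
    URL = "https://api.monkeylearn.com/v3/extractors/%s/extract/" % MODEL_ID

    response = requests.post(
        URL,
        headers={"Authorization": "Token " + API_TOKEN},
        json={"data": ["Transcribed interview text to pull keywords from."]},
    )
    print(response.json())  # each input comes back with its extracted keywords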

IBM Watson has Language, Vision, Speech and Data APIs, including speech-to-text and keyword extraction.

Similarly, Microsoft Cognitive Services has a wide range of APIs available, including Computer Vision, Emotion, Face and Video APIs.

This isn’t the full list of Microsoft Cognitive Services APIs.

There are many others. What is obvious is that our apps and workflows will increasingly be able to understand speech, interpret images, recognize faces and emotions, and convert it all into content metadata. And the APIs are amazingly affordable.