Categories
Interesting Technology Random Thought

E3 and what it means for the future of production

This week was my first visit ever to an Electronic Entertainment Expo, usually written as E3. Entertainment in this context means gaming – computer games. My desire to visit E3 was driven by a deep-seated feeling that I was ignoring an important cultural trend because it doesn’t intersect with my life – I don’t play “video games” and I don’t have children, nieces or nephews nearby. But it’s hard to ignore an industry that reportedly eclipsed motion picture distribution in gross revenue last year. It hasn’t – Grumpy Gamer puts things into perspective.

That doesn’t mean that the gaming industry statistics are anything but impressive: Halo 2’s first-day sales of US$125 million eclipse the record first-weekend box office (Spider-Man, 2002) of $114 million – a statistic the gaming industry likes to promote, but one that isn’t as impressive when you consider whole-of-life revenue. Grumpy Gamer again. Gaming is a huge business and E3 had many booths that represent a multi-million dollar investment in the show by the companies exhibiting. E3, like NAB, is an industry-only show (18+ only and industry affiliation required) and this year attracted 70,000 attendees (vs NAB 2005 with about 95–97,000) in a much smaller show area. This is big business.

But what has this got to do with “the present and future of production and post production”? There are three “game” developments that will ultimately impact video production, post production and distribution. This is quite aside from the fact that, right now, video production for games is a big part of the expense of each game. Most games have video/film production budgets way above a typical independent film’s total budget. This presents an opportunity for savvy producers to team up with game producers for “serious games” – games aimed at education or corporate training. Video production alone is rapidly becoming a commodity service, so working with a games company on their acquisition is a value add and an opportunity in the short term.

Microsoft’s Steve Ballmer

Take today’s passive video content, add a little interactivity to it. Take today’s interactive content, games, and add a little bit more video sequencing to it. It gets harder and harder to tell what’s what…

In the longer term three trends will impact “video” production: graphics quality and rendering, “convergence” (yes, I hate the term too, but don’t have a good alternative), and interactive storytelling.

Graphic Powerhouse

We all owe the gaming community a debt of gratitude for constantly pushing the performance of real-time graphics and thus the power of graphics cards and GPUs. The post-production industry benefits from real-time in applications like Boris Blue, Apple’s Motion, Core Video in OS X 10.4 Tiger and other applications. Without the mass market for these cards they’d be much more expensive and would not have advanced as quickly as they have.

The quality of real-time graphics coming from companies like Activision – with their current release F.E.A.R. in a standard definition game, and their upcoming HD releases for next-generation gaming consoles – is outstanding. In one demonstration (actual game play) of a 2006 release, the high definition graphics quality, including human face close-ups, was remarkable. Extrapolate just a few more years and you have to wonder how much shooting will be required. If we can recreate, or create, anything in computer simulation in close to real time (or in real time) at high definition, what’s the role of location shooting and sets? Sky Captain and the World of Tomorrow and Sin City have shown that it’s possible to create movies without ‘real’ sets, although both movies seemed to need extreme color treatments to disguise the lack of reality (or was that purely a creative choice?).

Actors could be safe for a little while because of the ‘Uncanny Valley’ effect, although the soldiers in Activision’s preview were close to lifelike in closeup, as long as you didn’t look at the eyes – the dead giveaway for the moment. Longer term (five-plus years) realistic humans are almost certain to come down the line. At that point, where is the difference between a fixed path in a game and a video production?

Convergence

NAB has, until this year, long been “The Convergence Marketplace” without a lot of convergence happening. However, the world of gaming converged with the world of movies long ago. It is standard practice for a blockbuster movie release to have a themed game available – Star Wars III: Revenge of the Sith and Madagascar have simultaneous movie and game releases. Activision’s game development team were on the Madagascar set and developed 20 hours of game play following the theme of a 105-minute movie!

Similarly, Toronto-based Spaceworks Entertainment, Inc. announced at E3 a Canada/UK co-production, Ice Planet, in which the TV series and game are to be developed together, again with the game developers on the set of the TV series shoot. Although the game and TV series can be enjoyed independently, the plan is to enhance the game via the TV show and the TV show via the game. Game-player-relevant information can be found throughout each of the 22 episodes of the series’ first season – the first of five seasons planned in the story arc.

Whether or not Spaceworks Entertainment are the folks to bring this off, eventually there will be interplay between television and related game play. Television will need something to bring gamers back to the networks (cable or broadcast) if there’s a future to be had there. (Microsoft, on the other hand, wants to bring the networks to the gamers via the Xbox 360.)

Interactive Storytelling

The logical outcome of all this is an advanced form of interactive storytelling that could supplant “television” as we know it. Or not. Traditionally television has been a lean-back, turn my mind off medium and I imagine there will continue to be a demand for this type of non-involved media consumption in the future that won’t be supplanted by a more active lean-forward medium. However, the lean-forward medium will be there to supplement and, for many people, replace the non-involved medium.

Steven Johnson’s Everything Bad Is Good for You makes some valid points suggesting that the act of gaming might be more important than imagined (and less bad for you). From one review:

The thesis of Everything Bad is Good for You is this: people who deride popular culture do so because so much of popcult’s subject matter is banal or offensive. But the beneficial elements of videogames and TV arise not from their subject matter, but from their format, which require that players and viewers winkle out complex storylines and puzzles, getting a “cognitive workout” that teaches the same kind of skills that math problems and chess games impart. As Johnson points out, no one evaluates the benefit of chess based on its storyline or monotonically militaristic subject matter.

In the same vein, and a little aside, I was amused by this comment posted in Kevin Briody’s Seattle Duck blog that hypothesises how we would have contemplated books had they been invented after the video game.

“Reading books chronically understimulates the senses…
Books are also tragically isolating…
But perhaps the most dangerous property of these books is the fact that they follow a fixed linear path. You can’t control their narratives in any fashion: you simply sit back and have the story dictated to you…”

How far will we go with non-linear paths? Most games today have fairly limited, linear paths to a single destination, with a lot of flexibility in the journey. I’m no fan of first-person shooter games but can imagine becoming more involved with another type of story. Don your 3D immersion headset, or relax in your lounge with the 60″ wall-mounted flat panel, and join me in this week’s episode of (say) Star Trek TNG. Choose your character and participate in the story appropriately. Clues as to your behavior would be an optional “cheat” track (not dissimilar to the podcasts accompanying this year’s Battlestar Galactica episodes). The rest of the characters would guide the story and respond to your participation, to whatever outcome. Whatever character role you took would control the perspective of the show you saw (when involved in a scene).

Is this a game? Is it “television”? Is it something else? Storytellers have adapted their stories for specific audiences from the first day there were stories. Roaming storytellers would adapt details for kings or commoners, for this geographic region over that (often for their own self-preservation), so adaptive (interactive) storytelling isn’t new, just new to modern electronic media. Do a search for “interactive storytelling” at google.com and you’ll find many links. I just found that my hypothesis above has a name: Mixed Reality Interactive Storytelling! The marketing people will have to massage that into something that captures the popular imagination.

None of this will have much impact on typical production this week, next year, or the five years following that, but some time in the future, at least some elements will have crossed over. In a very real sense, I went to E3 last week to get a sense of “the future of video production”.

Categories
Apple Interesting Technology Random Thought

iTunes becomes a movie management tool

iTunes has been doing movies for some time now – trailers from the Apple Movie Trailers website have been passed through iTunes for full screen playback, leading many to believe that Apple were grooming iTunes for eventual movie distribution.

Well, iTunes 4.8 will do nothing to dispel the rumor mongers – in version 4.8 iTunes gains more movie management and playback features, including movie Playlists and full screen playback. Simply drag a movie or folders of movies (any .mov or .mp4 whatever the size) into the iTunes interface and they become Playlists.

Playback can be in the main interface (in the area occupied by album artwork otherwise); in a separate movie window (non-floating so it will go behind the iTunes main interface) or to full screen. Visual can be of individual movies or of playlists – audio always plays the playlist regardless of the setting controlling the visuals.

If one had to speculate (and one does, really in the face of Apple’s enticement) it certainly seems that Apple are evolving iTunes toward some movie management features. The primary driver of this development in version 4.8 is the inclusion of “behind the scenes/making of” videos with some albums. For example, the Dave Matthews Band “Stand Up” album in the iTunes Music Store features “an (sic) fascinating behind-the-scenes look at band’s (sic) creative process with the bonus video download.” The additional movie content gives Apple the excuse to charge an extra $2 for the album ($11.99 while most albums are $9.99).

There is a lot of “chatter in the channel” about delivery of movies to computers or a lounge room viewing device (derived from a computer but simplified). Robert Cringely, among others, seems to think the Mac mini is well positioned for the role of lounge room device. Perhaps; others, like Dave TV, think a dedicated box or network will be the way to go. Ultimately it will be about two things: content and convenience.

Recreational television and movie watching is a “lean back” experience – performed relaxed in a comfortable chair at the end of a busy day with little active involvement of the mind. Even home theater consumption of movies is not quite the same experience as a cinema (although close enough to it for many people). It will take a major shift in thinking for the “TV” to become a “Media Center” outside of the college dorm room. We’re still many years from digital television broadcasting being the norm, let alone HD delivery to in-home screens big enough to actually display it at useful viewing distances. (If you want the HD experience right now on a computer screen, Apple have some gorgeous examples in their H.264 HD Gallery. QuickTime 7, a big screen and a beefy system are prerequisites but the quality is stunning.)

Apple do not have to move fast, nor be first, with the “Home Media Center” to ultimately be successful. Look at what happened with the iPod and iTunes in the first place. The iPod was neither the first “MP3 player” nor, some would argue, “the best”, but it had a superior overall experience, aided by a huge ‘coolness’ factor. So, even if Apple are planning an “iTunes Music Store for Movies” some time down the path, it’s not something I’d expect to be announced at MacWorld January 2006 or even 2007!

In the meantime, the new movie management features in iTunes are great. This is not a professional video asset management tool – we’ll have to look elsewhere for that (something I hope the Pro Apps group is working on) – but it is a tool for organizing and playing videos. I have collected show reels and other design pieces I look to for creative inspiration, but until now there was no way of organizing them easily. Now I can import them all to iTunes and create playlists for “titles”, “3D”, “design”, “action” and so on for when I need inspiration. Movies can be in multiple playlists, just like music.

I can wait to see what Apple have planned for the future; in the meantime, I’m happy with a new tool in my toolbox.

Categories
Random Thought Video Technology

JVC confirms ProHD strategy

As reported previously in the Pro Apps Hub, JVC’s ProHD strategy is a marketing catch-all for all their HD offerings based on MPEG-2 transport streams. Included in the strategy are the HDV GY-HD100U and the HDV+ GY-HD7000U.

Available in “early summer”, the GY-HD100U is a professional-level HDV camcorder with solid camera-operator-friendly features that justify the ProHD moniker. Three 1/3″ CCDs sit behind a removable lens; the standard lens is a Fujinon 16x developed with JVC. The GY-HD100 records at 30p and 24p at 1280 x 720 resolution in HD, and at 29.97 in DV. 24p is accommodated within the 60p framework in a way conceptually similar to the 2:3:3:2 pulldown Panasonic uses in the AG-DVX100: MPEG-2 “flags” mark certain whole frames (not fields, as the DVX does) to be repeated – 3 times, then 2 times – which embeds 24p in a 60p video stream while recording only the 24 frames of data to tape. Quite clever really. This allows JVC to offer 24p even though it was not part of the original HDV specification, without deviating from the specification. [Thanks to Graeme Nattress for the correction.]
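To make the flagging scheme concrete, here’s a minimal sketch (illustrative only – not JVC’s actual bitstream handling) of how alternating 3- and 2-frame repeat flags expand 24 recorded frames into a 60-frame display stream:

```python
# Illustrative sketch: embed 24p in a 60p stream by flagging whole frames
# for repetition, alternating 3 and 2 repeats so every pair of source
# frames fills five display slots (24 x 2.5 = 60).

def flag_repeats(source_frames):
    """Return (frame, repeat_count) pairs: the recorded frame plus a flag
    telling the player how many times to display it."""
    flags = []
    for i, frame in enumerate(source_frames):
        repeats = 3 if i % 2 == 0 else 2   # 3,2,3,2,... averages 2.5
        flags.append((frame, repeats))
    return flags

def display_stream(flags):
    """Expand the flagged frames into the 60p display sequence."""
    stream = []
    for frame, repeats in flags:
        stream.extend([frame] * repeats)
    return stream

one_second = list(range(24))                 # 24 recorded frames
expanded = display_stream(flag_repeats(one_second))
assert len(expanded) == 60                   # 60 displayed frames
```

Only the 24 distinct frames hit the tape; the repeats exist solely as flags, which is why the scheme stays inside the 60p HDV envelope.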

See Hub news on March 10 for more details on both cameras. At US$6295 JVC have come in well “under $10,000”.

Categories
Random Thought Video Technology

Panasonic announce P2 Camcorder

Panasonic generated a lot of buzz at NAB with the announcement of the AG-HVX200 multi-format camcorder, expected to sell for US$5,949 (unconfirmed) without media. The AG-HVX200 camcorder is a small-form-factor unit with a built-in DV deck for DV25 recording and P2 solid-state media support for DVCPRO 50 and DVCPRO 100 recording. Reportedly the FireWire output is also active for recording DVCPRO 50 or HD to a tethered FireWire deck. Panasonic are talking with Focus to develop support for the FireStore HD.

The camera itself is an impressive multi-format, multiple frame rate device. In DVCPRO HD it supports 1080i at 30 frames (60 fields), 1080p at 24 frames/sec, and 720p at 60 or 24 progressive frames. In 720p mode it supports variable frame rates, like the VariCam, to the P2 media. It has 3 x 1/3″ native 16:9 CCDs.

Panasonic plan a bundle with two 8 GB P2 memory cards for US$9,999 – an indication of just how far we have to go before solid-state media becomes a viable proposition outside news gathering and other niche markets. While P2 media can be used directly as a source in many NLEs – Final Cut Pro adds native support for this media – it’s not viable to retain the P2 memory cards during editing. Most commonly the cards’ contents are immediately dumped to hard drive. Panasonic recently announced a unit specifically for the purpose: the AJ-PCS060 portable hard drive with a P2 card slot. [Hub news February 14th]

Having the media on hard drive makes it instantly available for editing, but does not address the need for archive. Either the hard drives need to be permanently retained for archive or the media needs to be copied to another format for archive. This is more handling than most people are used to.

The AG-HVX200 won’t ship until some time in the fourth quarter of the year, leaving JVC and Sony a long lead time for the competing HDV to become established. With 37,000 FX1 and Z1U units sold in just the first six months, according to the Apple presentation, that’s a huge lead for Panasonic to catch up with, particularly since JVC will be shipping their GY-HD100U nearly six months ahead.

Categories
Apple Pro Apps

Apple’s NAB announcements [updated]

Although no new applications were announced, Apple upgraded all the Pro Video Apps with new versions of Final Cut Pro, Soundtrack Pro, Compressor 2, Motion 2, LiveType 2, DVD Studio Pro 4 and Shake 4.

In their Sunday morning presentation at Paris, Las Vegas, Apple announced upgrades across their Pro Video line, consolidating the tools in the $1,299 Final Cut Studio. With Final Cut Pro alone priced at $999, the Studio becomes the purchase option of choice if you want Final Cut Pro and any of the other applications. In-depth articles will follow, but here’s the 20,000 ft view.

The suite features improved integration, with automatic asset updating from application to application, but no dramatic changes to workgroup editing.

Final Cut Pro 5
Key new features are multicam, multichannel audio input and native support for HDV and P2 media. Multicam allows up to 128 angles to be switched in a Multiclip; 4, 9 or 16 angles can be displayed and switched at a time. Final Cut Pro 5 supports tapeless media from Panasonic’s P2 and has native IMX support (and keep an eye out for Panasonic’s new camera – P2 media and DV tape for the best of both worlds). MXF media from XDCAM is supported with a 3rd party plug-in from Flip4Mac (Telestream). Final Cut Pro 5 works seamlessly with almost any type of media, and HDV media is supported natively. It’s not clear whether or not media can be mixed in a Sequence without rendering. Since it’s not featured, probably not.
RTExtreme has been extended with a new Dynamic RT architecture that adjusts the amount of real-time according to the processor and graphics card speeds – as speeds increase, more real-time will become available. During playback Dynamic RT looks ahead in the timeline and dynamically adapts rather than suddenly stopping playback. Real-time speed change with frame blending is new to version 5.
Final Cut Pro now allows simultaneous import of up to 24 channels of audio. Final Cut Pro audio can now be controlled on any control surface that supports the Mackie Control Protocol, meaning that Final Cut Pro mixing can be done on a hardware mixer!
Motion 2
Motion had the most dramatic update, with new features that bring it up to being a truly mature application for motion graphics design. New interaction techniques arrive – including controlling parameters with a MIDI controller (did anyone say VJ?) – along with Replicator for building patterns of repeating objects like flocks of birds. Replicator gives more control than a particle generator and comes with 150 patterns with controllable parameters.
Rendering depth has been beefed up to 16 and 32 bits per channel float for those who need it. 32 bit processing is done on the CPU. Motion on Tiger supports more than 4 GB of RAM.
Motion also gains the third dimension with a new 3D distortion filter that allows pseudo 3D with beautiful transparency and effects in real time. A new GPU accelerated architecture lets 3rd parties access the GPU acceleration so Boris, Zaxwerks and DV Garage plug-ins now display in real time.
Soundtrack Pro
Although it shares part of a name with Soundtrack, Soundtrack Pro is positioned far more as a "regular editor" replacement for Pro Tools than simply a tool for scoring music for video. Soundtrack Pro retains the loop editing functionality of Soundtrack, but adds waveform editing and sound design (including a library of sound effects), and includes more than 50 effects from Logic.
Soundtrack Pro comes complete with "search and destroy" tools for the most common audio flaws – clicks & pops, AC hum, DC offset, phase problems and clipped signal – plus tools for ambient noise reduction and for automatically filling gaps with natural sound.
DVD Studio Pro
With an upgrade to version 4, DVD Studio Pro is HD ready, with built-in support for H.264 encoding (adopted by both Blu-ray and HD DVD camps) and direct encoding from HDV without intermediate format conversion. Distributed processing using Qmaster for encoding, built-in AC3 encoding (no need to use A.Pack) and enhanced transition support headline DVD Studio Pro’s new features.
On the technical side, DVD Studio Pro 4 supports VTS editing for greater playback performance by allocating menus throughout VTS folders to overcome 1GB menu limitations. GPRM partitioning enhances the scripting options for highly interactive DVDs, for example jumps to motion menu loops to avoid repeating introduction transitions.
LiveType remains part of the Final Cut Pro package and is at version 2. Visually the interface does not appear to have changed. Most of the changes are under the hood with changes to the LiveFont format to support Unicode and vector fonts.

More soon.

Categories
Business & Marketing Interesting Technology

Can a computer replace an editor?

Before we determine whether or not a computer is likely to replace an editor, we need to discuss just what the role of an editor is – the human being who drives the software (or hardware) that edits pictures and sound. What do they bring to the production process? Having determined that, perhaps we can consider what a piece of computer software might be capable of, now or in the future.

First off I think we need to rid ourselves of any concept that there is just one “editor” role even though there is only one term to cover a vast range of roles in post production. Editing an event video does not use the same skills and techniques as editing a major motion picture; documentary editing is different from episodic television; despite the expectation of similarity, documentary editing and reality television require very different approaches. There is a huge difference in the skills of an off-line editor (story) and an on-line editor (technical accuracy) even if the two roles are filled by the same person.

So let’s start with what I think will take a long time for any computer algorithm to be able to do. There’s nothing in current technology – in use or in the lab – that would lead to an expectation that an algorithm could find the story in 40 hours of source and make an emotionally compelling, or even vaguely interesting, 45-minute program. Almost certainly not going to happen in my lifetime. There’s a higher chance of an interactive storytelling environment à la Star Trek’s Holodeck (sans solid projection). Conceptually that type of environment is probably less than 30 years away, but that’s another story.

If a computer algorithm can’t find the story or make an emotionally compelling program, what can it do? Well, as we discovered earlier, not all editing is the same. There is a lot of fairly repetitive and rather assembly line work labeled as editing: news, corporate video, event videography are all quite routine and could conceivably be automated, if not completely at least in part. Then there is the possibility of new forms of media consumption that could be edited by software based on metadata.

In fact, all use of computer algorithms to edit rely on metadata – descriptions of the content that the software can understand. This is analogous to human logging and log notes in traditional editing. The more metadata software has about media the more able it is to create some sort of edit. Mostly now that metadata will come from the logging process. (The editor may be out of a job, but the assistant remains employed!) That is the current situation but there’s reason to believe it could change in the future – more on that later in the piece.

If we really think about what we do as editors on these more routine jobs, we realize that there is a series of thought processes we go through, and underlying “algorithms” that determine why one shot goes into this context rather than another shot.

To put it at the most basic level, an example might be editing content from an interview. Two shots of the same person have audio content we want in sequence, but the effect is a jump cut. [If two shots in sequence feature same person, same shot…] At this point we choose between putting another shot in there – say from another interview – or laying in b-roll to cover the jump cut. […then swap with alternate shot with same topic. If no shot with same topic available, then choose b-roll.]
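The bracketed rules above can be sketched as a simple decision procedure. Everything here – the clip fields, the matching criteria – is invented for illustration; a real system would need far richer metadata:

```python
# Hypothetical sketch of the jump-cut rule: if two adjacent shots feature
# the same person in the same framing, swap in an alternate shot on the
# same topic, or failing that, cover the cut with b-roll.

from dataclasses import dataclass

@dataclass
class Clip:
    person: str
    framing: str          # e.g. "close-up", "medium", "wide"
    topic: str
    is_broll: bool = False

def resolve_jump_cut(a, b, library):
    """Return (second_clip, broll_cover) for the edit a -> b."""
    if a.person == b.person and a.framing == b.framing:   # jump cut detected
        # Prefer an alternate interview shot on the same topic.
        for alt in library:
            if alt.topic == b.topic and not alt.is_broll and alt.person != b.person:
                return alt, None
        # No alternate available: keep the shot, cover the cut with b-roll.
        for broll in library:
            if broll.is_broll and broll.topic == b.topic:
                return b, broll
    return b, None        # no jump cut, nothing to fix
```

The value judgment the next paragraph describes – does the alternate convey the emotion as well? – is exactly what this rule cannot capture.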

That’s a rudimentary example and doesn’t take into account the value judgment the human editor brings as to whether another interview conveys the story or emotion as well. Most editors are unfamiliar with their underlying thought processes and not analytical about why any given edit “works” – they just know it does, but ultimately that judgment is based on something. Some learned skill, some thought process, something. With enough effort that process can be analyzed and, in some far distant time and place, reproduced in software. Or it could be, except for that tricky emotional element – the thing that makes our storytelling interesting and worth watching.

The more emotion is involved in your storytelling output, the safer your job – or the longer it might be before it can be replaced. 🙂

Right now, the examples of computerized editing available – Magic iMovie and Muvee Auto Producer – use relatively unsophisticated techniques to build “edited” movies. Magic iMovie essentially adds transitions to avoid jump-cut problems and builds to a template; Muvee Auto Producer requires you to vet shots (thumbs up or down), then uses a style template and cues derived from audio to “edit” the program. This is not a threat to any professional or semi-professional editor with even the smallest amount of skill.

However, it is only a matter of time before some editing functions are automated. Event videography and corporate presentations are very adaptable to a slightly more sophisticated version of these baby step products. OK, a seriously more sophisticated version of these baby-step products, but the difference between slightly and seriously is about 3 years of development!

In the meantime, there are other uses for “automated” editing. For example, I developed a “proof of concept” piece for QuickTime Live! in February 2002 that used automated editing as a means of exploring the bulk of material shot for a documentary but not included in the edited piece. Not intended as direct competition for the editor (particularly as that was me), it was intended as a means of creating short edited videos, customized in answer to a plain language query of a database. The database contained metadata about the clips – extended logging information really. In addition to who, where and when, there are fields for keywords, a numeric value for the relative usefulness of the clip, and a field of keywords to search for b-roll. [If b-roll matches this, search for more than one clip in the search result, edit them together and lay b-roll over all the clips that use this b-roll.]
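A sketch of that metadata-driven edit might look like the following. The field names and selection logic are invented for illustration – this is the shape of the idea, not the original QuickTime Live! code:

```python
# Illustrative sketch: answer a keyword query from a clip database, pick
# the most useful clips to length, and lay matching b-roll over the result.

from dataclasses import dataclass, field

@dataclass
class LoggedClip:
    who: str
    keywords: set                         # topics this clip covers
    usefulness: int                       # relative quality, higher is better
    duration: float                       # seconds
    broll_keywords: set = field(default_factory=set)  # b-roll to search for

def auto_edit(query_terms, clips, broll, target_length):
    """Return (sequence, broll_overlays) for a plain keyword query."""
    matches = [c for c in clips if query_terms & c.keywords]
    matches.sort(key=lambda c: c.usefulness, reverse=True)
    sequence, total = [], 0.0
    for clip in matches:                  # cut to length, best clips first
        if total + clip.duration > target_length:
            break
        sequence.append(clip)
        total += clip.duration
    overlays = [b for b in broll
                if any(b.keywords & c.broll_keywords for c in sequence)]
    return sequence, overlays
```

The human logging effort is doing all the heavy lifting here – the “editing” is just selection and ordering over that metadata, which is the point of the paragraph above.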

So, right now, computer editing can be taught rudimentary skills. This particular piece of software knows how to avoid jump cuts and cut to length based on the quality criteria. It is, in fact, a better editor than many who don’t know the basic grammar of video editing. Teaching the basic grammar is relatively easy. Teaching software to take some basic clips and cut into a news item or even basic template-based corporate video is only a matter of putting in some energy and effort.

But making something that is emotionally compelling – not any time soon.

Here’s how I see it panning out over the next couple of years. Basic editing skills from human-entered metadata – easy. Generating that metadata by having the computer recognize the images – possible now but extremely expensive. Having a computer edit an emotionally compelling piece – priceless.

It’s not unrealistic to expect, probably before the end of the decade, that a field tape could be fed into some future software system that recognizes shots as wide, medium, close-up, etc.; identifies shots in specific locations and with specific people (based on having been shown examples of each); and transcribes the voice content and the text in signs and other places in the image. Software will recognize poor exposure, loss of contrast and loss of focus, eliminating shots that do not stand up technically. Nothing here is that difficult – it’s already being done to some degree in high-end systems that cost more than $300,000 right now. From there it’s only a matter of time before the price comes down and the quality goes up.
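The technical QC pass is the easiest part to imagine in code. Here’s a hypothetical sketch – the metric fields and thresholds are invented, and a real system would compute them from the frames themselves:

```python
# Hypothetical sketch of the QC pass described above: score each shot for
# exposure, contrast and focus, and drop shots that fail technically.

from dataclasses import dataclass

@dataclass
class Shot:
    name: str
    exposure: float    # 0 = black, 1 = blown out; ~0.5 is well exposed
    contrast: float    # 0..1, higher is better
    sharpness: float   # 0..1, higher is better (a proxy for focus)

def technically_usable(shot):
    """Reject shots with poor exposure, low contrast or soft focus."""
    well_exposed = 0.2 <= shot.exposure <= 0.8
    return well_exposed and shot.contrast >= 0.3 and shot.sharpness >= 0.5

def qc_pass(shots):
    """Keep only the shots that stand up technically."""
    return [s for s in shots if technically_usable(s)]
```

Deriving those three numbers from real footage is where the current $300,000-plus systems earn their price; the filtering itself is trivial.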

Tie that together with a template base for common editing formats and variations and an editing algorithm that’s not that much further on than where we are now and it’s reasonable to expect to be able to input one or more source tapes into the system in the afternoon, and next morning come back to review several edited variations. A couple of mouse-clicks to choose the best of each variation and the project’s done, output to a DVD (or next generation optical disc), to a template-based website, or uploaded to the play-out server.

Nothing’s far-fetched. Developing the basic algorithm was way too easy, and it works for its design goals. Going another step is only a matter of time and investment. Such is the case with anything that is repetitive in nature: ultimately it can be reproduced in a “good enough” manner. It’s part of a trend I call the “templatorization” of the industry. But that’s another blog discussion. For now, editors who do truly creative, original work need not be worried, but if you’re hacking together video in an assembly-line fashion, start thinking about that fall-back career.

Categories
Business & Marketing Video Technology

Avid buys Pinnacle – the fallout

The acquisition of Pinnacle will greatly strengthen Avid’s Broadcast video offerings, the area of their business that has been strongest in recent years but will create challenges in integrating product lines and cultures. It is a move that brings further consolidation to the post production business.

Pinnacle has been in acquisition mode for most of the last five years acquiring, among others, Miro, Targa, Dazzle, Fast and Steinberg (sold on to Yamaha recently). It has a diverse line of products in major product lines:

  • Broadcast Tools – Deko On Air graphics products (Character Generators) and MediaStream playout servers;
  • Consumer editing software and hardware – with 10 million customers;
  • Professional Editing – The Liquid product line acquired from Fast; and
  • Editing Hardware – Cinewave and T3000 based on the Targa acquisition.

Pinnacle has achieved nine Emmy Awards for its Broadcast product lines.

There will be conflicts and opportunities for Avid. It presents Avid with a new opportunity to create a consumer brand and Avid CEO David Krall has announced that a new consumer division will be formed analogous to the M-Audio consumer audio division acquired last year. M-Audio is the consumer parallel to Avid’s Digidesign professional division. The acquisition also consolidates Avid’s position supplying the Broadcast markets, making the company more of a "one stop shop" for a broadcast facility. There is definitely engineering work to be done on integrating the two technology lines, but there are no particular challenges there, and savings are to be made in streamlining sales and marketing. In broadcast there are only pluses for Avid.

Consumer

Bringing the Avid brand into the consumer market carries a slight risk of diluting the Avid editing brand – if consumers edit on "Avid", what’s special about professional editors? However, by carefully managing product brands over the company brand, as has been done with M-Audio, there should be an opportunity to bring some of those retail customers up to Xpress or Adrenaline products as their needs grow, similar to the way Apple have a path for their iMovie customers to move up to Final Cut Express or Final Cut Pro.

Hardware

Avid and Pinnacle have had a long relationship on the hardware side – Targa supplied the first boards Avid used for video acquisition, and the Meridien hardware was designed to Avid’s specifications but manufactured by Pinnacle as an OEM. Whether Avid has any use for the aging T3000 hardware product line (which, like Cinewave, is based on the Hub3 programmable architecture that was the primary driver of the Targa purchase) is debatable: Avid have embraced the CPU/GPU future for their products and are unlikely to change course again.

Cinewave

The acquisition almost certainly spells the end of Pinnacle’s only Mac product – Cinewave. Rumors were spreading, independently of the Avid purchase, that Cinewave was at the end of its product life, possibly spurred by changes coming in a future version of Final Cut Pro that would no longer support direct hardware effects. Regardless of whether there was any foundation to that rumor, Cinewave is an isolated product in that product group and based on relatively old technology. It is a tribute to the design flexibility and the engineering team that essentially the same hardware is still in active production four years after release. Whether the product dies because it’s reached the end of its natural life, or because Avid could not be seen to be supporting the competing Final Cut Pro, it’s definitely at an end.

Liquid

There is, however, one part of the integration that simply does not fit: Pinnacle’s Liquid NLE software. Avid are acquiring an excellent engineering team – the former FAST team out of Germany – but the two NLEs have no commonality. Integrating features from one NLE into another is not trivial, as the two code bases are unlikely to be compatible, and attempting to move Avid’s customer base toward any Liquid editor is unlikely to succeed at all.

Avid could simply let the product line die. The Liquid range has not exactly sold like hotcakes. This scenario would bring the best of the features and engineers into the Avid family and we’d see the results in 2-3 years as engineering teams merged.

They could, of course, leave Liquid alone – set it up as a division within the company and leave it be. Avid have done that with Digidesign, Softimage and M-Audio: no radical changes, and slow integration of technologies where it makes sense. Liquid has probably taken few customers from Avid to date – few Composer customers have made the move. Instead, Liquid has acquired new NLE customers or people moving "up" from other NLEs. Liquid’s strongest customer bases are in small studios and in broadcast markets.

Even though Avid have let Digidesign and M-Audio compete where there is some overlap, it’s hard to imagine keeping a full product line that directly competes with the flagship products – on cheaper hardware at lower cost. Hard to imagine, but not impossible. It would be the most consistent behavior based on past acquisitions, but one that would require a delicate balancing act: retaining the new customers Pinnacle are bringing to the fold without cutting into the more profitable Xpress, Media Composer and DS products.

Transaction

The transaction values Pinnacle at $462 million based on Avid’s closing price yesterday and will be paid with a combination of cash and shares. Avid will pay about $71 million in cash and issue 6.2 million new shares to the holders of Pinnacle stock, who will then own about 15% of Avid’s shares. The transaction has been approved by the Boards of both companies but must still be approved by regulators and shareholders, and is not expected to close until the 2nd or 3rd quarter of 2005.

The companies expect savings in regulatory costs, marketing and sales. We can expect little to change in the short term except, probably, some volatility in Avid’s stock price as people try to work out what it all means.

NAB is going to be interesting this year.

Categories
Business & Marketing General

What are good visuals?

Perhaps it’s my background in video production and my strong desire to match media and message, but I’ve been seeing some incredibly inappropriate ways of delivering a message “visually”. The specific example that prompted me to write is this one. The piece is actually a very interesting pseudo-documentary looking back at how media changes – perhaps its content is blog-worthy some other time. What annoyed me about it was that it was being held up as an example of a “good use of Flash” when in fact I thought the visuals were so poor that, in all probability, the choice of a visual medium was a mistake: if you don’t have visual content, don’t do visuals is a good rule of thumb, I think.

Another example of, imho, really lame visuals used to waste time and attempt to make a silk purse out of a sow’s ear is this marketing hype. Again Flash is used, but with poor-quality visuals (blown up way too big), super-slow pacing and a message that, to me, is cloyingly saccharine. (On the last point I am probably alone – it’s been very successful as a viral marketing piece, so it must appeal to a lot of people.)

What bothers me about these pieces, and about a lot of podcasts, is that they are incredibly inefficient. One regular podcast I once listened to (on the topic of Final Cut Pro et al) takes about 20 minutes a week to listen to, for what would be a 3-minute read on a web page, because the podcaster simply reads a script (or seems to be reading a script). OK, it could be listened to during a commute or at the gym, where the 20 minutes wouldn’t be an imposition, but surely, if you’re going to use an audio medium, it should be produced as an audio medium?

Ditto the visual medium – I have always hated making a “video” for a client that was essentially an audio program that had to have visuals forced onto it. (Like the piece at the head of this article.) Have we forgotten the imaginative power of radio? I’ll bet the movie version of War of the Worlds due out soon has none of the impact of the original 1938 broadcast. There are great radio documentaries produced that would make awesome podcasts; instead we get lame “read my script” or “come into my office and chat” podcasts that have zero production value. The Media 2014 example has great writing and excellent audio production, but the visuals (which probably took the most time) add very little, imho.

This is what worries me about vlogcasting – even basic video production requires some time, more time than most people want to put into a blog or podcast, so what’s going to happen? Gigabytes of bandwidth occupied by badly lit, poorly edited shaky-cam that is virtually unwatchable? It’s already happening: download the Ant vlogcasting client and try to find something worth your time watching. There’s little evidence of strong writing or great production there – at least in what I’ve found (and if you find something great, ping me on it so I can share the excitement).

Where’s this going? Well, there’s still going to be a role for production skills in some vlogcasting, particularly if we adopt subscription-based channels-of-information models. (The “RSS, Vlogcasting and Distribution Opportunities” blog entry is back, after editing.) It’s another example of how production specialists will need to adapt and advise clients on the most appropriate distribution methodology. Just having basic production skills won’t be enough, but they will be a marketable commodity and profitable as part of the full service we offer customers on their communication needs. Also necessary will be the judgment and sense to tell customers when they don’t need “a video” – when a website or brochure will work better for them. Savvy people will have those skills as well – if not personally, then within their network.

Categories
Business & Marketing Interesting Technology

RSS, Vlogcasting and Distribution Opportunities [Edited]

Earlier I wrote about podcasting and its rapid uptake. Well, there’s every indication that video podcasting, of some sort, will follow. I think this is a tremendous opportunity for content creators because podcasting isn’t about broadcast but is in fact an opt-in subscription service. In any discussion of these subjects it keeps echoing in my mind that RSS is really simple subscription management (and yes, conditional access is possible).

To draw some parallels with traditional media: blogs are the journalism and writing, and RSS is the publishing channel (the network). Blogs and podcasting are bypass technologies – they bypass traditional channels. If pressed for an explanation for the truly astounding growth of podcasting and, to a lesser degree, blogging, I would hypothesize that they are to some degree a reaction against the uniformity of voice of modern media, where one company owns a very large proportion of the radio stations in the US (and music promotion and billboards) and news media is limited to a half dozen large-company sources with little bite and no diversity.

The “blogsphere” (I hate that word but it’s in common use) broke the story of CBS’s faked service memos during the last Presidential election campaign, and even in the last few weeks has been instrumental in the firing of a high-level CNN executive and in revealing the “fake” White House journalist and his sordid past. Collectively, at least, this is real journalism – and more importantly, it’s investigative journalism of the sort that isn’t done by traditional news outlets.

Blogging is popular because it’s easy and inexpensive. Sign up for a free blog on a major service, or download open source blogging software like WordPress (which I use) to run on your own server. In a few minutes your voice is out there to be found. In my mind it harkens back to the days of Wild West newspapers, where someone would set up a printing press and suddenly be a newspaper publisher. But unlike a newspaper, blogs have an irregular publishing schedule. You can bookmark your favorite blogs and check them (when you remember), or you can be notified by your RSS aggregator application when there’s a new post (the URL for the RSS feed for this blog is in the bottom right of the page if you want to add it to your favorites).

Podcasting is easy and inexpensive – unless your podcast becomes popular, when the bandwidth expense becomes considerable. It is a superior replacement for webcasting or streaming that does not have to be in real time: produced, then automatically delivered for consumption when it suits the listener. Those are the key attributes that, in my opinion, contribute to its success. There’s no need for a listener to tune in at exactly the time it’s “broadcast” – listen or miss it – or even to remember to visit some archive and download. My own experience on DV Guys totally parallels this. DV Guys has a nearly five-year history as a live show (Thursdays 6-7 pm Pacific) but has always been more popular through its archives pages.

Shortly after the advent of podcasting we set up a podcast of the live show, available by the next day. Since then DV Guys has enjoyed more listeners than at any other time in its life. People tended to forget to visit the archives site weekly, or every second week. Even visiting every couple of weeks was too much of a commitment for a show that, while entertaining and informative, wasn’t at the top of everyone’s “must do” list. But with a podcast DV Guys is ready, available whenever a listener has a few moments – at the gym, during a commute, while waiting for a meeting or at an airport. DV Guys, like most radio, is consumable anytime. Importantly, it puts the listener, not the creator, in control of the listening experience.

Podcasting audio has another advantage – it’s easy to create: almost, but not quite, as easy as blogging. There are, consequently, proportionally fewer podcasts than there are blogs because of that higher entry requirement. Even then, most podcasts are simply guys (mostly guys) talking into a microphone from a prepared script, or a few people talking together around a microphone. More highly produced podcasts are rarer.
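Under the hood, that delivery mechanism is nothing exotic: a podcast is an ordinary RSS 2.0 feed whose items carry an audio enclosure, which an aggregator polls and downloads automatically. A minimal sketch of such a feed (the show title aside, all names and URLs here are invented placeholders):

```python
# Build a minimal podcast feed: RSS 2.0 with an <enclosure> per item.
# Aggregators poll the feed and fetch any enclosure they haven't seen.
import xml.etree.ElementTree as ET

def build_feed(title, link, episodes):
    """episodes: list of (episode_title, mp3_url, size_bytes) tuples."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for ep_title, url, size in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep_title
        # The enclosure is what makes this a podcast rather than a blog feed.
        ET.SubElement(item, "enclosure",
                      url=url, length=str(size), type="audio/mpeg")
    return ET.tostring(rss, encoding="unicode")

# Placeholder episode data for illustration only.
feed = build_feed("DV Guys", "http://example.com/",
                  [("This week's show", "http://example.com/show.mp3", 9500000)])
```

The subscription part is just the aggregator re-fetching this document on a schedule and downloading new enclosure URLs – which is why “really simple subscription management” is a fair description.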

The simplicity of publishing a blog means that it can be published for as few as half a dozen people – in fact there are people looking to use blogs and wikis as part of a project management tool. Podcasting can reach thousands but in broadcast terms that’s a tiny niche market.

Here’s a new truism – almost all markets are niche markets. What these new publishing models do is aggregate enough people in a niche to make it a market. There’s a lot of money to be made in niches. Particularly in the US, with its multiplicity of cable channels, small niches in the entertainment industry can be aggregated, with appropriate low-cost distribution channels, into profitable businesses. But there are a lot of niches too small to have their needs met by even a niche cable network, so cable channels get subdivided, or there’s no content for small niches at all.

RSS, low-cost production tools and P2P are your new distribution channel. This is the other side of the production revolution we’ve been experiencing over the last 10 years, in which the cost of “broadcast quality” content has dropped from an equipment budget of $200,000 upward (for camera, recorder and edit suite with titling and DVE) to similar quality at well under $20,000 (and many people doing it for under $10,000). In the end, a computer (or computer-like) device will be one of the inputs to that big-screen TV you covet, and you’ll watch content fed via subscription when it suits. If it’s not news, it can come whenever, ready to be watched whenever.

The relative difficulty of producing watchable video content will further limit the numbers (as happened from blogging to podcasting), and the current state of video blogs will make experienced professionals cry. That should not stop you planning for your own network. Instead of “The Comedy Network”, the “Sci-Fi Network” etc., prepare for a world of the “fingernail beauty network”, the “Fiat maintenance network” or the “guys who like to turn wood on a lathe network”. Content could be targeted geographically or demographically. There are very profitable niches available. Two that I’ve been involved with at the video production level were for people who like to paint china plates (challenging to light) and basic metalwork skill training. There’s no need to fill 24 hours a day, 7 days a week with this network model. When new content is produced, it’s ready for consumption when viewers want it. We do something similar with our Pro Apps Hub, where we publish content from our training programs piecemeal, as it’s produced, before we aggregate the disc version.

Note that I am not, fundamentally, talking about computer-based viewing. My expectation is that software and hardware solutions will evolve into something usable as a home entertainment device. “TV” is a kick-back, put-your-feet-up experience; video on the computer is a lean-forward, pay-attention experience. While both could be end targets of this publication model, what I’m really talking about is content for that lean-back experience.

Now, I don’t expect “Hollywood” (as a collective noun for big media) to embrace this model early, or even ever, but that doesn’t mean it’s not going to become viable. The most popular content will probably still go through the current distribution channels, however they evolve. It also doesn’t mean we’ll be restricted to small-budget production. It could (and should) evolve into models where viewers are in much closer touch with producers, without the gatekeeper model.

For example, the basic skill training video series I produced back in Australia was niche programming. There were, effectively, 75 direct customers in the small Australian market (smaller, I should point out, than California alone). No customer or central group had money for production, but each one had $150–$300 to buy a copy of the product. Since these were very simple productions, requiring a small crew and produced in a regional city, each project had about a 30% profit margin. If the same proportions applied to the US market, the budget would have doubled but the profit would have quadrupled or more.

Take another, more current example. Star Trek: Enterprise has been canned, but the last season had 2.5 million viewers an episode with a budget of $1.6 million an episode. If each viewer paid 75c for the episode, delivered directly to their “TV storage device” (somewhere between a Media PC, Mac Mini or TiVo), then the producers would turn a profit of around $200,000 an episode – a 12.5% margin on what they were getting from the network. At 99c a show, that’s nearly 50% more revenue than was coming from the network. And the audience isn’t limited to just the US market – that same content can be delivered to Enterprise fans anywhere in the world. As the producer, I could live on 13 episodes at $200,000 profit above and beyond previous costs (which presumably included some salary and profit). Moreover, producers wouldn’t be locked into ratings cycles and the matching boom/bust production cycle.
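That back-of-envelope is easy to sketch. The viewer count, prices and budget are the figures above; note that 2.5 million viewers at 75c grosses $275,000 over the $1.6M budget, so the $200,000 figure presumably nets out some per-viewer delivery overhead – the `delivery_cost_per_viewer` parameter below is my own placeholder for that assumption, not a sourced number:

```python
# Direct-to-viewer episode economics, using the figures quoted above:
# 2.5M viewers, $1.6M production budget per episode.
# delivery_cost_per_viewer is a placeholder assumption, not a sourced figure.
def episode_economics(viewers, price, budget, delivery_cost_per_viewer=0.0):
    revenue = viewers * price
    profit = revenue - budget - viewers * delivery_cost_per_viewer
    return revenue, profit

rev75, gross75 = episode_economics(2_500_000, 0.75, 1_600_000)
# 75c a viewer: $1,875,000 revenue, $275,000 gross over the network budget
rev99, _ = episode_economics(2_500_000, 0.99, 1_600_000)
# 99c a viewer: about $2,475,000 – roughly 55% above the $1.6M network figure
```

Plugging a few cents of delivery cost per viewer into the last parameter brings the 75c case down toward the $200,000 quoted.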

It doesn’t matter if each high-quality (HD if you want) episode takes 20 hours of download “in the background”. When it’s complete and ready to watch, it appears in the playlist as available.

Bandwidth would be a killer in that scenario – even with efficient MPEG-4 H.264 encoding, a decent SD image is going to require 1–1.5 Mbit/sec and HD is going to want 7 or 8 Mbit/sec. Assuming 45-minute episodes (sans commercials), that’s around 500 MB an episode in SD, or around 2.7 GB in HD, per subscriber. Across 2+ million subscribers that’s going to eat my profit margin rather badly without another solution. There are technologies in place that could be adapted.
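The file-size arithmetic is just bitrate times duration, ignoring container overhead; a quick helper:

```python
# Episode size from stream bitrate and running time.
# Decimal megabytes; ignores container/protocol overhead.
def episode_size_mb(bitrate_mbps, minutes):
    bits = bitrate_mbps * 1_000_000 * minutes * 60
    return bits / 8 / 1_000_000

sd = episode_size_mb(1.5, 45)  # 506.25 MB at the top of the SD range
hd = episode_size_mb(8, 45)    # 2700 MB, i.e. 2.7 GB
```

Multiply even the SD figure by the 2.5 million subscribers above and you are moving over a petabyte of data per episode – which is why the bandwidth bill, not production, is the scary line item.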

Assuming the bandwidth challenge is resolved, what’s left?

Two things, mostly: the device that stores the bits ready for display on the TV, and software to manage the conditional access (you only get the bits you bought) and playlists – something like a video/movie version of the iTunes Music Store. We’ll need to wait for a big player like Apple or Sony to wield enough muscle for that; in the meantime we see the beginnings with Ant, though as a computer interface and without the simplicity and elegance of a Dish/DirecTV/TiVo user interface. But it will come.

Will you have your network business plan ready? I’m working on mine already.

Wired has another take. Videoblogging already has a home page, and for a bit of thinking on the flip side – how this might all work for the individual wanting to aggregate a personal channel – Robin Good has a blog article on Personal Media Aggregators in one of my favorite (i.e. challenging) blogs.

Categories
Business & Marketing

Companies come, companies go

The last Friday of the month I usually head over to Burbank to Alpha Dogs’ Editors Lounge – a great opportunity for editors in the LA area to hang out together and, usually, to geek out on some new technology or technique in the presentation. I love heading over there for the social time and the presentation.

Why not go this month? Well, the Media 100 folk were presenting Media 100 HD and I hate attending a wake before the death (even if the corpse smells rather bad). What’s happening with Media 100 is very sad for me. My first purchase of a Media 100 back in 1994 was the single best business decision I ever made. Media 100 made the first NLE that was truly finished quality on the desktop, with a relatively simple editing interface that, for me, brought high quality video into the computer as pixels for further manipulation. (My second purchase was CoSA After Effects to manipulate those pixels!)

But Media 100 was often the underdog: the editing software’s strength was its simplicity – after all, John Molinari’s intention was to democratize video editing – in an industry where “more features” was always the mantra. The hardware had the highest quality video codec in those early days and well into the transition to PCI cards. But they failed to adapt to a changing market because, at heart, Media 100 was a hardware company and the world moved to software – or, more accurately, is somewhere along the journey to software. The “democratization” that CEO John Molinari had spoken of in his Blood Secrets article, originally published in Videography in June 1994, was much better fulfilled by Apple Computer, starting with the release of Final Cut Pro and iMovie. There’s more on Media 100’s vision in my own 2001 article.

For a democratized industry, software can fulfill the role much more affordably than hardware, and Media 100 were blind-sided by the rapid take-up of DV right into their target market. Media 100 could not react quickly enough and, despite its 4:2:2 codec and higher image quality for the discerning eye, DV was good enough for most people. Media 100 had started work on new hardware, but politics meant that the work done on Macintosh was abandoned and started over on Windows. Such an ambitious project as 844/X took longer than anticipated to come to market, was on the “wrong” platform for the customer base, and its starting price, while very competitive, was still too high for a market that had collapsed in the post-dot-com era.

Media 100’s assets were purchased by Optibase, who seemed to have the right idea until these last couple of months, when they went back on announcements made just last year, dropped the wrong product line (844/X) and instead pushed ahead into the only market where they have no opportunity to survive: Media 100 HD. The market for the few unique features Media 100 HD has is very small – too small for the division to survive. 844/X is now, for all intents and purposes, dead unless some other company moves it forward into HD. As an HD system there would be a couple of years where hardware has a unique advantage – but not under these owners, who have squandered the goodwill they had garnered.

Media 100 is not the only company to have come and gone in the years I’ve been in the industry – I remember Puffin Designs’ fun NAB costumes and soft toys; ICE’s white igloo stood out in a sea of similar-looking booths; and while Terran’s product still lives on, staggering near death at the hands of Discreet, the company is long gone.

The good thing is that, while companies come and go, the engineering expertise mostly stays in the industry. New managements give engineers a chance to innovate again in a clean environment. Discreet let the original Paint/Effect -> combustion development team go, and Apple snapped up the whole team within a week, with the result being Motion down the track. Media 100’s engineers are mostly still working in the industry and still contributing.

So, I’ll raise a glass this year at NAB as an official farewell. I will always appreciate the opportunity it provided to me and my business, with a side thanks to Mark Richards at Animotion/Adimex in Sydney who up-sold me to Media 100. Despite officially still “being there” I believe the Media 100 brand will be gone shortly. Such is the life cycle of innovative companies that don’t continue to innovate and take account of market shifts. It’s a salutary lesson to all of us who are in business – if we do not continue innovating and moving with our customers we risk joining the ranks of the “once were great” of history.