Broadcast Flag (again) and WIPO

Despite being thoroughly defeated in May this year, the media oligopoly (aka the RIAA and MPAA) is once again trying to reinstate the Broadcast Flag, which would take away existing media usage rights and attempt to control every consumer electronics device built in the future. In their world, if we’re shooting video in the street and a car drives past playing copyrighted music, the video camera shuts down. Existing Fair Use rights go out the window.

But, as the Electronic Frontier Foundation points out, the Broadcast Flag was soundly defeated and Congress is reluctant to meddle between the American consumer and their TV, so the industry has encouraged 20 suicidal Representatives to attempt to slip the revised legislation through the “back door” – on the back of a budget bill.

Here’s what they’re attempting to slip through unnoticed:

The Federal Communications Commission —
(a) has authority to adopt such regulations governing digital audio broadcast transmissions and digital audio receiving devices that are appropriate to control the unauthorized copying and redistribution of digital audio content by or over digital reception devices, related equipment, and digital networks, including regulations governing permissible copying and redistribution of such audio content….

Courtesy of Boing Boing

I don’t know why the entrenched media companies think that locking down media this way is in their best interests. At best it shows a serious lack of vision, or of understanding that everything changes and that the only future they have lies in adapting. These are the same industries that fought tooth and nail against “Betamax”. Jack Valenti testified to Congress that:

“…‘the growing and dangerous intrusion of this new technology’ threatened his entire industry’s ‘economic vitality and future security.’”

And yet now VHS and DVD sales – the successors to Betamax – are worth more to the movie industry than theatrical distribution, and have expanded the industry dramatically with direct-to-DVD movies.

Why would anyone think they’re any more right this time? Why would they have any credibility? They don’t. This is a case of dinosaurs attempting to prevent an ice age. Attempts at the Broadcast Flag and other DRM will fail, whether they’re implemented or not. DRM will probably kill Blu-ray and HD DVD before they’re even out the door (another blog article there!).

There’s another good article on the Broadcast Flag in Susan Crawford’s Blog.

Even if your Representative is not one of the 20 suicidal ones, contact them now and tell them why supporting the Broadcast Flag (and all DRM) is a bad idea. It’s already been knocked out once, it shouldn’t come back, and it shouldn’t come back hidden in a budget bill. If this is to come to pass, let’s have it out in the open, debated in public, and a reasonable decision made.

And if the Broadcast Flag is not bad enough

If you really want to get a feel for the nature of the established media people, consider what the official United States delegation to the World Intellectual Property Organization proposed. Reporting on an article in the Financial Times, BoingBoing.net summarizes it this way:

James Boyle’s latest Financial Times column covers the Webcasting provisions of the new Broadcast Treaty at the World Intellectual Property Organization. Under these provisions, the mere act of converting A/V content to packets would confer a 50-year monopoly over the underlying work to ISPs. That means that if you release a Creative-Commons-licensed Flash movie that encourages people to share it (say, because you get money every time someone sees the ads in it), the web-hosting companies that offer it to the world can trump your wishes, break your business and sue anyone who shares a copy they get from them. This is a way of taking away creator’s rights and giving them to companies like Microsoft and Yahoo, whose representative at WIPO has aggressively pushed to have this included in the treaty.

Even if we publish under a Creative Commons license, or even just publish our own content through an ISP, the ISP would own a copyright in our work for 50 years. Fortunately this didn’t get saluted this time, but that’s what’s being pushed. Is this what you want? It’s certainly not what I want. Does it protect the rights of the artist, as copyright is intended to in the US Constitution? I think a resounding “no way” is the only possible answer. Well, does it serve the public good – the other provision in the Constitution? I sure can’t see how.

Follow-up note added October 13: The delegation to the WIPO conference, a.k.a. “the forces of radical protectionism”, was seeking a “diplomatic conference” in Q1 2006, the last step before a treaty is ready for signatures. Instead it was denied, and the proposal will be dissected in at least two more WIPO meetings before a diplomatic conference gets discussed again, allowing time to make sure this level of protection for carriers is never enacted.

Make your opinion known to your Congressional Representatives now. Otherwise we’ll end up with these things in law, just like the reviled Digital Millennium Copyright Act. Most probably unconstitutional, but who’s going to “stand up for pirating” and oppose established legislation?

Soon I’ll write on why Digital Rights Management, as it’s planned for Blu-ray, HD DVD and the “Trusted Computing” initiative, will stifle creative endeavors and end up killing promising technologies – and why it’s bad for the MPAA and RIAA, if only they had a little vision.

The power of disruptive technologies

A disruptive technology is one that most people do not see coming and yet, within a very short period it changes everything. A disruptive technology will become the dominant technology. Rarely are they accurately predicted because predictions are generally extrapolated from the existing understanding. For example, there’s no doubt that the invention of the motor car was a disruptive technology, but Henry Ford is often quoted as saying “If we had asked the public what they wanted, they would have said faster horses.”

It’s almost impossible to predict what will become a disruptive technology (although the likelihood of being wrong isn’t going to stop me), but they are very easily recognized in hindsight. Living in Los Angeles, it’s obvious what effect Mr Ford’s invention has had on this society – although some would argue that it wasn’t so much the invention of the motor car that made the difference as the assembly-line technique that made the motor vehicle (relatively) affordable.

In fact I think it’s reasonable to believe that a disruptive technology will have a democratizing component to it, or a lowering (or removal) of price barriers.

Non-linear editing on the computer – Avid’s innovation – was a disruptive technology, but initially only within a relatively small community of high-end film and episodic television editors. The truly disruptive technology was DV. DV over FireWire – starting with Sony’s VX-1000 and Charles McConathy/ProMax’s efforts to make it work with the Adobe Premiere of the day – paved the way for what we now call “The DV Revolution”.

Apple were able to capitalize on their serendipitous purchase of Final Cut from Macromedia, dropping the work that had been done to make Final Cut work with Targa real-time cards and concentrating on FireWire/DV support. (It was two further releases before we saw the same level of real-time effects support as was present in the Macromedia NAB 98 alpha preview.) I think, at the time, Apple saw Final Cut Pro as another way of selling new G3s with built-in FireWire. The grand plan of Pro Apps came about when the initial success of Final Cut Pro showed the potential. But that’s another blog post.

DV/FireWire was good enough at a much lower price, with all the convenience of a single-wire connection. We’ve grown from an industry of under 100,000 people involved in professional production and post-production to one almost certainly over 1 million worldwide.

Disruptive technologies usually involve a confluence of technologies at the right time. Lower cost editing software wouldn’t have been that disruptive without lower cost acquisition to feed to it. Both would have been pointless without sufficient computer power to run the software adequately. Final Cut Pro on an Apple IIe wouldn’t have been that productive!

In a larger sense DV/FireWire was part of a larger disruption affecting the computer industry – the transition from hardware-based to software-based. We are, in fact, already through this transition with digital video, although the success of AJA and Blackmagic Design might suggest otherwise. The big difference now is that the software is designed to do its job with hardware as the accessory. Back in the days of Media 100’s success, Media 100 would not run without the hardware installed; without the hardware it was pretty useless, as everything went through the card. Then, when they rebuilt the application for OS X, they developed it as (essentially) software-only. This paved the way to the current HD version (based on a completely different card) and a software-only version.

Ultimately, all tasks will be done in software, apart from the hardware needed to convert from format to format. In fact much of the role of today’s hardware is that of format interface rather than the basis of the NLE, as it was in the day of Media 100, Avid ABVB/Meridien and even Cinéwave. Today’s hardware takes some load off the CPU, but as an adjunct to the software, not because the task couldn’t be done without the hardware. This has contributed to the “DV Revolution” by dramatically dropping prices on hardware.

Disruptive technologies are hard to predict because they are disruptive. Any attempt to predict disruptive technologies is almost certainly doomed to failure, but like I said, that’s not going to stop me now!

We are headed for a disruptive change in the distribution of media, both audio and video content. I wish I could see clearly how this is going to shake out, so I could invest “wisely”, but it’s still too early to predict exactly what the outcome will be 5-7 years down the track. I feel strongly that it will include RSS with enclosures, in some form. It will have aspects of TiVo/DVR/PVR, where the content’s delivery and consumption will be asynchronous. Apart from news and weather, is there any need for real-time delivery as long as the content is available when it’s ready to be consumed? Delivery will, for the most part, be via broadband connections using the Internet Protocol.

There is a growing trend toward merging the “computer” and the “TV”, whether by creating media center computers, by adding Internet-connected set-top boxes (like cable boxes) or by delivering video content direct to regular computers. Microsoft’s Media Center PCs haven’t exactly set the world on fire outside the college dorm, where they fit a real niche; Apple are clearly moving slowly toward some media-centric functions in the iLife suite, where it will be interesting to see what’s announced at Macworld San Francisco in January; and there are developments like Participatory Culture’s DTV and Ant is not TV’s FireANT for delivering channels of content directly to the computer screen. Both DTV and FireANT base their channels on RSS with enclosures, just as audio podcasting does.

On the hardware box front, companies like Akimbo, Brightcove and DaveTV are putting Internet-connected boxes under the TV, although DaveTV is having a bet both ways with computer software or set-top box.

Whether or not any of these nascent technologies is “the future of media” its developers tout, the way this shakes out has important implications for our industry. No-one foresaw that the Master Antenna and Community Antenna systems of the 1950s would evolve into today’s dominant distribution networks – the cable channels, which have now (collectively) moved ahead of the four major networks in total viewership. The advent of cable distribution opened up hundreds, or thousands, of new production opportunities for content creators. This time many people foresee (or hope) that using the Internet for program distribution will take down the last wall separating content creators from their audience.

In the days of four networks, any program idea had better aim to appeal to 20-30% of the available audience – young, middle-aged or old – to have any chance of success. In an age where the “family” sat down to watch TV together (and even ate meals together) that was a reasonable thing to attempt. As society fragmented we discovered that there were viable niches in the expanded cable networks. Programs have been artistically and/or financially successful that would never have been made for network TV because of the (relatively) small audiences or because the content was not acceptable under the regulations governing the networks. The development of niche channels for niche markets parallels the fragmentation of society as a whole into smaller demographic units.

Will we see, or do we need, more channels? Is 500 channels, and nothing on, going to be better when it’s 5,000 channels? Probably, because in among the 5,000 (or 50,000) channels will be content that I care enough about to watch. It won’t be current news, that’s still best done with real-time broadcasting, but for other content, why not have it delivered to my “box” (whatever takes this role) ready to be watched (on whatever device I choose to watch it)? (Some devices will be better suited to certain types of content: a “video iPod” device would be better suited to short video pieces than multi-hour movies, for example.)

If the example of audio podcasting is anything to go by, and with just one year of history to date it’s probably a little hard to be definitive, then yes, subscriber-chosen content delivered “whenever” for consumption on my own schedule is compelling. I’ve replaced the car radio newstalk station with podcasts from my iPod mini. Content I want to listen to, available when I’m ready to listen. Podcasts have replaced my consumption of radio almost completely.

Ultimately it will come down to content. Will the 5,000 or 50,000 channels be filled with something I want to watch? Sure, subscribing to the “Final Cut Pro User Group” channel is probably more appealing than (for me) many of the channels available on my satellite system. Right now, video podcasts tend to be of the “don’t criticize what the dog has to say, marvel that the dog can talk” variety. Like a lot of podcasts. Not every one of the more-than-10,000 podcasts now listed in the iTunes directory is compelling content or competently produced.

But before we can start taking advantage of new distribution channels for more than niche applications, we need to see some common standards come to the various platforms, so that channels can be discovered on Akimbo, DaveTV, DTV and on a Media Center PC. About the only part of this prediction I feel relatively sure of is that it will involve RSS with audio and video enclosures, or a technology derived from RSS, like Atom (although RSS 2 seems to have the edge right now).
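
For the curious, the mechanism is simple enough to sketch. Here’s a minimal illustration in Python – the feed URL is hypothetical, not any real channel – of the pattern every podcatcher-style client shares: the channel is an RSS 2.0 feed, each episode rides on an item as an enclosure, and the client polls the feed and queues anything it hasn’t already fetched.

```python
# A sketch of a podcatcher-style client, not any particular product.
# The feed URL is hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.com/channel.xml"

def new_enclosures(feed_url, already_seen):
    """Return (url, mime_type, length) for enclosures not yet fetched."""
    with urllib.request.urlopen(feed_url) as response:
        tree = ET.parse(response)
    found = []
    for item in tree.iter("item"):
        enclosure = item.find("enclosure")
        if enclosure is None:
            continue  # an ordinary post with no media attached
        url = enclosure.get("url")
        if url not in already_seen:
            found.append((url, enclosure.get("type"), enclosure.get("length")))
    return found

# Polling and downloading happen in the background, on the subscriber's
# schedule -- the asynchronous delivery argued for above.
for url, mime, length in new_enclosures(FEED_URL, already_seen=set()):
    print(f"queue {url} ({mime}, {length} bytes)")
```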

In a best-case scenario, we’ll have many more distribution channels, aggregating niche markets into a big-enough channel for profitable content (particularly with lower cost production tools now in place). Direct producer-customer connection, without the intermediation of network or cable channel aggregators improves profit potential on popular content and possibly moves content into different distribution paths. Worst case scenario is that nothing much changes and Akimbo, DaveTV, Brightcove, Apple or Microsoft’s Media-centric computers go the way of the Apple Lisa – paving the way for the real “next big thing”.

Don’t panic! Apple adopts Intel processors

The confusion and furor surrounding Apple CEO Steve Jobs’ announcement at the Worldwide Developers Conference – that future Macs, from June 2006, will use Intel processors inside – is totally unfounded. Nothing changes now, very little changes in the next year, and longer term the future for the Mac got a little brighter. Although the decision caught me by surprise, as I thought about it, and listened to what was said in the keynote, I could see why it made sense.

If we look short term, the decision makes little sense. Right now a G5 (PowerPC, aka PPC) PowerMac has very similar performance to the best workstations on the PC/Intel platform running Windows, and the G5 will cost less than a similarly performing PC workstation. At the low end the Mac mini is competitively priced against a cheap Dell or other name brand. (Macs are not price competitive with off-brand PCs, the so-called “white box”.) So why put the developer community, and developers within Apple, through the pain of a processor shift?

For the future (“we have to do it for the children”) and because it’s really not that painful for most developers.

Right now a G5 PowerMac is very performance-competitive with the best offerings from Intel. What Apple have been privy to, that the rest of us haven’t, is the future of both Intel and PPC processors. Based on that future, Apple decided they had no choice but to make the change. Going forward, a PPC chip will deliver “15 units of processing” per watt of power, according to Mr Jobs; the same watt would give 70 units of performance on an Intel processor. Without knowing exactly how those figures were derived, and what they mean for real-world processing power, it seems like a significant difference. It was enough to push Apple to make the change.

Not that there’s anything wrong with the PPC architecture: IBM continue to develop and use it at the high end and PPC chips (triple core “G5” chips) will power the Microsoft XBox360. The sales of chips to Microsoft will well and truly outweigh the loss of business from Apple. It is, however, a crazy world: next year will see a Microsoft product powered by PPC and Macintoshes powered by Intel!

Steve Jobs demonstrated how easy it will be for developers to port applications to OS X on Intel. In fact, he confirmed long-running rumors that Apple have kept OS X working on Intel processors throughout its development – Mr Jobs demonstrated this and ran his keynote from an Intel Macintosh. For most applications a simple recompile in the Xcode developer environment will suffice – a matter of a few hours’ work at most. Moreover, even if the developer does not recompile, Apple have a compatibility layer, called Rosetta, that will run pure PPC code on an Intel Mac. Both platforms are to be supported “well into the future”.

During the keynote Mathematica was demonstrated (huge application; 12 lines of code out of 20 million needed changing; 2 hours’ work), as were office applications. Commitments to port Adobe’s creative suite and Microsoft’s Mac Business Unit software were presented. Apple have been working on Intel-compatible versions of all their internal applications, according to Mr Jobs. [Added] Luxology’s president has since noted that their 3D modelling tool modo took just 20 minutes to port, because it was already Xcode-based and built on modern Mach-O code.

Remember, these applications are for an Intel-powered OS X Macintosh. No applications are being developed for Windows. In fact, after the keynote, Senior Vice President Phil Schiller addressed the issue of Windows: although it would be theoretically possible to run Windows on an Intel Macintosh, it will not be possible to run OS X on anything but an Apple Macintosh.

Apple’s professional video and audio applications might not be as trivial to port, although most of the modern suite should have no problem. LiveType, Soundtrack Pro, DVD Studio Pro and Motion are all new applications built in the Cocoa development environment and will port easily. Final Cut Pro may be less trivial. It has a heritage as a Carbon application, although the code has been tweaked for OS X over recent releases. More than most applications, Final Cut Pro relies on the AltiVec vector processing of the PPC chip for its performance. But even there, the improvement in processor speeds on the Intel line by the time Intel Macs are released is likely to compensate for the loss of vector processing. At worst there will be a short-term dip in performance. With Intel Macintoshes rolling out from June 2006, it’s likely we’ll see an optimized version of Final Cut Pro ready by the time it’s needed.

[Added] Another consideration is the move to using the GPU over the CPU. While the move to Intel chips makes no specific change to that migration – graphics card drivers for OS X still need to be written for the workstation-class cards – Final Cut Pro could migrate to OS X technologies like Core Video to compensate for the lack of AltiVec optimizations for certain functions, like compositing. Perhaps then, finally, we could have real-time composite modes!

Will the announcement kill Apple’s hardware sales in the next year? Some certainly think so, but consider this: if you need the fastest Macintosh you can get, buy now. There will always be a faster computer out in a year, whatever you buy now. If your business does not need the fastest Mac now (and many don’t) then do what you’d always do: wait until it makes sense. The G5 you buy now will still be viable well beyond the time its speed is useful in a professional post-production environment. It’s likely there will be speed bumps in the current G5 line over the next year, as IBM gets better performance out of its chips; any Intel-based speed improvement waits on a new generation of chips from Intel. If Apple magically converted the current G5 line to the best chips Intel has to offer now, there would be little speed improvement: this change is for the future, not the present.

So I don’t think it will affect hardware sales significantly. As a laptop user I’m not likely to upgrade to a new G4 laptop, but then there would be few speed boosts available there in the next year anyway. And as a laptop user, I’m keen to get a faster PowerBook, and an Intel chip will make that possible.

I have to say I initially discounted the reports late last week because, based on current chip developments, there seemed little advantage in a difficult architecture change. With the full picture revealed in the keynote – the long-term advantages and the minimal discomfort for developers – it seems like a reasonable move that will change very little, except to give us faster Macs in the future.

How could we have any problem with that?

[Added] Good FAQ from Giles Turnbull at O’Reilly’s Developer Weblog

iTunes becomes a movie management tool

iTunes has been doing movies for some time now – trailers from the Apple Movie Trailers website have been passed through iTunes for full-screen playback, leading many to believe that Apple were grooming iTunes for eventual movie distribution.

Well, iTunes 4.8 will do nothing to dispel the rumor mongers – in version 4.8 iTunes gains more movie management and playback features, including movie playlists and full-screen playback. Simply drag a movie, or folders of movies (any .mov or .mp4, whatever the size), into the iTunes interface and they become playlists.

Playback can be in the main interface (in the area otherwise occupied by album artwork); in a separate movie window (non-floating, so it will go behind the iTunes main interface); or full screen. Visuals can be of individual movies or of playlists – audio always plays through the playlist, regardless of the setting controlling the visuals.

If one had to speculate (and one does, really, in the face of Apple’s enticement) it certainly seems that Apple are evolving iTunes toward movie management. The primary driver of this development in version 4.8 is the inclusion of “behind the scenes/making of” videos with some albums. For example, the Dave Matthews Band “Stand Up” album in the iTunes Music Store features “an (sic) fascinating behind-the-scenes look at band’s (sic) creative process with the bonus video download.” The additional movie content gives Apple the excuse to charge an extra $2 for the album ($11.99 while most albums are $9.99).

There is a lot of “chatter in the channel” about delivery of movies to computers, or to a lounge-room viewing device (derived from a computer but simplified). Robert Cringely, among others, seems to think the Mac mini is well positioned for the role of lounge-room device. Perhaps; others, like DaveTV, think a dedicated box or network will be the way to go. Ultimately it will come down to two things: content and convenience.

Recreational television and movie watching is a lean-back experience – performed relaxed in a comfortable chair at the end of a busy day, with little active involvement of the mind. Even home theater consumption of movies is not quite the same experience as a cinema (although close enough to it for many people). It will take a major shift in thinking for the “TV” to become a “Media Center” outside of the college dorm room. We’re still many years from digital television broadcasting being the norm, let alone HD delivery to in-home screens big enough to actually display it at useful viewing distances. (If you want the HD experience right now on a computer screen, Apple have some gorgeous examples in their H.264 HD Gallery. QuickTime 7, a big screen and a beefy system are prerequisites, but the quality is stunning.)

Apple do not have to move fast, nor be first, with the “Home Media Center” to ultimately be successful. Look at what happened with the iPod and iTunes in the first place. The iPod was neither the first “MP3 player” nor, some would argue, “the best”, but it had a superior overall experience, aided by a huge “coolness” factor. So even if Apple are planning an “iTunes Music Store for movies” some time down the path, it’s not something I’d expect to be announced at Macworld in January 2006, or even 2007!

In the meantime, the new movie management features in iTunes are great. This is not a professional video asset management tool – we’ll have to look elsewhere for that (something I hope the Pro Apps group is working on) – but it is a tool for organizing and playing videos. I have collected show reels and other design pieces I look to for creative inspiration, but until now there was no way of organizing them easily. Now I can import them all into iTunes and create playlists for “titles”, “3D”, “design”, “action” and so on for when I need inspiration. Movies can be in multiple playlists, just like music.

I can wait to see what Apple have planned for the future; in the meantime, I’m happy with a new tool in my toolbox.

Can a computer replace an editor?

Before we determine whether or not a computer is likely to replace an editor, we need to discuss just what the role of an editor is – the human being who drives the software (or hardware) that edits pictures and sound. What do they bring to the production process? Having determined that, perhaps we can consider what a piece of computer software might be capable of, now or in the future.

First off I think we need to rid ourselves of any concept that there is just one “editor” role even though there is only one term to cover a vast range of roles in post production. Editing an event video does not use the same skills and techniques as editing a major motion picture; documentary editing is different from episodic television; despite the expectation of similarity, documentary editing and reality television require very different approaches. There is a huge difference in the skills of an off-line editor (story) and an on-line editor (technical accuracy) even if the two roles are filled by the same person.

So let’s start with what I think will take a long time for any computer algorithm to be able to do. Nothing in current technology – in use or in the lab – would lead to an expectation that an algorithm could find the story in 40 hours of source and make an emotionally compelling, or even vaguely interesting, 45-minute program. That is almost certainly not going to happen in my lifetime. There’s a higher chance of an interactive storytelling environment à la Star Trek’s Holodeck (sans solid projection). Conceptually that type of environment is probably less than 30 years away, but that’s another story.

If a computer algorithm can’t find the story or make an emotionally compelling program, what can it do? Well, as we discovered earlier, not all editing is the same. There is a lot of fairly repetitive, assembly-line work labeled as editing: news, corporate video and event videography are all quite routine and could conceivably be automated, if not completely then at least in part. Then there is the possibility of new forms of media consumption that could be edited by software based on metadata.

In fact, all algorithmic editing relies on metadata – descriptions of the content that the software can understand. This is analogous to human logging and log notes in traditional editing. The more metadata the software has about the media, the better able it is to create some sort of edit. For now that metadata will mostly come from the logging process. (The editor may be out of a job, but the assistant remains employed!) That is the current situation, but there’s reason to believe it could change in the future – more on that later in the piece.

If we really think about what we do as editors on these more routine jobs, we realize that there is a series of thought processes we go through – underlying “algorithms” that determine why one shot goes into this context rather than another shot.

At the most basic level, take an example from editing interview content. Two shots of the same person have audio content we want in sequence, but the effect is a jump cut. [If two shots in sequence feature the same person, same framing…] At this point we choose between putting another shot in there – say, from another interview – or laying in b-roll to cover the jump cut. […then swap in an alternate shot on the same topic. If no shot with the same topic is available, then choose b-roll.]
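
Spelled out as code – a toy sketch in Python with invented field names, not any shipping system – that bracketed rule might read:

```python
# A toy rendering of the bracketed rule above, assuming hand-logged
# metadata (person, framing, topic) on each clip. A sketch of the
# editor's thought process, not a real editing engine.
def avoid_jump_cut(shot_a, shot_b, alternates, b_roll):
    """Return (shot_to_use, overlay) for the edit that follows shot_a."""
    is_jump_cut = (shot_a["person"] == shot_b["person"]
                   and shot_a["framing"] == shot_b["framing"])
    if not is_jump_cut:
        return shot_b, None          # a straight cut works
    for alt in alternates:           # e.g. the same topic from another interview
        if alt["topic"] == shot_b["topic"]:
            return alt, None         # swap in the alternate shot
    return shot_b, b_roll            # no alternate: cover the cut with b-roll

a = {"person": "ceo", "framing": "mcu", "topic": "launch"}
b = {"person": "ceo", "framing": "mcu", "topic": "launch"}
print(avoid_jump_cut(a, b, alternates=[], b_roll={"clip": "factory_floor"}))
```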

That’s a rudimentary example, and it doesn’t take into account the value judgment the human editor brings as to whether another interview conveys the story or emotion as well. Most editors are unfamiliar with their underlying thought processes and not analytical about why any given edit “works” – they just know it does. But ultimately that judgment is based on something: some learned skill, some thought process, something. With enough effort that process can be analyzed and, in some far distant time and place, reproduced in software. Or it could, except for that tricky emotional element – the thing that makes our storytelling interesting and worth watching.

The more emotion is involved in your storytelling output, the safer your job – or the longer it might be before it can be replaced. 🙂

The examples of computerized editing available right now – Magic iMovie and Muvee Auto Producer – use relatively unsophisticated techniques to build “edited” movies. Magic iMovie essentially adds transitions to avoid jump-cut problems and builds to a template; Muvee Auto Producer requires you to vet shots (thumbs up or down), then uses a style template and cues derived from the audio to “edit” the program. This is not a threat to any professional or semi-professional editor with even the smallest amount of skill.

However, it is only a matter of time before some editing functions are automated. Event videography and corporate presentations are very adaptable to a slightly more sophisticated version of these baby step products. OK, a seriously more sophisticated version of these baby-step products, but the difference between slightly and seriously is about 3 years of development!

In the meantime, there are other uses for “automated” editing. For example, I developed a “proof of concept” piece for QuickTime Live! in February 2002 that used automated editing as a means of exploring the bulk of material shot for a documentary but not included in the edited piece. It was not intended as direct competition for the editor (particularly as that was me); it was intended as a means of creating short edited videos, customized in answer to a plain-language query of a database. The database contained metadata about the clips – extended logging information, really. In addition to who, where and when, there are fields for keywords, a numeric value for the relative usefulness of the clip, and a field of keywords for finding b-roll. [If b-roll matches this, search for more than one clip in the search result, edit them together and lay b-roll over all the clips that use this b-roll.]
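
As a rough illustration of how such metadata might drive clip selection – the field names here are invented for the example, not the original schema – the core of the query step amounts to a filter and a sort:

```python
# Illustrative clip records, standing in for the logging database.
clips = [
    {"id": "int_04", "who": "skipper", "keywords": {"storm", "weather"},
     "usefulness": 8, "b_roll_keywords": {"harbour"}},
    {"id": "int_11", "who": "harbourmaster", "keywords": {"storm", "rescue"},
     "usefulness": 6, "b_roll_keywords": {"lifeboat"}},
    {"id": "int_17", "who": "historian", "keywords": {"settlement"},
     "usefulness": 7, "b_roll_keywords": set()},
]

def select_clips(query_terms, library, limit=3):
    """Rank clips whose keywords overlap the query, most useful first."""
    matches = [c for c in library if c["keywords"] & query_terms]
    return sorted(matches, key=lambda c: c["usefulness"], reverse=True)[:limit]

for clip in select_clips({"storm"}, clips):
    print(clip["id"], "- find b-roll matching", clip["b_roll_keywords"] or "(none)")
```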

So, right now, computer editing can be taught rudimentary skills. This particular piece of software knows how to avoid jump cuts and how to cut to length based on the quality criteria. It is, in fact, a better editor than many people who don’t know the basic grammar of video editing. Teaching the basic grammar is relatively easy. Teaching software to take some basic clips and cut them into a news item, or even a basic template-based corporate video, is only a matter of putting in some energy and effort.

But making something that is emotionally compelling – not any time soon.

Here’s how I see it panning out over the next couple of years. Basic editing skills from human-entered metadata – easy. Generating that metadata by having the computer recognize the images – possible now but extremely expensive. Having a computer edit an emotionally compelling piece – priceless.

It’s not unrealistic to expect, probably before the end of the decade, that a field tape could be fed into some future software system that recognizes the shots as wide, medium, close-up and so on; identifies shots in specific locations and with specific people (based on having been shown examples of each); and transcribes the voice content and the text in signs and other places in the image. Software will recognize poor exposure, loss of contrast and loss of focus, eliminating shots that do not stand up technically. Nothing here is that difficult – it’s already being done to some degree in high-end systems that cost more than $300,000 right now. From there it’s only a matter of time before the price comes down and the quality goes up.

Tie that together with a template base for common editing formats and variations, and an editing algorithm that’s not much further on than where we are now, and it’s reasonable to expect to be able to feed one or more source tapes into the system in the afternoon and come back the next morning to review several edited variations. A couple of mouse-clicks to choose the best of each variation and the project’s done – output to a DVD (or next-generation optical disc), to a template-based website, or uploaded to the play-out server.

Nothing here is far-fetched. Developing the basic algorithm was way too easy, and it works for its design goals. Going another step is only a matter of time and investment. Such is the case with anything repetitive in nature: ultimately it can be reproduced in a “good enough” manner. It’s part of a trend I call the “templatorization” of the industry. But that’s another blog discussion. For now, editors who do truly creative, original work need not be worried; but if you’re hacking together video in an assembly-line fashion, start thinking about that fall-back career.

Avid buys Pinnacle – the fallout

The acquisition of Pinnacle will greatly strengthen Avid’s broadcast video offerings – the area of their business that has been strongest in recent years – but it will create challenges in integrating product lines and cultures. It is a move that brings further consolidation to the post-production business.

Pinnacle has been in acquisition mode for most of the last five years, acquiring, among others, Miro, Targa, Dazzle, Fast and Steinberg (sold on to Yamaha recently). It has a diverse set of products in four major lines:

  • Broadcast Tools – Deko On Air graphics products (Character Generators) and MediaStream playout servers;
  • Consumer editing software and hardware – with 10 million customers;
  • Professional Editing – The Liquid product line acquired from Fast; and
  • Editing Hardware – Cinewave and T3000, based on the Targa acquisition.

Pinnacle has won nine Emmy Awards for its broadcast product lines.

There will be conflicts and opportunities for Avid. The acquisition presents Avid with a new opportunity to create a consumer brand, and Avid CEO David Krall has announced that a new consumer division will be formed, analogous to the M-Audio consumer audio division acquired last year. (M-Audio is the consumer parallel to Avid’s Digidesign professional division.) The acquisition also consolidates Avid’s position supplying the broadcast markets, making the company more of a "one stop shop" for a broadcast facility. There is definitely engineering work to be done on integrating the two technology lines, but there are no particular challenges there, and savings are to be made in streamlining sales and marketing. In broadcast there are only pluses for Avid.

Consumer

Bringing the Avid brand into the consumer market has a slight risk of diluting the Avid editing brand – if consumers edit on "Avid" what’s special about professional editors? However, by carefully managing product brands over company brand, as has been done with M-Audio, there should be an opportunity to bring some of those retail customers up to Xpress or Adrenaline products as their need grows, similar to the way Apple have a path for their iMovie customers to move up to Final Cut Express or Final Cut Pro.

Hardware

Avid and Pinnacle have had a long relationship on the hardware side – Targa supplied the first boards Avid used for video acquisition and the Meridien hardware was designed to Avid’s specifications but manufactured by Pinnacle as an OEM. Whether Avid has any use for the aging T3000 hardware product line (like Cinewave based on the Hub3 programmable architecture that was the primary driver of the Targa purchase) is debatable: Avid have embraced the CPU/GPU future for their products and are unlikely to change course again.

Cinewave

It almost certainly spells the end of Pinnacle’s only Mac product – Cinewave. Rumors were spreading, independently of the Avid purchase, that Cinewave was at the end of its product life, possibly spurred by changes coming in a future version of Final Cut Pro that would no longer support direct hardware effects. Regardless of whether there was any foundation to that rumor, Cinewave is an isolated product in that product group and is based on relatively old technology. It is a tribute to the design flexibility and the engineering team that essentially the same hardware is still in active production four years after release. Whether the product dies because it’s reached the end of its natural life, or because Avid could not be seen to be supporting the competing Final Cut Pro, it’s definitely at an end.

Liquid

There is, however, one part of the integration that simply does not fit: Pinnacle’s Liquid NLE software. Avid are acquiring an excellent engineering team – the former FAST team out of Germany – but the two NLEs have no commonality. Integrating features from one NLE into another is not trivial, as the code bases are unlikely to have any compatibility, and attempting to move Avid’s customer base toward any Liquid editor is unlikely to have any success at all.

Avid could simply let the product line die. The Liquid range has not exactly sold like hotcakes. This scenario would bring the best of the features and engineers into the Avid family and we’d see the results in 2-3 years as engineering teams merged.

They could, of course, leave Liquid alone – set it up as a division within the company and leave it be. Avid have done that with Digidesign, Softimage and M-Audio: no radical changes and slow integration of technologies where it makes sense. Liquid has probably taken few customers from Avid to date – few Composer customers have moved to Liquid. Instead, Liquid has acquired new NLE customers, or people moving "up" from other NLEs. Liquid’s strongest customer bases are in small studios and in broadcast markets.

Even though Avid have let Digidesign and M-Audio compete where there is some overlap, it’s hard to imagine keeping a full product line that directly competes with the flagship products – on cheaper hardware at lower cost. Hard to imagine, but not impossible. It would be the behavior most consistent with past acquisitions, but one that would require a delicate balancing act: retaining the new customers Pinnacle are bringing to the fold without cutting into the more profitable Xpress, Media Composer and DS products.

Transaction

The transaction values Pinnacle at $462 million based on Avid’s closing price yesterday and will be handled by a combination of cash and shares. Avid will pay about $71 million in cash and issue 6.2 million new shares to the holders of Pinnacle stock, who will then make up about 15% of Avid’s shareholders. The transaction has been approved by the Boards of both companies but must still be approved by regulators and shareholders and is not expected to close until the 2nd or 3rd quarter of 2005.

The companies expect savings in regulatory costs, marketing and sales. We can expect little to change in the short term except, probably, some volatility in Avid’s stock price as people try to work out what it all means.

NAB is going to be interesting this year.

What are good visuals?

Perhaps it’s my background in video production and my strong desire to match media and message, but I’ve been seeing some incredibly inappropriate ways of delivering a message “visually”. The specific example that prompted me to write is this one. The piece is actually a very interesting pseudo-documentary looking back at how media changes – perhaps its content is blog-worthy some other time. What annoyed me was that it was being used as an example of a “good use of Flash” when in fact I thought the visuals were so poor that, in all probability, the choice of a visual medium was a mistake. If you don’t have visual content, don’t do visuals is a good rule of thumb, I think.

Another example of, imho, really lame visuals used to waste time and attempt to make a silk purse out of a sow’s ear is this marketing hype. Again it’s Flash, but with poor-quality visuals (blown up way too big), super-slow pacing and a message that, to me, is cloyingly saccharine. (On the last point I am probably alone – it’s been very successful as a viral marketing piece, so it must appeal to a lot of people.)

What bothers me about these pieces, and about a lot of podcasts, is that they are incredibly inefficient. One regular podcast I once listened to (on the topic of Final Cut Pro et al.) takes about 20 minutes a week to listen to, for what would be a 3-minute read on a web page, because the podcaster simply reads a script (or seems to be reading a script). OK, it could be listened to during a commute or at the gym, where the 20 minutes wouldn’t be an imposition, but surely, if you’re going to use an audio medium, it should be produced as an audio medium?

Ditto the visual medium – I have always hated making a “video” for a client that was essentially an audio program with visuals forced onto it. (Like the piece at the head of this article.) Have we forgotten the imaginative power of radio? I’ll bet the movie version of War of the Worlds due out soon has none of the impact of the original 1938 broadcast. There are great radio documentaries produced that would make awesome podcasts; instead we get lame “read my script” or “come into my office and chat” podcasts that have zero production value. The Media 2014 example has great writing and the audio production is excellent, but the visuals (which probably took the most time) add very little, imho.

This is what worries me about vlogcasting – even basic video production requires some time, more time than most people want to put into a blog or podcast, so what’s going to happen? Gigabytes of bandwidth occupied by badly lit, poorly edited shaky-cam that is virtually unwatchable? It’s already happening: download the Ant vlogcasting client and try to find something worth your time watching. There’s little evidence of strong writing or great production there – at least in what I’ve found (and if you find something great, ping me on it so I can share the excitement).

Where’s this going? Well, there’s still going to be a role for production skills in some vlogcasting, particularly if we adopt subscription-based channels of information. (The “RSS, Vlogcasting and Distribution Opportunities” blog entry is back, after editing.) It’s another example of how production specialists will need to adapt, and to advise clients on the most appropriate distribution methodology. Just having basic production skills won’t be enough, but they will be a marketable commodity, and profitable when part of the full service we offer customers for their communication needs. Also necessary will be the judgment and sense to tell customers when they don’t need “a video” – when a website or brochure will work better for them. Savvy people will have those skills as well – if not personally, then within their network.

RSS, Vlogcasting and Distribution Opportunities [Edited]

Earlier I wrote about podcasting and its rapid uptake. Well, there’s every indication that video podcasting, of some sort, will follow. I think this is a tremendous opportunity for content creators, because podcasting isn’t broadcast but an opt-in subscription service. In any discussion of these subjects it keeps echoing in my mind that RSS is really simple subscription management (and yes, conditional access is possible).

To draw some parallels with traditional media: blogs are the journalism and writing, and RSS is the publishing channel (the network). Blogs and podcasting are bypass technologies – they bypass traditional channels. If pressed for an explanation of the truly astounding growth of podcasting, and of blogging to a lesser degree, I would hypothesize that they are to some degree a reaction against the uniformity of voice of modern media, where one company owns a very large proportion of the radio stations in the US (and music promotion, and billboards) and news media is limited to a half dozen large-company sources with little bite and no diversity.

The “blogosphere” (hate that word, but it’s in common use) broke the faked CBS service papers story during the last Presidential election campaign, and even in the last few weeks has been instrumental in the firing of a high-level CNN executive and in revealing the “fake” White House journalist and his sordid past. Collectively, at least, this is real journalism – and more importantly, it’s investigative journalism of the sort that isn’t done by traditional news outlets.

Blogging is popular because it’s easy and inexpensive. Sign up for a free blog on a major service, or download open-source blogging software like WordPress (which I use) to run on your own server. In a few minutes your voice is out there to be found. In my mind it harkens back to the days of Wild West newspapers, where someone would set up a printing press and suddenly be a newspaper publisher. But unlike a newspaper, blogs have an irregular publishing schedule. You can bookmark your favorite blogs and check them (when you remember), or you can be notified by your RSS aggregator application when there’s a new post (the URL for this blog’s RSS feed is at the bottom right of the page if you want to add it to your favorites).

Podcasting is easy and inexpensive – unless your podcast becomes popular, when the bandwidth expense becomes considerable. Podcasting is a superior replacement for webcasting or streaming that does not have to be in real time: it’s produced, then automatically delivered for consumption when it suits the listener. Those are the key attributes that, in my opinion, contribute to its success. There’s no need for a listener to tune in at exactly the time it’s “broadcast” – listen or miss it – or even to remember to visit some archive and download. My own experience with DV Guys totally parallels this. DV Guys has a nearly five-year history as a live show (Thursdays 6-7 pm Pacific) but has always been more popular through its archive pages.

Shortly after the advent of podcasting we set up a podcast of the live show, available by the next day. Since then DV Guys has enjoyed more listeners than at any other time in its life. People tended to forget to visit the archive site weekly, or every second week. Even visiting every couple of weeks was too much of a commitment for a show that, while entertaining and informative, wasn’t at the top of everyone’s “must do” list. But as a podcast, DV Guys is ready and available whenever a listener has a few moments – at the gym, during a commute, while waiting for a meeting or at an airport. DV Guys, like most radio, is consumable anytime. Importantly, it puts the listener, not the creator, in control of the listening experience.

Podcasting audio has another advantage – it’s easy to create. Almost as easy as blogging, but not quite. There are, consequently, proportionally fewer podcasts than there are blogs because of that higher entry requirement. Even then, most podcasts are simply guys (mostly guys) talking into a microphone from a prepared script, or a few people together talking around a microphone. More highly produced podcasts are rarer.

The simplicity of publishing a blog means that it can be published for as few as half a dozen people – in fact there are people looking to use blogs and wikis as part of a project management tool. Podcasting can reach thousands but in broadcast terms that’s a tiny niche market.

Here’s a new truism – almost all markets are niche markets. What these new publishing models do is aggregate enough people in a niche to make it a market. There’s a lot of money to be made in niches. Particularly in the US, with its multiplicity of cable channels, small niches in the entertainment industry can be aggregated, with appropriate low-cost distribution channels, into profitable businesses. There are a lot of niches that are too small to have their needs met by even a niche cable network, so cable channels get subdivided – or there’s no content for small niches.

RSS, low-cost production tools and P2P are your new distribution channel. This is the other side of the production revolution we’ve been experiencing over the last 10 years, in which the cost of “broadcast quality” content has dropped from an equipment budget of $200,000 and up (for camera, recorder and edit suite with titling and DVE) to similar quality at well under $20,000 (with many people doing it for under $10,000). In the end, a computer (or computer-like) device will be one of the inputs to that big-screen TV you covet, and you’ll watch content fed via subscription when it suits. If it’s not news, it can come whenever, ready to be watched whenever.

The relative difficulty of producing watchable video content will further limit the numbers (as happened from blogging to podcasting), and the current state of video blogs will make experienced professionals cry. That should not stop you planning for your own network. Instead of “The Comedy Network” or the “Sci-Fi Network”, prepare for a world of the “Fingernail Beauty Network”, the “Fiat Maintenance Network” or the “Guys Who Like to Turn Wood on a Lathe Network”. Content could be targeted geographically or demographically. There are very profitable niches available. Two that I’ve been involved with, at the video production level, were for people who like to paint china plates (challenging to light) and basic metalwork skill training. There’s no need to fill 24 hours a day, 7 days a week, with this network model. When new content is produced, it’s ready for consumption whenever viewers want it. We do something similar with our Pro Apps Hub, where we publish content from our training programs piecemeal, as it’s produced, before we aggregate the disc version.

Note that I am not, basically, talking about computer-based viewing. My expectation is that software and hardware solutions will evolve into something usable as a home entertainment device. “TV” is a kick-back, put-your-feet-up experience; video on the computer is a lean-forward, pay-attention experience. While both could be the end target of this publication model, what I’m really talking about is content for that lean-back experience.

Now, I don’t expect “Hollywood” (as a collective noun for big media) to embrace this model early, or even ever, but that doesn’t mean it’s not going to become viable. The most popular content will probably still go through the current distribution channels, however they evolve. It also doesn’t mean we’ll be restricted to small-budget production. Models could (and should) evolve in which the viewers are in much closer touch with the producers, without the gatekeepers.

For example, the basic skill training video series I produced back in Australia was niche programming. There were, effectively, 75 direct customers in the small Australian market (smaller, I should point out, than California alone). No customer or central group had money for production but each one had $150 – $300 to buy a copy of a product. Since these were very simple productions requiring a small crew and being produced in a regional city, each project had about a 30% profit margin. If the same proportions applied to the US market, the budget would have doubled but the profit would have quadrupled or more.

Take another, more current, example. Star Trek Enterprise has been canned, but the last season had 2.5 million viewers an episode with a budget of $1.6 million an episode. If each viewer paid 75c for the episode, delivered directly to their “TV storage device” (somewhere between a Media Center PC, Mac mini or TiVo), the producers would turn a profit of about $275,000 an episode – roughly a 17% margin on what they were getting from the network. At 99c a show, that’s around 55% more revenue than was coming from the network. And the audience isn’t limited to just the US market – that same content can be delivered to Enterprise fans anywhere in the world. As the producer, I could live on 13 episodes at $275,000 profit above and beyond previous costs (which presumably included some salary and profit). Moreover, producers wouldn’t be locked into ratings cycles and the matching boom/bust production cycle.
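
For anyone who wants to check the sums, here they are worked through; the viewer count and budget are the figures above, and everything else follows from them:

```python
# The direct-distribution sums, worked through. 2.5M viewers and a
# $1.6M per-episode budget are the figures quoted above.
viewers = 2_500_000
budget = 1_600_000  # dollars per episode

for price in (0.75, 0.99):
    revenue = viewers * price
    profit = revenue - budget
    print(f"at ${price:.2f}: revenue ${revenue:,.0f}, profit ${profit:,.0f} "
          f"({profit / budget:.0%} margin on the old budget)")
# at $0.75: revenue $1,875,000, profit $275,000 (17% margin on the old budget)
# at $0.99: revenue $2,475,000, profit $875,000 (55% margin on the old budget)
```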

It doesn’t matter if each high-quality (HD if you want) episode takes 20 hours to download “in the background”. When it’s complete and ready to watch, it appears in the playlist as available.

Bandwidth would be the killer in that scenario – even with efficient MPEG-4 H.264 encoding, a decent SD image is going to require 1-1.5 Mbit/sec and HD is going to want 7 or 8 Mbit/sec. Assuming 45-minute episodes (sans commercials), that’s around 500 MB an episode per subscriber in SD, or around 2.7 GB in HD. Across 2+ million subscribers that’s going to eat my profit margin rather badly without another solution. There are technologies in place that could be adapted.
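
The delivery arithmetic, worked through from those bitrates, shows the scale of the problem:

```python
# File size per episode: bitrate (bits/sec) x duration (sec) / 8 bits per byte.
def episode_gigabytes(mbit_per_sec, minutes=45):
    return mbit_per_sec * 1e6 * minutes * 60 / 8 / 1e9

for label, rate in (("SD @ 1.5 Mbit/s", 1.5), ("HD @ 8 Mbit/s", 8)):
    per_viewer = episode_gigabytes(rate)
    audience_tb = per_viewer * 2_500_000 / 1000   # across the Enterprise audience
    print(f"{label}: {per_viewer:.2f} GB per viewer, ~{audience_tb:,.0f} TB total")
# SD @ 1.5 Mbit/s: 0.51 GB per viewer, ~1,266 TB total
# HD @ 8 Mbit/s: 2.70 GB per viewer, ~6,750 TB total
```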

Assuming the bandwidth challenge is resolved, what’s left?

Two things, mostly: the device that stores the bits ready for display on the TV, and software to manage the conditional access (you only get the bits you bought) and playlists – something like a video/movie version of the iTunes Music Store. We’ll need to wait for a big player like Apple or Sony to wield enough muscle for that. In the meantime we see the beginnings with Ant, but as a computer interface, and without the simplicity and elegance of a Dish/DirecTV/TiVo user interface. But it will come.
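
Conditional access over plain RSS needn’t be exotic, either. One plausible approach – my assumption for illustration, not any shipping product – is a personal feed URL per subscriber carrying a keyed hash, with the server only handing over the bits when the token checks out:

```python
# A sketch of per-subscriber feed URLs for conditional access. The
# domain, secret and subscriber IDs are all hypothetical.
import hashlib
import hmac

SECRET = b"publisher-side secret key"

def feed_url(subscriber_id: str) -> str:
    token = hmac.new(SECRET, subscriber_id.encode(), hashlib.sha256).hexdigest()
    return f"http://example.com/feeds/{subscriber_id}.xml?token={token}"

def token_valid(subscriber_id: str, token: str) -> bool:
    expected = hmac.new(SECRET, subscriber_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)  # constant-time comparison

url = feed_url("subscriber-0042")
print(url)                                                          # personal feed
print(token_valid("subscriber-0042", url.rsplit("token=", 1)[1]))   # True
```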

Will you have your network business plan ready? I’m working on mine already.

Wired has another take. Videoblogging already has a home page, and for a bit of thinking on the flip side – how this might all work for the individual wanting to aggregate a personal channel – Robin Good has an article on Personal Media Aggregators in one of my favorite (i.e. challenging) blogs.

Companies come, companies go

On the last Friday of the month I usually head over to Burbank for Alpha Dogs’ Editors Lounge – a great opportunity for editors in the LA area to hang out together and, usually, geek out on some new technology or technique in the presentation. I love heading over there for the social time and the presentation.

So why not go this month? Well, the Media 100 folk were presenting Media 100 HD, and I hate attending a wake before the death (even if the corpse smells rather bad). What’s happening with Media 100 is very sad for me. My first purchase of a Media 100, back in 1994, was the single best business decision I ever made. Media 100 made the first NLE that was truly finished quality on the desktop, with a relatively simple editing interface that, for me, brought high-quality video into the computer as pixels for further manipulation. (My second purchase was CoSA After Effects, to manipulate those pixels!)

But Media 100 was often the underdog: the editing software’s strength was its simplicity – after all, John Molinari’s intention was to democratize video editing – in an industry where more features was always the mantra. The hardware had the highest-quality video codec in those early days, and well into the transition to PCI cards. But they failed to adapt to a changing market because, at heart, Media 100 was a hardware company, and the world moved to software – or, more accurately, is somewhere along the journey to having moved to software. The “democratization” that CEO John Molinari had spoken of in his Blood Secrets article, originally published in Videography in June 1994, was much better fulfilled by Apple Computer, starting with the release of Final Cut Pro and iMovie. There’s more on Media 100’s vision in my own 2001 article.

For a democratized industry, software can fulfill the role much more affordably than hardware, and Media 100 were blind-sided by the rapid take-up of DV right in their target market. Media 100 could not react quickly enough and, despite Media 100’s 4:2:2 codec and higher image quality for the discerning eye, DV was good enough for most people. Media 100 had started work on new hardware, but politics meant that the work done on Macintosh was abandoned and started over on Windows. Such an ambitious project as 844/X took longer than anticipated to come to market, was on the “wrong” platform for the customer base, and its starting price, while very competitive, was still too high for a market that had collapsed in the post-dot-com era.

Media 100’s assets were purchased by Optibase, who seemed to have the right idea until these last couple of months, when they went back on announcements of just last year, dropped the wrong product line (844/X) and instead pushed ahead into the only market where they have no opportunity to survive – Media 100 HD. The market for the few unique features that Media 100 HD has is very small – too small for the division to survive. 844/X is now, for all intents and purposes, dead, unless some other company moves it forward into HD. As an HD system it would have had a couple of years where hardware has a unique advantage. But not under these owners, who have squandered the goodwill they had garnered.

Media 100 is not the only company to have come and gone in the years I’ve been in the industry – I remember Puffin Designs’ fun NAB costumes and soft toys; ICE’s white igloo stood out in a sea of similar-looking booths; and while Terran’s product still lives on, staggering near death at the hands of Discreet, the company is long gone.

The good thing is that, while companies come and go, the engineering expertise mostly stays in the industry. New managements give engineers a chance to innovate again in a clean environment. Discreet let the original Paint/Effect-to-combustion development team go, and Apple snapped up the whole team within a week, with the result being Motion down the track. Media 100’s engineers are mostly still working in the industry and still contributing.

So I’ll raise a glass this year at NAB as an official farewell. I will always appreciate the opportunity Media 100 provided to me and my business, with a side thanks to Mark Richards at Animotion/Adimex in Sydney, who up-sold me to Media 100. Despite officially still “being there”, I believe the Media 100 brand will be gone shortly. Such is the life cycle of innovative companies that don’t continue to innovate and take account of market shifts. It’s a salutary lesson to all of us in business – if we do not continue innovating and moving with our customers, we risk joining the ranks of the “once were great” of history.

Why aren’t there workstation-class graphics cards for the Mac?

With the news today that Matrox has announced a dual-link PCI graphics card designed to power dual-link monitors like Apple’s 30″ Cinema Display, I was once again prompted to ask why there are no workstation-class cards for OS X. The Parhelia is a good graphics card, but not a workstation-class card; even so, the nearest equivalents for OS X do not have the complement of output options that the Parhelia does. Pity there are no Mac drivers for it.

But it still begs the wider question of why high-end graphics cards, like 3D Labs’ Wildcat Realizm, aren’t available for the Mac. With increasing demand from applications like Motion – and, in the very near future, Core Video and Core Image in OS X 10.4 Tiger – Mac users need the power of these graphics cards to get the most out of their applications.

Of course, ATI, NVIDIA and Apple tend to point fingers at each other, although to the best of my understanding the hold-up is in the drivers, and apparently Apple write the drivers for OS X. Perhaps there’s a great push to get these cards into Macs when Tiger ships – we can only hope so. In the absence of hard information, though, I vote that we in the post-production industry let Apple know that we want these cards supported, so we can have better performance from Avid Adrenaline on OS X, Apple’s Motion, anything Core Video-based coming up (NAB is only 12 weeks away), Boris Blue, Combustion and more.

Until we get that support, there remain good reasons for true power graphics users to go with Windows.