Categories
Business & Marketing Distribution

Who’ll buy YouTube?

I can’t help but feel we’re in another dot-com-like bubble: MySpace sold to Fox Interactive for $600 million, and now YouTube’s founders are being coy, saying they “don’t think it’s worth $1 billion” but that they’d be OK with $600 million.

OK, now I don’t have a fancy MBA, and it could be that I’m a hick from Newcastle in Australia, but YouTube, for all its popularity (and it is popular), has absolutely no business model. Apart from a few Google ads on their site, they have no income. Conservative estimates are that serving up five million videos a day (or however many it is this week) costs the site over $1 million a month in bandwidth bills alone, leaving out server costs, office space and salaries. The $11.5 million they’ve raised from Sequoia Capital in two rounds ($3.5 and $8 million) isn’t going to last long before the business just stops. At least MySpace generates some income to justify its $600 million purchase price – and even that valuation was greeted with raised eyebrows.

Now this week, ZDNet’s Russell Shaw posts “One of these six companies will buy YouTube” and I have to explode somewhere. Why the heck would anyone pay $600 million for the opportunity to spend a million dollars a month, with no chance of recovering the investment or the ongoing costs?

“But Philip”, you say, “you’re missing the point. In a big company there are synergies that will help them make money.” That may be so, but we’ve heard that line before and the “synergies” between Time Warner and AOL don’t seem to have been that useful. That’s just one example – in general the so-called synergies don’t pan out and someone just loses a bunch of money, while the founders walk away rich. I’ve got no problem with Chad Hurley and Steve Chen walking away with a good portion of someone’s $600 million (after Sequoia take their share). People win the lottery every day.

It’s the mindset/lunacy/sheer stupidity of whoever buys it that I just can’t fathom. The six companies that ZDNet thought might be in the market are Adobe, Time Warner/AOL, Sony, Google, News Corp/Fox and Yahoo. Adobe does not need a showcase for Flash when YouTube and Google Video are already doing that for them, and Time Warner just started its own video sharing on its AOL property, with a “community reporter” video upload site at CNN. Google have Google Video and Yahoo just launched Yahoo Video; neither is as popular as YouTube, but they didn’t cost $600 million either!

The problem any large company would have, if they purchased YouTube, is that they either have to kill YouTube as it is, or fight many long, tedious (and expensive) lawsuits. YouTube today is popular because it’s full of copyright material uploaded by people without the rights to upload it. The copyright owners generally do not approve. A couple of shows, like The Daily Show and The Colbert Report, have said they have no problem, but most networks and program producers do. Even with YouTube’s policy of removing copyright material as soon as it’s pointed out to them, they’re still being sued by an LA-based producer for copyright infringement. (That suit is unlikely to succeed but doesn’t really help YouTube’s new owner.)

When you have an owner with deep pockets – NewsCorp, Sony or Yahoo – the lawsuits are going to come out of the woodwork and YouTube will have to remove all copyright material without clearances. There goes most of the appeal and value. To make some money back, there are two options: charge for the download, or add advertising.

Either move, or both, will kill the site’s appeal. What are they going to charge for a 37 second video of some dog biting its male companion in a sensitive spot? It’s not going to be $1.99, that’s for sure. How much advertising can you add to the head, or tail, of a 2 minute video before everyone abandons the site completely?

Personally, I don’t see a way out for YouTube. It’s a temporary phenomenon that is too good to be true, because it doesn’t follow the basic tenet of business: income has to (in some way) exceed expenses.

No doubt someone will buy it for some outrageous amount of money – maybe NewsCorp want to add it to MySpace. Sony would want to put rootkit, computer-killing DRM around it.

But when they do, I still won’t understand what the business model is, nor why it’s being bought.

Categories
Business & Marketing Random Thought

How business can be its own worst enemy

Seems that I am on a theme: if a supplier won’t provide the service I want to buy, I’ll go somewhere else. Well, it’s happened again. A website I used to keep open most of the time for quick reference finally drove me away tonight. Why? Because they’ve loaded their pages with so much Flash-based advertising that having the site open used more than 70% of my processor capacity by itself.

I have nothing against sites that carry advertising, although I do object to the processor load that Flash ads force on me. Advertising is a given on the Internet, and until this site forced me to act, I was prepared to ignore their frequently intrusive advertising in exchange for the free weather service they provided – much more up to date than the OS X 10.4 widget, which is often 2-3 hours out of date when it loads.

I almost reverted to that old standby – walking to the door and opening it – to check the weather when I noticed that ubiquitous RSS feed button on the site. Bliss, joy, glory!!! One click later and my weather is now in my favorite RSS feed aggregator (NetNewsWire Lite). Two items in the feed: weather prediction and current conditions. Exactly what I kept the browser window open full time to get.

Absolutely a reminder to me, and probably to anyone in business, that the customer has to come first. The moment we start creating pages so heavy in advertising that they become unwieldy for the customer, we effectively put ourselves out of business. It’s not like this site has tremendous overheads – they’re only aggregating and presenting information from the National Weather Service. Taking on advertising that slows the customer’s computer is greed, and greed only.

Worse still, the site has no feedback link, so I can’t even help them improve. As a serial entrepreneur of more than 30 years, I don’t enjoy negative feedback, but I want it and encourage it. I love it when people have positive comments about our products or my presentations – that feeds the ego and tells me what works. But it’s the negative comment, or the critical opinion, that I can use to improve my presentation and/or product.

And indeed, some of the best improvements to our products have come from critical customers. Thank you. Feedback on presentations helps me improve future presentations – the subsequent audiences thank you.

In this day of alternate distribution, our customers have many ways to get the information we supply – like using an RSS feed instead of visiting a website, something I’m a huge fan of because of its efficiency. RSS feeds can (and some do) carry advertising and I don’t mind that, because it’s one ad per feed message, generally small and definitely not the processor-hogging Flash banners that have become seemingly ubiquitous.

Customers have a choice. If we don’t focus entirely on their needs, we’re only in business temporarily. If we’re in post production and don’t focus on the customer’s need for improved communication in the context of their business and message, then there are plenty of alternatives. No longer are we the “gatekeepers” of production values because, frankly, anyone can buy or borrow the means of production with quality matching the best broadcasters of just a few years ago. Even HD has no significant barrier to entry.

When was the last time you solicited your customers for how you can improve?

Categories
Business & Marketing Distribution Random Thought

An industry divided

From recent announcements and manoeuvrings, it would seem like there are two content creation industries: the one that sees new forms of distribution as an opportunity to promote and extend brand, and the other that feels every new use, every feature has to be charged for – including, if the MPAA gets its way on Capitol Hill, some that have been free to date.

For the moment I only want to consider what some call “high value” content. Without wishing to denigrate videoblog/RSS subscription content and the important opportunities it opens for non-mainstream content, the industry I’m talking about here is the network/movie company/record company hegemony who make the content that the mainstream enjoy, and pay to enjoy: television, movies and recorded music.

In a week when Apple announced one million video sales through the iTunes store in 10 days, and NBC said it will release the nightly news free, the MPAA have been working hard in Washington to re-introduce the Broadcast Flag legislation, defeated in May 2005, with super-enhancements. Blu-ray has gained support from more studios because its Digital Rights Management is more draconian than the competing HD DVD camp’s, and Sony are in trouble for their spyware-based CD DRM. See my recent blog article When a good format “wins” for all the wrong reasons. Another take on the MPAA’s resurrection of the Broadcast Flag is at the Electronic Frontier Foundation.

Clearly a good part of the mainstream content creation industry considers the only way to protect its content is to lock it up, but even the MPAA has no delusions that it will actually prevent large-scale piracy. As quoted in the Cory Doctorow article at Boing Boing, they believe it will “keep honest users honest” or, more accurately, prevent honest users doing what they do now – watch, store, time-shift, space-shift or format-shift – without permission and payment in the future. I believe they are so caught up in their own world view that they cannot see how that will drive people to pirated copies that have no restrictions. They are surely realistic enough to figure that all DRM will be broken: if you can watch it or hear it, piracy can happen.

DRM will only cause dissension and force people toward pirated copies of the content. Actions like Sony’s – opening holes for worms and viruses to take over the computer, without warning people that such problematic software is being installed – will likely bring lawsuits that diminish the companies’ reputations. In short, there’s nothing to be gained by excessive locks and controls on content. It will, if nothing else, drive people further from existing sources, to new and developing alternatives. Every failed mainstream movie opens an opportunity for an independent. Every locked-down TV broadcast opens the way for episodic entertainment delivered direct to customers, charged directly with micropayments or supported by advertising.#

Apple have established a beachhead with $1.99 television episodes (similar to the cost-per-episode of the DVD release, although not as high quality*); NBC are using a “top and tail” advertisement to support the free nightly news videos. There are new payment alternatives coming that will, in turn, open new production alternatives. It’s time the MPAA, RIAA and their associates stop treating their customers as criminals and embrace new technology – use some imagination (OK, that’s a stretch for Hollywood I know, at least based on recent movie releases) and find the opportunity. If they don’t, others will, and the losers will be the entrenched industry. Who knows, that could be the best possible outcome.

# Further thoughts on these ideas are in my February post.
* The iPod is capable of higher resolution video so I suspect that the 320×240 size was chosen deliberately to not compete with DVD sales. At that quality it’s better than VHS but less than DVD quality for most content.

And finally, just to demonstrate the utter stupidity of the “DRM crowd”: Sony have not only done themselves huge PR damage with their virus-like rootkit protection, which opens the computer to other malware, but it will do nothing to prevent piracy – there are 20 million or so Macs that can rip the files off those CDs without any protection, because the rootkit only works on PCs!

Categories
Business & Marketing Distribution Random Thought

Broadcast Flag (again) and WIPO

Despite being defeated thoroughly in May this year, the media oligopoly (aka the RIAA and MPAA) are once again trying to reinstate the Broadcast Flag, which would take away existing media usage rights and attempt to control every consumer electronic device built in the future. In their world, if we’re video recording in the street and a car drives past playing copyrighted music, the video camera shuts down. Existing Fair Use rights go out the window.

But, as the Electronic Frontier Foundation points out, the Broadcast Flag was soundly defeated and Congress are reluctant to be seen meddling between the American consumer and their TV, so the industry has encouraged 20 suicidal Representatives to attempt to slip the revised legislation through the “back door” – on the back of a budget bill.

Here’s what they’re attempting to slip through unnoticed:

The Federal Communications Commission —
(a) has authority to adopt such regulations governing digital audio broadcast transmissions and digital audio receiving devices that are appropriate to control the unauthorized copying and redistribution of digital audio content by or over digital reception devices, related equipment, and digital networks, including regulations governing permissible copying and redistribution of such audio content….

Courtesy of Boing Boing

I don’t know why the entrenched media companies think that attempting to lock down media this way is in their best interests. At best it’s a serious lack of vision or understanding that everything changes and the only way they have a future is to adapt. These are the same industries that fought tooth and nail against “Betamax”. Jack Valenti testified to Congress that:

“…‘the growing and dangerous intrusion of this new technology’ threatened his entire industry’s ‘economic vitality and future security’.”

And yet now VHS and DVD sales – the successors to Betamax – are worth more to the movie industry than theatrical distribution, and have expanded the industry dramatically with direct-to-DVD movies.

Why would anyone think they’re any more right this time? Why would they have any credibility? They don’t. This is a case of dinosaurs attempting to prevent an ice age. Attempts at the Broadcast Flag and other DRM will fail, whether they’re implemented or not. DRM will probably kill Blu-ray and HD DVD before they’re even out the door (another blog article there!).

There’s another good article on the Broadcast Flag in Susan Crawford’s Blog.

Even if your Representative is not one of the 20 suicidal ones, contact them now and tell them why supporting the Broadcast Flag (and all DRM) is a bad idea. It’s already been knocked out once; it shouldn’t come back, and it certainly shouldn’t come back hidden in a budget bill. If this is to come to pass, let’s have it out in the open, debated in public, and a reasonable decision made.

And if the Broadcast Flag is not bad enough

If you really want to get a feel for the nature of the established media companies, consider what the official United States delegation to the World Intellectual Property Organization proposed. Reporting on an article in the Financial Times, BoingBoing.net summarizes it this way:

James Boyle’s latest Financial Times column covers the Webcasting provisions of the new Broadcast Treaty at the World Intellectual Property Organization. Under these provisions, the mere act of converting A/V content to packets would confer a 50-year monopoly over the underlying work to ISPs. That means that if you release a Creative-Commons-licensed Flash movie that encourages people to share it (say, because you get money every time someone sees the ads in it), the web-hosting companies that offer it to the world can trump your wishes, break your business and sue anyone who shares a copy they get from them. This is a way of taking away creator’s rights and giving them to companies like Microsoft and Yahoo, whose representative at WIPO has aggressively pushed to have this included in the treaty.

Even if we publish under a Creative Commons license, or just publish our own content through an ISP, the ISP owns copyright in our work for 50 years. Fortunately this didn’t get saluted this time, but that’s what’s being pushed. Is this what you want? It’s certainly not what I want. Does it protect the rights of the artist, as copyright is intended to in the US Constitution? I think a resounding “No way” is the only possible answer. Well, does it serve the public good – the other provision in the Constitution? I sure can’t see how.

Follow up note added October 13: The delegation to the WIPO conference, a.k.a. “the forces of radical protectionism”, were seeking a “diplomatic conference” in Q1 2006, the last step before a treaty is ready for signatures. Instead they were denied, and the proposal will be dissected in at least two more WIPO meetings before a diplomatic conference gets discussed again, allowing time to make sure this level of protection for carriers is never enacted. /addition

Make your opinion known to your Congressional Representatives now. Otherwise we’ll end up with these things in law, just like the reviled Digital Millennium Copyright Act. Most probably unconstitutional, but who’s going to “stand up for pirating” and stand against established legislation?

Soon I’ll write on why Digital Rights Management, as it’s planned for Blu-ray, HD DVD and the “Trusted Computing” initiative, will stifle creative endeavors and end up killing promising technologies. And why it’s bad for the MPAA and RIAA, if only they had a little vision.

Categories
Business & Marketing Distribution Interesting Technology

The power of disruptive technologies

A disruptive technology is one that most people do not see coming and yet, within a very short period, changes everything. A disruptive technology becomes the dominant technology. Rarely are they accurately predicted, because predictions are generally extrapolated from existing understanding. For example, there’s no doubt that the invention of the motor car was a disruptive technology, but Henry Ford is often quoted as saying, “If we had asked the public what they wanted, they would have said faster horses.”

It’s almost impossible to predict what will become a disruptive technology (although the likelihood of being wrong isn’t going to stop me), but they are very easily recognized in hindsight. Living in Los Angeles, the effect that Mr Ford’s invention has had on this society is obvious. Although some would argue that it wasn’t so much the invention of the motor car that made the difference as the assembly-line technique that made the motor vehicle (relatively) affordable.

In fact I think it’s reasonable to believe that a disruptive technology will have a democratizing component to it, or a lowering (or removal) of price barriers.

Non-linear editing on the computer – Avid’s innovation – was a disruptive technology, but initially only within a relatively small community of high end film and episodic television editors. The truly disruptive technology was DV. DV over FireWire – starting with Sony’s VX-1000 and Charles McConathy/Promax’s efforts to make it work with the Adobe Premiere of the day – paved the way for what we now call “The DV Revolution”.

Apple were able to capitalize on their serendipitous purchase of Final Cut from Macromedia, dropping the work that had been done to make Final Cut work with Targa real-time cards and concentrating on FireWire/DV support. (It was two further releases before we saw the same level of real-time effects support as was present in the Macromedia NAB 98 alpha preview.) I think, at the time, Apple saw Final Cut Pro as another way of selling new G3s with built-in FireWire. The grand plan of Pro Apps came about when the initial success of Final Cut Pro showed the potential. But that’s another blog post.

DV/FireWire was good enough at a much lower price, with all the convenience of a single-wire connection. We’ve grown from an industry of under 100,000 people worldwide involved in professional production and post-production to one almost certainly over 1 million.

Disruptive technologies usually involve a confluence of technologies at the right time. Lower cost editing software wouldn’t have been that disruptive without lower cost acquisition to feed to it. Both would have been pointless without sufficient computer power to run the software adequately. Final Cut Pro on an Apple IIe wouldn’t have been that productive!

In a larger sense DV/FireWire was part of a larger disruption affecting the computer industry – the transition from hardware-based to software-based products. We are, in fact, already through this transition with digital video, although the success of AJA and Blackmagic Design might suggest otherwise. The big difference now is that the software is designed to do its job with hardware as the accessory. Back in the days of Media 100’s success, Media 100 would not run without the hardware installed; in fact, without the hardware it was pretty useless, as everything went through the card. When they rebuilt the application for OS X, they developed it as (essentially) software-only. This paved the way to the current HD version (based on a completely different card) and a software-only version.

Ultimately, all tasks will be done in software, other than the hardware needed to convert from format to format. In fact much of the role of today’s hardware is that of format interface rather than the basis for the NLE, as it was in the day of Media 100, Avid ABVB/Meridien and even Cinéwave. Today’s hardware takes some load off the CPU, but as an adjunct to the software, not because the task couldn’t be done without the hardware. This has contributed to the “DV Revolution” by dramatically dropping hardware prices.

Disruptive technologies are hard to predict because they are disruptive. Any attempt to predict them is almost certainly doomed to failure but, like I said, that’s not going to stop me now!

We are headed for a disruptive change in the distribution of media, both audio and video content. I wish I could see clearly how this is going to shake out, so I could invest “wisely”, but it’s still too early to predict the outcome 5-7 years down the track. I feel strongly that it will include RSS with enclosures, in some form. It will have aspects of TiVo/DVR/PVR, where the content’s delivery and consumption are asynchronous. Apart from news and weather, is there any need for real-time delivery, as long as the content is available when it’s ready to be consumed? Delivery will, for the most part, be via broadband connections using the Internet Protocol.

There is a growing trend toward merging the “computer” and “the TV”: by creating media center computers, by adding Internet-connected set-top boxes (like cable boxes), or by delivering video content direct to regular computers. Microsoft’s Media Center PCs haven’t exactly set the world on fire outside the college dorm, where they fit a real niche; Apple are clearly moving slowly toward media-centric functions in the iLife suite, where it will be interesting to see what’s announced at MacWorld San Francisco in January; and there are developments like Participatory Culture’s DTV and Ant is not TV’s FireANT for delivering channels of content directly to the computer screen. Both DTV and FireANT base their channels on RSS with enclosures, just as audio podcasting does.

On the hardware box front, companies like Akimbo, Brightcove and DaveTV are putting Internet-connected boxes under the TV, although DaveTV is having a bet both ways with computer software or set-top box.

Whether or not any of these nascent technologies is “the future of media”, as their developers tout, the way this shakes out has important implications for our industry. No-one foresaw that the Master Antenna and Community Antenna systems of the 1950s would evolve into today’s dominant distribution networks – the cable channels, which have now (collectively) moved ahead of the four major networks in total viewership. The advent of cable distribution opened up hundreds, or thousands, of new production opportunities for content creators. This time many people foresee (or hope) that using the Internet for program distribution will take down the last wall separating content creators from their audience.

In the days of four networks, any program idea had better aim to appeal to 20-30% of the available audience – young, middle-aged or old – to have any chance of success. In an age where the “family” sat down to watch TV together (and even ate meals together) that was a reasonable thing to attempt. As society fragmented we discovered that there were viable niches in the expanded cable networks. Programs have been artistically and/or financially successful that would never have been made for network TV because of the (relatively) small audiences or because the content was not acceptable under the regulations governing the networks. The development of niche channels for niche markets parallels the fragmentation of society as a whole into smaller demographic units.

Will we see, or do we need, more channels? Is 500 channels, and nothing on, going to be better when it’s 5,000 channels? Probably, because among the 5,000 (or 50,000) channels will be content that I care enough about to watch. It won’t be current news – that’s still best done with real-time broadcasting – but for other content, why not have it delivered to my “box” (whatever takes this role) ready to be watched (on whatever device I choose)? (Some devices will be better suited to certain types of content: a “video iPod” device would be better suited to short video pieces than multi-hour movies, for example.)

If the example of audio podcasting is anything to go by (with just one year of history to date it’s probably a little hard to be definitive), then yes, subscriber-chosen content delivered “whenever” for consumption on my own schedule is compelling. I’ve replaced the car radio newstalk station with podcasts from my iPod mini. Content I want to listen to, available when I’m ready to listen. Podcasts have replaced my consumption of radio almost completely.

Ultimately it will come down to content. Will the 5,000 or 50,000 channels be filled with something I want to watch? Sure, subscribing to the “Final Cut Pro User Group” channel is probably (for me) more appealing than many of the channels available on my satellite system. Right now, video podcasts tend to be of the “don’t criticize what the dog has to say, marvel that the dog can talk” variety. Like a lot of podcasts. Not every one of the more-than-10,000 podcasts now listed in the iTunes directory is compelling content or competently produced.

But before we can start taking advantage of new distribution channels for more than niche applications, we need to see some common standards come to the various platforms, so that channels can be discovered on Akimbo, DaveTV, DTV and on a Media Center PC. About the only part of this prediction I feel relatively sure of is that it will involve RSS with audio and video enclosures, or a technology derived from RSS, like Atom (although RSS 2 seems to have the edge right now).
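To make that concrete, here’s a minimal sketch (in Python, using the feedparser library; the feed URL is hypothetical) of the subscription side any of those platforms would share – read a feed, find the media enclosures, hand them off for download:

    # Minimal sketch of an RSS-with-enclosures "channel" reader.
    # The feed URL is hypothetical; feedparser is a real Python library.
    import feedparser

    FEED_URL = "http://example.com/channel/rss.xml"  # hypothetical video channel

    feed = feedparser.parse(FEED_URL)
    print("Channel:", feed.feed.get("title", "untitled"))

    for entry in feed.entries:
        # RSS 2.0 enclosures carry the media itself: URL, MIME type and length.
        for enclosure in entry.get("enclosures", []):
            print(entry.title, "->", enclosure.get("href"), enclosure.get("type"))

Whatever box ends up under the TV, that’s just about all the “discovery” layer has to do; the rest is scheduling downloads and playing files.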

In a best-case scenario, we’ll have many more distribution channels, aggregating niche markets into big-enough channels for profitable content (particularly with lower cost production tools now in place). A direct producer-customer connection, without the intermediation of network or cable channel aggregators, improves profit potential on popular content and possibly moves content into different distribution paths. Worst case, nothing much changes and Akimbo, DaveTV, Brightcove, and Apple or Microsoft’s media-centric computers go the way of the Apple Lisa – paving the way for the real “next big thing”.

Categories
Apple Apple Pro Apps Business & Marketing Interesting Technology Random Thought

Don’t panic! Apple adopts Intel processors

The confusion and furor surrounding Apple CEO Steve Jobs’ announcement at the Worldwide Developers Conference – that future Macs, after June 2006, will use Intel processors inside – is totally unfounded. Nothing changes now, very little changes in the next year, and longer term the future for the Mac got a little brighter. Although the decision caught me by surprise, as I thought about it, and listened to what was said in the keynote, I could see why it made sense.

If we look short term, the decision makes little sense. Right now a G5 (PowerPC, aka PPC) PowerMac has very similar performance to the best workstations on the PC/Intel platform running Windows, and the G5 will cost less than a similarly performing PC workstation. At the low end the Mac mini is competitively priced against a cheap Dell or other name brand. (Macs are not price competitive with off-brand PCs, the so-called “white box”.) So why put the developer community, and developers within Apple, through the pain of a processor shift?

For the future (“we have to do it for the children”) and because it’s really not that painful for most developers.

Right now a G5 PowerMac is very performance competitive with the best offerings from Intel. What Apple have been privy to, that the rest of us haven’t, is the future roadmaps of both Intel and PPC processors. Based on that future, Apple decided they had no choice but to make the change. In the future, one watt of power will buy “15 units of processing” on a PPC chip, according to Mr Jobs; the same watt would give 70 units of performance on an Intel processor. Without knowing exactly how those figures were derived, or what they mean for real-world processing power, it seems like a significant difference – nearly five times the performance per watt. It was enough to push Apple to make the change.

Not that there’s anything wrong with the PPC architecture: IBM continue to develop and use it at the high end, and PPC chips (triple-core “G5” chips) will power Microsoft’s Xbox 360. The sales of chips to Microsoft will well and truly outweigh the loss of business from Apple. It is, however, a crazy world: next year will see a Microsoft product powered by PPC and Macintoshes powered by Intel!

Steve Jobs demonstrated how easy it will be for developers to port applications to OS X on Intel. In fact, he confirmed long-running rumors that Apple have kept OS X running on Intel processors throughout its development – Mr Jobs demonstrated this, and ran his keynote, from an Intel Macintosh. For most applications a simple recompile in the Xcode developer environment will suffice – a matter of a few hours work at most. Moreover, even if the developer does not recompile, Apple have a compatibility layer, called Rosetta, that will run pure PPC code on an Intel Mac. Both platforms are to be supported “well into the future”.

During the keynote Mathematica was demonstrated (a huge application: 12 lines of code out of 20 million needed changing, 2 hours work), as were office applications. Commitments to port Adobe’s creative suite and Microsoft’s Mac Business Unit software were presented. Apple have been working on Intel-compatible versions of all their internal applications, according to Mr Jobs. [Added] Luxology’s president has since noted that their 3D modeling tool modo took just 20 minutes to port, because it was already Xcode-based and built on modern Mach-O code.

Remember, these applications are for an Intel-powered OS X Macintosh. No applications are being developed for Windows. In fact, after the keynote, Senior Vice President Phil Schiller addressed the issue of Windows: although it would be theoretically possible to run Windows on an Intel Macintosh, it will not be possible to run OS X on anything but an Apple Macintosh.

Apple’s professional video and audio applications might not be as trivial to port, although most of the modern suite should have no problem. LiveType, Soundtrack Pro, DVD Studio Pro and Motion are all new applications built in the Cocoa development environment and will port easily. Final Cut Pro may be less trivial. It has a heritage as a Carbon application, although the code has been tweaked for OS X over recent releases. More than most applications, Final Cut Pro relies on the Altivec vector processing of the PPC chip for its performance. But even there, the improvement in processor speeds on the Intel line by the time Intel Macs are released is likely to compensate for the loss of vector processing. At worst there will be a short-term dip in performance. And with Intel Macintoshes rolling out from June 2006, it’s likely we’ll see an optimized version of Final Cut Pro ready by the time it’s needed.

[Added] Another consideration is the move to using the GPU over the CPU. While the move to Intel chips makes no specific change to that migration – graphics card drivers for OS X still need to be written for the workstation-class cards – Final Cut Pro could migrate to OS X technologies like Core Video to compensate for the lack of Altivec optimizations for certain functions, like compositing. Perhaps then, finally, we could have real-time composite modes!

Will the announcement kill Apple’s hardware sales in the next year? Some certainly think so, but consider this: if you need the fastest Macintosh you can get, buy now. There will always be a faster computer out in a year, whatever you buy now. If your business does not need the fastest Mac now (and many don’t) then do what you’d always do: wait until it makes sense. The G5 you buy now will still be viable way longer than its speed will be useful in a professional post-production environment. It’s likely there will be speed bumps in the current G5 line over the next year, as IBM gets better performance out of its chips, while we wait for a new generation of chips from Intel before there is any speed improvement on that side. If Apple magically converted their current G5 line to the best chips Intel has to offer now, there would be little speed improvement: this change is for the future, not the present.

So, I don’t think it will affect hardware sales significantly. As a laptop user I’m not likely to upgrade to a new G4 laptop, but then there were few speed boosts coming there in the next year anyway. And as a laptop user, I’m keen to get a faster PowerBook, and using an Intel chip will make that possible.

I have to say I initially discounted the reports late last week because, based on current chip developments, there seemed little advantage in a difficult architecture change. With the full picture revealed in the keynote – the long-term advantages and the minimal discomfort for developers – it seems like a reasonable move that will change very little, except to give us faster Macs in the future.

How could we have any problem with that?

[Added] Good FAQ from Giles Turnbull at O’Reilly’s Developer Weblog

Categories
Business & Marketing Interesting Technology

Can a computer replace an editor?

Before we determine whether or not a computer is likely to replace an editor, we need to discuss just what the role of an editor is – the human being who drives the software (or hardware) that edits pictures and sound. What do they bring to the production process? Having determined that, perhaps we can consider what a piece of computer software might be capable of, now or in the future.

First off, I think we need to rid ourselves of any concept that there is just one “editor” role, even though there is only one term to cover a vast range of roles in post production. Editing an event video does not use the same skills and techniques as editing a major motion picture; documentary editing is different from episodic television; despite the expectation of similarity, documentary editing and reality television require very different approaches. There is a huge difference between the skills of an off-line editor (story) and an on-line editor (technical accuracy), even if the two roles are filled by the same person.

So let’s start with what I think will take a long time for any computer algorithm to do. There’s nothing in current technology – in use or in the lab – that would lead to an expectation that an algorithm could find the story in 40 hours of source and make an emotionally compelling, or even vaguely interesting, 45-minute program. Almost certainly not going to happen in my lifetime. There’s a higher chance of an interactive storytelling environment à la Star Trek’s Holodeck (sans solid projection). Conceptually that type of environment is probably less than 30 years away, but that’s another story.

If a computer algorithm can’t find the story or make an emotionally compelling program, what can it do? Well, as we discovered earlier, not all editing is the same. There is a lot of fairly repetitive, assembly-line work labeled as editing: news, corporate video and event videography are all quite routine and could conceivably be automated, if not completely then at least in part. Then there is the possibility of new forms of media consumption that could be edited by software based on metadata.

In fact, all uses of computer algorithms to edit rely on metadata – descriptions of the content that the software can understand. This is analogous to human logging and log notes in traditional editing. The more metadata software has about media, the more able it is to create some sort of edit. For now that metadata will come from the logging process. (The editor may be out of a job, but the assistant remains employed!) That is the current situation, but there’s reason to believe it could change in the future – more on that later in the piece.

If we really think about what we do as editors on these more routine jobs, we realize that there is a series of thought processes we go through, and underlying “algorithms” that determine why one shot goes into this context rather than another.

To put it at the most basic level, an example might be editing content from an interview. Two shots of the same person have audio content we want in sequence, but the effect is a jump cut. [If two shots in sequence feature the same person, same shot…] At this point we choose between putting another shot in there – say from another interview – or laying in b-roll to cover the jump cut. […then swap in an alternate shot with the same topic. If no shot with the same topic is available, then choose b-roll.]
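Expressed as code, that bracketed rule is about as simple as it reads. Here’s a minimal Python sketch; every field and function name is hypothetical, invented purely to illustrate the decision logic, not any real NLE’s API:

    # Sketch of the jump-cut rule above. All names are hypothetical.
    def resolve_jump_cut(shot_a, shot_b, alternates, b_roll_library):
        """Decide how to join two interview shots that would otherwise jump cut."""
        # The same person in the same framing, back to back, reads as a jump cut.
        if (shot_a["person"] == shot_b["person"]
                and shot_a["framing"] == shot_b["framing"]):
            # First preference: an alternate shot (another interview) on the same topic.
            for alt in alternates:
                if alt["topic"] == shot_b["topic"]:
                    return ("swap", alt)
            # Otherwise cover the cut with b-roll matching the topic.
            for b_roll in b_roll_library:
                if shot_b["topic"] in b_roll["keywords"]:
                    return ("cover", b_roll)
        return ("straight cut", None)  # no jump cut, or nothing better available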

That’s a rudimentary example, and it doesn’t take into account the value judgment the human editor brings as to whether another interview conveys the story or emotion as well. Most editors are unfamiliar with their underlying thought processes and not analytical about why any given edit “works” – they just know it does. But ultimately that judgment is based on something: some learned skill, some thought process, something. With enough effort that process can be analyzed and, in some far distant time and place, reproduced in software. Or it could be, except for that tricky emotional element – the thing that makes our storytelling interesting and worth watching.

The more emotion is involved in your storytelling output, the safer your job – or the longer it might be before it can be replaced. 🙂

Right now, the examples of computerized editing available – Magic iMovie and Muvee Auto Producer – use relatively unsophisticated techniques to build “edited” movies. Magic iMovie essentially adds transitions to avoid jump-cut problems and builds to a template; Muvee Auto Producer requires you to vet shots (thumbs up or down), then uses a style template and cues derived from the audio to “edit” the program. This is not a threat to any professional or semi-professional editor with even the smallest amount of skill.

However, it is only a matter of time before some editing functions are automated. Event videography and corporate presentations are very adaptable to a slightly more sophisticated version of these baby-step products. OK, a seriously more sophisticated version of these baby-step products, but the difference between slightly and seriously is about 3 years of development!

In the meantime, there are other uses for “automated” editing. For example, I developed a “proof of concept” piece for QuickTime Live! in February 2002 that used automated editing as a means of exploring the bulk of material shot for a documentary but not included in the edited piece. It was not intended as direct competition for the editor (particularly as that was me); it was intended as a means of creating short edited videos, customized in answer to a plain-language query of a database. The database contained metadata about the clips – extended logging information, really. In addition to who, where and when, there were fields for keywords, a numeric value for the relative usefulness of the clip, and a field of keywords to search for b-roll. [If b-roll matches this, search for more than one clip in the search result, edit them together and lay b-roll over all the clips that use this b-roll.]
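For a sense of what that metadata looked like, here’s an illustrative Python rendering. The field names are my reconstruction for this post, not the actual 2002 schema, and the selection function is deliberately naive:

    # Illustrative clip metadata along the lines the proof of concept used.
    # Field names are hypothetical, not the original schema.
    from dataclasses import dataclass, field

    @dataclass
    class ClipMetadata:
        who: str                # person on camera
        where: str              # location
        when: str               # shoot date
        keywords: list = field(default_factory=list)    # topics, for plain-language queries
        usefulness: int = 0     # relative quality/usefulness ranking
        b_roll_keywords: list = field(default_factory=list)  # what to lay over this clip

    def select_clips(library, query_terms, max_clips=5):
        """Return the best-rated clips whose keywords match the query."""
        matches = [c for c in library if set(query_terms) & set(c.keywords)]
        return sorted(matches, key=lambda c: c.usefulness, reverse=True)[:max_clips]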

So, right now, computer editing can be taught rudimentary skills. This particular piece of software knows how to avoid jump cuts and how to cut to length based on the quality criteria. It is, in fact, a better editor than many people who don’t know the basic grammar of video editing. Teaching the basic grammar is relatively easy. Teaching software to take some basic clips and cut them into a news item, or even a basic template-based corporate video, is only a matter of putting in some energy and effort.

But making something that is emotionally compelling – not any time soon.

Here’s how I see it panning out over the next couple of years. Basic editing skills from human-entered metadata – easy. Generating that metadata by having the computer recognize the images – possible now, but extremely expensive. Having a computer edit an emotionally compelling piece – priceless.

It’s not unrealistic to expect, probably before the end of the decade, that a field tape could be fed into some future software system that recognizes shots as wide, medium, close-up etc.; identifies shots in specific locations and with specific people (based on having been shown examples of each); and transcribes the voice content and the text in signs and other places in the image. Software will recognize poor exposure, loss of contrast and loss of focus, eliminating shots that do not stand up technically. Nothing here is that difficult – it’s already being done to some degree in high end systems that cost more than $300,000 right now. From there it’s only a matter of time before the price comes down and the quality goes up.

Tie that together with a template base for common editing formats and variations, and an editing algorithm not much further on than where we are now, and it’s reasonable to expect to be able to input one or more source tapes into the system in the afternoon and come back next morning to review several edited variations. A couple of mouse-clicks to choose the best of each variation and the project’s done – output to a DVD (or next generation optical disc), to a template-based website, or uploaded to the play-out server.
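Pulled together as code, the shape of that overnight system might look like the sketch below. Every helper is a hypothetical stub – none of this exists yet at a sane price – but the pipeline is the point:

    # Speculative sketch of the overnight auto-edit pipeline described above.
    # Every helper is a hypothetical stub; this shows the shape, not a real system.

    def detect_shots(tape):
        """Stub: segment a tape into shots classified wide/medium/close-up."""
        return tape  # pretend each tape is already a list of shot dicts

    def technically_ok(shot):
        """Stub: reject shots with poor exposure, low contrast or soft focus."""
        return shot.get("in_focus", True) and shot.get("well_exposed", True)

    def assemble(template, clips):
        """Stub: cut clips into the template's slots, in order."""
        return {"template": template["name"], "timeline": clips[:template["slots"]]}

    def overnight_edit(source_tapes, templates):
        """Feed tapes in during the afternoon; review variations next morning."""
        clips = [shot for tape in source_tapes for shot in detect_shots(tape)
                 if technically_ok(shot)]
        return [assemble(template, clips) for template in templates]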

Nothing here is far-fetched. Developing the basic algorithm was way too easy, and it works for its design goals. Going another step is only a matter of time and investment. Such is the case with anything repetitive in nature: ultimately it can be reproduced in a “good enough” manner. It’s part of a trend I call the “templatorization” of the industry. But that’s another blog discussion. For now, editors who do truly creative, original work need not be worried, but if you’re hacking together video in an assembly-line fashion, start thinking about that fall-back career.

Categories
Business & Marketing Video Technology

Avid buys Pinnacle – the fallout

The acquisition of Pinnacle will greatly strengthen Avid’s broadcast video offerings – the area of their business that has been strongest in recent years – but will create challenges in integrating product lines and cultures. It is a move that brings further consolidation to the post production business.

Pinnacle has been in acquisition mode for most of the last five years, acquiring, among others, Miro, Targa, Dazzle, Fast and Steinberg (sold on to Yamaha recently). It has a diverse range of products in four major lines:

  • Broadcast Tools – Deko On Air graphics products (Character Generators) and MediaStream playout servers;
  • Consumer editing software and hardware – with 10 million customers;
  • Professional Editing – The Liquid product line acquired from Fast; and
  • Editing Hardware – Cinewave and T3000, based on the Targa acquisition.

Pinnacle has achieved nine Emmy Awards for its Broadcast product lines.

There will be conflicts as well as opportunities for Avid. The purchase presents Avid with a new opportunity to create a consumer brand, and Avid CEO David Krall has announced that a new consumer division will be formed, analogous to the M-Audio consumer audio division acquired last year. (M-Audio is the consumer parallel to Avid’s professional Digidesign division.) The acquisition also consolidates Avid’s position supplying the broadcast markets, making the company more of a “one stop shop” for a broadcast facility. There is definitely engineering work to be done on integrating the two technology lines, but no particular challenges, and savings are to be made in streamlining sales and marketing. In broadcast there are only pluses for Avid.

Consumer

Bringing the Avid brand into the consumer market has a slight risk of diluting the Avid editing brand – if consumers edit on “Avid”, what’s special about professional editors? However, by carefully managing product brands over the company brand, as has been done with M-Audio, there should be an opportunity to bring some of those retail customers up to Xpress or Adrenaline products as their needs grow, similar to the way Apple have a path for their iMovie customers to move up to Final Cut Express or Final Cut Pro.

Hardware

Avid and Pinnacle have had a long relationship on the hardware side – Targa supplied the first boards Avid used for video acquisition, and the Meridien hardware was designed to Avid’s specifications but manufactured by Pinnacle as an OEM. Whether Avid has any use for the aging T3000 hardware line (like Cinewave, based on the Hub3 programmable architecture that was the primary driver of the Targa purchase) is debatable: Avid have embraced the CPU/GPU future for their products and are unlikely to change course again.

Cinewave

The acquisition almost certainly spells the end of Pinnacle’s only Mac product – Cinewave. Rumors were spreading, independently of the Avid purchase, that Cinewave was at the end of its product life, possibly spurred by changes coming in a future version of Final Cut Pro that would no longer support direct hardware effects. Regardless of whether there was any foundation to that rumor, Cinewave is an isolated product in that product group and based on relatively old technology. It is a tribute to the design’s flexibility and the engineering team that essentially the same hardware is still in active production four years after release. Whether the product dies because it’s reached the end of its natural life, or because Avid could not be seen to be supporting the competing Final Cut Pro, it’s definitely at an end.

Liquid

There is, however, one part of the integration that simply does not fit: Pinnacle’s Liquid NLE software. Avid are acquiring an excellent engineering team – the former FAST team out of Germany – but the two NLEs have no commonality. Integrating features from one NLE into another is not trivial, as the code bases are unlikely to have anything in common, and attempting to move Avid’s customer base toward any Liquid editor is unlikely to have any success at all.

Avid could simply let the product line die. The Liquid range has not exactly sold like hotcakes. This scenario would bring the best of the features and engineers into the Avid family, and we’d see the results in 2-3 years as the engineering teams merged.

They could, of course, leave Liquid alone – set it up as a division within the company and leave it be. Avid have done that with Digidesign, Softimage and M-Audio: no radical changes and slow integration of technologies where it makes sense. Liquid has probably taken few customers from Avid to date – few Composer customers have moved to Liquid. Instead, Liquid has acquired new NLE customers, or people moving “up” from other NLEs. Liquid’s strongest customer bases are in small studios and in broadcast markets.

Even though Avid have let Digidesign and M-Audio compete, even though there is some overlap, it’s hard to imagine keeping a full product line that directly competes with the flagship products – on cheaper hardware at lower cost. Hard to imagine, but not impossible. It would be the most consistent behavior based on past acquisitions, but one that would require a delicate balancing act: retaining the new customers Pinnacle brings to the fold, without cutting into the more profitable Xpress, Media Composer and DS products.

Transaction

The transaction values Pinnacle at $462 million, based on Avid’s closing price yesterday, and will be paid in a combination of cash and shares. Avid will pay about $71 million in cash and issue 6.2 million new shares to the holders of Pinnacle stock, who will then hold about 15% of Avid’s shares. The transaction has been approved by the boards of both companies but must still be approved by regulators and shareholders, and is not expected to close until the second or third quarter of 2005.

The companies expect savings in regulatory costs, marketing and sales. We can expect little to change in the short term except, probably, some volatility in Avid’s stock price as people try to work out what it all means.

NAB is going to be interesting this year.

Categories
Business & Marketing General

What are good visuals?

Perhaps it’s my background in video production and my strong desire to match media and message, but I’ve been seeing some incredibly inappropriate ways of delivering a message “visually”. The specific example that prompted me to write is this one. The piece is actually a very interesting pseudo-documentary looking back at how media changes – perhaps its content is blog-worthy some other time. What annoyed me is that it was being held up as an example of a “good use of Flash” when, in fact, I thought the visuals were so poor that the choice of a visual medium was probably a mistake. “If you don’t have visual content, don’t do visuals” is a good rule of thumb, I think.

Another example of, imho, really lame visuals used to waste time and attempt to make a silk purse out of a sow’s ear is this marketing hype. Again Flash is used, but with poor quality visuals (blown up way too big), super-slow pacing and a message that, to me, is cloyingly saccharine. (On the last point I am probably alone – it’s been very successful as a viral marketing piece, so it must appeal to a lot of people.)

What bothers me about these pieces, and about a lot of podcasts, is that they are incredibly inefficient. One regular podcast I once listened to (on the topic of Final Cut Pro et al.) takes about 20 minutes a week to listen to, for what would be a 3 minute read on a web page, because the podcaster simply reads a script (or seems to be reading a script). OK, it could be listened to during a commute or at the gym, where the 20 minutes wouldn’t be an imposition, but surely, if you’re going to use an audio medium, it should be produced as an audio medium?

Ditto the visual medium – I have always hated making a “video” for a client that was essentially an audio program with visuals forced onto it. (Like the piece at the head of this article.) Have we forgotten the imaginative power of radio? I’ll bet the movie version of War of the Worlds due out soon has none of the impact of the original 1938 broadcast. There are great radio documentaries produced that would make awesome podcasts; instead we get lame “read my script” or “come into my office and chat” podcasts that have zero production value. The Media 2014 example has great writing and excellent audio production, and the visuals (which probably took the most time) add very little, imho.

This is what worries me about vlogcasting – even basic video production requires some time, more time than most people want to put into a blog or podcast, so what’s going to happen? Gigabytes of bandwidth occupied by badly lit, poorly edited shaky-cam that is virtually unwatchable? It’s already happening: download the Ant vlogcasting client and try to find something worth your time watching. There’s little evidence of strong writing or great production there – at least in what I’ve found (and if you find something great, ping me so I can share the excitement).

Where’s this going? Well, there’s still going to be a role for production skills in some vlogcasting, particularly if we adopt subscription-based channels-of-information models. (The “RSS, Vlogcasting and Distribution Opportunities” blog entry is back, after editing.) It’s another example of how production specialists will need to adapt, and advise clients on the most appropriate distribution methodology. Just having basic production skills won’t be enough, but they will be a marketable commodity, and profitable as part of the full service we offer customers for their communication needs. Also necessary will be the judgment and sense to tell customers when they don’t need “a video” – when a website or brochure will work better for them. Savvy people will have those skills as well – if not personally, then within their network.

Categories
Business & Marketing Interesting Technology

RSS, Vlogcasting and Distribution Opportunities [Edited]

Earlier I wrote about podcasting and its rapid uptake. Well, there’s every indication that video podcasting, of some sort, will follow. I think this is a tremendous opportunity for content creators, because podcasting isn’t broadcast: it’s an opt-in subscription service. In any discussion of these subjects it keeps echoing in my mind that RSS really is simple subscription management (and yes, conditional access is possible).

To draw some parallels with traditional media: blogs are the journalism and writing, and RSS is the publishing channel (the network). Blogs and podcasting are bypass technologies – they bypass traditional channels. If pressed for an explanation of the truly astounding growth of podcasting and, to a lesser degree, blogging, I would hypothesize that they are to some degree a reaction against the uniformity of voice of modern media, where one company owns a very large proportion of the radio stations in the US (and music promotion and billboards), and news media is limited to a half dozen large-company sources with little bite and no diversity.

The “blogosphere” (I hate that word but it’s in common use) broke the story of the faked CBS service papers during the last Presidential election campaign, and even in the last few weeks has been instrumental in the firing of a high level CNN executive and in revealing the “fake” White House journalist and his sordid past. Collectively at least, this is real journalism – and more importantly, it’s investigative journalism of the sort that isn’t done by traditional news outlets.

Blogging is popular because it’s easy and inexpensive. Sign up for a free blog on a major service, or download open source blogging software like WordPress (which I use) and run it on your own server. In a few minutes your voice is out there to be found. In my mind it harkens back to the days of Wild West newspapers, where someone would set up a printing press and suddenly be a newspaper publisher. But unlike a newspaper, blogs have an irregular publishing schedule. You can bookmark your favorite blogs and check them (when you remember), or you can be notified by your RSS aggregator application when there’s a new post (the URL for this blog’s RSS feed is at the bottom right of the page if you want to add it to your favorites).

Podcasting is easy and inexpensive – unless your podcast becomes popular, when the bandwidth expense becomes considerable. Podcasting is a superior replacement for webcasting or streaming because it doesn't have to happen in real time: it's produced, then automatically delivered for consumption whenever it suits the listener. Those are the key attributes that, in my opinion, drive its success. There's no need for a listener to tune in at exactly the time it's “broadcast” – listen or miss it – or even to remember to visit some archive and download. My own experience on DV Guys parallels this exactly. DV Guys has a nearly five-year history as a live show (Thursday 6–7 pm Pacific) but has always been more popular through its archives pages.

Shortly after the advent of podcasting we set up a podcast of the live show, available by the next day. Since then DV Guys has enjoyed more listeners than at any other time in its life. People tended to forget to visit the archives site weekly, or every second week – even that was too much of a commitment for a show that, while entertaining and informative, wasn't at the top of everyone's “must do” list. But with a podcast, DV Guys is ready whenever a listener has a few moments – at the gym, during a commute, while waiting for a meeting or at an airport. DV Guys, like most radio, is consumable anytime. Importantly, it puts the listener, not the creator, in control of the listening experience.
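That “automatically delivered” part is simple mechanics: a podcast feed is ordinary RSS whose items carry an enclosure pointing at the audio file, and the podcatcher fetches anything it hasn't seen. A minimal sketch of that loop (the feed URL is a hypothetical placeholder, not the real DV Guys feed):

```python
# Sketch of a podcatcher: parse the feed, download any enclosures
# not already on disk. Assumes feedparser; urllib is standard library.
import os
import urllib.request
import feedparser

FEED_URL = "https://example.com/dvguys/podcast.xml"  # placeholder URL
DOWNLOAD_DIR = "podcasts"
os.makedirs(DOWNLOAD_DIR, exist_ok=True)

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    for enclosure in entry.get("enclosures", []):
        filename = os.path.join(DOWNLOAD_DIR, enclosure.href.rsplit("/", 1)[-1])
        if not os.path.exists(filename):  # skip episodes we already have
            urllib.request.urlretrieve(enclosure.href, filename)
            print(f"Downloaded: {entry.title}")
```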

Podcasting audio has another advantage – it's easy to create. Almost as easy as blogging, though not quite, which is why there are proportionally fewer podcasts than blogs. Even then, most podcasts are simply guys (mostly guys) talking into a microphone from a prepared script, or a few people talking around a microphone together. More highly produced podcasts are rarer.

The simplicity of publishing a blog means it can be viable for an audience of as few as half a dozen people – in fact, there are people looking to use blogs and wikis as part of a project management toolset. Podcasting can reach thousands, but in broadcast terms that's still a tiny niche market.

Here's a new truism – almost all markets are niche markets. What these new publishing models do is aggregate enough people in a niche to make it a market, and there's a lot of money to be made in niches. Particularly in the US, with its multiplicity of cable channels, small niches in the entertainment industry can be aggregated, via appropriately low-cost distribution channels, into profitable businesses. But many niches are too small to have their needs met by even a niche cable network, so either cable channels get subdivided or there's simply no content for small niches.

RSS, low-cost production tools and P2P are your new distribution channel. This is the other side of the production revolution we've been experiencing over the last 10 years, in which the cost of “broadcast quality” production has dropped from an equipment budget of $200,000 and up (for camera, recorder and edit suite with titling and DVE) to similar quality at well under $20,000 – with many people doing it for under $10,000. Ultimately, a computer (or computer-like device) will be one of the inputs to that big-screen TV you covet, and you'll watch content fed by subscription when it suits you. If it's not news, it can arrive whenever it's ready, to be watched whenever you like.

The relative difficulty of producing watchable video content will further limit the numbers (as happened from blogging to podcasting), and the current state of video blogs will make experienced professionals cry. That should not stop you planning your own network. Instead of “The Comedy Network” or the “Sci-Fi Network”, prepare for a world of “The fingernail beauty network”, the “Fiat maintenance network” or the “Guys who like to turn wood on a lathe network”. Content could be targeted geographically or demographically; there are very profitable niches available. Two that I've been involved with at the video production level were for people who like to paint china plates (challenging to light) and for basic metalwork skill training. There's no need to fill 24 hours a day, 7 days a week with this network model: when new content is produced, it's ready for consumption when viewers want it. We do something similar with our Pro Apps Hub, where we publish content from our training programs piecemeal, as it's produced, before we aggregate the disc version.

Note that I am not, fundamentally, talking about computer-based viewing. My expectation is that software and hardware solutions will evolve into something usable as a home entertainment device. “TV” is a kick-back, put-your-feet-up experience; video on the computer is a lean-forward, pay-attention experience. While both could be end targets for this publication model, what I'm really talking about is content for that lean-back experience.

Now, I don't expect “Hollywood” (as a collective noun for big media) to embrace this model early, or even ever, but that doesn't mean it won't become viable. The most popular content will probably still go through the current distribution channels, however they evolve. Nor does it mean we'll be restricted to small-budget production. The model could (and should) evolve to put viewers in much closer touch with producers, without the gatekeepers.

For example, the basic skill training video series I produced back in Australia was niche programming. There were, effectively, 75 direct customers in the small Australian market (smaller, I should point out, than California alone). No customer or central group had money to fund production, but each one had $150–$300 to buy a copy of the product. Since these were very simple productions, shot with a small crew in a regional city, each project carried about a 30% profit margin. If the same proportions applied to the US market, the budget would have doubled but the profit would have quadrupled or more.
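To put rough numbers on that – a back-of-envelope sketch where the customer count, prices and margin are the figures above, and the US market multiplier is purely my assumption:

```python
# Back-of-envelope for the niche training series. Customer count, price
# range and margin come from the text; the US multiplier is an assumption.
customers = 75
price = 225                        # midpoint of the $150-$300 range
revenue = customers * price        # $16,875 per title
cost = revenue * 0.70              # ~30% profit margin
profit = revenue - cost            # ~$5,063 per title

# US scenario: double the budget, but assume (say) a 4x larger niche.
us_revenue = revenue * 4
us_cost = cost * 2
us_profit = us_revenue - us_cost   # ~$43,875 -- well over 4x the AU profit
print(f"AU profit: ${profit:,.0f}  US profit: ${us_profit:,.0f}")
```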

Take another, more current example. Star Trek: Enterprise has been canned, but its last season drew 2.5 million viewers an episode on a budget of $1.6 million an episode. If each viewer paid 75c for the episode, delivered directly to their “TV storage device” (somewhere between a Media PC, Mac mini or TiVo), the producers would turn a profit of around $275,000 an episode – a margin of about 17% on the $1.6 million budget. At 99c a show, that's more than 50% more revenue than was coming from the network. And the audience isn't limited to the US market – the same content can be delivered to Enterprise fans anywhere in the world. As the producer, I could live on 13 episodes at $275,000 profit above and beyond previous costs (which presumably included some salary and profit). Moreover, producers wouldn't be locked into ratings cycles and the matching boom/bust production cycle.
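For the skeptical, here's the arithmetic as a quick sketch (the viewer and budget figures are the ones quoted above):

```python
# The Star Trek: Enterprise arithmetic from the text.
viewers = 2_500_000
budget = 1_600_000  # per-episode production cost

for price in (0.75, 0.99):
    revenue = viewers * price
    profit = revenue - budget
    print(f"${price:.2f}/episode: revenue ${revenue:,.0f}, "
          f"profit ${profit:,.0f} ({profit / budget:.0%} of budget)")

# $0.75/episode: revenue $1,875,000, profit $275,000 (17% of budget)
# $0.99/episode: revenue $2,475,000, profit $875,000 (55% of budget)
```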

It doesn't matter if each high-quality (HD if you want) episode takes 20 hours of download “in the background”. When it's complete and ready to watch, it appears in the playlist as available.

Bandwidth would be the killer in that scenario. Even with efficient MPEG-4 H.264 encoding, a decent SD image is going to need 1–1.5 Mbit/s and HD is going to want 7 or 8 Mbit/s. Assuming 45-minute episodes (sans commercials), that's around 500 MB an episode per subscriber, or in HD around 2.7 GB. Across 2+ million subscribers, that's going to eat my profit margin rather badly without another solution. There are technologies in place – P2P distribution among them – that could be adapted.
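The numbers behind that, as a sketch (straight bitrate arithmetic, ignoring protocol overhead and retransmissions):

```python
# Episode sizes at the bitrates above, and the aggregate delivery load.
episode_seconds = 45 * 60
subscribers = 2_500_000

for label, mbps in (("SD @ 1.5 Mbit/s", 1.5), ("HD @ 8 Mbit/s", 8.0)):
    size_mb = mbps * episode_seconds / 8          # megabits -> megabytes
    total_tb = size_mb * subscribers / 1_000_000  # across the whole audience
    print(f"{label}: ~{size_mb:,.0f} MB/episode, ~{total_tb:,.0f} TB total")

# SD: ~506 MB each, ~1,266 TB per episode across the audience
# HD: ~2,700 MB each, ~6,750 TB per episode across the audience
```

Petabytes per episode is exactly why peer-to-peer delivery, where subscribers share the load, looks so attractive here.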

Assuming the bandwidth challenge is resolved, what's left?

Two things, mostly: a device that stores the bits ready for display on the TV, and software to manage conditional access (you only get the bits you bought) and playlists – something like a video/movie version of the iTunes Music Store. We'll need to wait for a big player like Apple or Sony to wield enough muscle for that. In the meantime we see the beginnings with Ant, though as a computer interface, and without the simplicity and elegance of a Dish/Direct/TiVo user interface. But it will come.
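No such store exists yet, so purely as speculation, here's a toy sketch of the conditional-access idea: bits can arrive in the background, but only purchased episodes ever become playable (all names are hypothetical):

```python
# Toy sketch of conditional access + playlist management on the
# "TV storage device". All class and episode names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SetTopLibrary:
    purchased: set = field(default_factory=set)   # entitlements
    downloaded: set = field(default_factory=set)  # bits already on disk

    def buy(self, episode: str) -> None:
        self.purchased.add(episode)

    def receive(self, episode: str) -> None:      # background download finished
        self.downloaded.add(episode)

    def playlist(self) -> list:
        # Only episodes both bought and fully downloaded are playable.
        return sorted(self.downloaded & self.purchased)

box = SetTopLibrary()
box.buy("Enterprise 4x01")
box.receive("Enterprise 4x01")
box.receive("Enterprise 4x02")  # pre-seeded, but not purchased
print(box.playlist())           # ['Enterprise 4x01']
```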

Will you have your network business plan ready? I’m working on mine already.

Wired has another take. Videoblogging already has a home page, and for a bit of flip-side thinking – how this might all work for the individual wanting to aggregate a personal channel – Robin Good has a blog article on Personal Media Aggregators in one of my favorite (i.e. challenging) blogs.