Categories
Apple Pro Apps Video Technology

What about Final Cut Pro 7?

I was prepared for a small release this time round, as I assumed the Pro Apps team would be working hard on the conversion to Cocoa and would only have time for an “interim” release, but Final Cut Pro 7 is definitely more than I was expecting.

Having iChat Theater built in means no more two-Mac workaround for remote collaboration! It also suggests the Pro Apps folk “get” that remote collaboration is booming and know they need to adapt to that world.

Likewise, the new publishing palette is going to be great for a lot of editors who routinely need to provide progress updates and deliver them on the web. That it runs in the background while you continue working is even better. You could have saved a reference movie, sent that to Compressor and added an upload action to the preset, but this is just so much simpler, and it gives direct access to the most popular sharing sites, and MobileMe! MobileMe might be the best choice for many editors – files can be kept private, certainly not as public as YouTube!

My all-out favorite feature, while a small one, is that Markers in a Sequence now move with the Sequence as clips are inserted or deleted. Colored Markers are great and I’ll use them a lot to identify a type of marker. For example, one color could mean “more work needed here”, another color could be a locator so you can jump quickly to a part of the Sequence, and so on.

The technologist in me is very impressed with the new ProRes codecs. Those that work at the high end will love the ProRes 4444 codecs (and those that want an alpha channel will use it anyway). The Proxy version at 36 Mbit/sec parallels Avid’s own DNxHD offline codec and Apple needed something similar for HD offline. The most interesting codec is, however, the 100 Mbit LT version.

Clearly aimed at acquisition, this is a data rate I expect we’ll see camcorders and other devices – like maybe the Ki Pro – supporting; coincidentally, it’s the same as AVC-I at its highest setting. AVC-I up against ProRes 422 LT would be very, very similar in quality: both 4:2:2 and 10 bit, and using similar compression strategies. It would be a perfect data rate for the Ki Pro if AJA want to support it. (I can’t help but wonder if the last-minute delay of the Ki Pro wasn’t to wait for this announcement, but I’m just guessing.)

The Pro Apps team have thrown a “sop” to those who want Blu-ray authoring: the ability to create a Blu-ray-compatible H.264/AVC file in Compressor that can be burnt to Blu-ray or standard DVD media. Nothing that Toast 10 hasn’t been able to do for some time now, but nice to have it included in the lower-cost Final Cut Studio.

Many have interpreted the inclusion of this feature as an indication that Apple are going to get “more serious” about Blu-ray, but I’m not sure. I think it indicates the opposite. If there were going to be a big Blu-ray push, then these features would have been added to DVD SP, which received almost no update in this version. I think we’ve got Apple’s “solution” for Blu-ray in Final Cut Studio. Who knows – only the future (and probably a Product Manager at Apple) will tell. (The PM won’t ever tell, that’s for sure!)

As to the loss of LiveType: it was probably inevitable, as it was increasingly obvious that Motion was taking on many of the roles previously handled by LiveType. With LiveType’s glyph animation features adopted directly into Motion, most of the functionality is now there. My only concern is whether Motion now recolors LiveFonts correctly (i.e. the way LiveType did). I’ll test as soon as I have a copy in hand.

Finally, the price. Who can complain about Final Cut Studio now costing the same as Final Cut Pro alone did for its first couple of generations?

Certainly, on the surface, it’s a good release.

On the timing: I notice that all the Pro Apps products – Studio, Server and Logic (the pro music side) – came out together for the first time. Does it mean anything? It’s Apple – who knows, and I’d rather not drive myself crazy trying to second-guess them!

Categories
Assisted Editing

An update and two new pieces of software

We’ve been all about logging and metadata over the last few weeks!

First Cuts has just had a substantial update: we’ve added a new module to it that makes it easy to use Microsoft Excel to do your log notes. The new module is called exceLogger and it came about because of a suggestion from a First Cuts user. The advantage is that, even if you’ve already captured your clips, exceLogger will read your log notes out of the Excel spreadsheet and add them to the logging fields in your Final Cut Pro project.
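The idea is simple enough that a small script can illustrate it. What follows is not the shipping exceLogger code, just a minimal Python sketch of the concept, assuming a two-column spreadsheet (clip name, log note) and that the log note lives at logginginfo/lognote inside each clip of the FCP XML export – adjust the names to suit your own files.

```python
# Minimal sketch of the exceLogger idea (not the shipping code): read log
# notes from an Excel sheet and copy them into a Final Cut Pro XML export.
import xml.etree.ElementTree as ET
from openpyxl import load_workbook  # third-party: pip install openpyxl

def load_notes(xlsx_path):
    """Build {clip name: log note} from the first two columns of the sheet."""
    sheet = load_workbook(xlsx_path).active
    return {row[0].value: row[1].value
            for row in sheet.iter_rows(min_row=2)  # skip the header row
            if row[0].value and row[1].value}

def apply_notes(fcp_xml_in, fcp_xml_out, notes):
    tree = ET.parse(fcp_xml_in)
    for clip in tree.iter("clip"):
        name = clip.findtext("name")
        if name in notes:
            # Assumption: the log note sits at logginginfo/lognote, as it
            # does in the FCP XML exports I have in front of me.
            logging = clip.find("logginginfo")
            if logging is None:
                logging = ET.SubElement(clip, "logginginfo")
            note = logging.find("lognote")
            if note is None:
                note = ET.SubElement(logging, "lognote")
            note.text = str(notes[name])
    # A real tool would also preserve FCP's xmeml DOCTYPE declaration,
    # which ElementTree drops on write.
    tree.write(fcp_xml_out, encoding="UTF-8", xml_declaration=True)

apply_notes("project.xml", "project_logged.xml", load_notes("log_notes.xlsx"))
```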

The update is free for First Cuts owners.

We liked the idea of exceLogger so much that we created a stand-alone application called exceLogger for FCP – you can read more about it here.

The second new piece of software is something completely different. Final Cut Pro back at version 5.1.2 introduced support for QuickTime metadata, and more cameras and formats have been adding metadata to their media files. (Philip wrote about this metadata at his blog.) The problem is that you can’t see this QuickTime metadata in Final Cut Pro’s browser view – it’s hidden.

That’s why we created mini Metadata Explorer (miniME for short): export your clips from Final Cut Pro as an XML file, and open it in miniME. The spreadsheet view fills in with your clip names and columns of QuickTime metadata.
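For the curious, here’s roughly what’s happening under the hood. This is a minimal Python sketch of the concept, not miniME itself, and the “metadata”, “key” and “value” element names are assumptions that may need adjusting to match the XML your version of Final Cut Pro actually exports.

```python
# Rough sketch of the miniME concept: walk a Final Cut Pro XML export,
# collect any QuickTime metadata attached to each clip, and write one row
# per clip to a CSV file that opens straight into Excel.
import csv
import xml.etree.ElementTree as ET

def collect_metadata(fcp_xml_path):
    """Return (sorted metadata keys, {clip name: {key: value}})."""
    rows, keys = {}, set()
    for clip in ET.parse(fcp_xml_path).iter("clip"):
        name = clip.findtext("name", default="(unnamed)")
        fields = {}
        # Assumed element names: the metadata appears as key/value pairs
        # under each clip; rename these to match your own exports.
        for md in clip.iter("metadata"):
            key, value = md.findtext("key"), md.findtext("value")
            if key:
                fields[key] = value
                keys.add(key)
        rows[name] = fields
    return sorted(keys), rows

def write_csv(fcp_xml_path, csv_path):
    keys, rows = collect_metadata(fcp_xml_path)
    with open(csv_path, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["Clip Name"] + keys)
        for name, fields in rows.items():
            writer.writerow([name] + [fields.get(k, "") for k in keys])

write_csv("clips.xml", "clip_metadata.csv")
```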

The free version of miniME allows you to save this metadata out to an Excel spreadsheet. But if you buy a serial number you also get the option to add this hidden metadata into the Final Cut Pro logging fields of your choice. There’s more information here.

Categories
Video Technology

What happened to HDV (and tape)?

I have never been an HDV hater. I always thought it was a great format that made a lot of HD production affordable, even if it needed to be treated carefully for maximum quality.

From the first JVC HDV camcorder – a lousy camera, but showing promise – HDV was an affordable, accessible HD format that continued to improve in quality from generation to generation as the encoders improved. (MPEG-2, like DV, is constructed so that there can be considerable innovation and improvement on the encoder side, as long as the result can still be decoded by a reference, or standard, decoder.) MPEG-2 encoders are now more than four times more efficient than they were when the specification was finalized 15 years ago.

The reason for the codec history lesson is that HDV is based on MPEG-2. (As are XDCAM HD and XDCAM EX.) Encoders improve over time, so inevitably older models fall behind the latest releases. For that reason I had to drop from consideration – for a new camera – Canon’s XL-H1, A1, and G1; Sony’s diminutive HVR-A1U; and JVC’s GY-HD110U. These were all released in 2006 or earlier, and while Canon claimed the “best” encode quality at the time, that is no longer even remotely true. JVC themselves claim that the MPEG-2 encoders in the HD200 and HD250 cameras are “100% better than the year before” (the year the 110U was released)!

While these would be excellent purchases on the second-hand market, if you’re buying new you should be buying state-of-the-art, not three-year-old technology. That’s two whole encoder-quality iterations!

Another reason why HDV didn’t make the cut this year is that most of the pro-focused camcorders are more expensive than more versatile and up-to-date options. For example, the nearly two-year-old GY-HD250 currently has a street price of $8,950 – that’s the highest street price of any camcorder on the list and more than Panasonic’s HPX300 or Sony’s EX-3.

I’d certainly still consider a Canon HV40 as a personal camera or a crash camera – at only $850 it’s hard to go wrong. The main reasons it stays in play as a personal camcorder are price and native – or at least well-proven – workflows in all NLEs. But even here the upcoming Canon Vixia HF S11 and HF21 AVCHD models will likely give better quality – unless you want 24P, which is an HV20/30/40 exclusive in that price range.

This year we have a plethora of great choices for camcorders: none of them HDV, in my opinion. If you’re not editing with Final Cut Pro – where the JVC HM100 and HM700 are less attractive – then you might consider a Sony V1U (released 2007, so only one generation of technology old), but for the million and a quarter Final Cut Studio users the native QuickTime workflow with the quality of the 35 Mbit/sec XDCAM HD codec makes a lot more sense at the same price (V1U vs HM100).

This year’s great choices are all non-tape cameras: the HPX-300, EX-1, EX-3, HPX170, HM700, HM100 and HMC150 write to proprietary solid state media (P2, SxS) or to inexpensive and ubiquitous SDHC cards – solid state media at tape-like pricing that you can simply record to and keep, as well as keeping a digital backup. (Now that’s appealing.)

So, it seems that HDV was the last new tape-based format, ever. And I think we’re over it. As we’ve started to work out issues of long-term storage of non-tape media, the advantages of much-faster ingest – instant in some cases – and enhanced metadata support have become obvious. To different groups at different times, for sure, but we are facing a non-tape future.

And I think I’m OK with that.

The format that has really surprised me is Panasonic’s AVCCAM. I have to say my initial response to the HMC150 was “why on earth are they muddying the waters by rebranding AVCHD as AVCCAM?” I’m still not convinced that two names for the same format make sense, but the higher data rates available on the HMC150 (and upcoming HMC40), and the AVC (a.k.a. H.264) codec at the base of the format, mean that AVCCAM delivers much higher image quality – well, images that suffer less from compression-related degradation.

The disadvantage: only Premiere Pro CS4 and Sony Vegas really deal with it natively and Premiere Pro CS4 still has some issues with some variants of the format. Avid and Apple’s software re-encodes the files to the much-larger ProRes 422 or DNxHD codecs. (Typically 5-6x the storage requirements of AVCCAM/AVCHD.) But it’s a decent camera at a decent price with higher-than-HDV image quality, just with a workflow hiccup. (See comments on HV40 above.)
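If you want to sanity-check that 5-6x figure, the back-of-the-envelope arithmetic is straightforward. The numbers below assume AVCCAM at its top rate of 24 Mbit/sec and ProRes 422 at roughly 130 Mbit/sec for 1080 material – the actual ProRes rate varies with frame size and frame rate, so treat the result as a ballpark.

```python
# Ballpark storage comparison: AVCCAM/AVCHD versus a ProRes 422 transcode.
def gigabytes_per_hour(mbits_per_sec):
    # Mbit/s -> Mbit per hour -> MB per hour -> GB per hour
    return mbits_per_sec * 3600 / 8 / 1000

avccam = gigabytes_per_hour(24)    # AVCCAM top rate (assumed)
prores = gigabytes_per_hour(130)   # ProRes 422 at 1080, approximate
print(f"AVCCAM: {avccam:.1f} GB/hr; ProRes 422: {prores:.1f} GB/hr; "
      f"ratio {prores / avccam:.1f}x")
```

That works out to around 11 GB versus nearly 60 GB for an hour of material – right in the 5-6x range.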

The HMC150 records to SDHC cards, as do the other two hot picks of the year: JVC’s HM100 and HM700. Whatever format you choose (HPX300, EX-3 or HM700), if you want a shoulder mount you’ll pay a premium. Typically, however, you get interchangeable lens capability in those same cameras, so it’s not all bad.

Finally, a word about the HPX-300. Because of its AVC-Intra support – 50 or 100 Mbit/sec bit rates and 4:2:2, 10 bit recording – the HPX-300 has the highest record quality (compressed image quality) of them all; there’s no real arguing that this is the quality king this year.

Except for the Ki Pro factor. AJA’s almost-released Ki Pro is a hard drive or Compact Flash recorder that records native QuickTime files in ProRes 422 – near-uncompressed 10 bit, 4:2:2 recording, quality equal to the AVC-I support in the HPX-300. Every one of the recommended cameras this year can feed its uncompressed analog or digital output to the Ki Pro for recording. If you’re not working with Final Cut Pro, though, it’s a wash, like the JVC HM100 and HM700.

It must mean something when there are so many cameras targeting a specific postproduction NLE. The only other time I recall that happening was with a (from memory) Hitachi camera that recorded native Avid media, but I forget the details and it never reached any sort of momentum.

HDV 2004-2009 R.I.P.

Categories
Business & Marketing

What do you need to consider if you’re thinking about self publishing a book?

There was a time when the only way to get a book published was to interest a publisher and sign away your copyright to that publisher. There were definitely benefits to that arrangement, mostly starting with a nice up-front advance on sales!

However, most authors never see anything more than that advance and usually end up owing money (in theory) to the publisher as a consequence of insufficient sales to cover the advance. The per-book return to an author is so low that most authors make more money off the Amazon affiliate link than they do from the book sale!

When you self publish you get a much larger return-per-sale than from a publisher, because you’re taking on more of the work and risk yourself. With print-on-demand technologies and online sellers like Amazon open to all, it’s certainly practical to self-publish, but should you?

This is based on my experience with The HD Survival Handbook; the Pro Apps Tips collections; Awesome Titling; Simple Encoding Recipes (just rewritten last week for 2009); and, most recently, The New Now. The exercise started with simple downloadable PDFs and has led to a paperback now on Amazon.

That you will have to write your content and provide most of the illustrations is expected, and pretty much the same whether you self-publish or have a publisher. One intangible advantage of a publisher is that they are going to stay on your back about the book once they’ve paid you the advance, whereas when you self-publish you’re responsible for your own scheduling.

You have to provide your own editor/proofreader

Everyone (whether they like to think it or not) needs a proofreader and someone who reads their material to ensure content accuracy and grammatical clarity. Believe me, my work is much better thanks to Greg Clarke’s careful read throughs and constructive criticism. Even better, he works to improve my work, not take my “voice” out of it.

My experience with publishers (two companies) is that they try to achieve a soulless, bland style that could be anyone’s. I have, as you’ve probably noticed, a personal style and voice in my writing. I like that, and it seems my readers like the style. By self-publishing I get to keep my voice in the work – to keep the writing in my style, not something generic and dull.

If you must edit your own work, or simply can’t find someone to fill that role, then read it out loud. Reading aloud takes a different path within the brain and you’ll recognize mistakes or lack of clarity much more easily if you read out loud.

You’re responsible for design and layout

Personally I like playing with illustration, layout and design. My font choices are probably boring and The New Now is probably a point-size too large (although my contemporaries like the slightly larger print for aging eyes). I totally enjoyed laying out and creating the illustrations for The HD Survival Handbook so this isn’t daunting for me. But if you’re not comfortable doing design, you’ll need to (probably) pay someone to lay out the book, whether you’re distributing a PDF or going to print.

Likewise cover design and cover copy. It’s all going to come back to you without a publisher, so be prepared to put in even more time, or pay someone to do it. For covers, Amazon’s CreateSpace has templates you can draw on.

You’re responsible for the printing

One of the two primary reasons for having a publisher used to be funding the expense of printing (typically) 5,000 books. (Not surprisingly, the advance when I was working with publishers was equivalent to the return from 5,000 copies. Few authors see any additional return.)

These days, with on-demand printing already very reasonable for B&W books and getting more so for color, printing is not an issue any more. (As an aside there’s a new generation of the print-on-demand technologies just announced that are twice as fast and half the cost of the current machines. This will reduce the cost of on-demand printing even further.)

I chose Amazon’s CreateSpace simply because the relationship with Amazon makes it a very simple choice. It solves three problems in one – printing, the ISBN and access to retail distribution. The process is simple enough even for a first-time user. I had only one issue, and that appears to have been more a problem with UPS than with CreateSpace.

You can use CreateSpace as a channel to Amazon, or simply to print copies of the book to sell after presentations or from your own website. (We sell the PDF version from our site, all print copies that are not in-person sales are handled by Amazon.)

Now, when an order is received at Amazon, it’s printed at CreateSpace and shipped without any additional effort on my part.

It is a little more complicated to be listed on Amazon if you use Lulu or another on-demand publisher.

You need to provide an ISBN

While not necessary if you plan to sell only direct, an ISBN is essential if the book is to go into any distribution channel or to a retail bookshop. Some places want to charge up to $250 for an ISBN to be allocated to your book, but CreateSpace include the ISBN at no additional charge. You simply leave a blank space on the cover design where the ISBN will be imposed and printed.

Booksellers worldwide can order your book by ISBN.

You need to get access to distribution channels

Unless you plan to sell only in person and through a website, you need access to the retail book channel – which used to mean having a publisher. CreateSpace automatically offers listing on Amazon via a simple checkbox and price setting. (You set the price for Amazon, although they will tell you the minimum price you can sell at and still get a return!)

Although there are other booksellers – who can order the book via the ISBN – I didn’t think there was value in seeking to be listed at Barnes and Noble or any other bookseller. My book can be found on Amazon or ordered by any bookseller, and that’s enough. I also figure anyone in our industry (loosely defined as digital production, postproduction or distribution) will likely buy online rather than attempt to find any given book in a walk-in bookshop. Most likely they will go to Amazon, where the book is listed.

Open, unmediated access to the Amazon retail site is one of the most significant changes that made self-publishing practical.

You need to do your own publicity and promotion

In theory, your publisher is going to promote and publicize your book. In theory. In practice, what mostly happens is that the book is listed among all the upcoming books in your category in a publication circulated to bookshops (so they will order copies in advance). They’ll send out an email to selected, somewhat appropriate media and bloggers, and that’s about it.

You might get a 30-minute presentation spot at a publisher’s booth during a trade show, but by and large that’s the publisher’s contribution to promoting your book. Most authors will expend effort to promote the book themselves anyway.

Which is another reason to consider self-publishing. If you’re going to need to promote your book yourself anyway, why not just promote your book yourself and leave the publisher out?

I wrote an article for the 2009 Supermeet Magazine (available shortly for download – check LAFCPUG.org) on growing a market for your independent production. That information would be equally valuable for building a market for a book, using modern PR techniques and (don’t hate me) “social media”. (There is more in The New Now on using the same techniques to build a business – naturally with a lot more depth.)

From my perspective self-publishing has been a positive experience. I get to keep my unique style and voice; I get to control how the book looks (not important to everyone, but it is to me); and, most importantly, I get to keep a larger portion of the return from my hard work. Given the sorts of advances now being offered by publishers (trending toward half what they were five years ago, and not enough to cover the time it takes to write a book), I have done very, very much better from fewer sales than I would have had I gone down the traditional path.

And here’s a final benefit. Author copies from CreateSpace are at cost, which is much lower – particularly for B&W/grayscale books – than you would think, such that The New Now is often my new ‘calling card’. It’s an inexpensive way to keep people thinking of you and recognizing the value you can add with consulting and other services.

As long as the six “you have to” issues listed here aren’t deal-breakers for you, I recommend you give it a go! Got questions? That’s what the comments are for.

Categories
Business & Marketing

How to get paid faster!

This is summarized from a small section of the “Work Smarter” chapter in The New Now: How to grow your production or postproduction business in a changed and changing world.

Never feel reluctant to invoice customers. You have done great work for them, helping make them more money (or you don’t have a future) so there’s no reason to feel the slightest bit embarrassed about wanting to be paid, and paid promptly.

Pay Fast

Let’s consider the other side of the equation: if you want to be paid fast, pay fast yourself. Unless you never want to do business with a supplier or subcontractor again, keep the relationship good by paying when due, or ahead of when due. Doing so will keep the business relationship alive and improve your reputation. You’ll become a valued client of theirs and get preferential treatment if it ever comes down to a choice of doing business with you, or with someone else.

You are not a bank – reset your status with customers

If you find yourself getting strung out for payment then you need to reset the relationship. If necessary, go to those customers who are slowing payment and point out that you’re a production company, not a bank, and since you won’t be getting a big bail-out you rely on customers to pay promptly.

Make sure you make it clear you’ll run through hoops for the customer but you expect them to pay in a timely manner.

Tip for the Paranoid: Watermark all preview tapes, DVDs or files before payment has been received.

Talk about payment terms up front

You’re going to have to talk about it sometime and before you’re committed to the work is the best time. Unless payment terms are part of the agreement, customers can always claim they “didn’t know” or “we can’t do that”. If there is a genuine problem that their company cannot pay on your terms, this is the time to negotiate it, not when the job is complete.

Know your profitable customers

Do you know how much each client’s payment cycle costs you?

In a post at Howard Mann’s Business Brickyard, Colleen Barrett – former President of Southwest Airlines – tells a story about examining the profitability of each customer, based in part on payment history.

Send your invoice promptly

In a small production or postproduction business, it’s the video work that interests most of us. The fact that we have to run a business as well is kind of a drag. One consequence is that invoices don’t get sent out promptly. Realistically you can’t expect anyone to pay until they have the invoice, so get the invoice out the door as quickly as possible after the client accepts the work.

Important: I’m talking about tried and true customers here. If you’re working with a totally new customer stick to the traditional policy of 30% payment of budget when work commences; 30% at the completion of production (or suitable postproduction milestone, like first draft assembly) and the balance of 40% due on completion. For new customers it’s very unwise to let the master out to the customer until payment has been made (and cleared the bank).

Avoid the mail

Once you let the postal system have control over your invoice you don’t know when it was received unless you take out receipt confirmation, which requires a trip to the post office for each invoice. Not fun. Instead, email the invoice and ask the client to confirm receipt. If you don’t hear back within a day or so, follow up with another email or phone call to confirm that the invoice has been received and that there are no problems with it.

Know the client’s payment process

The larger the company, the more complicated getting paid is. Know their system so you can work with it to your advantage, or at least not make payment unnecessarily slow because of a mistake at your end.

Offer a discount

Depending on your jurisdiction, many utilities and government bodies are legally obliged to take advantage of any discounts offered – so a small discount for early payment can move your invoice to the front of the queue.

Make your invoices pleasant

Invoices are a fact of life, but make them appealing. Get the person who designed your logo and stationery to design the invoice. Make it look attractive and look like your company.

If you have a relatively small number of customers, as we typically do in production and postproduction, then take the time to write a personal message with the invoice (in the email or an accompanying letter). Make it a funny quote, a poignant statement about the industry, or something relevant to the customer’s business. By including something pleasant you can eventually make people look forward to your invoice. It can be the same message on every invoice, although you should change it at least every month.

Systematically follow up every invoice

Simply sending out a “Past Due” notice and subsequent monthly statements isn’t going to put any stress or pressure on a company that’s deliberately delaying payment – it just enables their poor behavior. Follow up personally, on a schedule, until the invoice is paid.

Never let copyright pass until you’re paid in full

One of the things I learnt painfully was to put a clause in the agreement (never a “contract” – that scares people) such that copyright in the work does not pass to the client until payment is received in full.

It’s a blunt instrument, and only really useful when the relationship has broken down and future work is unlikely, but it ups the ante because willful breach of copyright has penalties attached that inadvertent breach does not.

Categories
Apple Pro Apps Metadata

What about the hidden metadata in Final Cut Pro?

We’ve been working with a few people previewing, and getting feedback on, a new addition to our First Cuts assisted editing tool – which has meant checking some areas of Final Cut Pro that I haven’t explored for years – and along the way I had the most interesting conversation with Jerry Hofman.

Before I get to that though, let me ask (beg) for feedback on any of our software products. We want to keep making them better and love feedback, feature requests and especially problems. We respond quickly – this particular feature request was received on Friday 26th, discussed briefly during a Hollywood Bowl concert on Saturday night and was a preliminary feature by Wednesday!

Anyhow, in discussing this particular tool with Jerry (you’ll find out what it is soon enough!) I asked how much metadata from RED is imported to Final Cut Pro via Log and Transfer. Jerry, who uses RED a whole lot more than me (i.e. he uses it!), said “not very much”, which pretty much matched my understanding: I’ve worked with a whole bunch of RED clips in Sync-N-Link and never seen any of the color temperature, date or other information that’s in the RED metadata.

When I shared this conversation with my smart partner and our main code writer, Greg Clarke, he commented “Oh, I do think Mr Hofman is mistaken!” (or words to that effect). It turns out Greg has been scrolling past this metadata for most of the last year. The difference is that Greg works with FCP XML exports, while Jerry and I were looking through the Final Cut Pro interface.

OMG! What a treasure-trove of metadata there is. And why didn’t we know of this? Surely someone, in all the conversations we’ve had while developing Sync-N-Link, must have known about this? (You’ll all come out of the woodwork into the comments and let me know you’ve known about it for years!)

So this morning Greg built me a tool for exploring this hidden (I prefer “secret” because it makes it seem more mysterious) metadata, turning it into an Excel spreadsheet. I already had XDCAM EX media and P2 media along with RED clips, and I was able to download some AVCCAM media shot with Panasonic’s HMC150 camera.

There’s an enormous amount of Source metadata there. A lot of fields seem to be unused even in the camera. Clearly, the current version of Final Cut Pro doesn’t have the flexibility to display items like the ‘whiteBalanceTint’ or ‘digitalGainBlue’ settings from the original file. I guess this type of metadata is going to be challenging for Apple and Avid to deal with, as they don’t (currently) have displays in their applications for the enormous amount of metadata that is generated by tapeless cameras. I’m just very thankful that it’s being retained, and that it’s available via XML (and associated with a Final Cut Pro clip ID).

There’s definitely metadata already being produced that we can use to improve First Cuts – at least for non-tape media sources. But it’s also interesting to explore fields that are available but not being used.

Show all columns and you'll be surprised at what's available, or going to become available.

BTW, you can explore this yourself using Log and Transfer. Open any type of media that Log and Transfer supports, then right-click on the column header (like you would in Final Cut Pro) and select “Show all Columns”. The columns displayed will change according to the type of media selected.

So far, Sony’s XDCAM EX has the least, and least interesting, metadata – barely more than the basic video requirements and information on the device: model and serial number. RED footage has a lot of metadata, although most of it is focused on the technical aspects of the shot, as you would expect for a digital cinema camera.

But take a peek at the source metadata from P2 media! All the goodness, like the date and time of the shoot (which FCP otherwise does not export; RED carries this too), but also fields for ‘Reporter Name’ (awesome for a First Cuts – News product) and Latitude and Longitude. While those have been blank in every instance – I don’t think Panasonic are shipping any cameras with GPS built in yet – it does suggest that future Panasonic cameras are likely to contain GPS and store that data with the media file. Anyone who’s a regular reader will know that means Derived Metadata! There are also fields for ‘Location Source’, ‘Location Name’, ‘Program Name’, ‘Reporter’, ‘Purpose’ and ‘Object’ (??).

AVCCAM carries all the fields of P2, more or less, with the addition of ‘memo’ and ‘memo creator’ fields.

It’s been fun exploring this ‘secret’ metadata. Now to find a way to make some use of it, or make it practical. Would anyone be interested in a tool that would not only read and explore this metadata, but allow some of it to be mapped to existing Final Cut Pro fields?
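To make the idea concrete, here’s a purely speculative Python sketch of what such a mapping tool might look like. The metadata key names in the table are invented for illustration, and the element names assume the same clip/metadata/key/value and logginginfo structure as the earlier sketches – none of this is a shipping product.

```python
# Speculative sketch: map selected QuickTime metadata keys from a Final Cut
# Pro XML export into FCP logging fields, then write the modified XML back.
import xml.etree.ElementTree as ET

# Hypothetical metadata keys -> logginginfo child elements.
FIELD_MAP = {
    "reporterName": "description",  # e.g. P2's Reporter Name
    "locationName": "scene",
    "memo": "lognote",
}

def map_metadata(fcp_xml_in, fcp_xml_out):
    tree = ET.parse(fcp_xml_in)
    for clip in tree.iter("clip"):
        # Gather whatever key/value metadata pairs this clip carries.
        values = {md.findtext("key"): md.findtext("value")
                  for md in clip.iter("metadata")}
        logging = clip.find("logginginfo")
        if logging is None:
            logging = ET.SubElement(clip, "logginginfo")
        for key, field in FIELD_MAP.items():
            if values.get(key):
                target = logging.find(field)
                if target is None:
                    target = ET.SubElement(logging, field)
                target.text = values[key]
    tree.write(fcp_xml_out, encoding="UTF-8", xml_declaration=True)

map_metadata("clips.xml", "clips_mapped.xml")
```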

Categories
Random Thought Video Technology

What other editing interface(s) can we imagine?

Last night I had a conversation about a new type of touch-screen display that mounts on regular glass (I don’t know any more about it than that – I hope to get more information shortly and share it).

During the discussion I was reminded that in the earliest days of using NLEs (a Media 100 for me at that time) I had fantasies about being able to edit in a 3D display environment, where the clips would float in space, grouped together in some logical order (these days I’d say based on metadata groupings), and the editor could simply move clips around, stack them and build the story along a virtual timeline – even composite by stacking clips.

I never really developed the idea beyond that trip to my imagination, but it does make me wonder what could be done with some sort of surface like the one being proposed for regular glass, or even maybe a 30″ Cinema Display-type screen that was a full touch-screen surface supporting gestures and the like. Microsoft’s Surface would be close to the sort of experience I’m visualizing.

In thinking about it further I realized that the sort of work we’ve been doing with metadata would tie in nicely. The metadata would be used to group and regroup clips organizationally, but also to suggest story arcs or generally assist the editor.

It’s probably time for a new editing paradigm.

If not for a future version of FCP or Media Composer, then perhaps for iMovie?

Categories
Apple Pro Apps

What about Final Cut Studio and Snow Leopard?

In the comments on my article on “Why no ExpressCard/34 slot in the new MacBook Pro” Andreas asked about Final Cut Studio and Snow Leopard, and I briefly responded there. Larry Jordan responded on his blog (Andreas asked him the same question) but not in any detail. Neither Larry nor I are programmers, but I direct programmers every day – both OS X software and web applications – so I do know a little. Plus I’ve tracked the technology development of FCP from version 1 onward – every change from OS 9 to OS X CFM, to OS X Mach-O, to a hybrid application.

So, let’s see if I can manage a little clarity even if it’s only based on observation and deduction. As I said in the brief response, I have no clue what development Apple are doing and whether or not this is at all accurate. No doubt there will be engineers at Apple laughing at my naivety!

Carbon was the technology that Apple developed to let OS 9 applications run on OS X. It’s a set of programming interfaces (APIs) that individual applications call. There’s absolutely nothing wrong with Carbon, except that it can’t take advantage of the modern features of OS X – particularly those coming in Snow Leopard – nor can it run in 64 bit.

Cocoa is a different set of APIs, heavily drawn from NeXTSTEP – the NeXT operating system (Apple acquired NeXT in 1996). It is the preferred way to program for OS X. In fact Apple have been pretty clear to their developers that, long term, Carbon will give way to Cocoa. They are encouraging all developers to get their code to “pure Cocoa”.

When it looked like Apple were going to continue Carbon development with 64 bit APIs, as announced at WWDC in 2006, there was really little incentive for Apple to spend the very many millions of dollars it would cost to rewrite FCP’s Carbon parts in Cocoa – just to get back to where it is now. 64 bit Carbon was also Adobe’s preferred path for its older applications (Photoshop, Illustrator and After Effects). With 64 bit Carbon coming, Adobe could get 64 bit support without a major rewrite.

But when, at WWDC in 2007, Apple changed the rules and said, “No 64 bit Carbon”, any high-performance application had to think seriously about rewriting to Cocoa, at least long term.

We know, from public announcements, that Adobe had to delay 64 bit support for those applications on OS X until CS5, because of the time it takes to move applications to Cocoa and therefore to 64 bit.

All new features in FCP, from FCP 5 onward, have been written in Cocoa – HDV Log and Capture, Log and Transfer, Multicam and FxPlug (that I know of) are all Cocoa. OS X lets programmers mix and match programming languages with ease. My guess is that Apple, like Adobe, would have continued with a hybrid approach for Final Cut Pro if the 64 bit Carbon APIs were going to be available. When they were not, the Pro Apps team, like Adobe, would have had to start porting their application to Cocoa. CS4 showed none of that progress, being released less than two years after the ’07 WWDC announcement.

Most of the FC Studio applications are new enough to have been written in Cocoa – the second revision of DVD Studio Pro (Spruce-based, not the Astarte-based DVD SP 1), Motion, LiveType, Soundtrack Pro and Compressor are all pure Cocoa applications and can (relatively) easily take advantage of 64 bit and/or Snow Leopard features like Grand Central Dispatch. I say “relatively” because there is still work to be done that generally can’t start until Snow Leopard’s features are locked (i.e. WWDC 2009).

No responsible programmer is going to attempt to write for a new OS until the OS is locked and finished. So, theoretically, the engineers working on those applications could start NOW to work on Snow Leopard features, ready for a release in a year or two.

I can’t imagine that even the Cocoa-based Studio products will be taking advantage of any Snow Leopard features this time round – the timing is just all wrong. And they’re already Cocoa.

FCP, having started life at Macromedia as a cross-platform OS 9/Windows application, has most of its core written in Carbon. My (educated) guess is that it will take 2-3 years to rewrite all that code in Cocoa. I doubt they started before WWDC 2007, as until then there was little incentive to invest many millions of dollars in a Cocoa rewrite. (Remember, this was before the announcement of Snow Leopard, Grand Central Dispatch and OpenCL too.) Those features do provide an additional incentive to get FCP to “pure Cocoa” (I estimate $10-15 million will be required to write, test, test, and I hope test what will effectively be v1 code again).

Apple will no doubt do that because the Final Cut Studio is highly profitable for the company from software sales only. (You don’t need to get too much from 1.25 million customers to make a viable business, even without taking into account any hardware sales, which don’t benefit the Pro Apps team at all.)

However, wanting to do it and having the time to do it are different things. They didn’t start (most probably) until after WWDC 2007, when they, and the Adobe teams, learnt that 64 bit Carbon wasn’t going to happen. Allow two to three years to finish that job before they can start to think about Snow Leopard feature optimizations.

Pretty much every release of FCP has required the latest OS (the exceptions being FCP 3, which ran on both OS X and OS 9, and FCS 2, which runs on Leopard (OS X 10.5) or Tiger (OS X 10.4.11)) and the latest QuickTime. So I think that Studio 3 will, on the balance of probability, require Snow Leopard. Snow Leopard has no problem running 32 bit Cocoa applications at the same speed they always ran. As Snow Leopard has no PPC version, it could mean that FCS 3 may not be supported on PPC; we’ll have to wait for Apple to let us know. (By way of reference, Adobe’s Production Premium CS4 and Avid’s 3.x releases are Intel-only.)

However, absolutely do not even think about running FCS 2 on anything newer than Leopard. In fact NEVER run FCP on any version of the OS or QT other than the ones that were directly supported. To do so is going to cause you problems, pain and regret. Old versions are never tested on the new OS and QT so there’s no reasonable expectation that they will run on a newer OS.

So, whether or not FCP 6 will run on Snow Leopard is irrelevant. Only an idiot would attempt it, and none of my readers are that stupid, right? Bank on an FCS upgrade to run the Studio on Snow Leopard, because it’s the only way to guarantee you’ll still be in business with an operating NLE after the upgrade.

Based on all that, the timing of Snow Leopard etc., it’s not really reasonable to expect that there will be Snow Leopard features in the next release, but we won’t know until Apple releases them. Until then, I guess we can all dream! 🙂

Categories
General Presentations

Calendar and bio additions to the blog

It wasn’t until recently that I realized I had no “about Philip” page or bio on the site, so I’ve added a page where you can find out a little more about me, if you care.

Another addition is the calendar of upcoming presentations. Over the next couple of months I’ll be attending or speaking at several events, starting with the Orange County MCA-I Media Camp tomorrow (Saturday June 20). Media Camp is like a BarCamp, itself a reaction to the exclusive, invite-only O’Reilly FOO (Friends of O’Reilly) conference. A BarCamp (Foo Bar, get it?) is the opposite: anyone is welcome, there’s no formal agenda, and the participants make of it what they want.

The Orange County Media Camp is like that – we’ll all be getting together and talking about, and planning, media production and distribution topics. I guess. I won’t know until I get there.

Then there’s DV Expo coming up in September, and the Professional Video Association’s Conference in January 2010 – more on that later, but I’ll be presenting a full day on Final Cut Pro titling and a full day on growing your production or postproduction business.

In between I expect to be doing some presentations in Boston, Connecticut and New York and, depending on timing, at other user groups or dealers. These will include a full day on New Media – where we’re going and how we’ll make money – and free sessions on how to benefit from First Cuts, Finisher and Sync-N-Link. We may do the Growing your Business seminar there as well.

I hope to see you at one of the events. If you read the blog, come and say hello.

Categories
Distribution New Media

Why is fighting piracy a losing battle?

A couple of days ago I talked about why a “three strikes” law is such a bad idea. It’s also pointless, and bad for the businesses of content creators and owners.

On the other hand, even the threat of a three strikes law in France (ultimately struck down by their highest court as against human rights) immediately created a business opportunity for encrypted Virtual Private Networks (VPNs). Not only that, but the Pirate Bay folks are also setting up an inexpensive VPN service for anyone who wants it. These services disguise the IP address of the downloader, so even if copyright owners tried to get an IP address to sue (not that you can sue an IP address, as many have found out in non-US jurisdictions), everyone using the service has the same IP address, not correlatable to any individual location.

If that workaround should ever become “broken”, something else will be found. In fact there’s a move afoot to update the BitTorrent protocol to a new form that resists seeders being identified.

It’s a pointless exercise. Whatever horse there was has long since bolted the stable, and the only reasonable response from an intelligent business person is to find new business models. Andy Kessler, writing at Forbes.com, discusses The Inevitability of Internet Pirates:

Hand out as many guilty verdicts as you like, but folks on the Internet will copy away – because, really, who can stop them? Google won’t do it, Internet providers like Comcast and AT&T, who can block a lot of this stuff, can’t do it without Network Neutrality proponents squawking, “Interference!”

Even authoritarian regimes fail. (The Great Firewall of China is quite leaky.) Plus, it is so easy to create a Web service to download copyrighted material that, like that arcade game Whac-A-Mole, if you take one culprit down with your mallet another five pop up in the next few nanoseconds. Sad but true, there is not much anyone can do.

Blocking sites does not work because it’s relatively trivial to find a proxy server to reach the “blocked” site. DRM has been an abject failure, inconveniencing those who actually paid for the product without doing anything to reduce “piracy”.

Piracy is such a daft word for infringement – a civil or business problem, not a criminal one. With theft of property, the original owner is deprived of ownership because the thief has taken it. Not so with a digital copy where millions can be produced without anyone being deprived of anything (other than potential, not actual, income). The copyright industry likes to use the word “theft” or “stealing” but it’s disingenuous at best and an outright lie in all likelihood. (As are the outrageous guesses at “losses” that make the ridiculous assumption that every download is a lost sale at a premium price, none of which is supportable by fact.)

Not only is it inevitable, but fighting it is not in the best interests of the content industry. As I’ve noted before, those who do pirate music are also the music industry’s best customers. Apparently pirating is a way of testing new music that otherwise would never have been heard, appreciated and ultimately purchased.

Not only that, but a new Harvard study clearly shows that it’s in society’s best interests to have weak copyright. Remember that the writers of the Constitution of the USA reluctantly granted “limited” exclusive copyright in return for encouraging creative work.

Copyright law was never meant to protect the music business in the first place—instead, it’s intended to foster creative production in the arts. It seems that goal is fostered by weak copyright and file sharing. 

The idiot copyright industry keeps saying that nothing would be created without ever stronger, and longer, copyright. This is another lie that bears no relationship to any fact or research, but that doesn’t stop the RIAA and MPAA making the claim. (Heck, they simply change their story to suit whatever action they’re currently taking to prop up outdated business models – every action, that is, except updating their business models.)

The Harvard study, analyzed by Michael Geist (the original is a PDF and harder to link to or quote), has found that file sharing has significantly increased cultural production.

The paper takes on several longstanding myths about the economic effects of file sharing, noting that many downloaded songs do not represent a lost sale, some mashups may increase the market for the original work, and the entertainment industry can still steer consumer attention to particular artists (which results in more sales and downloads).

And this:

The authors point out that file sharing may not result in reduced incentives to create if the willingness to pay for “complements” increases. They point to rising income from performances or author speaking tours as obvious examples of income that may be enhanced through file sharing. In particular, they focus on a study that concluded that demand for concerts increased due to file sharing and that concert prices have steadily risen during the file sharing era. Moreover, the authors canvass the literature on the effects of file sharing on music sales, confirming that the “results are decidedly mixed.”

It’s time the MPAA, RIAA and their international associates, who do NOT really represent artists, simply realized they are flogging a dead horse and that the only chance they have for a future is to adapt. Ever more draconian laws that turn almost every citizen into a civil offender (or worse, a criminal) are not only stupid, they’re not in the best interests of those who create the content – something many musicians have realized already.

Any chance of at least one politician understanding the argument? I didn’t think so.