Archive for the ‘Video Computer Technology’ Category

Tools/Workflow Philosophy: Best-of-Breed rather than Already-Integrated Suite?

Sunday, July 14th, 2013

I am becoming less enthusiastic about the “Integrated Suite” philosophy or perhaps actuality of Adobe CS6, in favour of a “Best of Breed” approach, where I cherry-pick the best tool for each kind of job and then design or discover my own workflow for integrating them.

I reached this conclusion from the following experiences:

  • As regards editing itself:
    • For general “A & B Roll” editing, I find Premiere is OK, though for improved usability, I’d prefer a Tag-based system (as in FCPX) to the traditional Bin-based one (as in Adobe & Avid).
    • For MultiCam editing, even in Adobe CS6, I find Premiere does the job but I find it clunky, frustrating and limited at times, like it has not yet been fully “baked” (though “getting there”)…
      • e.g. In the two such projects I have so far worked on, there has been an annoying 2-second delay from pressing the spacebar to actual playing.  Maybe some kind of buffering?
        • I found a setting for “Pre-roll” in the Preferences but altering it made no difference.
        • One forum thread suggested that the embedded audio (in the video file) could be the issue, the solution being to relink to a WAV file.
      • e.g. It brings up a separate MultiCam Monitor instead of using the Source Monitor.  You have to remember to activate this each time before playing.  I find that a nuisance (and time-waster when I forget) especially because I tend to alternate multicam editing as such with tweaking the cut timings until they feel right, and sometimes that can only be done in retrospect.
      • e.g. When you stop playing in multicam mode, it places a cut (that you probably didn’t want) wherever the playhead happens to be at the time.
        • I see I am not the only one complaining about this (e.g. a forum post by “ExactImage, Sep 15, 2012”):
          • A workaround given there: “Before to stop the playback press the key 0 (zero) of the keyboard and then you can stop the play (with the Space bar) without the cut in the timeline.”  Duh!
      • e.g. Markers are really useful in multicam, but while Premiere’s are steadily improving with each product version, they are way clunkier and more limited than those in Sony Vegas:
        • e.g. I put a marker at the start of an interesting section (of timeline), I select it and define its duration to be non-zero, so I can stretch it out to mark a region; then I drag the playhead to find the end of that interesting section and try to drag the marker’s right-hand end up to the playhead, but instead the playhead gets reset to the start of the marker.  Duh!
        • e.g. Markers cannot be promoted from clip (media or nested Sequence) to current Sequence.
        • e.g. waveform displays (assuming you can get them to appear in the first place) go blank when sliding clips around.  Really annoying when trying to synchronise to music etc.
    • …so I will explore other options for multicam:
      • In the past (as will be apparent from the above) I have had more joy, as regards Multicam, with Sony Vegas.
      • I will check out what people think of other NLEs as potential “Best of Breed” for multicam editing.  Thus far I have heard (from web-search) good things about FCPX and LightWorks.
  • For audio enhancement, such as denoising, I find iZotope’s RX2 far superior to the one in Adobe Audition.
  • For making a DVD:
    • I find Encore to be handy in some ways but limited and clunky in others.
      • e.g. can’t replace an asset with one of a different type (e.g. [.avi] and [.mpg]).
    • The advantage of using an integrated DVD-Maker such as Encore might be limited:
      • e.g. many people are not using the direct link, but exporting from Premiere/AME, in which case any third-party DVD Builder could be used.
      • The only significant advantage I am aware of is the ability to define Scene/Chapter points in Premiere and have them recognised/used by Encore.
        • But maybe some third-party DVD Builder applications can also recognise these?  Or can be configured/helped to do so?  Worth finding out.

Using Cineform’s HDLink to Re-Wrap (ReWrap) from QuickTime (QT) MOV to AVI

Monday, July 8th, 2013

Rewrapping means taking the encoded contents out of one container file-type and putting them in another, with no decode/re-encode happening.  For example, given a [.mov] file, one might rewrap it to a [.avi] file.  These file-types are each merely containers, designed to hold various encode formats (e.g. DV, Lagarith, Cineform, DivX) without having to “understand” them.

Rewrapping may for example be required for some Windows-based applications that don’t handle [.mov], either at all or (as I have encountered) not fully.  Conversely, some applications (Windows- or Mac-based) will only work (or work properly) with [.mov] files.  For instance I have found the Windows variant of Boris RED (versions 4 and 5) to work properly with HD 50 fps progressive only via the [.mov] container (as I have reported in a forum thread), while someone else has found Avid Media Composer 5 to prefer [.mov].

One tool for doing this: HDLink, a utility bundled with the Windows version of the GoPro-Cineform “visually lossless” wavelet-based codec (which I have used for a number of years).  HDLink can convert Cineform files from [.mov] to [.avi] and vice-versa.  Incidentally, for the Mac version of Cineform there is a broadly equivalent utility called ReMaster, but that can only convert in one direction, from [.avi] to [.mov].

To re-wrap:

  • (Just now, I merely used the [Convert] tab, selected the file and clicked [Start], and all worked fine – but maybe the full work instruction should be as follows?)
  • Use HDLink’s [Convert] tab.
  • Select/Ensure the required destination file-type:
    • Click [Prefs] button (at bottom of dialog)
    • In [Prefs], ensure [Destination File Format for … Conversion] is set as you require.
    • And (I guess?) enable [Force re-wrap CF MOV->AVI], to ensure it doesn’t sneakily do a transcode?
  • Select the Input file and go.
  • The rewrapped version will appear in the same folder.
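Incidentally, for material where HDLink is not an option, the same kind of rewrap (stream copy, no re-encode) can be done with the free ffmpeg tool.  The sketch below only builds the command line – the filenames are placeholders, purely for illustration:

```python
def rewrap_cmd(src, dst):
    """Build an ffmpeg command line that copies the encoded streams
    ("-c copy") from one container into another, with no re-encode."""
    return ["ffmpeg", "-i", src, "-c", "copy", dst]

cmd = rewrap_cmd("clip.mov", "clip.avi")
print(" ".join(cmd))  # ffmpeg -i clip.mov -c copy clip.avi
# To actually run it (requires ffmpeg installed):
#   import subprocess; subprocess.run(cmd, check=True)
```

Whether a given codec survives the trip into a given container is another matter – [.avi] in particular is fussy about what it will hold – so treat this as a starting point, not a guarantee.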

The process is of course much faster than transcoding, since it involves only simple computation; hence the overall speed will tend to be limited by the storage (e.g. the hard disk and/or its transfer bus, especially if it’s a slow old thing like USB2) rather than the CPU (which may consequently show an extremely low % usage).
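A back-of-envelope illustration (numbers assumed, not measured): a rewrap reads each byte once and writes it once, so for a 10 GB file on a USB2-attached drive:

```python
def rewrap_minutes(file_gb, bus_mb_per_s):
    """Minutes to read the source and write the destination once,
    limited purely by storage bandwidth (CPU assumed negligible)."""
    total_mb = file_gb * 1024 * 2  # each byte is read once and written once
    return total_mb / bus_mb_per_s / 60

print(round(rewrap_minutes(10, 25), 1))  # 13.7 -> roughly 14 minutes on USB2
```

The same file over eSATA at ~120 MB/s would come down to around 3 minutes – which is why the bus, not the CPU, is the thing to upgrade for this job.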


Adobe Production CS6 – Cache & Render Files and their Locations

Saturday, September 1st, 2012

When I start-up any application, I like to understand at least the main side-effects it’s having on my system.  In the case of Adobe’s primary video-editing apps, Premiere and After Effects, my experience (on Windows 7) is that they save intermediate preview-renders to the system volume.  This causes me the following concerns:

  • System Volume may serve poorly as a media drive.
    • Larry Jordan, at least in the recent past, advises against using the system drive for media read/write.  On the upside, such drives may have high bandwidth to the system, but on the downside, the system can interrupt their use with highest priority, which may (I guess) pose a risk to smooth playback (though I am aware that buffering may possibly reduce this risk, I haven’t done or seen any such calculations).  Cache files are indeed media files that are written and read.
    • On the other hand, an informed representative of a well-known UK supplier of video editing laptops advised me that in his experience, most users of laptops with only a single internal drive (as system drive) do use that drive in this way (for portability).
  • System drive can become “clogged up”
    • System drive can become clogged-up by many or large video files of which the user is only partially aware, their creation having happened implicitly during use of the NLE etc.  Like temporary files, only worse!
    • Ultimately the system drive can even become full, making the operating system itself sluggish or even less stable (and video playback less smooth).
    • Backup of a system drive that includes media files will typically require significantly greater archive space and will take significantly greater time (than a clean system).
  • Migrate-ability is reduced
    • I like the idea of a video project being a free-floating data-object.  That is, it should not be tied to any particular instance of a data storage volume, let alone a particular computer (system).  It should be possible for all files relevant to a project to be stored on any volume, migrated to any other volume, plugged into any computer having appropriate installed applications, and everything to work the same way as when the project was on its original volume being edited on the original system.  That includes not only the source media files etc. but also the intermediate rendered files.
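The “free-floating data-object” idea above can be stated as a simple test: a project is migratable only if everything it references lives under the project’s own root folder.  A minimal sketch (the function name and paths are my own invention, purely for illustration):

```python
import os

def is_self_contained(project_root, referenced_paths):
    """True if every referenced file sits under the project root,
    i.e. the whole project can migrate to another volume as one tree."""
    root = os.path.abspath(project_root)
    return all(
        os.path.commonpath([root, os.path.abspath(p)]) == root
        for p in referenced_paths
    )

print(is_self_contained("/media/MyProject",
                        ["/media/MyProject/Sources/a.mp4",
                         "/media/MyProject/Previews/r1.mpeg"]))  # True
print(is_self_contained("/media/MyProject",
                        ["/cache/x.ims"]))                       # False
```

The cache and preview files discussed below are exactly the things that tend to fail this test by default, since Adobe puts them on the system drive.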

So what do the Adobe editing applications provide to enable my preferred working arrangement?

  • Premiere:
    • [Edit > Preferences > Media]
      •  This defines the location of the folder [Media Cache Files], which contains pseudorandomly-named files.  Example Files:
        • [929_4372_01-125eeda9-ba0d-a8ea-4418-3480000001f0.ims]
        • [Rendered – 68721ea9-25e9-4f56-8430-4ca10101ace7-04602910-cd54-1a45-a5d7-557b000001f2.ims]
      • Default location: [c:\Users\…\AppData\roaming\Adobe\Common]
      • [Yes] Save Media Cache files next to originals when possible
        • e.g. for my XDCAM-EX files, inside a CLIPR folder (which contains the EX’s “essence” (.mp4) files, in this case [929_4491_01.MP4]), appeared the following files:
          • 929_4491_01.MP4 48000.cfa (9.2 MB)
            • Unknown, but the “48000” and the “a” on the end of “cfa” are suggestive of audio.
          • 929_4491_01.MP4 48000.pek (37KB)
            • Simply the peaks (waveform graphics data) file for the audio component of the essence-file.
        • Experience:
          • I clicked the [Browse] button and selected an area on my external media drive (a GRaid Mini) as: [H:\_App_Specific\Adobe].
          • Consequently, at my specified location the following folder appeared: [Media Cache Files]
    • Media Cache Database
      • Note:
        • When Premiere Pro imports video and audio in some formats, it processes and caches versions of these items that it can readily access when generating previews. Imported audio files are each conformed to a new .cfa file, and MPEG files are indexed to a new .mpgindex file. The media cache greatly improves performance for previews, because the video and audio items do not need to be reprocessed for each preview.
        • When you first import a file, you may experience a delay while the media is being processed and cached.
          A database retains links to each of the cached media files. This media cache database is shared with Adobe Media Encoder, After Effects, Premiere Pro, Encore, and Soundbooth, so each of these applications can read from and write to the same set of cached media files.
      • Location: [c:\Users\…\AppData\roaming\Adobe\Common]
        • [Browse]
          • If you change the location of the database from within any of these applications, the location is updated for the other applications, too.
          • Each application can use its own cache folder, but the same database keeps track of them.
          • Example Experience:
            • I clicked the [Browse] button and selected an area on my external media drive (a GRaid Mini) as: [H:\_App_Specific\Adobe].
            • In response, a prompt came up saying “Move the existing media cache database to the new location, or delete it” (buttons: [Move] [Delete] [Cancel]).
            • I clicked [Move]
            • Consequently, at my specified location the following folder appeared: [Media Cache]
      • Purging
        • [Clean]
          • This removes “orphan” cache files.
          • To remove conformed and indexed files from the cache and to remove their entries from the database, click [Clean]. This command only removes files associated with footage items for which the source file is no longer available.
            • Important: Before clicking the [Clean] button, make sure that any storage devices that contain your currently used source media are connected to your computer.
      • [Yes] Write XMP ID To Files On Import
        • Check this box to write ID information into XMP metadata fields.
        • e.g. hence [929_4491_01M01.XMP] sidecar-file (containing XMP metadata) got written into the CLIPR folder containing its associated media  file, here an XDCAM-EX essence file, [929_4491_01.MP4].
      • [Yes] Enable Clip And XMP Metadata Linking
        • Check this box to link clip metadata to XMP metadata, so that changing one changes the other.
    • Intermediate-Preview Render Files
      • None of the above measures affect where intermediate/preview files get rendered to…
        • I proved this (in Windows 7) by deliberately causing a render then searching on “Huge” (>16MB) files created “Today”
        • Rendered files location was: [C:\Users\…\Documents\Adobe\Premiere Pro\5.5\Adobe Premiere Pro Preview Files\Untitled3.PRV]
      • A brief Google revealed several articles where the (sadly now obsolete) solution was a setting under [Edit > Preferences > Scratch Disks]
      • Eventually discovered that in CS5.5 these settings now resided in: [Project > Project Settings > Scratch Disks]
        • Here, all settings (affecting Capture and  Preview) were set to [Same as Project]
        • As it happened, my project location was [C:\Users\David\Documents\Adobe\Premiere Pro\5.5].
      • Experiment with a different project location:
        • Save As: [H:\_Media\_Projects\MyProject\030 Projects\Adobe\Experiments]
          • The file [Expt A v001.prproj] appeared in it, but that was all.
        • Save a Copy: [H:\_Media\_Projects\MyProject\030 Projects\Adobe\Experiments]
          • The file [Expt A v001 Copy.prproj] appeared there, but that was all.
        • (The file open within Premiere remained as the original, not the copy)
        • After a while, the project file was joined by: [Adobe Premiere Pro Auto-Save]
      • Experiment to migrate the Preview-Render files:
        • In Windows Explorer, I created a folder named [Adobe Premiere Pro Preview Files]
        • Into that folder, from the similarly-named folder on the system volume, I dragged the existing folder [Untitled3.PRV]
      • Experiment: Premiere “knows” when render-files have gone, and prompts for their possible new location.
        • I deleted the project-specific render-files folder on the (external) project-drive.
        • I re-rendered, resulting in a fresh such folder, re-populated.
        • In Premiere, I Closed the project
        • Then in Windows I renamed the project-specific render-files folder, then back in Premiere I re-opened the project.
          • Premiere prompted with Browser titled: “Where is the File ‘Rendered – 5ac…..228.mpeg’?”
          • I selected the stated file in its newly-renamed folder, Premiere then found all the others there.
        • Result: the timeline render-region went green (i.e. “probably playable”).
      • Experiment: Once render-files are regarded as “gone”, they cannot be restored.
        • I closed the project, ensuring I did not save changes (such as the new location of the render files), then re-opened it.
          • As in the previous experiment (since I did not save changes), the render files folder could not be found.
          • Premiere thus prompted with Browser titled: “Where is the File ‘Rendered – 5ac…..228.mpeg’?”
          • This time however I simply used the [Skip All] option.
        • Closed the project.
        • Renamed the rendered-files folder back to its original name.
        • In Premiere, re-opened the project.
          • The timeline region remained red, indicating no render-files were associated.
      • Experiment: Tidy migration of a project to a new location.
        • Warning: in the case of doing a Copy (which is Windows’ default drag operation between different volumes), take care to ensure the Project (file) is not simply referencing the original preview files at the old location…
        • Drag both Project and its folders (including render-file folder) to a new location (e.g. on a new disk).
        • If the name and relative location of the folder are unchanged (as they ought to be, in good practice) then the files will be automatically detected and used, without even a user-prompt.
          • Just be sure though that the project isn’t simply referencing the render-files in their original location, if they are still present there.  Premiere is “lazy” in this respect.
      • Experiment: The relative location of the Rendered Files folder does matter (relative to the project file).
        • Tried putting the render files in a non-standard location.
          • The “Locate/Browse” prompt appeared
          • I located the file
          • All at first appeared well, and the corresponding section of the timeline went green
          • However, the “Composer” window simply displayed “Media Pending”.  That never went away.
      • Experiment:
        • When migrating, also need to move (or copy):
          • The Media Cache directories
            • Actually I’m not so sure about this. I tried exiting Premiere, renaming these directories and opening Premiere.  It created and repopulated the same directories in their original location, which in my case was an external drive.
          • The Source Media files
      • I suggest marking each external drive with the drive letter that the user assigns to it, say [Z:\].  Then, whenever that drive is plugged in, it will always be seen as [Z:\].  This way, the NLE can keep track of where the Assets are located, starting with the drive letter.
      • If one is migrating Projects between computers, this exact process will need repeating in the OS of each computer.
      • Note: when doing the migration, ALL Assets, Scratch Disks, and the Project file MUST be included on that external drive.
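Incidentally, the “search for huge files created today” trick used in the render-files experiment above is easy to script.  A minimal sketch, with the 16 MB threshold and one-day window taken from that experiment:

```python
import os
import time

def huge_recent_files(root, min_bytes=16 * 2**20, max_age_s=86400):
    """Yield files under `root` larger than `min_bytes` (default 16 MB)
    and modified within the last `max_age_s` seconds (default one day)."""
    now = time.time()
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if st.st_size > min_bytes and now - st.st_mtime < max_age_s:
                yield path

# e.g. hunt for stray render files on the system drive:
# for p in huge_recent_files(r"C:\Users"):
#     print(p)
```

Running this immediately after forcing a render is a quick way to confirm exactly where an NLE is putting its preview files.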

Work Procedure for Migrate-ability:

  • By associating cache and XMP files with the media (or its essence), Adobe projects are migratable.  However, adding such files into the BPAV/CLIPR folder structure is considered by some applications to be an adulteration of that structure, requiring their deletion.  Such deletion on an as-needed basis is not too onerous – given it is easy to do, and in any case this situation should rarely arise in practice.
  • When using different disks, remember to re-define (in Preferences) the location of cache files etc.
    • One work-around would be to re-set the cache location before opening any individual project.
      • That might be hard to remember to do when opening a project from within the NLE; easier to remember when double-clicking a project file in Windows Explorer.
    • I’m not 100% sure what to do about these…
  • As noted earlier:
    • When doing the migration, ALL Assets (Sources), Scratch Disks (Renders), and the Project file MUST be included on that external drive.
      • I note that this says nothing about Cache Files etc. …

External Storage Devices and their Bandwidths

Saturday, September 1st, 2012
  • Summarised info from a web source:
    • Average transfer rates in MB/s for different interfaces:
      • USB2:    20 – 25, depending on other USB devices sharing the same bandwidth
        • archiving for storage only
      • FW400:  30 – 35
        • archiving for storage, and light editing to/from (just very slow)
      • FW800:  50 – 60
        • archiving for storage, and regular editing to/from (fairly fast)
      • USB3:    65 – 80, depending on other USB devices sharing the same bandwidth
        • (no experience)
      • eSATA & SATA: 100 – 140
        • archiving for storage, and regular editing to/from
    • RAID Speed-gains over a single disk:
      • RAID0:    0.9 x N disks over a single disk
      • RAID3/5: 0.8 x (N-1) disks over a single disk for read, 0.6 x for write.
        • ICH10R (Intel chipset RAID) figures are a bit lower than those for hardware controllers.
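Combining the two tables above (a sketch of my own, using the article’s multipliers as given):

```python
def raid0_read(single_mb_s, n_disks):
    """Estimated RAID0 read speed: 0.9 x N disks."""
    return 0.9 * n_disks * single_mb_s

def raid35_read(single_mb_s, n_disks):
    """Estimated RAID3/5 read speed: 0.8 x (N-1) disks
    (one disk's worth of bandwidth lost to parity)."""
    return 0.8 * (n_disks - 1) * single_mb_s

# e.g. four disks, each managing 100 MB/s on its own:
print(round(raid0_read(100, 4)))   # 360
print(round(raid35_read(100, 4)))  # 240
```

Either figure would saturate eSATA, never mind USB2 – so with any multi-disk array the interface, not the disks, becomes the bottleneck.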

Computer Kit-Change Time?

Thursday, June 21st, 2012

I currently use Mac equipment, but most of what I do is Windows-based.  Macs can run Windows under Boot Camp, but there are some shortcomings in practice, the main ones being poor Boot Camp support for FireWire and ExpressCard:

  • On a Mac Pro bought in 2008:
    • FW800 port works OK with an external FW800 hard disk unit, but is unable to drive specialist audio/video equipment.
  • On a MacBook bought at the beginning of 2010:
    • The FW port is unusable, even for an external hard disk unit.  If I try to use it, it works initially then (e.g. after a GB or two) the FW driver crashes and remains offline.
    • The ExpressCard port does not function.
      • Interestingly, placing a Sony SxS video-recording card in the ExpressCard slot causes the operating system (Windows) to search for a matching driver.  However the card never appears in Windows Explorer. Frustratingly “almost there but not quite”…

So Boot Camp is really limited as regards Windows-based video editing!

As an alternative to Boot Camp, I tried running Windows as a virtual machine under the Mac OS application Parallels.  Rendering is surprisingly efficient under this regime, at almost 100% of Boot Camp speed, but I found that:

  • FireWire is not supported (at least not in the version I tried)
  • Crashes were not too frequent, but they were more frequent than under Boot Camp.

So maybe I should try it the other way round!  It is possible to run Mac OS on a Windows PC via an “umbrella scheme” called Hackintosh, whereby various softwares (not called Hackintosh) make the PC look sufficiently like a Mac to allow Mac OS to be installed and booted.

So what kind of PC?

  • Ideally I’d like a “luggable”, say with 24 inch screen and 8 cores.
  • But it can be a fraught business choosing equipment that is compatible with the major NLEs etc.
  • So I took a look at a renowned expert-seller of such equipment, namely DVC.  They offer the 17″ HP EliteBook 8760w with a Quadro 3000 graphics card (suitable for Avid & Adobe Premiere Pro):

Examining the potential of that laptop:

  • CPU:  It is an i7 with 4 cores, 2.3GHz with turbo up to 3.4 GHz
  • GPU: The Quadro 3000, which has 240 pipelines, 2GB memory, and consumes 75W.
  • It can run Hackintosh > Mac OS
    •  Hackintosh: How-To:
    • Google: [HP 8760w hackintosh]
        • Hp Elitebook 8740w with Mac OSX Lion 10.7.1 installed
        • Very smooth performance, no display glitches
        • With Lion, unlike Snow Leopard, the USB ports work.
        • Also the FireWire, Webcam, BlueTooth work.
          • {Though from experience I’d want to test that FireWire}
        • However  the following do not work: Track-pad, Fingerprint-reader, Card reader, WiFi.
          • WiFi is partially fixable by using a USB adaptor, but its bandwidth would then be constrained (?)

So that laptop is a definite contender…

Googling further on that model, it becomes apparent that it is available in a variety of customizations.

If I do go for that model, I shall most likely purchase it from DVC, even if I can find it cheaper elsewhere.  I’d rather not take the risk of some subtle error and want to help keep them in business for the future!

HDV Tape Capture on Old Laptop: Trim & Tweak

Thursday, May 31st, 2012

I have an old single-core Athlon laptop running XP and AVG AntiVirus.   I use it mainly for capturing from tape via a similarly-old HDV camcorder.  The tape contains MPEG-2 in Transport-Stream (TS) format, as recorded by the camera.  Capture takes place in real time from the tape, via FireWire (FW).  The capture process must not be interrupted – otherwise packets will be lost.  Losing packets is a nuisance rather than a show-stopper (presumably a benefit of the TS format): the only consequence is lost frames (I am unsure whether that affects overall duration or whether blank frames are substituted for lost ones).

To minimise the chance of lost packets, I eliminate as many potential sources of delay and interference as I can think of.  The machine is disconnected from the network (and WiFi is disabled).  As the storage target, I attach a GRaid two-disk storage device, necessarily via USB2 (the machine only has one FW port, and that is used for the camera).  Additionally I exit/quit/suspend a number of processes: firstly all relevant icons in the System Tray, secondly via application-specific Control Panels:

  • AVG AntiVirus: Disable elements via the AVG User-Interface:
    • Shield
    • Identity Protection
  • Java
    • QuickStarter: Disable via Java applet in Control Panel
  • HDV Tape-Capture App (HDVSplit)
    • Change its priority from the default of “High” to “RealTime”.
      • {Uncertain if this is advisable, but the tape in reality is definitely real-time!  Anyhow, overall it worked OK}
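As a sanity check on why this setup copes at all (the figures are nominal assumptions: HDV tape delivers a constant ~25 Mbps transport stream, and USB2 storage manages perhaps ~20 MB/s):

```python
def headroom(stream_mbit_s, storage_mbyte_s):
    """How many times faster the storage is than the incoming stream."""
    stream_mbyte_s = stream_mbit_s / 8  # bits -> bytes
    return storage_mbyte_s / stream_mbyte_s

print(round(headroom(25, 20), 1))  # 6.4 -> plenty of margin, if nothing interferes
```

So raw bandwidth is not the problem; the risk is latency spikes from background processes, which is exactly what the measures above are meant to suppress.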


Mobile Video Editing Hardware: Thoughts, Ideas & Dreams (continued)

Friday, January 13th, 2012

Following-on from my earlier post, Mobile Video Editing Hardware: Thoughts, Ideas & Dreams, where I considered an eventual migration from my laptop to a luggable PC, my thoughts veered (possibly having spotted cash-icebergs among them) towards an alternative solution:

  • Use the laptop for lightweight editing & compositing.
  • Use the desktop as a number-crunching RAID-attached server.

The two could be linked by:

  • Remote access / remote sessions (some of which via smartphone)
  • DropBox, e.g. have an active folder where I can drop Adobe Premiere XML and have it processed remotely by Adobe apps installed there.

Some links:

    • (There’s no equivalent “_part_1” page.  I guess it’s just “Part 2” of that guy’s story).
    • DIY virtual machines: Rigging up at home, by Trevor Pott, 11th January 2012 14:33 GMT
    • Personal Virtual Machine (PVM) (in use) for about seven years with retail boxed version of Windows XP.
    • VM has been moved from virtualization platform to virtualization platform over the years … the most recent incarnation … inside Hyper-V.
    • …nothing beats Windows Server 2008 R2. It comes with a top-notch virtualisation platform (Hyper-V), and added RemoteFX support with Service Pack 1. You can still use the desktop operating system for all your HTPC needs, and a single Server 2008 R2 Standard license allows you to run both a host copy and a single virtual instance of Server 2008 R2.
    • In my case, the host instance does little more than play movies on the projector via VLC. The virtual instance of Server runs my Plex media server, and aggregates my many storage devices into a single share using DFS.
  • Shuttle Inc (Taiwan)

Mobile Video Editing Hardware: Thoughts, Ideas & Dreams

Tuesday, January 10th, 2012

Want a mobile “suitcase” editing system, something more (and more expandable) than a laptop but not too expensive.  Primarily to be used for Adobe CS5.5 for media enhancement / editing / compositing etc.

The nearest I found was NextDimension’s range, at around $7000 I think (but that is just guesswork – it could be way off – I would need to get a quote).   That would (if true) be around £4500 at current rates.  Plus import costs…  NextDimension call such machines “flextops” (maybe they coined the term? Google searches on it mostly come up with them).

Apart from the (mil/broadcast-lite but me-heavy) price, it might possibly be undesirably heavy to lug around much.   If so (just guessing, not assuming), it would make more sense to go for a modular quick-setup system.  So, starting to “think different” in this direction:

  • Standard tower, capable of taking new CUDA etc. graphics cards as they emerge, but with no need for more than say a couple of disks – with an SSD maybe I could even get away with just a single disk?  (For system and media together – inadvisable for traditional disks of course, but what about for SSDs?  I have much to learn about SSDs though.)
  • “Laptop-Lite” to talk to it.  With robust shuttered-stereoscopic HD monitor.
  • Gigabit network to NAS fast storage (SSD and/or RAID ?).

Maybe in that case it would be far more logical/affordable to use an existing laptop as a client working together with a luggable tower server, sufficiently light and robust for frequent dis/re -connection and travel.  And remote access of course (no heavy data to be exchanged, assume that’s already sync’d).  And some means to easily swap/sync applications and projects (data) between laptop and tower, giving the option to use just the (old) laptop on its own if needed.  All such options are handy for the travelling dude (working on train, social visits etc.) who also occasionally has to do heavy processing.  Then would just need a protective suitcase for the tower, plus another one for a decent monitor for grading etc.

I certainly won’t be spending anything just yet, but it’s good to have at least some kind of “radar”.


Laptop-Based Mobile Editing: GRaid Mini (Out-Shines “Passport” Drive)

Wednesday, November 9th, 2011

Video-editing on-the-move (e.g. on a train) using a MacBook Pro (laptop) with Sony Vegas 9 (64-bit) as NLE (under Boot Camp / Windows 7), my practical experience was that a GRaid Mini external drive was far, far better than a 5400 rpm Western Digital “Passport” drive.   Consistent with the dual use of the MacBook, I partitioned the drive 50/50 between NTFS (Windows) and HFS+ (Mac OS).  Due to Boot Camp limitations (explained below), up till now I have only ever used it “tethered” to its own mains power supply.  But now I see it can also be used mobile, powered from the MacBook – something that up till now I could only achieve under Mac OS, not under Windows.

When using Boot Camp / Windows on the MacBook, I initially tried the shirt-pocket-sized Passport drive because it was small, light and powered from the laptop’s USB port.  While its data throughput wasn’t too bad, at least for single-channel HD editing (especially when only 1280×720), when it came to cuts from one video clip (hence, in my case, video file) to another, there was a frustrating delay every time.

I also have a GRaid Mini drive, but it wasn’t obvious at first how to use it mobile when using Windows (on a MacBook).  That drive consists of two 7200 rpm drives in RAID-0 configuration (striped, giving speed but no redundancy), and appears just like any single drive to the computer (no RAID management etc. needed).  The drive has not only a USB (2) port but also FireWire 800 (FW800) and eSATA ports.  While the latter two options work fine with the MacBook under Mac OS, they don’t work under Boot Camp / Windows.  I have tried many times and trawled many forums; no solution is apparent.  Under Mac OS the eSATA drive would ordinarily plug into an ExpressCard adaptor in the laptop’s ExpressCard slot, but under Boot Camp / Windows the ExpressCard slot doesn’t work, while the FW800 port under Boot Camp / Windows appears to work at first but eventually crashes as a device when it attempts to communicate data (e.g. when copying files).

When connected only by USB to the MacBook under BootCamp / Windows, the GRaid Mini is not powered from that port, hence up till now I have relied on a mains power supply to that drive.  However, I discovered if, after first connecting by USB, you subsequently connect also by the FW800 lead, then the drive takes power from the FW800 yet communicates data via the USB lead.   Hooray!  I can use it on-the-move then!

The order in which the leads are connected is vital.  If by mistake the FW lead is connected first, the drive senses that as the data-communications route, and subsequently fails in use.  It is vital that the USB connection is made first.  Likewise, on disconnecting the drive (following “ejection” by the computer’s file-system), disconnect the FW lead first.  The rule for the FW lead: connect last, disconnect first.

My experience of editing with the GRaid Mini is far more fluid, hence more pleasurable and efficient.  Totally worth it.  None of the per-cut delay effects of the 5400 rpm Passport drive.  And now it can be used on-the-move, even with Boot Camp / Windows on a MacBook.  I just wish Apple would fix that Boot Camp issue with the FireWire and ExpressCard ports!

Laptop-Based Video Editing: Mobile NAS

Thursday, October 27th, 2011

Suggestions for mobile (laptop) multicam etc. video editing

  • Ethernet now is 1Gbps
    • Compares well to FW800’s 0.8Gbps.
    • But is there any lag/latency?  (significantly beyond USB and FW)
  • An advantage of NAS: it is operating-system-agnostic
    • Hence one can conduct a (greater) production project with all material in one folder structure on one device, and Mac and Windows apps can each access each other’s files with full read/write access, including the ability to add “sidecar” files, e.g. for audio waveforms or in-video motion analysis results (as used by steady/stabilize effects).
  • Can connect a NAS to a laptop directly via an Ethernet crossover cable.  No need for a router.
    • Google: [ethernet crossover cable].
    • It’s a very standard item, e.g. available at PC World.
  • Does a small mobile NAS exist?  Preferably FW-powered from the laptop?
    • Google:
      • [small nas raid]
      • [mobile nas]
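As a sanity check on the bandwidth comparison above, here is a rough back-of-envelope sketch (my own figures for the link speeds; real-world throughput will be lower due to protocol and drive overheads, so treat these as best-case upper bounds):

```python
# Back-of-envelope comparison of theoretical link speeds (illustrative
# nominal figures; real-world throughput is lower).

LINKS_GBPS = {
    "USB 2.0": 0.48,
    "FireWire 800": 0.8,
    "Gigabit Ethernet": 1.0,
}

def transfer_minutes(file_gb, link_gbps):
    """Best-case minutes to move file_gb gigabytes over the given link."""
    return file_gb * 8 / link_gbps / 60

for name, gbps in LINKS_GBPS.items():
    print(f"{name}: 12 GB in at best {transfer_minutes(12, gbps):.1f} min")
```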


Mobile Editing Blues: FW800 Unusable on MacBook via BootCamp

Thursday, October 27th, 2011

This is a problem I encountered some time ago, when I was running Boot Camp v3.1 on my MacBook Pro.  Since then I have upgraded to v3.2.  I know there’s a v3.3 around, but before upgrading I thought it worthwhile to see whether v3.2 had fixed that problem (especially since I couldn’t rule out the possibility of v3.3 reintroducing it).  Only one thing to do: stop prevaricating and test.

  • Copy file from GRaid Mini (GRm) to Desktop:
    • 2GB: fine
    • 12GB: appears OK initially but then fails (transfer rate drops to zero, then the GRm device “no longer exists”, at least until reboot)
  • Reverse direction: 2GB fails (in the same way) almost immediately.

OK not good thus far…

Next I tried an alternative approach: run W7 as a Virtual Machine on Mac OS via Parallels.  I have Parallels v6.  A forum search revealed that there is no FW support in either v6 or v7, though the developers seem interested in knowing why people want it.

  • 2GB GRm to W7 Desktop: ok
  • The reverse: ok.

Had to stop there due to other work – and a very full W7 disk.

The next workaround to consider is attaching a NAS.  Ethernet bandwidths can be 1Gbps, hence more than FW800’s 0.8 Gbps, though I wonder if there could be any issues of lag / latency in this approach.  I’ll do some research and put up another post about this idea.

Cinematography Apps for iPhone

Saturday, September 24th, 2011

Glide-shots: Steady-Shot / Smooth-Deshake-Stabilize / SteadyCam

Sunday, July 25th, 2010

Which is best?   Depends on the camera, scene and shot dynamics I guess.  The same point is queried at the following thread:

Some general advice from a computer-post-savvy author: definitely use the camera’s SteadyShot:

Limitations of post

  • Stabilization necessitates motion estimation and image reconstruction, which are extremely CPU-heavy, hence really slow to execute.
  • Most stabilization apps (in post) can’t currently cope with motion-blurred edges or parallax effects (though both should be possible in principle, by deconvolution and 3D modelling both informed from multiple frames).
  • For rolling-shutter-ed footage (e.g. CMOS sensors as in Sony Exmor as in Sony XDCAM-EX e.g. EX1 & EX3), there exist options to reduce the effect (don’t expect perfection, but may suffice):

My experiences:

  • Stabilizing Tools:
    • Gunnar Thalin’s Deshaker works really well.  And it is multi-threaded, which really speeds up the process.  The author says it is intended more for handheld pans etc. than fast-shaking shots from vehicles etc. (but has nevertheless seen good results in such situations).
      • The author says [] to try “to stabilize only on the most distant parts in the frames, since the moving inwards-effect is less there”.  And: “you should probably increase the value for [discard motion of blocks that move > X pixels in wrong direction]. That’s to allow the blocks to move ‘freely’ a little, since Deshaker can’t handle the ‘moving inwards’-effect.”
      • Possibly equally applicable to other smooth/stabilize/deshake tools ?
    • Boris’s Optical/Motion Stabilizer (in Boris Red 4.3.3 on XP) is only single-threaded and I find it slower, clunkier and less intuitive than Deshaker.  It has a Smooth mode, which works like the other tools here, as well as a Stabilize mode (which tries to keep the frame static, so no good for moving shots).  The other tools can be configured to do the same thing.
    • Mercalli in Sony Vegas has no mode for 720p50 but otherwise is pretty good and very intuitive and configurable.
    • FCP’s SmoothCam Effect worked best for a challenging clip: wobbly hand-held camera tracking close past an object (a Formula 1 car), hence a huge degree of moving-inwards effect.  The default settings worked straight away.  The result quality was way above that of the other tools.  On the other hand, sometimes it’s not the best (sorry, forgot the exact situation).
  • Cameras & Shots:
    • Historically, using a TRV33 DV HandyCam indoors (hence low-light hence long shutter time):
      • The TRV33 has a huge sensor margin (i.e. spare pixels).  When I shot big zooms to lecture-audience individuals (e.g. at question time), I had the camera’s steady-shot (digital, not mirror) enabled and used Gunnar Thalin’s Deshaker (VirtualDub plugin) as well.  The result was astoundingly steady.
      • The same arrangement worked OK with hand or shoulder mounted cam for walk-throughs past nearby objects (e.g. walls, people, furniture).
      • An attempt to do the same thing without steady-shot enabled on the camera resulted in seriously motion-blurred edges.
    • Now, using a Sony EX3:
      • With camera Steady-Shot set to Medium, hand-held pans and motion past nearby objects seem to acquire a positional instability, as if the camera feedback mechanism needs greater damping.  Maybe the camera’s internal mirror “suspension” has to be tighter (than the TRV33 digital equivalent) because it lacks the generous pixel margin of the TRV33?  Or maybe something to do with the mirror’s inertia?  Or (real-time-constrained) processing power?
        • Experimentation is needed with the camera’s other SteadyShot modes (High, Low).
        • In the absence of more generous sensor pixel margins, I wish it could be loosened-up e.g. to allow black borders (to crop in post) so as to permit smoother rides overall.

PC Windows <--> Mac OS X RoundTrip (Round-Trip)

Monday, July 19th, 2010


  • In Windows I export from Sony Vegas to AVI (CineForm).  In OS X I read the file into FCP and apply the SmoothCam effect, then export to ProRes.  Back in Windows (Sony Vegas), I replace the original file with the smoothed one.  The levels/gamma are wrong.

Solution (Search):

  • Sony Vegas forum
    • Use DNxHD
      • A couple of tips re DNxHD: 709 color level assumes 16-235, and RGB assumes 0-255.
    • Force it back again:
      • But this presumably implies getting re-quantized twice (the roundtrip issue and the forcing), which for 8-bit footage I imagine could reduce the quality (banding).
  • Uncertainties
    • Where and how does this gamma get applied?  In FCP I didn’t (knowingly) alter the levels (e.g. adjust until it looked right), I just applied the SmoothCam filter.  So I guess it would look wrong on the (pre-Snow-Leopard) Mac, but I wouldn’t care.  Wouldn’t FCP then export back whatever it got, but smoothed?  This one is really confusing.  Experiments needed (when I get time…) I guess.
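For what it’s worth, here is a toy sketch (my own illustration, not anyone’s actual pipeline) of why an 8-bit full-range <-> studio-range round trip re-quantizes and can cost codes, one mechanism behind banding:

```python
# Toy illustration of the re-quantization cost: map 8-bit full-range
# luma (0-255) into studio range (16-235) and back.  256 codes get
# squeezed into 220, so some values must collide and are altered by
# the round trip.

def full_to_studio(v):
    """Map 8-bit full-range luma (0-255) into studio range (16-235)."""
    return round(v * 219 / 255 + 16)

def studio_to_full(v):
    """Map 8-bit studio-range luma (16-235) back to full range (0-255)."""
    return round((v - 16) * 255 / 219)

survivors = sum(1 for v in range(256)
                if studio_to_full(full_to_studio(v)) == v)
print(f"{256 - survivors} of 256 full-range codes are altered by the round trip")
```

Since at most 220 distinct studio-range codes exist, at least 36 of the 256 full-range codes cannot survive the trip, before any gamma handling is even considered.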

FCP Project Folder Structures: The (Non?) Fragmentation Issue

Saturday, May 15th, 2010

Reading the book “Final Cut Pro Workflows” by Osder & Carman (2008).  Page 284 relays advice that it is best to put Project Files [.fcp] on a separate drive from the Media Drive (e.g. Media Drive = XSAN), due to:

  • Safety – not all on one drive
  • Avoid fragmenting the media drive: project files, cache and (to a lesser extent) render files are written often (transient files?)

I’m not immediately convinced by these arguments.

How to view degree of fragmentation on an HFS volume:

  • []
    • Command-line app to report a variety of storage-volume statistics, including fragmentation.
    • After download, one can check the sha1 checksum, but this is of the executable, not the download itself ([.dmg] file).  The ‘sha1’ command is built in to Mac OS, as: [/usr/bin/openssl sha1].  Note the last character of ‘openssl’ is a small ‘L’, not a ‘1’.
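As an alternative sketch (not from the original notes), the same check can be done without openssl, using Python’s standard hashlib, which also computes SHA-1:

```python
# Chunked SHA-1 of a file using Python's standard hashlib (so a large
# [.dmg] need not fit in memory); the hex digest matches what
# [/usr/bin/openssl sha1] prints for the same file.
import hashlib

def sha1_of_file(path, chunk_size=1 << 20):
    """Return the hex SHA-1 digest of the file at 'path', read in 1 MB chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()
```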

iMovie Preferences & Directory Structure

Sunday, May 2nd, 2010
  • General
    • Show advanced tools [No] -> Yes
  • Video
    • Import HD video as: [Large 960×540] -> Full – Original Size

No way I can see to define the iMovie work-area folder, where media gets imported to for example.  Maybe there’s no choice over this, which would be a shame (don’t want movies filling up my system drive).  In my case the iMovie work-area was:

  • /Users/davidesp/Movies
    • iMovie Events.localized/
      • New Event/
        • iMovie Data
        • iMovie Cache/
        • iMovie Thumbnails/
        • <a source media file>
    • iMovie Projects/
      • My First Project.rcproject

2D to 3D Movie Conversion

Thursday, April 22nd, 2010

Great article by Studio Daily on the 2D⇒3D conversion company In-Three.  From this article I learnt that:

  • Some new movies, not just legacy ones, are being converted from 2D to 3D (stereo).  This step is being planned as part of production.  Don’t know why they can’t just shoot it in stereo (cost? maturity? conservatism?) but that’s how it is.
  • The method: a tech & manual rotoscoping pipeline (production line) where images are masked to create layers and artistic judgement is applied to the appearances of individual objects.  As one would imagine, no simple “magic solution”.  However beyond those basics they have their own patented 2D⇒3D inference algorithms operating on individual objects even at sub-pixel level.
  • Not quick or cheap: “for a 100-minute or 120-minute 2D-to-3D conversion, you would need about 300 to 400 artists phasing in and out of production over about four to six months.”  Clash of the Titans was so-processed in under half that time, possibly explaining some negative press (mentioned in the article) regarding the quality of its 3D.

The interviewee in the article was from In-Three.  Their website explains:

  • Dimensionalization is a method developed by In-Three of converting 2D content to stereoscopic 3D content.
  • There are various approaches to creating 3D content: capturing 3D using dual camera rigs, rendering 3D using dual “virtual” camera rigs within a computer graphics environment, and creating 3D by converting 2D content with processes such as Dimensionalization.
  • Dimensionalization is trademarked because it describes a patented process which gives unique depth, shape and perspective to each individual object at a pixel or even sub-pixel level. Throughout our process, there are a multitude of “special and unique techniques” our experienced stereo team has and continues to develop, so that you can be confident that we bring the tools and the skill to any conversion project.
  • The Dimensionalization process is covered by a number of U.S. patents. These patents make In-Three a leader in the development of intellectual property surrounding the conversion of two-dimensional films to stereoscopic experiences.

Final Cut – Online & Virtual Archive via “Quantum”

Thursday, April 22nd, 2010

The following has a nice explanation and diagram of the arrangement, showing Final Cut Server being used to interface to both online and archive material.

Disk Space Usage / Inventory

Wednesday, April 21st, 2010

For Mac OS:

  • Disk Inventory X

For Windows:

  • WinDirStat
  • FolderSize

They are both pretty similar, in each case displaying filespace usage via a tree map that looks like a patchwork of multicoloured PVC, each colour representing a different type of file (audio, video, application, document etc.).  Their advantage over traditional browser trees is that you can see all the largest files and folders simultaneously (as a plan-view).  Tree maps (treemaps) are formed by subdividing in alternate dimensions (horizontal/vertical), each time in proportion to the relative size of the item, be it folder or file.  A variation on this, employed by the above tools, is the cushion treemap [], where shading reveals the directory structure.  A further variation is the squarified treemap [], where subdivision and grouping attempt (with no guarantee of success) to make the rectangles as square as possible.
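To make the subdivision idea concrete, here is a minimal sketch of the basic “slice in proportion to size” step that treemaps are built from (my own toy code; a real treemap recurses into sub-folders, alternating the axis each level, and the tools above add cushion shading / squarification on top):

```python
# One level of treemap layout: tile a rectangle with strips whose
# widths (or heights) are proportional to the item sizes.

def slice_rect(sizes, x, y, w, h, horizontal=True):
    """Tile the rectangle (x, y, w, h) with one strip per size."""
    total = sum(sizes)
    rects, offset = [], 0.0
    for s in sizes:
        frac = s / total
        if horizontal:                      # slice left-to-right
            rects.append((x + offset, y, w * frac, h))
            offset += w * frac
        else:                               # slice top-to-bottom
            rects.append((x, y + offset, w, h * frac))
            offset += h * frac
    return rects

# Three files of 600, 300 and 100 units tiling a 100x50 window:
rects = slice_rect([600, 300, 100], 0, 0, 100, 50)
```

Recursing into each strip with `horizontal` flipped gives the alternating-dimension subdivision described above.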

MacBook Pro gets hot under BootCamp Windows

Saturday, March 27th, 2010


  • MacBook Pro, when in Boot Camp Windows 7, gets uncomfortably hot (in its heatsink areas, the bar above keyboard and also the chassis underside).  In comparison, it runs pretty cool (temperature) under Mac OS.


  • No fan control when in Boot Camp Windows mode.

Popular Solution (does not imply any recommendation):

Some good links (as of 2010-03-27):


  • Just copied the contents of the zip file to  [C:\Program Files (x86)\Lubbo’s Fan Control] and ran it from there.  It has files [inpout32.dll] and [inpoutx64.dll].
  • Ran it but it gave error messages:
    • (The) Following process(es) is/are using SMC:
      • kbdmgr
    • It’s not an error, but Lubbo’s Fan Control cannot share apple SMC access.
    • Do you want to kill it/them?
    • NO = the system may freeze.  Try only if you are running BootCamp 3.1
    • YES = the incompatible process(es) will be killed and the program will start.
      • But I have read elsewhere that this means function keys won’t then work (for that session).
      • Didn’t work – it said “It was not possible to load IO driver.  Retry?” and “(May be better to press CANCEL and reopen the program)”
    • CANCEL = this program will exit
  • Someone else had the same problem but found a solution that appeared to work for them:
  • So it seems that to get Lubbo’s utility working I have to:
    • Kill a system process
    • Install a version of .NET that is not yet officially supported.
      • I’ll write a separate blog post about .NET
  • Life on the edge, huh?

DCT (Mpeg/Jpeg) Gibbs Noise / Mosquito Noise

Monday, February 8th, 2010

The Gibbs Effect is an MPEG compression artefact. It appears as a blurring of the outline of sharp objects, with inappropriately-coloured pixels appearing around the outline of the object. The commonest place the Gibbs Effect is seen is during end credits, where insufficient bits have been allocated to compressing the data. Another name for this artefact is mosquito noise.[]
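The effect can be demonstrated numerically.  The sketch below (my own toy illustration, not any codec’s actual code) builds a hard edge, throws away the upper DCT coefficients much as coarse quantization effectively does, and measures the resulting ringing:

```python
# Tiny numerical illustration of the Gibbs effect: low-pass a step
# edge in the DCT domain and the reconstruction overshoots ("rings")
# around the edge.
import math

def dct2(x):
    """Orthonormal DCT-II of a list of samples."""
    N = len(x)
    return [(math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
            * sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N)
                  for n in range(N))
            for k in range(N)]

def idct2(X):
    """Inverse of the orthonormal DCT-II above."""
    N = len(X)
    return [sum((math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
                * X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                for k in range(N))
            for n in range(N)]

edge = [0.0] * 16 + [1.0] * 16          # a hard edge, as a 1-D slice
coeffs = dct2(edge)
kept = coeffs[:8] + [0.0] * 24          # discard the top 3/4 of the spectrum
ringing = idct2(kept)
overshoot = max(ringing) - 1.0          # ripple above the edge's true level
print(f"overshoot above the step: {overshoot:.3f}")
```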

Mobile eSATA (via ExpressCard) for MacBook Pro

Sunday, February 7th, 2010

Sonnet Fusion F2.  Up to 1GB (when configured as Striped, i.e. RAID-0).  Sustained Read/Write of 126MBps = 1008Mbps.


It connects via two eSATA cables to an ExpressCard adaptor, and also via a FireWire connector purely for power (no data).  The intention is that the FW bandwidth is still free for use by other devices, e.g. “AJA’s Io external capture and effects box – which requires all of the FireWire bandwidth to itself”. []

  • Note: for Sony EX1 and EX3 users, the Fusion F2 uses the ExpressCard/34 slot on the MacBook Pro, meaning one would need to transfer SxS data to either a FireWire or USB drive and then across to the Fusion F2.

Thinks: It works as Software RAID for the Mac.  Is there any practical way to also use it from Windows?

Avid (MC4) Mix & Match (of formats on timeline, no need to render)

Saturday, January 2nd, 2010
  • The mix & match feature of the Avid (e.g. MC4) assumes bringing in footage via corresponding projects.  After that, you can instantly access that footage from any other project (or project type), and play back in real-time.
    • Your sequence setting is what you tell it: 1080i, 720p, 525i… what-have-you.  And whatever clip you add that ISN’T that format gets scaled to that format, using a filter called a MOTION ADAPTER.  This adds interpolation to match the sequence settings, and is added automatically when you add new footage that doesn’t match.  And there are all sorts of interpolation modes… these are all user-selectable.  AND you can change your sequence settings to match something else later.
    • If you want the interpolation to be better, you can “promote” the motion adapter to a full-blown TIME WARP (which has been there for many years) and the footage will benefit more.
    • Works in software-only (no Mojo required) and takes advantage of multi-core (e.g. 8 core)
    • Avid’s ‘open timeline’ implementation is much better than FCP’s.  Avid MC automatically adds a plugin that is designed to do this upscale in very smart ways.  It isn’t just scaling it and then repeating a frame.
    • The editor does need to have certain “switches” turned on to see the highest quality output, such as: Full Quality 10-bit output, HQ RT Scaling Decoder, Advanced Polyphase image interpolation.
  • For example: “you will have to import NTSC clips in an NTSC project and 720p60 clips in a 720p60 project. If you try to import 720p60 files into a 30i project, you will be downconverting upon import, which is not as nice, and will not be able to handle certain metadata correctly”

Cloud Computing and Amazon’s EC2 Service

Saturday, January 2nd, 2010

Amazon lets a system admin type person rent time/space/bandwidth on Amazon’s Cloud Computing network, “Elastic Compute Cloud”, EC2 (which also sounds like “Easy-To”).  It costs e.g. about 10 cents per hour (depending on selected level of service).

  • Maybe useful for CGI generation?

Setup is fiddly, but this tutorial explains how to do it from command-line, including some pragmatic tips:

Here’s an easier way, via web-based GUI:

Generic intro to Cloud Computing:

Auto audio leveller – beyond simple normalization/compression

Monday, June 22nd, 2009


Discovered at:

The Levelator is a freeware application (for various OS)  that automatically evens out multiple audio sources:

“It’s software that runs on Windows, OS X (universal binary), or Linux (Ubuntu) that adjusts the audio levels within your podcast or other audio file for variations from one speaker to the next, for example. It’s not a compressor, normalizer or limiter although it contains all three. It’s much more than those tools, and it’s much simpler to use. The UI is dirt-simple: Drag-and-drop any WAV or AIFF file onto The Leveler’s application window, and a few moments later you’ll find a new version which just sounds better.”

“So how do we calculate levels and process audio for The Levelator?  We first isolate segments that are silent and remove them from the calculations. We define silence as audio segments which have no subsegments of 50 ms or more where the RMS is greater than -44.0dB. We then compute the RMS value of the remaining segments and normalize them to our target RMS level of -18.0dB.

The above is actually a drastic simplification of The Levelator’s processing, which takes into account a number of subtleties when dealing with certain real-world sources. For example, the silence threshold of -44.0dB is not reasonable if the audio before normalization is already very quiet. The -44.0dB value is therefore used only after the overall RMS is first normalized to near that target. This requires an iterative calculation. The Levelator processes an entire audio file, not a continuous stream, so we have the advantage of infinite lookahead and the ability to make multiple passes over the data in large and small chunks.”
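The normalize-to-target-RMS step quoted above is easy to sketch.  The code below is my own simplified illustration (it ignores the silence-segmentation and iterative refinement the Levelator actually performs):

```python
# Simplified sketch of RMS normalization: measure the signal's RMS
# level in dBFS and apply the gain that brings it to -18 dBFS.
import math

def rms_db(samples):
    """RMS level of float samples (full scale = 1.0), in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def gain_to_target(samples, target_db=-18.0):
    """Linear gain that brings the signal's RMS to target_db."""
    return 10 ** ((target_db - rms_db(samples)) / 20)

quiet = [0.01 * math.sin(i / 5) for i in range(1000)]   # a very quiet tone
g = gain_to_target(quiet)
levelled = [s * g for s in quiet]
```

The real tool applies this per speech segment, which is what evens out one speaker against the next rather than just the whole file.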

Pluraleyes: Multicam auto-sync (based on audio)

Monday, June 22nd, 2009

Mac/FCP add-on to automatically synchronize multicam clips based on their audio content.

Discovered in article at:

The article says: “…PluralEyes … syncs up multi camera footage without use of timecode. It’s in beta and you can download and try it out yourself.  (In the article author’s tests) it worked on about two out of every three clip pairs. When it worked, it was perfect, effortlessly lining up and then converting clips into a multicam clip in the browser.”


System Requirements:

  • OS X 10.4.11 or later
  • Final Cut Pro 5.1.4 or later
  • PluralEyes™ analyzes the audio content, so all clips to be synced need to have an audio track.
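PluralEyes’ algorithm is proprietary, but the classic way to sync two clips by audio is cross-correlation: slide one waveform against the other and take the offset where they match best.  A naive O(n·m) sketch (my own, purely illustrative):

```python
# Find the sample offset between two audio tracks by brute-force
# cross-correlation.  Real tools work on downsampled envelopes or
# spectral features for speed; this is just the core idea.

def best_offset(a, b, max_lag):
    """Lag (in samples) by which b trails a, found by cross-correlation."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a[i] * b[i + lag]
                    for i in range(len(a))
                    if 0 <= i + lag < len(b))
        if score > best_score:
            best, best_score = lag, score
    return best

# Toy check: clip b is clip a delayed by 7 samples.
a = [0, 0, 1, 3, -2, 5, 1] + [0] * 11
b = [0] * 7 + a[:-7]
print(best_offset(a, b, 10))  # prints 7 -- the delay is recovered
```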

Backup & Archive to Multiple External Drives: “Retrospect Backup” tracks what’s where

Tuesday, June 9th, 2009

Once captured, the next problems are backup and archive.  Here is one man’s solution:

  • We capture to a 2TB drive and edit from it as well. Then once it starts to get full, we use Retrospect Backup software to move files from the 2TB drive to smaller removable drives. Retrospect keeps track of the drives and once they fill up, you can just add another one. The main reason we do it this way is that Retrospect keeps track of all the data so that it’s very easy to find and restore projects. You can view them by the date you saved them or you can simply do a search…even for individual files. Plus, because they are firewire 800 drives, the restore process is very quick (at least it is right now with standard def DV footage….HD footage will take up more space)
  • [
    • It also has other useful info e.g. the transfer rates experienced in practice.


EX-to-FCP Ingesting Tips

Tuesday, June 9th, 2009

Forum thread on how to ingest EX footage into FCP (via ShotPut and XDCAM Transfer).

It includes a cool screencast video tutorial by Matt Davis:

The point of doing it his way is that it makes things more foolproof than the more basic “just load it in” approach, and does so in a fashion that is semi-automated.

Apple Color is incompatible with QuickTime (allegedly)

Friday, June 5th, 2009

According to CineForm NEO HD FAQ [] as of 2009-05-05:

“Color does NOT support QuickTime codecs. Instead, Color only allows certain codecs that have been compiled into the code to be used.  Apple is aware of this limitation, but they haven’t been too quick to solve the problem.”

Whaaaaat!?!?!   If so then that’s astonishing(-ly uncool).

On the Mac, Cineform uses the QuickTime wrapper, hence:

“CineForm files currently do NOT work in Apple’s Color application.”

XDCAM EX usage in Final Cut – An experienced user’s explanation, confirmation and tips

Wednesday, June 3rd, 2009

“Final Cut 6 (with an update) will recognize XDCAM footage more or less in its native format. You import the footage using the Sony Transfer software and it merely puts a QuickTime “wrapper” around the XDCAM footage. It’s still Long GOP like HDV, but a better codec. You can render it into a discrete-frame codec as you say, but it is not entirely necessary. I sometimes do and often don’t. I do a final render out to a full-frame/intraframe file and then send it to Compressor to munch it into whatever final form I need it in.” []

HD (compressed & uncompressed) and Computer (Capture & RAID) Bandwidths

Sunday, May 31st, 2009

I like to have a feel for bandwidths – roughly how big these things are and why.  Especially when I start running out of storage space and having to purchase additional disk drives etc…   Also I like to get a handle on how adequate my RAID will be for capturing uncompressed HD.  Google-search & calculations:

  • In the case of HD-SDI (1080i50), bitrate is around 1.5 Gbps []
  • In our case we have 720p50, but that apparently has about the same uncompressed bandwidth as 1080i50, namely 1.5Gbps []
  • My system has a RocketRaid + ProAVIO RAID-5 arrangement.  I have not yet tested its uncompressed capture ability (I have a BlackMagic card), but if I’ve calculated it right, it looks like it should handle it easily.
    • RocketRaid themselves report tests [] involving an AJA capture card and a RAID-5 array (as is mine) giving read and write speeds of about 400MBps (= 3.2Gbps, since 1 Byte = 8 bits).
      • If I understand this right, my system has a capture bandwidth twice that of an uncompressed HD signal.  Plenty of headroom then.
  • ProRes’s 123 Mbps is not far off the “100Mbps” figure often mentioned in relation to the Convergent Design flash memory recorder, at which level the recording (compressed, in its case, in a variant of MPEG-2) is regarded as almost indistinguishable from uncompressed.
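Checking the arithmetic above (remembering B = Bytes, b = bits, 8 bits per Byte):

```python
# MBps -> Gbps conversion used in the calculations above
# (decimal units throughout).

def MBps_to_Gbps(mbps):
    """Convert megabytes/second to gigabits/second."""
    return mbps * 8 / 1000

raid_gbps = MBps_to_Gbps(400)     # RocketRaid's reported ~400 MBps
hd_sdi_gbps = 1.5                 # uncompressed 1080i50 / 720p50
print(f"RAID: {raid_gbps} Gbps = {raid_gbps / hd_sdi_gbps:.1f}x uncompressed HD")
```

The same conversion gives the Sonnet Fusion F2’s 126MBps = 1008Mbps figure from the earlier post.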

Sequence Renders: What & Where?

Sunday, May 31st, 2009

Suppose you added an effect to the timeline and as a result it shows a red “Unrendered” (or “Needs Rendering”) line above it.  You press Command-R (say) and rendering happens and now you can play it in real-time.  But what exactly is this rendering?  What format is it in and where does it go, i.e. what file(s) in what folder(s) ?

  • The rendered file appears in a project-specific subfolder of the Render Files folder of FCP’s Scratch area.  In my case I defined the latter to be on the RAID.
    • Location was: [RAID_ProAVIO/App-Specific/Final_Cut/FCP_Scratch/Render Files/Voda 2009 Estab 001/]
    • Example file name: “Sequence 1-WP1-00000001”
  • The coding format is ProRes 422 – Standard, not High Quality (which would be overkill).  This is good news!
    • VideoSpec displays its FourCC code as “apcn”
    • The rendered file’s bitrate, according to VideoSpec, averages 123Mbps, around four times that of the XDCAM EX high-quality mode (35Mbps).  Nevertheless it is a great reduction on the uncompressed bitrate (around 1.5 Gbps) and broadly comparable to Convergent Design’s 100Mbps, judged by some to be virtually indistinguishable from uncompressed quality.  So it bodes well.

Final Cut Effects for Video Denoising

Sunday, May 31st, 2009

Denoising: we’re talking video here (not audio), with twinkling speckles, e.g. due to low exposure or high gain, hence a poor signal-to-noise ratio.  There aren’t any video denoisers built in to FCP, but plugins exist, both free and commercial.  There is some discussion of these at the DVinfo forum [].  The best one I have found so far, at the cost of money and rendering intensity, is Neat Video, available for most of the major NLEs, including FCP.

  • The “Too Much Too Soon” (TMTS) filter package [] includes a denoiser.  And it is free.
    • Didn’t remove much noise for me, despite setting it to max denoising.
  • Joe’s Filters (filter package), commercial, includes a Smart Denoiser.
    • This worked reasonably well.  It removed most of the dynamic (twinkling) noise but left static pixel variations, presumably associated with pixel variation in the camera’s sensor, which as a result were more obvious.  The result was like looking at a clean moving image through dirt-patchy glass.
    • It was found to be best placed after the levels & colors adjustments, i.e. at the bottom of the filter chain.
  • Neat Video [], the best denoiser I have ever been able to find on Windows, is now available for FCP on Mac.
    • Tried the FCP version.  Totally outclasses the ones seen so far.  At the cost of money and (intensive) rendering time.  Well worth it though if you need the highest quality.
    • There is a free demo download, but it is crippled to only affect a central rectangle of the image.  So don’t judge it by what happens in the margins; rather use it as a basis for comparing the denoised central bit.
  • CHV – the Repair filter collection, including a denoiser, at []
    • (I have not yet tried this)

Final Cut – Find/Try the Basic Enhancement Effects

Sunday, May 31st, 2009

Basic effects from my point of view are color curves, color correction and denoising.  Indeed the test footage I had was in dire need of all these enhancements.  Here, I began by seeking how to do a simple levels adjustment, hoping to move on afterwards to S-Curves as in Sony Vegas.  However, even a simple levels adjustment turned out not to be as simple in FCP as I hoped…  Probably the best solution was FCP’s 3-Way Color Corrector.  I had high hopes for using Apple Color (the application, not a fruit-specific effect) but for some reason FCP’s “Send to Color” option was greyed out.

To begin with, here are some handy tips:

  • FCP: FCP > User Preferences: Undo Levels -> 99, Recent Clips -> 20 (max poss)
  • Playing the Unrendered: some kinds of effects cause a red line to appear above the affected part of the timeline, meaning essentially that the effect is so heavy on CPU that it can’t be played in real-time and must first be rendered (e.g. hit Command-R), after which the red line disappears and it plays smoothly.  For me, such renders typically take a minute or two – boring and frustrating when all you want to do is experiment with settings.  There is however an alternative: Alt-P for “force rendered playback”.  Not so smooth, but far less boring.  Reference: []

Now the actual Effects experiments:

  • Levels Adjustment – Attempt 1 (works but awkward): the in-built Levels effect.  This effect is not what I expected – it is a little weird and non-intuitive.  Others agree.
  • Levels Adjustment – Attempt 2 (easier): Joe’s Filters [] makes a (commercial) filter with 5-way controls (min & max input & output, plus gamma) as in Vegas.  The sliders are tiny and fiddly – but presumably that’s an Apple thing (?)
  • Levels Adjustment – Attempt 3 (failed so far): Apple Color.  That’s reportedly far more sophisticated in capability than FCP’s filters and allegedly [] includes S-curve levels adjustments equivalent to Color Curves as in Sony Vegas.  It is advised by some [] to use that instead.  Allegedly [] a good instructional DVD on it is Creative COW Master Series: Stop Staring and Start Grading with Apple Color by Walter Biscardi.
    • So I tried to use Color, as follows: [FCP: (Sequence) > (RtClk) > Send To > Color].  However, Color was greyed out (unavailable).  Why?  Color is installed OK and can be started up as an app in its own right.  Tried doing that and then “…Send To” again, but no difference – still greyed out.
  • Some other people [] had other difficulties in FCP<->Color exchange.  Though not exactly the same as mine, it illustrates the kind of fiddly fussiness that can become an issue.
  • Yet others [] suggest instead using the Low/Mid/High levels sliders of the 3-Way Color Corrector to achieve the same effect as an S-Curve.
    • There is a free tutorial for this at Creative Cow, explaining not only levels adjustment but also my standard “face cropping & vectorscope” method of flesh color correction.  There are plenty more FCP tutorials there of similar calibre.
    • I tried the 3-Way and yes, it is by far the easiest method, very simple and intuitive, though once again the (Apple-standard?) sliders are a bit too small and fiddly.
    • A further advantage of using the 3-Way is that it also exists in Apple Color and can even (in some sense) be migrated from FCP to Color, if that’s where you later prefer to work on the grading.
  • Still wish I had a proper interactive S (or Bezier) levels/colors curve effect though…
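For the record, here is roughly what an S-curve levels adjustment computes, sketched as a smoothstep-style mapping on normalized luma.  This is my own illustrative formula, not FCP’s or Color’s implementation:

```python
# An S-curve raises contrast while pinning the black, mid-grey and
# white points: values below mid-grey are pushed down, values above
# are lifted.

def s_curve(v, strength=1.0):
    """Smoothstep-based S-curve on v in [0, 1]; strength 0 = identity."""
    smooth = v * v * (3 - 2 * v)          # classic smoothstep
    return v + strength * (smooth - v)    # blend toward the curve

print(s_curve(0.25), s_curve(0.5), s_curve(0.75))
```

The Low/Mid/High sliders of the 3-Way approximate this piecewise; a Bezier curve control would let you shape it freely.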

Import Sony EX XDCAM 720p50 into Final Cut: On a Surer Footing

Sunday, May 31st, 2009

Following advice passed on from Sony (by Obi Lidobe/Ejukene), I installed the latest Mac versions of Sony’s XDCAM support software [from under Tools/Downloads], namely:

  • Final Cut Pro XDCAM Transfer v2.9.0
  • Clipbrowser v2.5
  • SxS Device Driver
  • XDCAM EX Log&Transfer Utility v1.0
  • This set of items presumably constitutes the “separate plug‑in from Sony … required to enable (the XDCAM EX) features”.

      Tried using FCP to see if the new XDCAM EX features are now available:

    • FCP: FCP >Easy Setup: The nearest obvious template was “XDCAM HD”.  But does that cover “XDCAM EX” or only their older optical disk based XDCAM ?
    • FCP: File > Import > Sony XDCAM Transfer
    • Transfer: required setup of cache folders etc.  Default was on personal area of Mac HD.  Chose instead to put it on my RAID: [/App-Specific/Sony XDCAM Transfer], with subfolders /Cache, /Import, /Export Scratch.  Can change these later on under [Sony XDCAM Transfer > Settings]
    • Transfer: File > Add Sources.  Can select multiple files.  Can access NTFS-captured files in original Sony (BPAV) folder structure.  Automatically queues job to build thumbnails (only does that, doesn’t convert the files to anything yet).  
    • Transfer: Thumbnails appear.  Can multi-select them.  
    • Do [Transfer: (Files) > (RtClk) > Import].  This generates equivalent QuickTime (“.mov”) files to the Import folder you specified earlier under Settings.  QUESTION: Does one have to define such settings individually for each project?  How best to organize their location?  The size of this equivalent file is almost identical to that of the original “.mp4” file (in the EX’s BPAV folder).  Presumably it is the same codec (data), just re-wrapped.  QUESTION: Would it be better to import them to ProRes (since this – unlike the XDCAM EX format – is a non-GoP format)?
    • Incidentally, the [Transfer: (File) > (RtClk) > Export Clips to Folder] option generates equivalent “.mxf” files, again broadly the same size, prompting for the destination folder.  QUESTION: Is this intended for foreign NLE’s such as Avid or Vegas rather than FCP?  Is it “export” in the sense of “from FCP to outside world”?
    • As a result of the [Transfer: …Import] operation, “.mov” files exist in the Import folder (as defined in Settings) and also they are listed in FCP’s Browser (top-left pane). 
    • FCP: Drag one of these files to Timeline.  Prompts: “Not the same format – change?”.  Say YES.  So I guess my doubts about the appropriateness of the “XDCAM HD” template were justified.  QUESTION: What format is it now then?
    • FCP: (Sequence) > (RtClk) > Properties: 50fps, 1280×720, Compressor = (XDCAM EX 720p50 (35 Mb/s VBR)).  QUESTION: Does that imply that when the Sequence is rendered (as in getting rid of the “Needs Rendering” red line), it is rendered to this same format?  QUESTION: To reduce generational losses (in this highly compressed format), would I be better off setting this to be ProRes, and if so then what format?  Presumably if the original clips had been imported to ProRes, I would have been automatically prompted for that Sequence setting when I dragged those imports onto the Timeline (from FCP Browser).
    • Also presumably the ProRes approach would benefit external (to the FCS system) workflows e.g. enhancement in VirtualDub (via the Windows read-only version of the ProRes codec, just a “Dec”oder).  In that scenario, the external Windows app would have to write to some other broadly equivalent format such as Cineform.  Is there a Cineform decoder for Mac?  If I had it, would the [FCP: File > Import] or some other way be able to import (convert to FCP-friendly format) that format?   Not just a case of re-wrapping but re-compression.  Would I have to use Compressor in principle – and is it capable of it in practice?
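    Incidentally, the near-identical sizes of the “.mov” and “.mp4” files noted above are consistent with a re-wrap rather than a re-encode.  A rough back-of-envelope check (assuming the nominal 35 Mb/s covers the video essence):

```latex
\underbrace{\frac{35 \times 10^{6}\ \text{bit/s}}{8\ \text{bit/byte}}}_{\approx 4.4\ \text{MB/s}}
\times 60\ \text{s/min}
\approx 262.5\ \text{MB/min}
\approx 15.75\ \text{GB/hour}
```

    Container overhead is tiny by comparison, so if one of the two files were materially larger or smaller than the other, that would suggest re-compression had occurred somewhere.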

    Import Sony EX XDCAM 720p50 into Final Cut: Websearch

    Sunday, May 31st, 2009

    • Googled for any well-known solution to the lack of 720p50 support.  An article from Nov 2007 at the Aulich & ADamski website [] said that the then-existing lack of 50p support in FCP had been addressed in updates (at that time) to Final Cut Pro, Motion, DVD Studio Pro, Color, Cinema Tools, Soundtrack Pro and QuickTime Pro.
    • At [] the Release Notes of Final Cut indicated that XDCAM EX support had been added in FCP version 6.0.2.  This support included XDCAM EX 720p50 VBR, as per my footage.
    • Notes also said: “Important: A separate plug‑in from Sony is required to enable these features”.  But no actual link etc. was stated…
    • Notes also said: “Once you ingest your XDCAM EX footage to QuickTime media files on your scratch disk, you can simply choose the XDCAM EX Easy Setup that corresponds to your footage and edit as you would with any other native format in Final Cut Pro”.  So at least now I know my footage should be ingested to QuickTime not MXF.
    • Notes also said, re “50p Support”: “Along with support for a number of recent 50p video formats, Final Cut Pro 6.0.2 includes support for 50 fps timecode in all timecode fields and project properties. A new 50 @ 25 timecode format has been added for deck support and EDL compatibility with 50 fps formats.  Note: Motion, Color, and Soundtrack Pro now support 50p footage as well.”
    • Notes also said, re “Updating Motion and Motion Templates”: “Final Cut Pro 6.0.2 master templates require Motion version 3.0.2 or later. By upgrading to Motion version 3.0.2 or later, you take advantage of important fixes and improvements made in the Motion application and templates.”
    • I checked the version of my installed Motion and it was indeed 3.0.2.
    • Setting up an FCP project (though not specifically XDCAM EX): article at []
    • Organizing FCP project disks/folders/files in a tidy fashion: article at []
    • How to render for a DVD []: “Make sure you sequence is rendered and then export a Quicktime using current settings with compression markers and do not make it self-contained (assuming you compressing on the same machine). In Compressor pick a DVD setting that works for you delivery.”

      Import Sony EX XDCAM 720p50 into Final Cut: Initial Stumbles

      Sunday, May 31st, 2009

      • Shot footage on an EX3 suitable for DVD and web.  Following advice of “gurus” such as Alister Chapman, shot it in 720p50 mode.  Having done so, wanted to get it into Final Cut for editing etc.
      • Initially, used ClipBrowser (v2.0) to ingest the footage.  Didn’t know whether I should do it the same way I did on Windows for Sony Vegas, that is by generating a “.mxf” file, or instead generate a “.mov” file.  Tried both.  These are containers, not codecs.  The “.mxf” file is Material Exchange Format while “.mov” is QuickTime.
      • Wanted to know more about the contents, e.g. the codecs used and their settings.  To get this, used VideoSpec (a video analyzer, broadly like GSpot on Windows).
      • MXF contents: FourCC “mpg2” (MPEG-2), Bitrate 35000 kbps,  fps 50, 1280×720, PAR 1:1, DAR: 16:9, Chroma subsampling format YUV420p.
      • MOV contents: FourCC “xdva” (XDCAM), Bitrate 34900 kbps, fps 50, 1280×720, PAR 1:1, DAR 16:9, Chroma YUV420p.
      • In FCP, tried to find a standard setting suitable for this, but nothing matched.  In particular there were 60p formats but no 50p formats – frustrating.  Instead I made a “best guess” at the most closely matching format and customized it.  I think I ended up with the “HDV 720p50” format but was concerned that HDV may have different standards (e.g. number and aspect ratio of pixels) to those of my EX XDCAM footage.

      Plan the next steps

      Sunday, April 26th, 2009

      This is now a usable system.

      However, for education and possible flexibility, I next intend to: identify benchmark tests for both Mac and Windows; run them as-is on standard disks, GRAID and the ProAVIO RAID; attempt to install MacDrive (via a workaround to dodge RAID driver compatibility issues); retest; and also try other cross-filesystem tools.  Also, for the ProAVIO NTFS partition, I want to identify a reliable filesystem synchronizer.  ABSynch comes to mind but I have not used it before.

      Success: Both Mac and Windows can now use the RAID!

      Sunday, April 26th, 2009

      The result was exactly as intended: an MBR partitioning system containing an HFS+ partition and an NTFS partition.  Mac OS X was able to read the contents of both partitions.  Windows (BootCamp XP) was able to read and write to the NTFS partition.

      Convert GPT to MBR (non-destructively via iPartition)

      Sunday, April 26th, 2009

      Now the question is: can I non-destructively convert the RAID’s partitioning system from GPT to MBR?  Non-destructively here means I don’t have to wipe the disk (through reformatting etc.) and recover the data from backup.  The answer is YES: the Mac-based iPartition app does exactly that.

      iPartition 3.1.1 (154) did it fine, taking around 6 or 7 hours.


      NTFS partition not visible to XP – because it’s GPT

      Saturday, April 25th, 2009

      OK so I have partitions for HFS+ and NTFS but still no extra disk (drive letter) shows up.  XP’s Disk Management tool does list the disk device but allows no operations on it – menu commands are greyed-out.  The tool displays the partitioning system as GPT, which may be the clue: XP (32-bit at least) cannot handle GPT-partitioned disks, only MBR ones.

      iPartition the RAID

      Saturday, April 25th, 2009

      Used iPartition (3.1.0, 153) to split the existing RAID into two volumes: the existing HFS+ volume plus an additional NTFS volume, the latter for use by the BootCamp-XP system.

      WinClone OK After Windows Pruned

      Saturday, April 25th, 2009

      WinClone worked fine once the BootCamp system disk size had been reduced from 250 GB to 30 GB and the disk de-fragged.

      Back-up the System Volumes

      Friday, April 24th, 2009

      Backed-up to my new RAID1 USB drive:

      • Backed-up the Mac volume to “Macintosh HD 2009-04-21.dmg”.  Presumably I did this via Disk Utility from a bootable CD.
      • Tried backing-up the XP volume using WinClone.  Normally a smooth process, but this time I ran into difficulties.


      How to Partition the RAID

      Wednesday, April 22nd, 2009

      The “separate partitions” workaround should be simple to try and will provide a basis for a baseline speed test against which to compare the other workarounds.  iPartition can be run not only from its own bootable CD but also from Mac OS. Only under the latter can it see the RAID. Before doing anything though, back-up the system disks, Mac and Win (BootCamp-XP).


      Wednesday, April 22nd, 2009

      Regardless of which product is to blame, what work-arounds are there?

      1. Split the RAID into separate partitions for Windows and for Mac filesystems.
      2. Alternative RAID card e.g. by Promise Technology.
      3. Convert the filesystem itself between Windows and Mac formats, by using iPartition (which I already have).


      Mac: HFS+ accessibility to BootCamp-XP

      Tuesday, April 21st, 2009

      OK so the RAID itself is now accessible to Windows (BootCamp-XP) but since it is formatted as HFS+ the filesystem does not mount as a Windows disk.  The reseller helped investigate this further.  At first, attempted use of MacDrive 7, a product that (as I understand it) lets Windows apps see HFS+ filesystems (e.g. disks).  However, after installation, Windows kept booting, then BSOD, then rebooting, etc.  Exited the loop by forcing a safe boot and then uninstalling MacDrive.  A query to the MacDrive suppliers confirmed it was not compatible with the RR driver.

      Mac: RocketRAID accessibility to BootCamp-XP

      Tuesday, April 21st, 2009

      Tues 2009-04-21 visited the reseller (PV) to get the RAID made accessible to Windows (BootCamp-XP) as well as Mac.  The RR card was flashed to disable its BIOS.  I think I was told that this mod was needed because the RAID config (BIOS) assumes a keyboard is present, but mine is a USB keyboard and the Mac boot order hasn’t enabled USB at that stage.  Also got a new driver (.dmg) file for the RR.  As a result, the RAID (ProAVIO) is now visible in XP Disk Manager.

      DV->AviSynth->VFAPI Problem solved: DV Codec & RGB24

      Monday, April 13th, 2009

      Had an AviSynth script which used to work OK on my old PC but not on my new one.

      The first problem was that AviSynth’s AVISource command failed to find a codec for the DV (AVI) file I was trying to read.  DirectShowSource worked OK but AVISource didn’t, implying that there was no working VFW codec for DV on the new PC.  Indeed, the new machine had no DV codec installed – because there isn’t one by default in XP.  When I installed MainConcept’s DV codec, the problem went away.

      The next issue was that when I put the AviSynth script through VFAPI, it produced an audio-only stream.  Indeed, in retrospect, when I had dragged the file to VFAPI the video options had been greyed-out.  The same script worked OK on the old PC.  GSpot revealed that it worked on the old PC because that PC included an “RGB Color Space Convertor” DLL, namely LMVRGBxf.DLL.  This turned out to be part of Nero, which was indeed installed on the old PC but not the new.  This raises the question of whether such a file should ideally be on a PC and, if so, what the choices are.  Regardless, taking note of the “colour space convertor” description, I guessed (correctly) that all I had to do was alter the end of the AviSynth script to use ConvertToRGB24 instead of ConvertToRGB32.

      With those two fixes, VFAPI processed both video and audio aspects of the DV file and consequently the resulting *_vfapi.avi file was now fully (video and audio) playable in both Media Player and Sony Vegas.
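
      With hindsight, the fixed script looks something like the following sketch (the filename and the intermediate filtering are placeholders, not my actual script):

```
# AviSynth script - sketch only
# Requires a VFW DV codec (e.g. MainConcept DV) to be installed,
# otherwise AVISource cannot decode the DV video stream.
AVISource("D:\capture\example-tape.avi")
# ...any filtering/processing steps go here...
# VFAPI wants RGB24; with no RGB colour-space convertor DLL present,
# leaving this as ConvertToRGB32 gave an audio-only result.
ConvertToRGB24()
```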