Archive for July, 2013

How to join a Google Group:

Wednesday, July 24th, 2013

Official help:[]

BUT below is my experience of it:

  •[More > Groups]
  • Groups:[Search for groups or messages]:[xyz]
    • -> Groups matching xyz
  • Click on the one you want
    • -> Says “Loading…”, may take a minute or so…
    • -> Messages under that group/forum are listed:
    • xyz  a of b topics (c unread) * [Join group to post] [G+1]
  • Click blue Button:[Join group to post]
    • -> Form:[Join the xyz group]
  • Form: Enable [Automatically subscribe me to email updates when I post to a topic]

Convert FLV Video Files

Monday, July 22nd, 2013

To convert from [.flv] to another format, use VLC Media Player’s [Media > Convert/Save] option.  Be sure to set the destination as well as the source.  VLC can only convert to formats in its own internal container and codec sets, but e.g. can convert to [.mp4] containing H264.

Thereafter can use e.g. Sony Vegas to generate e.g. [.avi] containing CFHD, e.g. for onward use in applications that don’t recognize mp4-h264.  Vegas is more accommodating and flexible than (straight use of) Adobe Media Encoder, as regards non (broadcast) standard frame sizes and proportions.  Conveniently, Vegas automatically matches the Project to the footage on footage-import.

Prior to that, I tried installing and using Riga, the two-way FLV converter, but it didn’t work on my Windows 7 (64-bit) machine, opening only a blank window where a GUI was expected, and both the downloader and the installer were full of bloatware (NB: needed to install in Advanced mode in order to avoid some of that).  Pointless…

Extract DVD Contents to plain [.mpg] files via Sony Vegas (12)

Saturday, July 20th, 2013

How to extract DVD contents to plain mpg files

  • Open Sony Vegas 12
  • Menu:[File > Import > DVD Camcorder Disc…]
  • Dialog:
    • [Source > Browse…]
    • [Destination > Browse…]

Using Mocha to Stabilize/Lock onto an Object

Saturday, July 20th, 2013

Can use Mocha either stand-alone, exporting the result as an image-sequence, or in combination with AE in order to export as a movie-file.

Some points:

  • Go to Track tab
  • Put In/Out points over the useful bits (e.g. not overexposed bits).
  • Put playhead in middle of duration, note Frame-Number, then track both forwards and backwards from this point.
  • Go to Stabilize tab.
  • There is a Stabilize button to preview what it will look like.
    • Must select a Layer (tracked-region) first
      • (in principle, could have more than one tracked region).
    • Remember to disable this button before attempting to track again.

If exporting for Registax, then it is sensible to use TIFF format, but it must be with no alpha (otherwise Registax 5 gets its colors weird).

If using Registax (5) then:

  • Align=None
  • Drizzle=25%
  • Limit
    • (just in order to get to the next stage)
  • Stack
  • Wavelet
    • Default (not Gaussian), Linear (not Dyadic), most sliders near full.
  • Do All
  • Save Image
    • (Save as a TIFF, so can manipulate levels in Gimp etc.)


Java 6 SE Runtime: Download (Archive-Links)

Friday, July 19th, 2013

I tried to run a java-based network-analysis application (Cytoscape).  Mostly it ran ok, though a few glitches occurred.  Then I read the documentation, which stated that it required version 6 of Java.  Needless to say, I had Java 7 on my system.  I have previously read that Java 7 was not entirely backwards-compatible with Java 6.  So now I’d like to see how well it runs under Java 6.

Which leads to the problem…

Where can one get Java 6?


BBC TV Technical Production Guidelines

Tuesday, July 16th, 2013

Some BBC documents I came across:

    • ID
      • DQ – Defining Quality
      • This section brings together all policies and standards that apply to the delivery of television programmes.
      • For other information, please see the TV Commissioning Site:
    • Signal Levels
      • In a picture signal, each component is allowed to range between 0 and 100% (or 0mV and 700mV). This equates to digital sample levels 16 and 235 (8-bit systems) or 64 and 940 (10 bit systems).
    • Blanking
      • Digitally delivered pictures are considered to have a nominal active width of 702 pixels (52us) starting on the 10th pixel and ending on the 711th pixel in a standard REC 601 (720 sample) width.
      • A minimum width of 699 pixels (51.75us) within these limits must be achieved.
      • Additional active pixels outside the above limits must be an extension of the main picture.
      • Vertical Blanking must not exceed 26 lines per field.
      • Line 23 may contain a whole line of picture, be totally blanked, or the last half may contain picture relevant to programme. Line 23 must not contain any form of signalling as it is likely to appear in picture during letterbox-style presentation.
      • Likewise picture content in line 623 is also optional, but if present it must be related to the programme
    • Aspect Ratio
      • Active Picture Width
        • Active picture width is 52us / 702 pixels. All aspect ratio calculations are based on this. Any processes based on 720 pixel width may introduce unwanted geometry or safe area error.
    • Use of HD Material (for SD programmes)
      • Some standard definition programmes will contain material from high definition sources.
      • Particular care must be taken to deliver the best possible quality of down-converted material.
      • It is acceptable to use a broadcast VTR’s “on board” down converter to produce standard definition copies of high definition programmes.
      • Most non linear editing packages do not produce acceptable down conversion and should not be used without the broadcaster’s permission
    • Safe Areas for Captions
    • Audio
      • Stereo audio levels and measurement (loudness or volume)
        • Stereo programme audio levels are currently measured by Peak Programme Meters (PPM). The Maximum or Peak Programme Level must never exceed 8dBs above the programme’s reference level. The following levels, as measured on a PPM meter to BS6840: Part 10 with reference level set at PPM 4, are indicative of typical levels suitable for television, and are given as guidance only.
      • Stereo phase
        • Stereo programme audio must be capable of mixing down to mono without causing any noticeable phase cancellation.
      • Material (levels in PPM):
        • Dialog: Normal: 3-5, Peak 6
        • Uncompressed Music: Normal: 5, Peak 6
        • Compressed Music: Normal: 4, Peak:4
        • Heavy M&E (gunshots, loud traffic etc): Normal: 5-6
        • Background M&E (ambient office or street noise etc or light mood music): 1-3
    • Technical Standards for Delivery of Television Programmes to BBC
    • This document is only to be used for the delivery of programmes commissioned in Standard Definition (SD).
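Some of the quoted numbers can be cross-checked with a little arithmetic (my own, in Python, not from the BBC document):

```python
# Cross-checking figures quoted in the guidelines above (my own arithmetic).

def picture_level_to_code(percent, bits=8):
    """Map a picture-signal level (0-100%) to a digital sample level:
    per the guidelines, 0-100% spans codes 16-235 (8-bit) or 64-940 (10-bit)."""
    lo, hi = (16, 235) if bits == 8 else (64, 940)
    return lo + percent / 100 * (hi - lo)

assert picture_level_to_code(0) == 16
assert picture_level_to_code(100) == 235
assert picture_level_to_code(100, bits=10) == 940

# 702 active pixels in 52us is consistent with the Rec 601 sampling rate of 13.5 MHz:
assert abs(702 / 52e-6 - 13.5e6) < 1.0
```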

Run&Gun Shooting with Fast Shutter, then Deshake and Add Motion Blur in Post

Tuesday, July 16th, 2013

Like the title says (providing you have time for post-production), it is sensible to do Run&Gun Shooting with Fast Shutter, then Deshake and Add Motion Blur in Post.

In Summary:

  • For a bumpy aircraft flight, I shot with 1/50 second shutter and stabilized it in post.  The inevitable result, though more pleasing than the non-deshaken footage, exhibited shimmering effects due to motion blur on the various objects in the frame.
  • The shimmering could of course have been vastly reduced by shooting with a much faster shutter-speed.  One reason I didn’t was to avoid the staccato “Saving Private Ryan” look.  However I now realize that a convincing motion blur can (at least in theory, until I test it) be artificially introduced in post, following the de-shaking.
  • Some options exist for artificially introducing motion blur in post:
    • (Some degree of risk: Not perfect, but the imperfections might not necessarily be noticeable, or at least they may be less so than if not following this overall path).
    • RE:Vision Effects’ plug-in ReelSmart Motion Blur (RSMB).
      • Convenient, as it is a plug-in for Premiere as well as After Effects (and various NLEs/tools).
    • After Effects’ Time Warp plugin (even if not warping time) has a Motion Blur function.
    • Comparison between them:
        • Time Warp works, but painfully-slowly.
        • RSMB is very-much faster.
        • Sometimes they get fooled when a motion-vector suddenly changes, leading to odd artefacts, though possibly not too noticeable in a changing/moving image.
  • Tips for reducing the problem at shoot-stage:
    • Shoot with elbows on a bean-bag.
    • If camera has rolling-shutter effect, then some cameras reduce this more when you increase the framerate than when you decrease the shutter angle by equivalent amount.
  • Potential methods for removing blur from the original footage, by deconvolution:
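Separately from the deconvolution question, the shutter-speed point in the summary above can be made concrete with a rough sketch (Python; the 500 px/s apparent speed is an assumed figure, purely for illustration):

```python
# Blur extent is roughly (apparent speed across frame, px/s) x (exposure time, s).
# The 500 px/s speed below is an assumed, purely illustrative figure.

def blur_px(speed_px_per_s, shutter_s):
    return speed_px_per_s * shutter_s

assert abs(blur_px(500, 1/50) - 10.0) < 1e-9   # 1/50 s: ~10 px smear, hence shimmer after deshaking
assert abs(blur_px(500, 1/200) - 2.5) < 1e-9   # 1/200 s: 4x less smear, at the cost of a staccato look
```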

In Detail:

I shot a flight in an aeroplane (a Tristar) as a passenger who happened to have a camera (a Sony Z1), as opposed to a proper production.  I used the Z1 because it had a (small) CCD sensor, thereby avoiding the rolling-shutter effect associated with most (not all) CMOS-sensor cameras.

Lazily, I left it at its default setting of 1/50 second.  I had a vague idea that I didn’t want a fast-shutter staccato “Saving Private Ryan” look, but that’s as far as my thinking went.  Instead of increasing shutter-speed, I applied the Z1’s built-in ND filters so I could keep its iris reasonably wide, so as to obtain the shallowest focus possible (with this small-sensor camera), especially to try to defocus as much as possible the inevitable muck on the passenger windows.  I had no particular plan, I was just being a tourist…

But of course, once I got back to the editing, grander plans (belatedly) came to mind: I would string some kind of video together as an entertaining souvenir for the (transient) passengers, staff (one of whom was changing his job) and of course myself.  I found some “little stories”, a “little drama”, a “celebratory ending”, and royalty-free music that was great accompaniment.

One of the first jobs was to stabilize it.  For that I used Gunnar Thalin’s Deshaker plugin for VirtualDub.   Following this, the pictures were smooth and drifting like I wanted them to be, but marred by occasional shimmering of individual objects (like houses below) that had motion-blurred at the sensor stage, prior to deshaking.

If only I had used a faster shutter…  but then it would have got that staccato look I didn’t want.  Or would it?  It occurred to me that some kind of motion-estimation-based post production technique might be able to substitute motion blur (at a stage after the deshaking).

Time for a Web-Search:

  • Google:[fast shutter stabilize motion blur]
      • “Question”:
        • – I get some blurring/shimmering in frames where the camera shakes the most.
      • Response:
        • What you see is probably motion blur, i.e. motion occurring within a frame. This is present in the source video too, but it doesn’t become distracting until Deshaker has removed the motion between the frames.
        • To get rid of this effect you need to use a faster shutter speed while filming. How fast it needs to be depends on the camera and the amount of shake. For a camera with built-in stabilizer, I’d recommend at least 1/200 sec, or so. (Faster if it doesn’t have built-in stabilizer.)
      • Title: Skip the ND (Neutral Density) Filter and add your motion blur back in post
      • One way to add motion-blur in post is RE:Vision Effects’ plug-in ReelSmart Motion Blur (RSMB).
        • This plug-in, designed mainly for 3D graphics artists who need to add blur to their rendered objects, tracks vectors frame-to-frame and generates the appropriate blurs and streaks to mimic actual motion blur.
        • It can be used in two modes: RSMB (basic) and RSMBPro (advanced).  I have only used the basic mode because it has been satisfactory for my needs.
        • Simply drag the plug-in to your clips in your timeline and drop. That’s really about it.  If you want to tweak, there are only two parameters in basic mode:
          • Motion Blur Amount: simulates different shutter speeds.  Since cinema is universally shot with a 1/48 shutter, and this is the default setting of the plug-in, you should really just leave this alone unless going for a special look
          • Motion Sensitivity: adjusts how much the warping reacts to motion.  I’ve found that reducing the sensitivity helps reduce warping artifacts in scenes with intense motion, but in my tests it handled almost all motion well when set at .5 or 50%.
        • Is it flawless? No.  If you freeze-frame some shots, you can see warping where the foreground and background mesh in weird ways. But these artifacts are hardly noticeable when played back at regular speed.
        • The main caveat to using this plug-in is increased render time.  So I would recommend applying it as a final step before rendering for output.  But don’t apply it to all your clips as a compound clip, or even worse as a rendered movie file, because then it will try to warp your different shots together, resulting in some very strange artifacts.
      • Question:
        • we are planning to shoot from the bed of a pickup truck.
        • I am aware that the rental of either a gyro or steadicam rig would be ideal, but the budget is limited and the rental situation here in Idaho is less than ideal.
        • So instead we have rigged a kind of “ghetto fabbed” large cinesadle for our tripod to be loosely ratchet strapped on top of in the truck bed (to reduce vibration), then we will stabilize in CS6 warp stabilizer.
        • My questions is:
          • What do you guys think of 4k 24 VS 3k 48?
          • Currently we have been using a shutter of 192 for 24 and a shutter of 384 for 48, any suggestions here?
      • Reply:
        • your shutter speeds are a bit high. You might get the Private Ryan strobing effect. Have you run any tests?
      • Reply:
        • We have done some testing at 4k, not yet at 3k. What would you think of as an ideal shutter? The problem we have been running into is the motion blur caused by movement/vibration, causing the shots at a more standard shutter speed to look much softer.
      • Reply:
        • Testing testing testing!!!
        • We are doing a similar shoot, except shooting out the front window, ie, driver’s POV on some rough roads.
        • We built a bungee rig with straps through the windows and the camera hanging from the bungees; we tried a steadicam arm on a combo shorty stand, and other goofy-looking gadgets.
        • So far the best was going handheld with elbows propped on a padded 2×4. Fortunately for us, the client wanted smoother and we will be renting one of these
        • $1200 a week i think we were quoted. Maybe we can shoot some other fun stuff the rest of the week!
        • I guess I would err on the side of fast shutters rather than slow; the stabilize will look better with less motion blur. In our tests, it messed up on the sharp bumps where there was a lot of movement between frames, the motion blur made it a little goofy looking.
      • Reply:
        • The RE:Vision plugin works pretty good in most cases – I agree that it’s better to sacrifice motion blur if there’s ANY plan for post stabilization, and just fake it on the stabilized footage.
        • 48fps is only gonna give you the effect of slower motorcycles when played back at 24, so I would say shoot at 4k to give the stabilization/post blurring process more data to work with (and a wider shot w/less of a cropped sensor). Also, use a lens w/IS.
          • {I assume that means Optical Image Stabilization}
        • All that said, I got some great shots of some electric motorcycles six years ago from a truck bed on a dirt road using a steadicam… (actually a cheap rental Glidecam). It’s easy to operate since you’re just sitting/kneeling there, so you don’t need to hire an experienced operator.
      • Reply:
        • Increase framerate and cushion the camera with bungees or lock the camera hard to the car. Then remove frames in post and stabilize. The upped framerate will get you around rolling shutter in a way the shutter will not. So better to shoot 48fps 360 than 24fps 180.
          • {Well I’d never have thought of that, good tip.  This is a RED camera forum, but maybe applies to some other cameras also?}
      • Reply:
        • First, partially deflate the tires on the pickup – especially the rear wheels. That will remove most of the vibration.
        • Next, keep the camera lower to the bed – the higher you are, the more “sway” introduced. If you have ever tried to shoot from the back of a truck standing up, you know that stability is lacking – it will fight you.
        • If you want to be clever and build a rig:
          • Mount a high hat in the middle of a piece of plywood.
          • Drill a series of holes at the edge of the plywood.
          • Mount some drilled rails along the top of each side panel of the bed (use the rectangular holes as gravity mounting points)
          • Install numerous high strength rubber straps / cords between the rail holes and plywood holes.
          • Result: a floating camera platform. The rubber straps will absorb a lot of the vibration.
        • Downside is it tracks with the level of the bed – so as the truck takes a corner, the platform tilts.   Then again, a tripod will do the same thing.
        • So, another approach is to construct a floating mount suspended from an overhead rail. With a bit of practice, you can keep the horizon level in a turn. Again, heavy duty rubber to take the load and absorb vibration.
        • If you are able to get a Steadicam or Tyler mount w/gyro, best place to position yourself is sitting down on the tailgate.
      • Reply:
      • Potential methods for removing blur from the original footage, by deconvolution:
      • Matlab algorithms (downloadable)
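Stepping back from the thread: the various shutter settings quoted above can be compared on a common scale, the shutter angle (a quick Python sketch of the arithmetic, mine rather than the forum’s):

```python
# Shutter angle (degrees) = 360 * exposure_time * framerate.

def shutter_angle(shutter_s, fps):
    return 360 * shutter_s * fps

# The classic cinema look: 1/48 s at 24 fps is a 180-degree shutter.
assert abs(shutter_angle(1/48, 24) - 180) < 1e-9
# The poster's settings (1/192 at 24 fps, 1/384 at 48 fps) are both a 45-degree
# shutter, i.e. equally staccato; changing framerate alone doesn't change the look.
assert abs(shutter_angle(1/192, 24) - 45) < 1e-9
assert abs(shutter_angle(1/384, 48) - 45) < 1e-9
# The "48fps 360 rather than 24fps 180" tip: both expose each frame for 1/48 s.
assert abs((360/360)/48 - (180/360)/24) < 1e-12
```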

Windows 7: Move/Recover Offscreen Window

Monday, July 15th, 2013

In Windows XP it was simple: just right-click the TaskBar icon, select Move and drag the window back onto the screen.  But Windows 7 has no such right-click option.  So what’s the Windows 7 alternative?  It is this:

  • Simply activate that window’s icon on the TaskBar, then [Windows-Key] + [RightArrow]

Adobe CS6: Premiere: MultiCam

Sunday, July 14th, 2013

Some links about MultiCam in Premiere CS6:

Rearranging Video Tracks (in various NLEs)

Sunday, July 14th, 2013

This post is about the ability to rearrange the order of video and audio (etc.) tracks in a Non-Linear Editing (NLE) project.

It’s one of those basic things I assumed all NLEs would allow.  But not so.  Some have workarounds involving the creation of new Sequences and pasting in contents from original Sequences, in which case why haven’t they simply automated that workaround?  Bizarre!


GenArts Sapphire Upgrade & Migration

Sunday, July 14th, 2013

I have an existing GenArts Sapphire v.2 installation, as a plugin to Final Cut Pro (FCS7/ FCP7).  I upgraded it to v6 with a view to using the licence instead as a plugin to After Effects (AE), since I no longer use FCS, only Adobe Production Premium (CS6).  Before activating for AE, I need to deactivate for FCP.  How to do that?  A Google-search for [final cut pro sapphire deactivate] gave no obvious useful source of information.

Then it found a page, leading to the following:

  • How do I uninstall Sapphire from my current machine?
    • {I was initially concerned by the title, that if I simply uninstalled the application then that might lose me the opportunity to deactivate (and get a deactivation code or whatever GenArts’ process involved…) }
    • To uninstall:
      • On Mac, go to /Applications/GenArtsSapphireFXPLUG folder and double click on “Uninstall Sapphire”.
        • {Actually it was [GenArtsSapphireFxPlug] }
      • If your machine is not connected to the web, then select “Display an uninstall code to register on another computer’s web browser”. Follow the instructions to register the uninstall via another machine.
        • {My machine was connected to the web, and presumably therefore, no opportunity was given for me to select such an option}
    • {Aha! So uninstalling gives you an uninstall-code!  Or decrements my license install-count (presumably held at GenArts), though if it does that, it does it invisibly (which is disconcerting – I’d prefer some explicit confirmation of the resulting install-count)}
      • {I won’t know for sure this worked as intended until I try to apply the serial number on my new After Effects plugin}.

Design a Label for a Printable DVD

Sunday, July 14th, 2013

On the rare occasions I produce a DVD, always the same question: How best (easiest and best quality) do I design and print the on-disk label?

In the end, the best option seemed to be to (download and) use the CD/DVD Label-Designer application that came with my disk-printing capability printer (a Canon).

  • Canon Easy-PhotoPrint EX

Initial use of it brought up a templates-selection stage that appeared clunky and restrictive.  However that was just the initial “wizard” stage of using it, and subsequently I was able to move text, create new text etc. to my satisfaction.


HDV 50i from Sony Vegas to SD 50i Intermediate to Adobe Encore DVD

Sunday, July 14th, 2013

(This is actually an older post, from about a week or so ago, but it was left languishing in “Draft” status.  Rather than delete it, here it is, out-of-sequence, for posterity.)

Nowadays for video editing I mainly use Adobe CS6.  However I still have some old projects edited with Sony Vegas (10) which now have new clients.  One such project was shot as HDV on a Z1, giving 1440×1080 interlaced, at 50 fields/second, which I call 50i (it doesn’t really make sense to think of it as 25 fps).  The required new deliverable from this is a PAL-SD DVD, 720×576 50i.  In addition, I want to deliver high-quality progressive HD (not HDV) at 1920×1080.

The PAL-SD frame size of 720×576 has exactly half the width of the HDV source and just over half its height.  My naive initial thought was that the simple/cheap way to convert from the HDV source to the SD deliverable would be to merely allow each of the HDV fields to be downscaled to the equivalent SD field.  This could be performed in Sony Vegas itself, to produce an SD intermediate file as media asset to Encore to produce a DVD.
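Those scale factors can be checked directly (a quick Python sketch of my own arithmetic):

```python
hdv_w, hdv_h = 1440, 1080   # HDV frame size
sd_w, sd_h = 720, 576       # PAL-SD frame size

assert sd_w / hdv_w == 0.5                  # exactly half the width
assert abs(sd_h / hdv_h - 0.533) < 1e-3     # "just over half" the height
# Per interlaced field (half the lines of each frame):
assert (hdv_h // 2, sd_h // 2) == (540, 288)  # each 1440x540 HDV field maps to a 720x288 SD field
```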

Some potential complications (or paranoia) that come to mind in this approach are:

  • Levels-changes, through processes associated with the intermediate file.  For example it might accidentally be written as 16-235 range and read at 0-255 range.  In general, uncertainty can arise over the different conventions of different NLEs and also the different settings/options that can be set for some codecs, sometimes independently for write and for read.
  • HD (Rec 709) to SD (Rec 601) conversion: I think Vegas operates only in terms of RGB levels, the 601/709 issue is only relevant to the codec stage, where codec metadata defines how a given data should be encoded/decoded.  The codec I intend to use is GoPro-Cineform, with consistent write/encode and read/decode settings.  Provided Vegas and Encore respect those, there should be no issue.  But there is the worry that either of these applications might impose their own “rules of thumb”, e.g. that small frames (like 720×576) should be interpreted as 601, overriding the codec’s other settings.
  • Interlace field order.  HDV is UFF, whereas SD 50i (PAL) is LFF.  Attention is needed to ensure the field order does not get swapped, as this would give an impression of juddery motion.
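The first of those worries is easy to illustrate numerically; here is my own sketch (in Python) of what such a write/read levels mismatch does:

```python
# Remap a code value between level conventions:
def remap(y, in_lo, in_hi, out_lo, out_hi):
    return out_lo + (y - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# Correct studio-range (16-235) to full-range (0-255) expansion:
assert remap(16, 16, 235, 0, 255) == 0
assert remap(235, 16, 235, 0, 255) == 255

# The mismatch case: 16-235 data read as if it were already full-range just
# passes through unscaled, so black sits at code 16 (milky) and white at 235
# (grey); the reverse misreading expands 0-255 data and clips the extremes.
assert remap(16, 0, 255, 0, 255) == 16    # blacks stay lifted at code 16
```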

So I did some experiments…

  • Vegas (1) Project Settings:
    • Frame Size: 720×576
    • Field Order: LFF
    • PAR: 1.4568
  • Render Settings:
    • Frame Size: (as Project)
    • Field order: LFF (I think the default might have been something else)
    • PAR: 1.4568
    • Video Format: Cineform Codec

What Worked:

  • Sony Vegas (v.10) project for PAL-SD Wide, video levels adjusted to full-range (0-255) via Vegas’s Levels FX, then encoded to GoPro-Cineform.
  • Just as a test, this was initially read into an Adobe Premiere project, set for PAL-SD-Wide.  There, Premiere’s Reference Monitor’s YC Waveform revealed the levels range as 0.3 to 1 volts, which corresponds to NTSC’s 0-100% IRE on the 16-235 scale.  No levels-clipping was observed.
  • So using the 0-255 levels in Vegas was the right thing to do in this instance.
  • The Configure Cineform Codec panel in Sony Vegas (v.10) was quite simple, offering no distinction between encode and decode, allowing only for various Quality levels and for the Encoded Format to be YUV or RGB.  The latter was found to have no effect on the levels seen by Premiere, it only affected the file-size, YUV being half the size of RGB.  Very simple – I like that!
  • In Premiere, stepping forwards by frame manually, the movements looked smooth.

In Adobe Encore (DVD-Maker) CS6:

  • Imported the intermediate file as an Asset and appended it to the existing main timeline.
  • Encore by default assumed it was square-pixels.  Fixed that as follows:
    • [theClip >RtClk> Interpret Footage] to select the nearest equivalent to what I wanted: [Conform to SD PAL Widescreen (1.4587)].
      • Why does Encore’s [1.4587] differ from Vegas’s [1.4568] ?  Any consequence on my result?
  • Generated a “Virtual DVD” to a folder.
  • Played that “Virtual DVD” using Corel WinDVD
    • In a previous experiment, involving a badly-produced DVD having swapped field-order, I found this (unlike WMP or VLC) reproduced the juddering effect I had seen on a proper TV-attached DVD player.  So WinDVD is a good test.
  • Made a physical DVD via Encore.
  • The physical DVD played correctly on TV (no judder).
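On the PAR question above (Vegas’s 1.4568 vs Encore’s 1.4587): as far as I can tell, these are two competing conventions for PAL widescreen pixel aspect, and each quoted value matches a known derivation to within rounding (a Python sketch of my reasoning, not anything from the Vegas or Encore documentation):

```python
from fractions import Fraction

# Vegas's 1.4568 matches the ITU-derived value: PAL 4:3 PAR = 59/54,
# times 4/3 for widescreen anamorphic:
vegas_par = Fraction(59, 54) * Fraction(4, 3)        # = 118/81 ~= 1.4568
assert abs(float(vegas_par) - 1.4568) < 1e-4

# Encore's 1.4587 matches a derivation from the 702-pixel active width:
# a 16:9 display filled from a 702x576 active image:
encore_par = Fraction(16, 9) * Fraction(576, 702)    # = 512/351 ~= 1.4587
assert abs(float(encore_par) - 1.4587) < 1e-4

# The two conventions differ by about 0.13% -- below anything visible:
assert abs(float(encore_par) / float(vegas_par) - 1) < 0.0014
```

So the likely answer to “any consequence on my result?” is: no, the discrepancy is far too small to see.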

An alternative would be to deinterlace the original 50i to produce an intermediate file at 50p, ideally using best-quality motion/pixel based methods to estimate the “missing” lines in each of the original fields.  But would the difference from this more sophisticated approach be noticeable?

There also exists an AviSynth script for HD to SD conversion (and maybe HDV to SD also?).

  • It is called HD2SD, and I report my use of it elsewhere in this blog.  I found it not to be useful, producing a blurry result in comparison to that of Sony Vegas’s scaling (bicubic).


Best NLE for MultiCam Editing? FCPX for Mac, LightWorks for Windows (and in future for Linux then Mac OS)?

Sunday, July 14th, 2013

As explained as part of my recent “Best of Breed” post, I wish to identify the best NLE for multicam editing.  It is possible to achieve such editing in a variety of NLEs, with much the same technical quality.  What matters here is friendliness and flexibility, leading to productivity (and hence, in limited-time situations, to greater product quality).

I like the sound of FCPX (with required add-ons) on Mac OS and of LightWorks, which is currently on Windows only, soon to go Linux and intended in future to be on Mac OS also.  I need to watch a few YouTubes about these and give them a try.  Hopefully I can get a colleague with FCPX to demonstrate it and also I plan to download/install a copy of the free version of LightWorks.  Then try them out on archived previous live-event multicam projects.


FCPX: The Real Cost, Including Add-Ons

Sunday, July 14th, 2013

I strayed upon the following, informative:

  • 5thwall May 8
    • I’ve been using FCPX, mostly. $299 as most everyone knows. But the real cost is closer to $1300 when you add up all the plugins to get more pro support.
    • My list of “helper” apps:
      • Compressor: $50
      • Motion: $50
      • Pro Versioner $60 (for backing up events and projects)
      • Event Manager X: $5 (a must for dealing with loads of events – hopefully Apple will institute better mgmt in software)
      • Xto7: $50 (hate X? send it to 7!)
      • 7toX: $10 (love X? send it from 7 to X!)
      • Sync-N-Link: $200 (replicates Avid functionality for syncing clips with jam synced audio)
      • SliceX with Mocha: $150 (great tracker and object remover)
      • Lock & Load: $100 (a much better image stabilizer)
      • X2Pro Audio Convert: $150 (export to AAF)
      • Pluraleyes: $200 (sync multiple clips with multiple tracks of non-timecoded audio to separate clips; FCPX can’t currently do that unless you make a multiclip).
      • Davinci Resolve Lite: Free
    • Total for FCPX and helper Apps: $1325
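For what it’s worth, totting up the quoted prices gives $1324, one dollar shy of the stated $1325 total (presumably a slip in the original list):

```python
# Quoted prices: FCPX itself plus the "helper" apps listed above.
prices = {
    "FCPX": 299, "Compressor": 50, "Motion": 50, "Pro Versioner": 60,
    "Event Manager X": 5, "Xto7": 50, "7toX": 10, "Sync-N-Link": 200,
    "SliceX with Mocha": 150, "Lock & Load": 100, "X2Pro Audio Convert": 150,
    "Pluraleyes": 200, "Davinci Resolve Lite": 0,
}
assert sum(prices.values()) == 1324  # one dollar shy of the quoted $1325
```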

Avid MC (etc.): Version 7.0: New-Feature Highlights

Sunday, July 14th, 2013

What’s new in Avid Media Composer (etc.) version 7.0?  Below are the highlights that took my attention:

  • Cached Waveform Redraw
    • Less clunky, then, hopefully…
  • Track Selection for Relink
  • Background Queue monitoring, inside MC and from web browser
  • Start/Stop/Pause Background Services
  • Spanned Markers
    • About time!
  • Dynamic Media Folders
  • FrameFlex: Reframing HiRes Media
    • e.g. pan/crop/zoom parts of HD into an SD target, or 4/5K to HD etc.
  • AMA Managed Media
    • Prior to this, AMA was a bit of a “Cinderella”, not managed in the manner of Imported media.
  • Audio Mixer Improvements
  • Background Consolidate/Transcode
  • Adjusting Audio Clip Gain in Timeline
  • Consolidate/Transcode only AMA clips
  • Color Management- for various camera types
    • Sounds to me like LUTs and Looks…
  • Change Track of Marker in Marker Window
  • Vertical Scroll in Timeline

Additional links:


Tools/Workflow Philosophy: Best-of-Breed rather than Already-Integrated Suite ?

Sunday, July 14th, 2013

I am becoming less enthusiastic about the “Integrated Suite” philosophy or perhaps actuality of Adobe CS6, in favour of a “Best of Breed” approach, where I cherry-pick the best tool for each kind of job and then design or discover my own workflow for integrating them.

I reached this conclusion from the following experiences:

  • As regards editing itself:
    • For general “A & B Roll” editing, I find Premiere is ok, though for improved usability, I’d prefer a Tag-based system (as in FCPX) to the traditional Bin-based one (as in Adobe & Avid).
    • For MultiCam editing, even in Adobe CS6, I find Premiere does the job but I find it clunky, frustrating and limited at times, like it has not yet been fully “baked” (though “getting there”)…
      • e.g. In the two such projects I have so far worked on, there has been an annoying 2-second delay from pressing the spacebar to actual playing.  Maybe some kind of buffering?
        • I found a setting for “Pre-roll” in the Preferences but altering it made no difference.
        • The following suggested that the embedded audio (in video file) could be the issue, the solution to which was to relink to a WAV file.
      • e.g. It brings up a separate MultiCam Monitor instead of using the Source Monitor.  You have to remember to activate this each time before playing.  I find that a nuisance (and time-waster when I forget) especially because I tend to alternate multicam editing as such with tweaking the cut timings until they feel right, and sometimes that can only be done in retrospect.
      • e.g. When you stop playing in multicam mode, it places a cut (that you probably didn’t want) wherever the playhead happens to be at the time.
        • I see I am not the only one complaining about this: “ExactImage, Sep 15, 2012” at
          • A workaround given at that link: “Before stopping the playback, press the 0 (zero) key, and then you can stop the play (with the Space bar) without the cut in the timeline.”  Duh!
      • e.g. Markers are really useful in multicam, but while Premiere’s are steadily improving with each product version, they are way clunkier and more limited than those in Sony Vegas:
        • e.g. I put a marker at the start of an interesting section (of timeline), I select it and define its duration to be non-zero, so I can stretch it out to mark a region; then I drag the playhead to find the end of that interesting section and try to drag the marker’s right-hand end up to the playhead, but instead the playhead gets reset to the start of the marker.  Duh!
        • e.g. Markers cannot be promoted from clip (media or nested Sequence) to current Sequence.
        • e.g. waveform displays (assuming you can get them to appear in the first place) go blank when sliding clips around.  Really annoying when trying to synchronise to music etc.
    • …so I will explore other options for multicam:
      • In the past (as will be apparent from the above) I have had more joy, as regards Multicam, with Sony Vegas.
      • I will check out what people think of other NLEs as potential “Best of Breed” for multicam editing.  Thus far I have heard (from web-search) good things about FCPX and LightWorks.
  • For audio enhancement, such as denoising, I find iZotope’s RX2 far superior to the one in Adobe Audition.
  • For making a DVD:
    • I find Encore to be handy in some ways but limited and clunky in others.
      • e.g. can’t replace an asset with one of a different type (e.g. [.avi] and [.mpg]).
    • The advantage of using an integrated DVD-Maker such as Encore might be limited:
      • e.g. many people are not using the direct link, but exporting from Premiere/AME, in which case any third-party DVD Builder could be used.
      • The only significant advantage I am aware of is the ability to define Scene/Chapter points in Premiere and have them recognised/used by Encore.
        • But maybe some third-party DVD Builder applications can also recognise these?  Or can be configured/helped to do so?  Worth finding out.
    • ?

Adobe Encore (DVD Constructor): Error: “Encore failed to encode” & Limitations & Recommended Settings

Sunday, July 14th, 2013

In one Adobe CS6 Encore (a DVD constructor) project, the [Check Project…] feature found no problems, but on attempting to [Build] the project, the following error was reported: “Encore failed to encode”.

A web-search (further below) revealed that this error message could have reflected any of a number of potential problems.

In my specific project’s case, I found that shortening the filename fixed the problem.  Possibly the filename length was the issue, but experimentation would be needed to confirm.  It may be that Encore dislikes one or more of the following, as regards either the filename itself or the total text representing the volume, folder-chain and filename:

  • Long filenames
    • Possibly the limit is 80 characters.
  • Specific kinds of character in the filename, such as:
    • Spaces (it’s safer to use underscores instead).
    • Unusual (legal but not popularly used) characters, such as “&” (ampersand).
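To guard against this in future, the suspicions above could be wrapped into a quick pre-flight check before handing files to Encore.  A sketch only (Python): the 80-character limit and the allowed character set are my guesses from the symptoms, not documented Encore behaviour.

```python
import re

MAX_LEN = 80  # assumed limit, inferred from the symptoms above

def encore_safe(filename):
    """Return a conservative variant of `filename` for use with Encore:
    spaces become underscores, unusual characters (e.g. "&") are dropped,
    and the name is truncated to stay within MAX_LEN characters."""
    stem, dot, ext = filename.rpartition(".")
    if not dot:                      # no extension present
        stem, ext = filename, ""
    stem = stem.replace(" ", "_")
    stem = re.sub(r"[^A-Za-z0-9_\-]", "", stem)
    stem = stem[:MAX_LEN - len(ext) - len(dot)]
    return stem + dot + ext

print(encore_safe("My Movie & Extras.avi"))  # -> My_Movie__Extras.avi
```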

It is possible to configure Encore to use Adobe Media Encoder (AME) instead of its own internal encoder.  This doesn’t work for Encore’s [Build] operation but does work for its [asset >RtClk> Transcode Now] operation.  The advantages I expect of using AME in this way:

  • It has been said (as of CS5) that AME is faster, being 64-bit as opposed to 32-bit for the encoder in Encore of CS5.
  • I suspect/hope that AME might also be more robust than Encore’s internal encoder.
  • …and also higher quality; indeed one post implied this may be true for CS6.
  • Consistency is a great thing; having used AME from Premiere etc. I expect any lessons gained will apply here.
  • AME has some nicer usability-features than Encore, such as a Pause button and the ability to queue a number of jobs.
  • These features could be handy for encoding multiple assets for a DVD or Blu-Ray Disk (BD).

For me, the learning-points about Adobe are:

  • Potentially (to be tested) the best workflow for Encore is:
    • Encode via AME:
      • Preferably from Premiere.
      • Or via AME directly
      • Or, if Encore is so configured (away from its default) then via its [asset >RtClk> Transcode Now] option
        • (doesn’t happen if you instead use the [Build] option, which always employs Encore’s internal encoder).
        • One poster recommends: <<it is a good idea to use “transcode now” before building to separate the (usually longer) transcode of assets step from building the disk.>>
    • I’m guessing that the only “cost” of not using Encore’s internal encoder might be the “fit to disk” aspect, and that might be helpful for quick turn-around jobs.
      • (Though on the other hand, if that encoder is less robust (I don’t know, only suspect), then that factor would constitute a risk to that quick turn-around…)
  • Encore’s error-reporting (error message) system should be more informative; the current “Encore failed to encode” message is too general.
    • According to Adobe Community forum posts identified in the Web-Search (further below):
      • Others make this same point.
      • One post explains that <<Encore uses Sonic parts for some (most?) of the work… and since Sonic does not communicate well with Encore when there are errors… bad or no error messages are simply a way of life when using Encore>>
      • Another refers to an underpinning software component by Roxio, namely pxengine, which needed to be updated for Windows 7 (from the previous XP version).
        • The post states (correctly or otherwise – I don’t know) that the file is [PxHlpa64.sys], located in [C:\windows\System32\drivers] and (as of CS5) the version should be [].
      • A further post alleges that the specific subsystem is called Sonic AuthorCore, which is also used by Sonic Scenarist.
      • It would be simple for Adobe to trap filename-type errors in the front-end part of Encore, prior to sending that data to its (alleged) sub-system that is maintained by Sonic.
      • In the long term, the preferred fix would of course be for the sub-system developer to update that system to remove the limitations.
  • Encore currently has some kind of (hidden) limitation on the kind or length of text representing the filename or file-path-and-name; ideally this limitation should be removed, or at least the maximum allowed length should be increased.

Not directly relevant, but noticed in passing (while configuring Encore:[Edit > Preferences]):

  • Encore’s “Library” location is: [C:\Program Files\Adobe\Adobe Encore CS6\Library]
  • It is possible to define which display (e.g. external display) gets used for Preview.  Useful for quality-checking.


Adobe CS6 Encore (DVD-Constructor): Asset Replacement

Sunday, July 14th, 2013

In Adobe CS6 Encore, suppose you have a timeline containing a clip, then (maybe after having added Scene/Chapter markers there) for some reason you need to replace the clip, e.g. due to a slight re-edit or tweak.  All you want to do is substitute a new clip for the existing clip, one-for-one, keeping the markers (that you have only just added) in place (together with their links to DVD menu buttons you may also have just now created).

In Encore, media (“Asset”) replacement is not as straightforward or as flexible as in Premiere…

I discovered (the hard way) that:

  • You can’t replace an asset with another of a different file extension.
    • e.g. It won’t let you replace an [.avi] file by a [.mpg] file.
  • If you manually delete an existing clip from a timeline, any chapter markers disappear along with it.
    • I guess therefore that such markers “belong” to the clip, not the timeline.
      • This is despite their superficial resemblance to markers appearing in a Premiere timeline, which do belong to the Sequence (of which the timeline is a view).
    • Consistency would be good to have among these suite products…
    • Also in Encore, it would help to have the ability to Copy/Paste markers from one asset to another.
      • Feature Request?


How to open MPEG-2 / VOB files in VirtualDub

Saturday, July 13th, 2013

Ordinarily, VirtualDub cannot understand MPEG-2 video files, but there is a plugin that makes this possible:

  • MPEG-2 plugin v4.5 by fccHandler, Released March 23, 2012
  • File: [MPEG2.vdplugin] goes into folders Plugins32 or Plugins64 (as appropriate) of the folder your [VirtualDub.exe] resides in (32-bit or 64-bit):
    • Not the existing folder, simply called [plugins]
    • VirtualDub will find and use the plugins automatically
  • YouTube:
  • Download link –
  • Discovered via Google:[virtualdub mpeg2]

Before I was aware of this, I would have used VirtualDubMod, but that project has been discontinued as of 2005 (though its download area says “Last Update: 2013-05-07”).
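As an aside, the plugin-install step described above is scriptable.  A minimal sketch (Python), assuming the Plugins32/Plugins64 folder names from the plugin’s ReadMe; paths are illustrative only:

```python
import shutil
from pathlib import Path

def install_vdplugin(plugin_file, vdub_dir, is_64bit):
    """Copy a [.vdplugin] file into the Plugins32 or Plugins64 folder
    beside [VirtualDub.exe] -- NOT the old folder simply called [plugins]."""
    target = Path(vdub_dir) / ("Plugins64" if is_64bit else "Plugins32")
    target.mkdir(parents=True, exist_ok=True)
    shutil.copy2(plugin_file, target)
    return target / Path(plugin_file).name

# Example call (paths illustrative only):
# install_vdplugin("MPEG2.vdplugin", r"C:\Tools\VirtualDub64", is_64bit=True)
```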

Best Workflow for High-resolution Master (e.g. HD or HDV) to Multi-Format Including SD-DVD

Saturday, July 13th, 2013

What is the best workflow for going from high-resolution footage, potentially either progressive or interlaced, possibly through an intermediate Master (definitely in progressive format), to a variety of target/deliverable/product formats, from the maximum down to lower-resolution and/or interlaced formats such as SD-DVD?

Here’s one big fundamental: naively one might have hoped that long-established professional NLEs such as Premiere would provide high-quality optical-processing-based downscaling from HD to SD, but my less optimistic intuition, about the unlikelihood of that, proved correct.  In an earlier post I note that the BBC Technical Standards for SD Programmes state: <<Most non linear editing packages do not produce acceptable down conversion and should not be used without the broadcaster’s permission>>.

Having only ever used Adobe (CS5.5 & CS6) for web-based video production, early experiences in attempting to produce a number of target/deliverable (product) formats proved more difficult and uncertain than I had imagined…  For a current project, given historical footage shot in HDV (1440×1080, fat pixels), I wanted to generate various products from various flavors of HD (e.g. 1920x1080i50,  1280x720p50) down to SD-DVD (720×576).  So I embarked on a combination of web-research and experimentation.

Ultimately, this is the workflow that worked (and satisfied my demands):

  • Master: Produce a 50 fps (if PAL) progressive Master at the highest resolution consistent with original footage/material.
    • Resolution: The original footage/material could e.g. be HD or HDV resolution.  What resolution should the Master be?
      • One argument, possibly the best one if only making a single format deliverable or if time is no object, might be to retain the original resolution, to avoid any loss of information through scaling.
      • However I took the view that HDV’s non-standard pixel shape (aspect ratio) was “tempting fate” when it came to reliability, and possibly even quality, in subsequent (downstream in the workflow) stages of scaling down to the various required formats (mostly square-pixel, apart from SD-Wide’s so-called “16:9” pixels, of 1.4568 aspect ratio, or other values depending where you read it).
      • So the Master resolution would be [1920×1080].
    • Progressive: The original footage/material could e.g. be interlaced or progressive, but the Master (derived from this) must be progressive.
      • If original footage was interlaced then the master should be derived so as to have one full progressive frame for each interlaced field (hence double the original frame-rate).
        • The concept of “doubling” the framerate is a moot point, since interlaced footage doesn’t really have a frame rate, only a field rate, because the fields are each shot at different moments in time.  However, among the various film/video industry conventions, some people refer to 50 fields/second interlaced as 50i (or i50) while others refer to it as 25i (or i25).  Context is all-important!
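To make the naming tangle concrete, here is the trivial arithmetic (my own illustration, in Python):

```python
def bob_output_fps(field_rate_hz):
    # A "one frame per field" (bob) deinterlace builds a full frame
    # from every field, so the progressive output runs at the field rate.
    return field_rate_hz

field_rate = 50                           # PAL: "50i" to some, "25i" to others
interlaced_pairs = field_rate // 2        # 25 interlaced frame-pairs/second
master_fps = bob_output_fps(field_rate)   # the 50p Master described above
print(interlaced_pairs, master_fps)       # -> 25 50
```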
    • Quality-Deinterlacing: The best way to convert from interlaced fields-to-frames is via motion/pixel/optical -based tools/techniques:
      • I have observed the quality advantage in practice on numerous projects in the distant past, e.g. when going from HDV or SD (both 50i) to a variety of (lower) corporate web-resolutions.
      • This kind of computation is extremely slow and heavy, hence (for my current machines at least) more an overnight job than a real-time effect… In fact for processing continuously recorded live events of one or two hours, I have found 8 cores (fully utilised) to take a couple of 24-hour days or so – for [AviSynth-MultiThread + TDeint plugin] running on a [Mac Pro > Boot Camp > Windows 7].
      • But (as stated) this general technique observably results in the best quality, through least loss of information.
      • There are a number of easily-available software tools with features for achieving this, Adobe and otherwise:
        • e.g. AviSynth+TDeint, (free) After-Effects, Boris.
        • e.g. FieldsKit is a nice convenient deinterlacing plugin for Adobe (Premiere & After Effects), and is very friendly and useful should you want to convert to a standard progressive video (e.g. 25fps), but (at this time) it can only convert from field-pairs to frames, not from fields to frames.
          • I submitted a Feature Request to FieldsKit’s developers.
    • Intermediate-File Format: A good format for an Intermediate file or a Master file is the “visually lossless” wavelet-based 10-bit 422 (or more) codec GoPro-Cineform (CFHD) Neo
      • Visually lossless (such as CFHD) codecs save considerable amounts of space as compared to uncompressed or mathematically lossless codecs like HuffYUV and Lagarith.
      • I like Cineform in particular because:
        • It is application-agnostic.
        • It is available in both VFW [.avi] and QuickTime [.mov] varieties (which is good because I have found that it can be “tempting fate” to give [.mov] files to certain Windows apps, and indeed not to give it to others).  The Windows version of CFHD comes with a [.avi] <-> [.mov] rewrapper (called HDLink).
        • Another advantage is that CFHD can encode/decode not only the standard broadcast formats (and not only HD) but also specialized “off-piste” formats.  I have found that great for corporate work. It’s as if it always had “GoPro spirit”!
        • CFHD Encoder Settings from within Sony Vegas 10:
          • These settings worked for me in the context of this “Sony-Vegas-10-Initially-then-Adobe-CS6-centric” workflow:
    • Technical Production History of a Master for an Actual Project:
      • This is merely for my own reference purposes, to document some “project forensics” (while I still remember them and/or where they’re documented):
      • This was a “Shake-Down” experience, not exactly straightforward, due to an unexpected “hiccup” between Sony Vegas 10 and AviSynth-WAVSource.  Hiccups are definitely worth documenting too…
      • The stages:
        • Sony Vegas Project: An initial HDV 50i (to match the footage) Intermediate file, containing the finished edit, was produced by a Sony Vegas 10 Project:
          • [Master 021a (Proj HDV for Render HDV)  (veg10).veg] date:[Created:[2013-07-01 15:30], Modified:[2013-07-03 20:07]]
          • Movie duration was about 12 minutes.
        • Audio & Video Settings:
          • Project Settings:
            • HDV 1440×1080 50i UFF 44.1KHz
              • The audio was 44.1KHz, both for Project and Render, since most of the audio (music purchased from Vimeo shop) was of that nature.
          • Render Settings:
            • I believe I will have used the following Sony Vegas Render preset: [CFHD ProjectSize 50i 44KHz CFHD (by esp)] .
              • Though I think there may have been a bug in Vegas 10, whereby the Preset did not properly set the audio sampling frequency, so it had to be checked & done manually)
            • The CFHD Codec settings panel only offered two parameters, which I set as follows: Encoded format:[YUV 4:2:2], Encoding quality:[High]
          • The result of Rendering from this Project was the file:
            • [Master 021a (Proj HDV for Render HDV)  (veg10).avi] date:[Created:[2013-07-01 15:30], Modified:[2013-07-01 18:58]]
              • Modified date minus creation date is about 3.5 hours, which I guess accounts for the render-time (on a 2-core MacBook Pro of 2009 vintage running Windows 7 under Boot Camp).
        • The next stage of processing was to be by AviSynth.
          • However AviSynth had problems reading the audio out of this file (it sounded like crazy buzzes).
          • To expedite the project, and guessing that Vegas 10 had produced a slightly malformed result (maybe related to the audio setting bug?), and hoping that it was just a container-level “audio framing” issue, I “Mended” it by passing it through VirtualDub, in [Direct Stream Copy] mode, so that it was merely rewrapping the data as opposed to decompressing and recompressing it.  The resulting file was:
            • [Master 021a HDV Mended (VDub).avi], date:[Created:[2013-07-08 18:22], Modified:[2013-07-08 18:30]]
          • Since that time, I have discovered the existence of the Cineform tool CFRepair, via a forum post at DVInfo, which itself provided a download link.
            • Worth trying it out sometime, on this same “broken” file…
        • This was processed into full HD progressive (one frame per field, “double-framerate”) by an AviSynth script as follows, its results being drawn through VirtualDub into a further AVI-CFHD file, constituting the required Master.
          • AviSynth Script:[HDV to HD 1920×1080.avs] date:[Created:[2013-07-04 18:13], Modified:[2013-07-08 22:05]]
            • I used AvsP to develop the script.  It provides helpful help of various kinds and can immediately show the result in its preview-pane.
            • Multi-threaded:
              • To make best use of the multiple cores in my machine, I used the AviSynth-MT variant of AviSynth.  It’s a (much larger) version of the [avisynth.dll] file.  For a system where AviSynth (ordinaire) is already installed, you simply replace the [avisynth.dll] file in the system folder with this one.  Of course it’s sensible to keep the old one as a backup (e.g. rename it as [avisynth.dll.original]).
            • Audio Issue:
              • This particular script, using function [AVISource] to get the video and [WavSource] to get the audio, only gave audio for about the first half of the movie, with silence thereafter.
              • Initially, as a workaround, I went back to VirtualDub and rendered-out the audio as a separate WAV file, then changed the script to read its [WAVSource] from this.
              • That worked fine, “good enough for the job” (that I wanted to expedite)
              • However afterwards I found a cleaner solution: Instead of functions [AVISource] and [WAVSource], use the single function [DirectShowSource].  No audio issues.  So use that in future.  And maybe avoid Vegas 10?
          • The script was processed by “pulling” its output video stream through VirtualDub, which saved it as a video file, again AVI-CFHD.  Since no filters (video processing) were to be applied in VirtualDub, I used it in [Fast Recompress] mode.  In this mode, it leaves the video data in YUV (doesn’t convert it into RGB), making it both fast and information-preserving.  Possibly (not tested) I could simply have rendered straight from AvsP:[Tools > Save to AVI].  When I first tried that, I got audio issues, as reported above, hence I switched to rendering via VirtualDub, but in retrospect (having identified a source, perhaps the only source, of those audio issues) that switch might have been unnecessary.
      • The resulting Master file was [Master 021a HDV 50i to HD 50p 1920×1080 (Avs-VDub).avi] date:[Created:[2013-07-08 21:55], Modified:[2013-07-08 22:47]]
        • “Modified minus created” implies a render-time of just under an hour.  This was on a [MacBook Pro (2009) > Boot Camp > Windows 7] having two cores, fully utilised.
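For the record, the script was of the general shape sketched below (generated here from Python so everything stays in one place).  The TDeint parameters and the choice of Spline36Resize are my assumptions, not a copy of the original [HDV to HD 1920×1080.avs]:

```python
def hdv50i_to_1080p50_script(avi_path):
    """Emit an AviSynth script of the general shape described above."""
    return "\n".join([
        # DirectShowSource sidesteps the AVISource/WAVSource audio issue
        'DirectShowSource("%s")' % avi_path,
        "TDeint(mode=1)",              # mode=1: one frame per field (50p)
        "Spline36Resize(1920, 1080)",  # 1440x1080 fat pixels -> square pixels
    ])

print(hdv50i_to_1080p50_script("Master 021a HDV Mended (VDub).avi"))
```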
  • Quality inspection of Master:
    • Check image quality, e.g. deinterlacing, via VirtualDub.
      • VirtualDub is great in a close-inspection role because its Preview can zoom well beyond 100% and, vitally, it displays the video as-is, with no deinterlacing etc. of its own.
        • e.g. zoom to 200% to make any interlacing comb-teeth easily visible.  There should not be any, since this Master is meant to be progressive.
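That visual check can even be roughed out numerically.  A toy sketch (my own construction, Python, treating a frame as a list of pixel rows): leftover combing shows up as rows that disagree with both of their vertical neighbours.

```python
def comb_energy(rows):
    """Sum, over interior rows, of how far each pixel strays from the
    mean of its two vertical neighbours; high values suggest combing."""
    total = 0
    for i in range(1, len(rows) - 1):
        for a, b, c in zip(rows[i - 1], rows[i], rows[i + 1]):
            total += abs(b - (a + c) / 2)
    return total

smooth = [[10] * 4, [12] * 4, [14] * 4, [16] * 4]  # gentle vertical gradient
combed = [[10] * 4, [80] * 4, [12] * 4, [82] * 4]  # alternating-field "teeth"
print(comb_energy(smooth), comb_energy(combed))    # -> 0.0 552.0
```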
  • Premiere Project: Make a Premiere project consistent with the Master, and add chapter markers here.
    • Make Premiere Project consistent with the Master, not the Target.
      • …especially when there is more than one target…
    • Don’t directly encode the master (by Adobe Media Encoder), but instead go via Premiere.
      • I have read expert postings on Adobe forums stating that as of Adobe CS6, this is the best route.
      • This appears to be the main kind of workflow the software designers had in mind, hence a CS6 user is well-advised to follow it.
        • It represents a “well-trodden path” (of attention in CS6’s overall development and testing).
        • Consequently, (it is only in this mode that) high-quality (and demanding, hence CUDA-based) algorithms get used for any required scaling.
        • Not knowing the application in detail, hence having to adopt the speculative approach to decision-making, it feels likely that this workflow would have a greater chance of reliability and quality than other, relatively off-piste ones.
    • Premiere is the best stage at which to add Chapter Markers etc.
      • Chapter markers etc. get stored as ??XMP?? and are thereby visible to Encore (Adobe’s DVD-Builder)
      • Better to place such markers in Premiere rather than in Encore, since:
        • In Encore, Chapter markers act as if they are properties of Assets, not Timelines.
          • If you delete an asset from a timeline, the chapter markers disappear also.
        • Encore (CS6) Replace Asset has some foibles.
          • In Encore, if you were to put an [.avi] file asset on a timeline, then add markers then try to replace that asset with a [.mpg] file, you would be in for a disappointment; if the file extension differs then the markers disappear. If required, then the markers would have to be re-created from scratch. Same again if you subsequently replaced back to a new [.avi] file.
          • The Foibles of Encore (CS6)’s Replace Asset function, in more detail:
            • Good news: If the new asset has the same file extension then any existing markers are retained.
              • This possibly suggests that they are transferred from the old asset to the new one.
            • Bad news: If the new asset file extension differs from the old one, then:
              • You get an error (popup): ???
                • e.g. it refused my attempt to replace an [.avi] file by a [.m2v] file.
              • Partial-workaround:
                • You can instead delete the existing asset from the timeline, prior to dragging another asset there..
                • ..BUT as a side-effect that deletes any of the old asset’s markers also…
                • …and furthermore Encore has no way to copy a set of markers from one asset to another
                  • …which would otherwise have been a nice work-around for the above side-effect.
  • Premiere Export: Export / Render to Target Format.
    • You may wish to render to a number of formats, e.g. SD-Wide DVD, Blu-Ray Disk (BD), YouTube upload format, mobile phone or tablet.
      • The most efficient strategy is to Queue a number of jobs from Premiere onto Adobe Media Encoder (AME).
        • AME can run some things in parallel (I think).
        • AME has a [Pause] button, very useful for overnight silence or prior to travel (Windows Sleep/Hibernate).
    • Menu:[File > Export > Media]
    • Export Settings:
      • For targets of differing aspect ratio (e.g. SD-Wide derived from HD master):
        • Source Scaling:
          • e.g. for HD -> SD, use [Scale to Fill] since this avoids “pillarboxing” i.e. black bars either side.
      • For DVD Target, use inbuilt preset MPEG2-DVD
        • Ensure [Pixel Aspect Ratio] and interlace sense etc. are as required.
        • The [MPEG2-DVD] preset generates two files:
          • [.m2v] for the video
          • [Dolby Digital] or [MPEG] or [PCM]
            • [PCM] option results in a [.wav] file of 16 bits, 48 KHz (there is no 44.1 KHz option).
      • Maximum Render Quality
        • Use this if scaling, e.g. down from HD Master to SD Target.
      • File Path & Name.
        • Where you want the export/encode result to go.
    • Click the [Queue] button, to send the job to the Adobe Media Encoder (AME)
  • Quality Inspection of Result (intermediate or target file):
    • Check the quality of the encodes via VirtualDub, e.g. for DVD-compatible video media, the correctness of interlacing and for progressive media the quality of deinterlacing.
      • For interlaced downscaled material derived from higher resolution interlaced, the combs should be fine-toothed (one pixel in height).  A poor quality result (as expected for straight downscaling by any typical NLE such as Premiere, from HD interlaced to SD interlaced) would instead exhibit combing with thick blurry teeth.
      • VirtualDub is a great tool in a close-inspection role because its Preview can zoom well beyond 100% and, vitally, it displays the video as-is, with no deinterlacing etc. of its own.
        • In the past I have searched for and experimented with a number of candidate tools to be effective and convenient in this role.  VirtualDub was the best I could find.
        • e.g. zoom to 200% to make the teeth easily visible.
      • Plain VirtualDub is unable to read MPEG2 video, but a plugin is available to add that ability:
        • The [mpeg2.vdplugin] plugin by fccHandler:
          • It reads straight MPEG2 files, including [.m2v], but not Transport Stream files such as [.m2t] from the Sony Z1.
          • For [.m2v] files, VirtualDub may throw up an audio-related error, since such files contain no audio.  Fix: In VirtualDub, disable audio.
        • Its ReadMe file contains installation instructions.  Don’t just put it in VirtualDub’s existing [plugins] folder.
  • DVD Construction via Adobe Encore.
    • Name the Project according to the disk-label (data) you would like to see for the final product.
      • If you use Encore to actually burn the disk, this is what gets used for that label.
      • Alternative options exist for just burning the disk, e.g. the popular ImgBurn, and this allows you to define your own disk-label (data).
    • Import the following as Assets:
      • Video file, e.g. [.m2v]
      • If Video File was an [.m2v] then also import its associated Audio file – it does not get automatically loaded along with the [.m2v] file.
    • Create required DVD structure
      • This is too big a topic to cover here.
    • Quality Inspection: [Play From Here]
      • Menu:[File > Check Project]
        • Click [Start] button
        • Typical errors are actions [Not Set] on [Remote] or [End Action]
          • I plan to write a separate blog entry on how to fix these.
        • When everything is ok (within the scope of this check), it says (in status bar, not as a message): “No items found”.
          • A worrying choice of phrase, but all it means is “no error-items found”.
    • Menu:[File > Build > Folder]
      • Don’t select [Disk], since:
        • May want to find and fix any remaining problems prior to burning to disk.
        • May want to use an alternative disk burning application, such as ImgBurn.
          • From forums, I see that many Adobe users opt for ImgBurn.
      • Set the destination (path and filename) for the folder in which the DVD structure will be created.
        • At that location it creates a project-named folder and within that the VIDEO_TS folder (but no dummy/empty AUDIO_TS folder).
          • I once came across an ancient DVD player that insisted on both AUDIO_TS and VIDEO_TS folder being present and also they had to be named in upper-case, not lower.
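If targeting such fussy old players, the missing folder can be added after the Build.  A sketch (Python; the destination path is an example only):

```python
from pathlib import Path

def prepare_dvd_root(dest):
    """Ensure the built DVD folder has both upper-case folders:
    VIDEO_TS (from Encore) plus an empty AUDIO_TS, which Encore
    omits but some ancient players insist on."""
    root = Path(dest)
    (root / "VIDEO_TS").mkdir(parents=True, exist_ok=True)
    (root / "AUDIO_TS").mkdir(exist_ok=True)  # empty, but present
    return sorted(p.name for p in root.iterdir())

# Example: prepare_dvd_root(r"D:\DVD_Builds\MyProject")
```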
      • Under [Disk Info] there is a colored bar, representing the disk capacity
        • Although the Output is to a folder, the Format is DVD, single-sided, which Encore realizes can hold up to 4.7 GB.
      • The [DVD ROM] option allows you to include non-DVD files, e.g. straight computer-playable files such as [.mp4]
        • These go to the root of the drive, alongside the VIDEO_TS folder.
      • Finally, click the [Build] button.
        • On one occasion, it failed at this stage, with an “Encode Failed” or “Transcode Failed” error (depending where I looked).  Solution: Shorten the file name.
          • OK, the name was long-ish, but I didn’t realize Encore would be so intolerant of that.  The idea of shortening it only struck me later (a guess born of years of experience with computing etc.).
  • Quality Inspection of the DVD
    • I have found Corel WinDVD to show results representative of a standard TV with a DVD Player.
    • I have found popular media players such as VLC and Windows Media Player (WMP) to behave differently, hence they are not useful for quality-checking.  Problems I found included:
      • False Alarm: Playing went straight to the main video, didn’t stop at the Main Menu (as had been intended).  However it worked fine on a standard physical DVD player.
      • Hidden Problem: In one case I deinterlaced improperly, resulting in “judder” on movements when played on TV (via physical DVD player).  However it appeared fine on both VLC and WMP.
  • Metadata
    • In the case of WMV files, just use Windows Explorer:[aFile >RtClk> Properties > Details] and edit the main items of metadata directly.
    • For DVD generated by Adobe Encore, the Disk label (data) is the same as the Project name.
      • ImgBurn, a popular alternative to Encore as regards actually burning a disk, provides a way of changing this disk-label.

Adobe CS6 (Creative Cloud): Activation/Deactivation/Migration: Problems & Solutions

Friday, July 12th, 2013

I get the main gist of Adobe’s CS6 Cloud concept, which is not as flexible as the Kindle model, but I am nevertheless slightly “cloudy” or at least hazy over practical details like how to seamlessly transfer from one machine to another, and I have concerns such as what would happen if my main machine became unavailable, for example due to loss or damage/corruption.  Or what would happen if I forget to exit or deactivate (whatever) on one machine (e.g. at a work location) then would it still be possible to work on another machine (e.g. at home or remote location)?  I am also concerned whether there is any potential for serious hiccups and delays to a project in progress, resulting from any unknown (to me) intricacies of Adobe’s license control system.  So I set forth (on the web) to find out:

My Summary:
(The following points are based on my own interpretation of Adobe advice.
I take no responsibility for their correctness, especially since I started from a position of such uncertainty.  However, I hope they are helpful.)

  • There are two stages:
    • Installation:
      • Can Install only to a maximum of two machines
      • These can be any mixture of Windows and Mac.
      • Could install to a third machine only after (deactivating if active and) uninstalling from either of the existing two machines.
        • (What an un-cloud-like nuisance, wasting time on this, together with worries over any possible loss of plugins, presets, preferences etc.)
    • Activation:
      • Activate/Deactivate: Menu:[Help] – in any of the CS6 applications.
      • Can Activate only one of these (two) “CS6-carrying” machines at a time.
      • Not as flexible as Apple’s FCP7 used to be, where the only condition was essentially that both instances could not be running at the same time.
  • Some potential problems, avoidance and solutions:
    • Virtual machine: Running CS5 software on a virtual machine can increase the activation count.   Solution: Start the software within the virtual machine then Menu:[Help > Deactivate].
    • Computer Modification: Changing (e.g. upgrading) a computer’s configuration (e.g. hardware, hard drive, or operating system).  Have your serial number handy, and click the [Chat Now] button on the Adobe webpage to talk with a live agent.
    • Locate your serial number:
      • Go to
      • Click Sign in, in the upper-right corner.
        • For Adobe ID, enter your email address (the address you originally used to create your Adobe ID and to download or register your product).
        • If you don’t remember your password, click [Trouble signing in?] below the [Sign In] button.
      • From the Welcome menu in the upper right, choose [My products and services].
        • A list of your registered or downloaded products appears.
      • Click the triangle in the left column adjacent to the product name.
        • Your serial number appears below the product name.
    • Activation Error-Code: If your activation attempt fails with an error code number (for example 93:-12 or 93:-14), look up the code in Activation error codes:
    • Computer Inaccessible: Can’t access the previous computer on which you installed the software? Click the [Chat Now] button to talk with a live agent. Be sure to have any purchase-related information ready.
    • Failed to Deactivate before Uninstalling: Uninstalled your software without first deactivating it?  Reinstall the software (any one application, presumably) and then, in that application, do Menu:[Help > Deactivate].
    • Forgot to Exit the application on a works machine?  Tough (I guess), or maybe ask really nicely (and hope)?  A risky situation to be steered clear of…


AviSynth Scripting Basics / Overview

Monday, July 8th, 2013

Multi-Threading in AviSynth

Monday, July 8th, 2013

Enabling multi-threading in AviSynth is dead-easy!


  • Get Modified AviSynth MT
  • Make a copy of the existing [avisynth.dll] on your system
  • Replace the original [avisynth.dll] with the one from “Modified AviSynth MT”
  • Use SetMTMode (with appropriate parameters) at the start of your script.
  • In the case of my simple scripts, that appears to be sufficient!
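
For a concrete picture, a minimal MT script might look like the following (a sketch only: the thread count and filter chain are illustrative, and the correct SetMTMode mode numbers depend on which filters you use — check the MT build’s documentation):

```avisynth
SetMTMode(5, 4)          # mode 5 (serialized) for the source filter, 4 threads
AviSource("input.avi")   # source filters are generally not thread-safe, hence mode 5
SetMTMode(2)             # switch to mode 2 (threaded) for the filters that follow
Tweak(sat=1.1)           # ...your ordinary filter chain goes here...
# Some MT builds also want Distributor() as the very last line -- see your version's notes.
```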


Windows 7: Backing-Up

Monday, July 8th, 2013

This looks like a good article / explanation.

VirtualDub: Processing Modes (e.g. for YUV-preservation)

Monday, July 8th, 2013

(item from 2003, but still valid today)

yEd: Multi-Line Labels

Monday, July 8th, 2013

In yEd, the wonderfully flexible, smart and free diagramming/graphing tool, suppose I want to write some notes/prose or even paste-in some script (dramatic or algorithmic).  That can be done as follows:

  • Several options:
    • Force a new-line:
      • [Control-Enter]
      • [Enter]
        • when the label is entered in the tabular view of the node’s properties,
        • or in the node’s properties dialog (which can be opened by hitting F6 [Mac OS: Command-I])
      • HTML Markup
        • Example:
          <html><div style="text-align:center">This is a<br>multi-line label</div></html>
    • Automatic text-wrapping:
      • Automatic text wrapping for a label is enabled via the [Cropping] configuration. For a node label, for example, it can be set in the node’s Properties dialog under the [Label] tab:
        • Placement: [Internal: Top]
        • Size: [Fit Node Width]
        • Configuration: [Cropping]

GenArts Sapphire: Video Tutorials / Demonstrations

Monday, July 8th, 2013

Progressive to Interlaced via Optical Flow

Monday, July 8th, 2013

Suppose you have original footage in a format different from that of the required product.  For example, you have progressive footage and require an interlaced product.  Or perhaps the given footage is interlaced, but at a different resolution from that of the product.

While it is naively possible to simply “bung whatever footage one shot into an NLE and render the required format”, this will not in all cases give the optimum quality.  Obtaining a quality interlaced product from progressive footage (e.g. as-shot, or an intermediate, or an animation) requires some “beyond the box” thinking and processing.

The following article extract (link and bullet-points) explains how to go from Progressive to Interlaced using a video-processing application such as After Effects.

  • The first stage is to derive double-rate progressive footage from the original, specifically via motion-compensated/motion-estimated/optical-flow tools and techniques, as opposed to simple frame-blending (which would give rise to unwanted motion-blur artefacts).  This can be achieved via various applications (e.g. as listed in the article).  For such processes I have traditionally used AviSynth (e.g. QTGMC & MVTools, which I covered at), but I look forward to evaluating other applications in this regard.
    • For footage that is already interlaced but at a different resolution from the required product, I typically use AviSynth’s TDeint plugin, which uses motion/optical-flow methods to derive a complete progressive frame corresponding to each field of the given footage.  These frames can then be resized to the required product resolution, prior to the second stage.
  • The second stage is to derive from this (double-rate progressive footage) the required interlaced footage, by extracting each required field (upper and lower alternating) from each frame in turn.  For this, I have traditionally used Sony Vegas, which does this well.  The article claims After Effects does it well, and better than (the erstwhile) Final Cut Pro, but no mention is made of Adobe Premiere (though it may well perform this task well).  Naturally, AviSynth could also be used for this, either by extending its script or as a separate script.
    • I queried whether Premiere could do it, on the Adobe Premiere forum:
    • One reply said <<Premiere is pretty smart about such matters.  You should have no issues.>>
  • Note that it can be useful to preserve a double-rate intermediate file for other purposes (e.g. downscaling of HD to SD or maybe in future, double-the-current-normal-rate will become the new normal).


    • Interlacing Progressive Footage
    • {The following is slightly re-worded/paraphrased from the original}
    • Frame-Doubling:
      • The first step is to double up the literal frame count, resulting in one of the following:
        • Double the duration.
        • Double the frame-rate.
      • In order to do this properly, the new frames need to be interpolated by means of a vector-based pixel warping or morphing algorithm.
      • This can be accomplished by a variety of different applications, including:
        • Motion 3 (by use of the Optical Flow feature)
        • After Effects (by use of Layer > Frame Blending > Pixel Motion)
        • Shake
        • Twixtor plugin (which can be used in Final Cut Pro, After Effects and several other host applications)
        • Boris FX
      • You do NOT want to frame-blend this step.
      • The best way to tell if this step is working correctly is to look at the new frames that have been created. If they have an overlapping ghost look to them, then it’s frame-blending, which you do not want. If the new frames literally look like new frames with no ghosting or overlapping, then you’re on the right track.
    • Interlacing:
      • This can be done in After Effects, Final Cut Pro and pretty much any other video application
        • After Effects renders out a cleaner interlace (actually, a perfect interlace) than does Final Cut Pro
      • In Adobe After Effects:
        • Setup:
          • Select the rendered clip in the Project window, right-click it, and select Interpret Footage > Main.
          • Suppose the original clip was “30p”, i.e. 29.97 fps, then the rendered clip will be “60p” i.e. 59.94 fps.
          • In the Frame Rate section, conform the frame-rate to the correct value, namely 59.94 fps, or “60p”.
          • Create a new Comp of “60i”
          • Place the 60p clip in that Comp’s timeline
          • (Even though your timeline is only 29.97 FPS and you can’t see the extra frames when scrubbing frame by frame, don’t fear; when you render the final clip, it will use the extra frames in the 60p clip to create the new fields.)
        • Render:
          • Render this by Menu:[Composition > Make Movie].
          • This should open up the [Render Queue] window with a new comp in the queue. You’ll need to change the Render Settings either by selecting a pulldown option next to it or by clicking the name next to the pulldown option.
          • Ensure you render this clip with [Field Rendering] turned on. You’ll need to select either Upper Field First (UFF) or Lower Field First (LFF), depending on your editing hardware and format of choice.

VirtualDub’s [Fast Recompress] Option Maintains YUV Color-Space

Monday, July 8th, 2013

Be not afraid to use VirtualDub to save AviSynth script results to a file, provided VirtualDub is in its [Fast Recompress] mode.

I had read that, for the benefit of its image-altering filters, VirtualDub operates internally in RGB color-space, as opposed to YUV color-space, a more compact alternative (typically with subsampled chroma) that can represent a subset of RGB-space and is typically used for video storage and transmission.  Given this, when running VirtualDub to open one file, pass it through some “Filters” (effects) and generate another, the implicit color-space transformations would be YUV->RGB->YUV, thereby losing some quality (e.g. quantization banding on smooth gradients such as skies).

In contrast, AviSynth generally maintains YUV-space, unless your script tells it otherwise.  It’s designed so that opening an [.avs] script is broadly equivalent to opening a file.

This initially caused me concern at the thought of using VirtualDub to “run” (open and stream) an AviSynth script file (or rather, AviSynth’s result from that script) and save the result to an [.avi] file.  Was there a way of avoiding the intermediate RGB color-space?  The answer is YES.

When VirtualDub is in its [Fast Recompress] mode, it not only gains speed but also avoids quality-loss, by maintaining the YUV color-space of the AviSynth video-stream.
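
One quick way to reassure yourself before saving is to have the script report its own colorspace (Tweak and Info are standard AviSynth filters, and both operate happily in YUV):

```avisynth
AviSource("capture.avi")   # decodes in the source's own colorspace, e.g. YV12 or YUY2
Tweak(sat=1.1)             # Tweak works natively in YUV -- no RGB round-trip
Info()                     # overlay reports the colorspace; confirm it is still YUV
```

If Info() reports an RGB format, some filter in the chain has forced a conversion, and Fast Recompress alone will not save you.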

Adobe Premiere (CS6): Maximum Bit Depth & Rendering

Monday, July 8th, 2013

I knew basically what these were about:

  • Max bit depth: to make use of all the information in a more-than-8-bit video file, such as a 10-bit recording.
  • Max render quality: to employ higher-quality but slower scaling algorithms (only relevant when scaling, of course).

However, there are options to set them in the Sequence and also in the Render.  Like others, I wanted to know firmly (not just by guesswork) how/when to use these.

The answers appear to be:

  • Their values in the Sequence settings only affect the preview, not the render.
  • Their values in the Render dialog override their values in the Sequence.


  • In the Sequence, one would tend to leave them disabled, other than temporarily for quality check or comparison.
  • In Render dialog, one might tend to have them initially disabled, for render-speed, then enable them later on for final quality-check and production.


Using Cineform’s HDLink to Re-Wrap (ReWrap) from QuickTime (QT) MOV to AVI

Monday, July 8th, 2013

Rewrapping means taking the encoded content out of one container file-type and putting it into another, with no decode/re-encode happening.  For example, given a [.mov] file, one might rewrap it to an [.avi] file.  These file-types are each merely containers, designed to hold various encode formats (e.g. DV, Lagarith, Cineform, DivX) without having to “understand” them.

Rewrapping may for example be required for some Windows-based applications that don’t handle [.mov], either at all or (as I have encountered) not fully.  Similarly, some applications (Windows- or Mac-based) will only work (or work properly) with [.mov] files.  For instance, I have found the Windows variant of Boris RED (versions 4 and 5) to work properly with HD 50 fps progressive only via the [.mov] container, as reported at, while someone else has found Avid Media Composer 5 to prefer [.mov], as reported at

One tool for doing this is HDLink, a utility bundled with the Windows version of the GoPro-Cineform “visually lossless” wavelet-based codec (which I have used for a number of years).  HDLink can convert Cineform files from [.mov] to [.avi] and vice-versa.  Incidentally, for the Mac version of Cineform there is a broadly equivalent utility called ReMaster, but it can only convert in one direction, from [.avi] to [.mov].

To re-wrap:

  • (Just now, I merely used the [Convert] tab, selected the file and hit [Start], and all worked fine, but perhaps the full work instruction should be as follows?)
  • Use HDLink’s [Convert] tab.
  • Select/Ensure the required destination file-type:
    • Click [Prefs] button (at bottom of dialog)
    • In [Prefs], ensure [Destination File Format for … Conversion] is set as you require.
    • And (I guess?) enable [Force re-wrap CF MOV->AVI], to ensure it doesn’t sneakily do a transcode?
  • Select the Input file and go.
  • The rewrapped version will appear in the same folder.

The process is of course much faster than transcoding, involving only simple computation; hence the overall speed will tend to be limited by the storage (e.g. the hard disk and/or its transfer bus, especially if it’s a slow old thing like USB2) rather than by the CPU (which may consequently show an extremely low % usage).
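
As a rough, illustrative sanity check on that storage-bound claim (all numbers assumed, not measured): suppose a 10 GB Cineform file on a USB2 drive with a practical throughput of about 30 MB/s.

```
Read + write I/O:     2 x 10 GB = 20 GB  (~20,000 MB)
Estimated duration:   20,000 MB / 30 MB/s ~ 670 s ~ 11 minutes
```

If the read and write target the same physical drive, that bandwidth is shared between them, so real-world times can be noticeably longer.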


HD2SD – A “Package” for AviSynth

Monday, July 1st, 2013

HD2SD is an HD-to-SD convertor implemented by Dan Isaac as an AviSynth “package” (my term, for the plugin of that name and its dependent bits).

Its development was apparently prompted by the relatively poor scaling performance of NLEs at that time (e.g. Adobe CS4).  Some claim that it is still superior, even to Adobe CS6’s latest CUDA-based scaling algorithms, though those run a close second.  In my own experience to date, converting 1440×1080 HDV footage to its 720×576 PAL-SD-Wide equivalent, the results were poorer than those of Sony Vegas 10’s “Best” (Bicubic) scaling algorithm.  Regardless, there is always the possibility of error in such experiments, and in any case its “place in history” and potential for future use remain.



Want to Establish Best Workflow(s) for Combined HD to HD (e.g. Blu-Ray) & SD (DVD)

Monday, July 1st, 2013

The story so far:

  • I have a resurfaced (old) project shot in HDV 1440×1080 i50, Video Levels 16-255.
  • This has been edited in Sony Vegas 10, as a project consistent with the footage (hence HDV), but with Audio 44kHz (due to predominantly CD music background), and with levels over full-range 0-255.
  • My first attempt involved (from Vegas 10) rendering down to SD, encoded in GoPro-Cineform.  This I imported to Adobe Encore and generated a DVD which looked acceptable.
    • In retrospect, I discovered that I had enabled the Vegas renderer’s “Stretch Video / Don’t Letterbox” option.  Ideally I’d have wanted the video cropped (top and bottom) to fill.  I am less familiar than I would like with Vegas 10’s nuances in this respect.
  • Subsequently I experimented with the AviSynth HD2SD approach, which prior to Adobe CS5 was claimed by others to give superior results to scaling within Premiere etc.  However:
    • It has since been observed by some that Adobe CS6’s new CUDA-based scaling algorithms are almost as good.
    • In my own experiments with using HD2SD on my current (old) project’s HDV-to-SD requirement, I found HD2SD’s results inferior to (e.g. more blurred than) Sony Vegas’s “Best” (Bicubic) scaling processes, which I believe/assume to happen equivalently both in-project and on-render.


Frame Image Scaling in Adobe CS6 (e.g. Premiere-to-AME CUDA Works Best; HD-to-SD Requires Top&Bottom-Crops)

Monday, July 1st, 2013

Frame image Scaling in Adobe CS6

  • I think I read on various webpages that downscaling and encoding within Encore should be avoided.
    • CS6 CUDA-Based Scaling is Sophisticated/High-Quality:
      • Adobe Media Encoder (AME) in CS6 has sophisticated CUDA-based scaling algorithms that go beyond its non-CUDA-based ones.
      • They are so good that they are said to be broadly equivalent to AviSynth’s HD2SD.
      • But the CUDA-based algorithms only come into play when AME is encoding direct from a Premiere project (regardless of whether that project is open).
      • They do not happen when encoding either a plain media (e.g. video) file or an After Effects (AE) Composition (Comp).
    • HD to SD Conversion:
      • HD frame (hence sensor and screen) aspect (ratio) is squarer than PAL-SD-Wide.
      • Hence to avoid distortion, one can either:
        • Crop the HD top and bottom (the most pragmatic solution, but then bear in mind effects on “Safe” regions)
        • “Pillarbox” the HD within the SD frame, i.e. pad the HD image’s left and right margins, typically with black.
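
The crop-to-fill route can be sketched in AviSynth terms (illustrative values only: the filename is hypothetical, and the exact crop amount depends on which SD pixel-aspect convention you are targeting):

```avisynth
AviSource("hdv_1440x1080.avi")   # hypothetical 1440x1080 HDV source
Crop(0, 32, 0, -32)              # trim top and bottom (32 lines each, illustrative)
Spline36Resize(720, 576)         # high-quality resample to the PAL-SD frame size
```

Note that this assumes progressive frames; for interlaced HDV, deinterlace (or separate fields) first, since resizing interlaced frames directly mixes the two fields, as discussed in the Progressive-to-Interlaced post above.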