Archive for the ‘workflow’ Category

Premiere >AAF> Avid: Failed (though reverse works ok)

Tuesday, February 25th, 2014

Given a simple 3-minute dramatic scene with footage from BMCC (as DNxHD 185, HD 1920×1080 at 25fps) and a Windows-7 system:

From Adobe Premiere CC (latest version) I exported AAF.  Then in Avid I imported that AAF.  Result: a Bin was created, containing what appeared to be (from a brief glance) all relevant Media and Sequence objects (now in Avid’s representation), but the Media objects were offline/unlinked and various “cryptic” popup error messages appeared from Avid.

I had naively assumed that the Media objects would have been AMA-linked to the source footage, which by the way included DNxHD recorded by BlackMagic Cinema Camera.  However, not only were they not linked, but Avid’s Relink function failed to recognize them.

I had previously succeeded in exporting AAF from Avid to Adobe.

A forum post says Adobe can read Avid but not vice-versa – confirming my (limited) experience.  One can only guess at which company is at fault here, but one poster blames Adobe.  Regardless, I wasn’t impressed by Avid’s programmer-level “cryptic” error messages.

I tried Bin:[Select Clip > RightClick] but the [Relink to AMA File(s)] option was greyed-out.  So I tried the next-best (RightClick) option, namely [Import].  The Import process took significant time, because (as I later confirmed) it was doing a transcode (to DNxHD 120) rather than a re-wrap.  Surprising, given the source was already DNxHD of the right format and better quality…  And this import didn’t replace the right-clicked clip, it just added the import to the bin as an additional clip.

Not an urgent project, so I give up for now…

(more…)

Shared Storage Options for Windows & Mac Video Editing Collaboration

Friday, October 18th, 2013

In summary:

There’s no magic option: each workstation needs a local storage volume with block-level data access (as opposed to simply file-level access), formatted to a file system that is native (doesn’t require translation) to that workstation’s operating system.  Migration and collaboration imply file copying/synchronization, which implies read-access to the “foreign” file system.  Mac OS can read (but not write) NTFS; Windows can only read HFS+ via third-party add-on utilities.  Furthermore, for speed and responsiveness appropriate to video editing, the local storage should ideally be RAID or SSD.  In either case, it is possible to split the local storage (e.g. via partitioning) into more than one file system.  At least, that worked on the multiple occasions I have taken that approach, and I have not been aware of any issues.

In greater detail:

Consider the challenge of setting up a shared data storage volume (e.g. RAID array or SSD) for video editing, such that either Windows or Mac computers can connect to it, and a video project started on (and saved to) one of those operating systems (OS) can be continued on the other (and vice versa).

My current solution is to split the drive into separate volumes, one for each OS.  For example I have done this on RAIDs of various kinds and on an internal drive for Mac systems bootable to either Mac OS or (via Boot Camp) Windows.  In the case of RAIDs I was advised against this by my system supplier, but I got the impression they were just being defensive, not knowing of any definite issues; to my knowledge I did not experience any.

It is not practical to have just one volume (necessarily, in that case, of one file-system format), because:

  • Mac OS on its own is able to read NTFS but cannot write to it.
    • This is a show-stopper.  Some of the major video editing applications (NLEs etc.), slightly disturbingly, may use (or for some functionality, even depend on) read/write access to source files and the folders containing them.
      • I initially, naively, imagined that video editing systems etc. would only ever read source media files, not write to them, or to the folders containing them.  However that proved very naive indeed…
        • In Apple’s (erstwhile) Final Cut Pro 7 I regularly used its (moving) image stabilization effect, SmoothCam.  Its analysis phase was typically slow and heavy – not something one would wish to repeat.  The result was a “sidecar” file with the same base name as the analyzed source file but a different extension, placed in the same folder as the source file.
        • I’m not certain, but I got the feeling that it could also alter source file (or folder) metadata, such as permissions, or make some kind of interpretation-change to media files in the QuickTime [.mov] media format.
      • Certainly, Adobe (on Windows and Mac) could adulterate both files (by appending XMP data – an Adobe media-metadata dialect of XML) and, depending on user-configuration, the folders they occurred in (in terms of sidecar files).
      • Sony Vegas also generates sidecar-files, e.g. for audio peaks.
  • File system translation add-ons can give Windows read/write access to HFS+ (ordinarily it cannot even read it) and give Mac OS write access to NTFS (ordinarily it can only read it), but they are not sufficiently transparent/seamless for the big real-time data access required by demanding video editing endeavours.
    • File system translation add-ons (to operating systems) exist, such as MacDrive to allow Windows to read/write HFS+, or Tuxera NTFS, Paragon NTFS or Parallels to enable Mac OS to read/write NTFS, but these (reportedly, and in part from my experience) only really work well for standard “Office” type applications, not so well for heavy (big and real-time) data applications such as video editing, where they can impede the data throughput.  Doh!
    • Some people have experienced obscure issues of application functionality, beyond data-movement speed issues.
    • {Also, I am concerned over the (unknown/imagined/potential) risk that the “alien” operating system and/or its translation utility might alter the file system in some way that upsets its appearance to the “home” operating system.}
  • FAT is universal but is a riskier option:
    • FAT is un-journaled, hence risks loss not only of individual files but of whole-volume integrity.
      • In video editing, corruption could be disastrous to a project, not only in terms of possible data-loss or time wasted and project delays on data recovery, but also in terms of “weird” effects during editing, such as poor responsiveness to commands, whose cause the user may not appreciate, or even an increased risk of unacceptable flaws in the final product.
    • FAT32 is essentially obsolete, because its maximum file size is (1 byte under) 4GB.
    • exFAT, a kind of “FAT64” is practical, and indeed a big successful corporate Mac-based production company once supplied me with many GB of footage on an exFAT-formatted external disk.
      • The largest file I have so far stored there is 40GB.  No problems.
  • NAS (Network-Attached Storage) sounds at first an easy option, but in my experience they impede big real-time data throughput (as stated earlier for “file system translation” add-ons). Double-Doh!
    • Such devices only permit file-level access.  Consequently, the client systems can e.g. create or retrieve folders and files, but cannot e.g. format the device or address it in terms of lower-level data structures.
    • A likely explanation for the “impediment” of a NAS (to data responsiveness and throughput) is that such devices store in a local format (typically they run Linux) that is invisible to the client, then translate to an appropriate protocol for each operating system accessing it.  They normally incorporate a bunch of such protocols.  As always, translation => overhead.
    • Other options, such as SAN and iSCSI, instead of providing file-level access to the client systems, offer the lower level of data-block access.  Thus they appear to the client system as would any local storage device, and can be formatted as appropriate to the client system.
  • One suggestion I saw was to use a Seagate GoFlex drive, which can be used (read/write) with both Mac and Windows.  But the supplier’s FAQ (about that drive) indicates that it depends upon a translator utility for the Mac:
    •  If you would like to be able to “shuttle” data back and forth between a Mac and a PC, a special driver needs to be installed onto the Mac that allows it to access a Windows-formatted drive (i.e. NTFS). Time Machine will not work in this case, nor will Memeo Premium software for Mac. However, if you want your GoFlex solution to also work with TimeMachine, the drive will need to be reformatted to HFS+ journaled.

So I guess there is no “magic storage” option, my main work setup will have to remain based on separate volumes for each OS.

When transferring an editing project from one OS to another, the following actions will be necessary:

  • Copy any absent or updated files across.
    • e.g. via a file-synch utility such as Syncovery (see the sketch below this list).
  • Allow time etc. for possible file re-linking, re-indexing, re-preview generation, re-“SmoothCam” (or equivalent).
    • This aspect is down to the editing application etc., as opposed to the operating or file systems themselves.
  • Ensure any effects used in the edit are present on both systems.
    • If so then these should presumably still work…
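
As regards the first of those actions, here is a minimal Python sketch of the kind of one-way copy/synchronization that a utility like Syncovery automates with far more care and options.  The paths are hypothetical, and the 2-second slack allows for the coarse timestamp granularity of some file systems:

    import os, shutil

    def sync(src, dst):
        """Copy files that are absent or newer from src to dst (one-way)."""
        for root, dirs, files in os.walk(src):
            rel = os.path.relpath(root, src)
            target_dir = os.path.join(dst, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in files:
                s = os.path.join(root, name)
                d = os.path.join(target_dir, name)
                # Copy if missing at the destination, or if the source is newer
                # (2s slack covers FAT/exFAT timestamp granularity).
                if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d) + 2:
                    shutil.copy2(s, d)

    sync(r"E:\Projects\Current", r"F:\Projects\Current")  # hypothetical volume paths

Note it never deletes anything at the destination; for true two-way synchronization (or deletion-mirroring) a dedicated utility remains the safer bet.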

(more…)

Groove Folder Synchronization? What’s that?

Friday, August 23rd, 2013

While recovering and auditing a laptop I came across “Groove Folder Synchronization”.  I had vaguely come across its name before, but that’s all.

It’s apparently a Dropbox-like thing (loosely speaking), by Microsoft.

(more…)

Tools/Workflow Philosophy: Best-of-Breed rather than Already-Integrated Suite ?

Sunday, July 14th, 2013

I am becoming less enthusiastic about the “Integrated Suite” philosophy or perhaps actuality of Adobe CS6, in favour of a “Best of Breed” approach, where I cherry-pick the best tool for each kind of job and then design or discover my own workflow for integrating them.

I reached this conclusion from the following experiences:

  • As regards editing itself:
    • For general “A & B Roll” editing, I find Premiere is ok, though for improved usability, I’d prefer a Tag-based system (as in FCPX) to the traditional Bin-based one (as in Adobe & Avid).
    • For MultiCam editing, even in Adobe CS6, Premiere does the job but I find it clunky, frustrating and limited at times, like it has not yet been fully “baked” (though “getting there”)…
      • e.g. In the two such projects I have so far worked on, there has been an annoying 2-second delay from pressing the spacebar to actual playing.  Maybe some kind of buffering?
        • I found a setting for “Pre-roll” in the Preferences but altering it made no difference.
        • The following thread, http://forums.adobe.com/thread/387405, suggested that the embedded audio (in the video file) could be the issue, the solution to which was to relink to a WAV file.
      • e.g. It brings up a separate MultiCam Monitor instead of using the Source Monitor.  You have to remember to activate this each time before playing.  I find that a nuisance (and time-waster when I forget) especially because I tend to alternate multicam editing as such with tweaking the cut timings until they feel right, and sometimes that can only be done in retrospect.
      • e.g. When you stop playing in multicam mode, it places a cut (that you probably didn’t want) wherever the playhead happens to be at the time.
        • I see I am not the only one complaining about this: “ExactImage, Sep 15, 2012” at http://forums.adobe.com/thread/1069438
          • A workaround given at that link: “Before you stop the playback, press the key 0 (zero) of the keyboard, and then you can stop the play (with the Space bar) without the cut in the timeline.” Duh!
      • e.g. Markers are really useful in multicam, but while Premiere’s are steadily improving with product version, they are way clunkier and more limited than those in Sony Vegas:
        • e.g. I put a marker at the start of an interesting section (of timeline), I select it and define its duration to be non-zero, so I can stretch it out to mark a region; then I drag the playhead to find the end of that interesting section and try to drag the marker’s right-hand end up to the playhead, but instead the playhead gets reset to the start of the marker.  Duh!
        • e.g. Markers cannot be promoted from clip (media or nested Sequence) to current Sequence.
        • e.g. waveform displays (assuming you can get them to appear in the first place) go blank when sliding clips around.  Really annoying when trying to synchronise to music etc.
    • …so I will explore other options for multicam:
      • In the past (as will be apparent from the above) I have had more joy, as regards Multicam, with Sony Vegas.
      • I will check out what people think of other NLEs as potential “Best of Breed” for multicam editing.  Thus far I have heard (from web-search) good things about FCPX and LightWorks.
  • For audio enhancement, such as denoising, I find iZotope’s RX2 far superior to the one in Adobe Audition.
  • For making a DVD:
    • I find Encore to be handy in some ways but limited and clunky in others.
      • e.g. can’t replace an asset with one of a different type (e.g. [.avi] and [.mpg]).
    • The advantage of using an integrated DVD-Maker such as Encore might be limited:
      • e.g. many people are not using the direct link, but exporting from Premiere/AME, in which case any third-party DVD Builder could be used.
      • The only significant advantage I am aware of is the ability to define Scene/Chapter points in Premiere and have them recognised/used by Encore.
        • But maybe some third-party DVD Builder applications can also recognise these?  Or can be configured/helped to do so?  Worth finding out.

Best Workflow for High-resolution Master (e.g. HD or HDV) to Multi-Format Including SD-DVD

Saturday, July 13th, 2013

What is the best workflow for going from high-resolution footage, potentially either progressive or interlaced, possibly through an intermediate Master (definitely in progressive format), to a variety of target/deliverable/product formats, from the maximum down to lower-resolution and/or interlaced formats such as SD-DVD?

Here’s one big fundamental: naively one might have hoped that long-established professional NLEs such as Premiere would provide high-quality, optical-processing-based downscaling from HD to SD, but my less optimistic intuition about the unlikelihood of that proved correct.  In my post http://blog.davidesp.com/archives/815 I note the BBC Technical standards for SD Programmes state: <<Most non linear editing packages do not produce acceptable down conversion and should not be used without the broadcaster’s permission>>.

I had only ever used Adobe (CS5.5 & CS6) for web-based video production, and early experiences in attempting to produce a number of target/deliverable (product) formats proved more difficult and uncertain than I had imagined…  For a current project, given historical footage shot in HDV (1440×1080, fat pixels), I wanted to generate various products, from various flavors of HD (e.g. 1920x1080i50, 1280x720p50) down to SD-DVD (720×576).  So I embarked on a combination of web-research and experimentation.

Ultimately, this is the workflow that worked (and satisfied my demands):

  • Master: Produce a 50 fps (if PAL) progressive Master at the highest resolution consistent with original footage/material.
    • Resolution: The original footage/material could e.g. be HD or HDV resolution.  What resolution should the Master be?
      • One argument, possibly the best one if only making a single format deliverable or if time is no object, might be to retain the original resolution, to avoid any loss of information through scaling.
      • However I took the view that HDV’s non-standard pixel shape (aspect ratio) was “tempting fate” when it came to reliability, and possibly even quality, in subsequent (downstream in the workflow) stages of scaling (down) to the various required formats (mostly square-pixel, apart from SD-Wide’s so-called “16:9” pixels of 1.4568 aspect ratio – or other, depending where you read it).
      • So the Master resolution would be [1920×1080] (HDV’s 1440 “fat” pixels of 4/3 aspect map exactly to 1920 square pixels: 1440 × 4/3 = 1920).
    • Progressive: The original footage/material could e.g. be interlaced or progressive, but the Master (derived from this) must be progressive.
      • If original footage was interlaced then the master should be derived so as to have one full progressive frame for each interlaced field (hence double the original frame-rate).
        • The concept of “doubling” the framerate is a moot point, since interlaced footage doesn’t really have a frame rate, only a field rate, because the fields are each shot at different moments in time.  However, among the various film/video industry/application conventions, some people refer to 50 fields/second interlaced as 50i (or i50) while others refer to it as 25i (or i25).  Context is all-important!
    • Quality-Deinterlacing: The best way to convert from interlaced fields to frames is via motion/pixel/optical-based tools/techniques (a toy illustration of the fields-to-frames principle appears at the end of this sub-list):
      • I have observed the quality advantage in practice on numerous projects in the distant past, e.g. when going from HDV or SD (both 50i) to a variety of (lower) corporate web-resolutions.
      • This kind of computation is extremely slow and heavy, hence (for my current machines at least) more an overnight job than a real-time effect… In fact for processing continuously recorded live events of one or two hours, I have found 8 cores (fully utilised) to take a couple of 24-hour days or so – for [AviSynth-MultiThread + TDeint plugin] running on a [Mac Pro > Boot Camp > Windows 7].
      • But (as stated) this general technique observably results in the best quality, through least loss of information.
      • There are a number of easily-available software tools with features for achieving this, Adobe and otherwise:
        • e.g. AviSynth+TDeint (free), After Effects, Boris.
        • e.g. FieldsKit is a nice convenient deinterlacing plugin for Adobe (Premiere & After Effects), and is very friendly and useful should you want to convert to a standard progressive video (e.g. 25fps), but (at this time) it can only convert from field-pairs to frames, not from fields to frames.
          • I submitted a Feature Request to FieldsKit’s developers.
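      • To make the fields-to-frames (“bob”) principle concrete, here is a toy Python/numpy sketch – purely conceptual, since real tools such as TDeint add motion-adaptive interpolation rather than naive line-doubling:

            import numpy as np

            def bob_deinterlace(frame):
                """Split one interlaced frame into two line-doubled progressive frames."""
                top = frame[0::2, :]     # upper field (the earlier moment, for UFF material)
                bottom = frame[1::2, :]  # lower field (the later moment)
                # Naive line-doubling of each field back to full height; motion-adaptive
                # tools instead interpolate the missing lines intelligently.
                return np.repeat(top, 2, axis=0), np.repeat(bottom, 2, axis=0)

            interlaced = np.random.randint(0, 256, (1080, 1440), dtype=np.uint8)  # dummy HDV-sized frame
            f1, f2 = bob_deinterlace(interlaced)
            print(f1.shape, f2.shape)  # two (1080, 1440) frames out for one frame in: 50i -> 50p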
    • Intermediate-File Format: A good format for an Intermediate file or a Master file is the “visually lossless” wavelet-based 10-bit 4:2:2 (or more) codec GoPro-Cineform (CFHD) Neo.
      • Visually lossless codecs (such as CFHD) save considerable amounts of space as compared to uncompressed or mathematically lossless codecs like HuffYUV and Lagarith.
      • I like Cineform in particular because:
        • It is application-agnostic.
        • It is available in both VFW [.avi] and QuickTime [.mov] varieties (which is good because I have found that it can be “tempting fate” to give [.mov] files to certain Windows apps, and indeed not to give them to others).  The Windows version of CFHD comes with an [.avi] <-> [.mov] rewrapper (called HDLink).
        • Another advantage is that CFHD can encode/decode not only the standard broadcast formats (and not only HD) but also specialized “off-piste” formats.  I have found that great for corporate work. It’s as if it always had “GoPro spirit”!
        • CFHD Encoder Settings from within Sony Vegas 10:
          • These settings (given under Render Settings below) worked for me in the context of this “Sony-Vegas-10-Initially-then-Adobe-CS6-centric” workflow.
    • Technical Production History of a Master for an Actual Project:
      • This is merely for my own reference purposes, to document some “project forensics” (while I still remember them and/or where they’re documented):
      • This was a “Shake-Down” experience, not exactly straightforward, due to an unexpected “hiccup” between Sony Vegas 10 and AviSynth-WAVSource.  Hiccups are definitely worth documenting too…
      • The stages:
        • Sony Vegas Project: An initial HDV 50i (to match the footage) Intermediate file, containing the finished edit, was produced by a Sony Vegas 10 project:
          • [Master 021a (Proj HDV for Render HDV)  (veg10).veg] date:[Created:[2013-07-01 15:30], Modified:[2013-07-03 20:07]]
          • Movie duration was about 12 minutes.
        • Audio & Video Settings:
          • Project Settings:
            • HDV 1440×1080 50i UFF 44.1KHz
              • The audio was 44.1KHz, both for Project and Render, since most of the audio (music purchased from Vimeo shop) was of that nature.
          • Render Settings:
            • I believe I will have used the following Sony Vegas Render preset: [CFHD ProjectSize 50i 44KHz CFHD (by esp)] .
              • (Though I think there may have been a bug in Vegas 10, whereby the Preset did not properly set the audio sampling frequency, so it had to be checked & done manually.)
            • The CFHD Codec settings panel only offered two parameters, which I set as follows: Encoded format:[YUV 4:2:2], Encoding quality:[High]
          • The result of Rendering from this Project was the file:
            • [Master 021a (Proj HDV for Render HDV)  (veg10).avi] date:[Created:[2013-07-01 15:30], Modified:[2013-07-01 18:58]]
              • Modified date minus creation date is about 3.5 hours, which I guess accounts for the render-time (on a 2-core MacBook Pro of 2009 vintage running Windows 7 under Boot Camp).
        • The next stage of processing was to be by AviSynth.
          • However AviSynth had problems reading the audio out of this file (it sounded like crazy buzzes).
          • To expedite the project, and guessing that Vegas 10 had produced a slightly malformed result (maybe related to the audio setting bug?), and hoping that it was just a container-level “audio framing” issue, I “Mended” it by passing it through VirtualDub, in [Direct Stream Copy] mode, so that it was merely rewrapping the data as opposed to decompressing and recompressing it.  The resulting file was:
            • [Master 021a HDV Mended (VDub).avi], date:[Created:[2013-07-08 18:22], Modified:[2013-07-08 18:30]]
          • Since that time, I have discovered the existence of the Cineform tool CFRepair, from a forum post at DVInfo: http://www.dvinfo.net/forum/cineform-software-showcase/507364-problem-cfrepair.html which itself provided a download link as http://miscdata.com/cineform/CFRepair.zip.
            • Worth trying it out sometime, on this same “broken” file…
        • This was processed into full HD progressive (one frame per field, “double-framerate”) by an AviSynth script as follows, its results being drawn through VirtualDub into a further AVI-CFHD file, constituting the required Master.
          • AviSynth Script:[HDV to HD 1920×1080.avs] date:[Created:[2013-07-04 18:13], Modified:[2013-07-08 22:05]]
            • I used AvsP to develop the script.  It provides assistance of various kinds and can immediately show the result in its preview-pane.
            • Multi-threaded:
              • To make best use of the multiple cores in my machine, I used the AviSynth-MT variant of AviSynth.  It’s a (much larger) version of the [avisynth.dll] file.  For a system where AviSynth (ordinaire) is already installed, you simply replace the [avisynth.dll] file in the system folder with this one.  Of course it’s sensible to keep the old one as a backup (e.g. rename it as [avisynth.dll.original]).
            • Audio Issue:
              • This particular script, using function [AVISource] to get the video and [WavSource] to get the audio, only gave audio for about the first half of the movie, with silence thereafter.
              • Initially, as a workaround, I went back to VirtualDub and rendered-out the audio as a separate WAV file, then changed the script to read its [WAVSource] from this.
              • That worked fine – “good enough for the job” (that I wanted to expedite).
              • However afterwards I found a cleaner solution: Instead of functions [AVISource] and [WAVSource], use the single function [DirectShowSource].  No audio issues.  So use that in future.  And maybe avoid Vegas 10?
          • The script was processed by “pulling” its output video stream through VirtualDub, which saved it as a video file, again AVI-CFHD.  Since no filters (video processing) were to be performed in VirtualDub, I used it in [Fast Recompress] mode.  In this mode, it leaves the video data in YUV (doesn’t convert it into RGB), making it both fast and information-preserving.  Possibly (not tested) I could simply have rendered straight from AvsP:[Tools > Save to AVI].  When I first tried that, I got audio issues, as reported above, hence I switched to rendering via VirtualDub, but in retrospect (having identified a source, perhaps the only source, of those audio issues) that switch might have been unnecessary.
      • The resulting Master file was [Master 021a HDV 50i to HD 50p 1920×1080 (Avs-VDub).avi] date:[Created:[2013-07-08 21:55], Modified:[2013-07-08 22:47]]
        • “Modified minus created” implies a render-time of just under an hour.  This was on a [MacBook Pro (2009) > Boot Camp > Windows 7] having two cores, fully utilised.
  • Quality inspection of Master:
    • Check image quality, e.g. deinterlacing, via VirtualDub.
      • VirtualDub is great in a close-inspection role because its Preview can zoom well beyond 100% and, vitally, it displays the video as-is, with no deinterlacing etc. of its own.
        • e.g. zoom to 200% to make any interlacing comb-teeth easily visible.  There should not be any, since this Master is meant to be progressive.
  • Premiere Project: Make a Premiere project consistent with the Master, and add chapter markers here.
    • Make Premiere Project consistent with the Master, not the Target.
      • …especially when there is more than one target…
    • Don’t directly encode the master (by Adobe Media Encoder), but instead go via Premiere.
      • I have read expert postings on Adobe forums stating that as of Adobe CS6, this is the best route.
      • This appears to be the main kind of workflow the software designers had in mind, hence a CS6 user is well-advised to follow it.
        • It represents a “well-trodden path” (of attention in CS6’s overall development and testing).
        • Consequently, (it is only in this mode that) high-quality (and demanding, hence CUDA-based) algorithms get used for any required scaling.
        • Not knowing the application in detail, hence having to adopt the speculative approach to decision-making, it feels likely that this workflow would have a greater chance of reliability and quality than other, relatively off-piste ones.
    • Premiere is the best stage at which to add Chapter Markers etc.
      • Chapter markers etc. get stored as ??XMP?? and are thereby visible to Encore (Adobe’s DVD-Builder)
      • Better to place such markers in Premiere rather than in Encore, since:
        • In Encore, Chapter markers act as if they are properties of Assets, not Timelines.
          • If you delete an asset from a timeline, the chapter markers disappear also.
        • Encore (CS6) Replace Asset has some foibles.
          • In Encore, if you were to put an [.avi] file asset on a timeline, then add markers then try to replace that asset with a [.mpg] file, you would be in for a disappointment; if the file extension differs then the markers disappear. If required, then the markers would have to be re-created from scratch. Same again if you subsequently replaced back to a new [.avi] file.
          • The Foibles of Encore (CS6)’s Replace Asset function, in more detail:
            • Good news: If the new asset has the same file extension then any existing markers are retained.
              • This possibly suggests that they are transferred from the old asset to the new one.
            • Bad news: If the new asset file extension differs from the old one, then:
              • You get an error (popup): ???
                • e.g. it refused my attempt to replace an [.avi] file by a [.m2v] file.
              • Partial-workaround:
                • You can instead delete the existing asset from the timeline, prior to dragging another asset there…
                • ..BUT as a side-effect that deletes any of the old asset’s markers also…
                • …and furthermore Encore has no way to copy a set of markers from one asset to another
                  • …which would otherwise have been a nice work-around for the above side-effect.
  • Premiere Export: Export / Render to Target Format.
    • You may wish to render to a number of formats, e.g. SD-Wide DVD, Blu-Ray Disk (BD), YouTube upload format, mobile phone or tablet.
      • The most efficient strategy is to Queue a number of jobs from Premiere onto Adobe Media Encoder (AME).
        • AME can run some things in parallel (I think).
        • AME has a [Pause] button, very useful for overnight silence or prior to travel (Windows Sleep/Hibernate).
    • Menu:[File > Export > Media]
    • Export Settings:
      • For targets of differing aspect ratio (e.g. SD-Wide derived from HD master):
        • Source Scaling:
          • e.g. for HD -> SD, use [Scale to Fill] since this avoids “pillarboxing” i.e. black bars either side.
      • For DVD Target, use inbuilt preset MPEG2-DVD
        • Ensure [Pixel Aspect Ratio] and interlace sense etc. are as required.
        • The [MPEG2-DVD] preset generates two files:
          • [.m2v] for the video
          • a separate audio file: [Dolby Digital], [MPEG] or [PCM]
            • [PCM] option results in a [.wav] file of 16 bits, 48 KHz (there is no 44.1 KHz option).
      • Maximum Render Quality
        • Use this if scaling, e.g. down from HD Master to SD Target.
      • File Path & Name.
        • Where you want the export/encode result to go.
    • Click the [Queue] button, to send the job to the Adobe Media Encoder (AME)
  • Quality Inspection of Result (intermediate or target file):
    • Check the quality of the encodes via VirtualDub, e.g. for DVD-compatible video media, the correctness of interlacing and for progressive media the quality of deinterlacing.
      • For interlaced downscaled material derived from higher resolution interlaced, the combs should be fine-toothed (one pixel in height).  A poor quality result (as expected for straight downscaling by any typical NLE such as Premiere, from HD interlaced to SD interlaced) would instead exhibit combing with thick blurry teeth.
      • VirtualDub is a great tool for a close-inspection role because its Preview can zoom well beyond 100% and, vitally, it displays the video as-is, with no deinterlacing etc. of its own.
        • In the past I have searched for and experimented with a number of candidate tools to be effective and convenient in this role.  VirtualDub was the best I could find.
        • e.g. zoom to 200% to make the teeth easily visible.
      • Plain VirtualDub is unable to read MPEG2 video, but a plugin is available to add that ability:
        • The [mpeg2.vdplugin] plugin by FCCHandler, from http://sourceforge.net/projects/fcchandler/files/Virtualdub%20Mpeg2%20plugin/.
          • It reads straight MPEG2 files, including [.m2v], but not Transport Stream files such as [.m2t] from the Sony Z1.
          • For [.m2v] files, VirtualDub may throw up an audio-related error, since such files contain no audio.  Fix: In VirtualDub, disable audio.
        • Its ReadMe file contains installation instructions.  Don’t just put it in VirtualDub’s existing [plugins] folder.
  • DVD Construction via Adobe Encore.
    • Name the Project according to the disk-label (data) you would like to see for the final product.
      • If you use Encore to actually burn the disk, this is what gets used for that label.
      • Alternative options exist for just burning the disk, e.g. the popular ImgBurn, and this allows you to define your own disk-label (data).
    • Import the following as Assets:
      • Video file, e.g. [.m2v]
      • If Video File was an [.m2v] then also import its associated Audio file – it does not get automatically loaded along with the [.m2v] file.
    • Create required DVD structure
      • This is too big a topic to cover here.
    • Quality Inspection: [Play From Here]
      • Menu:[File > Check Project]
        • Click [Start] button
        • Typical errors are actions [Not Set] on [Remote] or [End Action]
          • I plan to write a separate blog entry on how to fix these.
        • When everything is ok (within the scope of this check), it says (in status bar, not as a message): “No items found”.
          • A worrying choice of phrase, but all it means is “no error-items found”.
    • Menu:[File > Build > Folder]
      • Don’t select [Disk], since:
        • May want to find and fix any remaining problems prior to burning to disk.
        • May want to use an alternative disk burning application, such as ImgBurn.
          • From forums, I see that many Adobe users opt for ImgBurn.
      • Set the destination (path and filename) for the folder in which the DVD structure will be created.
        • At that location it creates a project-named folder and within that the VIDEO_TS folder (but no dummy/empty AUDIO_TS folder).
          • I once came across an ancient DVD player that insisted on both AUDIO_TS and VIDEO_TS folders being present, and also they had to be named in upper-case, not lower (a sketch for adding the missing folder appears at the end of this Build section).
      • Under [Disk Info] there is a colored bar, representing the disk capacity
        • Although the Output is to a folder, the Format is DVD, single-sided, which Encore realizes can hold up to 4.7 GB.
      • The [DVD ROM] option allows you to include non-DVD files, e.g. straight computer-playable files (e.g. [.mp4]).
        • These go to the root of the drive, alongside the VIDEO_TS folder.
      • Finally, click the [Build] button.
        • On one occasion, it failed at this stage, with an “Encode Failed” or “Transcode Failed” (depending where I looked) error.  Solution: shorten the file name.
          • Ok, it was long-ish, but I didn’t realize Encore would be so intolerant of that.  The idea of shortening it only struck me later (a guess that appeared thanks to years of experience with computing etc.).
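      • As regards the ancient-player foible mentioned above: since Encore builds only VIDEO_TS, a trivial Python sketch (the path is hypothetical) can add the empty upper-case AUDIO_TS folder afterwards:

            import os

            build_root = r"D:\DVD\MyProject"  # hypothetical: the folder Encore built into
            assert os.path.isdir(os.path.join(build_root, "VIDEO_TS")), "no VIDEO_TS here?"
            # Some ancient DVD players insist on an (empty) AUDIO_TS alongside VIDEO_TS.
            os.makedirs(os.path.join(build_root, "AUDIO_TS"), exist_ok=True)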
  • Quality Inspection of the DVD
    • I have found Corel WinDVD to show results representative of a standard TV with a DVD Player.
    • I have found popular media players such as VLC and Windows Media Player (WMP) to behave differently to this, hence not useful for quality-checking.  Problems I found included:
      • False Alarm: Playing went straight to the main video, didn’t stop at the Main Menu (as had been intended).  However it worked fine on a standard physical DVD player.
      • Hidden Problem: In one case I deinterlaced improperly, resulting in “judder” on movements when played on TV (via physical DVD player).  However it appeared fine on both VLC and WMP.
  • Metadata
    • In the case of WMV files, just use Windows Explorer:[aFile >RtClk> Properties > Details] and edit the main items of metadata directly.
    • For DVD generated by Adobe Encore, the Disk label (data) is the same as the Project name.
      • ImgBurn, a popular alternative to Encore as regards actually burning a disk, provides a way of changing this disk-label.

After Effects (etc.) CS6: Workflows for XDCAM-EX Footage

Thursday, February 28th, 2013

As remarked in an earlier blog entry, I was concerned about how best to import/use XDCAM-EX footage in After Effects CS6, especially when that footage could be spanned across more than one [.mp4] file, given that their contents can overlap.  In Premiere this is not an issue, because its (new) Media Browser feature provides instead a higher-level view, of clips rather than lower-level [.mp4] essence-files.

Sadly, as yet, AE CS6 has no equivalent of the Media Browser.

Best workaround:

  • In Premiere, use Media Browser to import an XDCAM-EX clip, then copy it and paste that “virtual” clip into AE.

Workflows involving Adobe Prelude:

  • The web-search record (below) not only provides the foundation for the above statements, it also contains an explanation of the different workflows (e.g. whether or not to sort/trim/rename clips in Prelude).  Some workflows are best for short-form (typically involving tens of footage-clips) while other workflows may be more appropriate for long-form (hundreds or thousands of clips).

(more…)

Adobe After Effects CS6: XDCAM-EX Readability Glitch (Solution: Reboot)

Thursday, February 28th, 2013

While editing an Adobe Premiere CS6 project based on XDCAM-EX footage (from an EX3), I thought I’d enhance the footage in After Effects (where more sophisticated enhancement effects than in Premiere are available).  Should be easy I thought, taking advantage of the CS6 suite’s Dynamic Link feature.

In Premiere, I selected the relevant clip and did [RightClick > Replace With After Effects Composition].  As expected, this opened After Effects, with the appropriate dynamic link to Premiere…

…BUT…

All I got on the Preview in After Effects, and indeed back in Premiere, was Color Bars.  I assumed this indicated some kind of failure in After Effects.

Naively, I concluded that, on my system at least, After Effects CS6 could not read XDCAM-EX.  A brief web-search (further below) revealed user experiences and video-convertor article-adverts implying that I was not alone with this problem.  But an Adobe blog entry suggested that no such problem existed in AE CS6, and some Adobe documentation (pdf) said so explicitly.  For the moment then, I was confused…

Then I rebooted and tried again.  This time it worked.  I succeeded in making AE projects both by directly importing the footage (as mp4 files) in AE and via Dynamic Link from Premiere.

The direct import dialog was slightly weird though: it claimed it was listing “All Acceptable Files” but these included not only [.mp4] files but also e.g. [.smi] files, and when I selected one of these it complained: “…unsupported filetype or extension”.  Incidentally, the reason I tried direct import at all was that XDCAM-EX is a spanned format, where a single recording can be spanned/split/spread over multiple [.mp4] files.  Furthermore, there can be an overlap of content from one [.mp4] to the next (in a span), so in principle (I haven’t tried it), simply placing one [.mp4] after another on the timeline would give rise to a (short) repetition at each transition.

But this is already over-long for a single blog-post, so I’ll deal with that issue in a separate post.

(more…)

Adobe Premiere CS6: Nested Sequences: Slow Response to Play-Button (Re-buffering? Re-parsing?)

Thursday, February 21st, 2013

Context:

  • I had a Sequence containing two video tracks, each having a pair of (associated) audio tracks.
    • Sequence Properties: 1080p, square pixels, 25 fps.
    • One track contained a single continuous clip of duration just over one hour [01:02:46:10].
      • Properties: 1080p, square pixels, 25 fps.
      • Format: Sony XDCAM-EX: MPEG2 @ 35 Mbps VBR: MPEG2HD35_1920_1080_MP@HL
    • The other track contained a number of discrete clips, intermittently spaced over that time.
      • Properties: 1080i, fat pixels (PAR=4/3), 25 fps (50 fields/sec), UFF.
      • Format: Sony Z1 HDV: MPEG2 @ 25Mbps CBR
  • This sequence, as it stood, played fine.
  • Then I nested that sequence (seqA) inside another sequence (seqB).
    • Still played fine

Problem:

  • Then I did some multicam “music video” edits, mostly near the end of the sequence
    • Now, when I hit the spacebar to play seqB, there is a delay of several seconds before playing actually begins.
  • If I try re-creating from scratch, by nesting seqA inside new seqC then seqC plays fine.
  • If I try copying the multicam-edited elements of seqB (the multicam edit-sequence) into new seqD (a new multicam edit-sequence), then the sluggish response to [Play] still occurs.
    • Doh!  I had hoped that would be a simple workaround…

Partial Workaround:

  • Following web-advice regarding a broadly-similar issue with multicamera sequences comprising spanned clips (e.g. AVCHD or Canon’s H264), I tried transcoding the footage to GoPro-Cineform.
    • This was based on Adobe’s workaround-advice regarding broadly similar problems with long (hence spanned) AVCHD footage.  My footage is not AVCHD, but the main clip is Sony XDCAM-EX, which has some features (like spanning) in common with AVCHD.  Worth a shot!
      • On a 4-Core i7 PC with GPU, it encoded at about real-time, which in my case was about an hour.  CPU was only 25%, i.e. equivalent to a single core.
    • Replaced the relevant clip in seqA.
      • To my delight, the clip-markers (in that clip in seqA) were retained/applied in that replacement footage.
  • However, the sluggish [Play]-start remained, though possibly shortened, from about 6 seconds to 4 seconds.

Further Workaround:

  • Duplicate seqA
  • Nest it in a separate multicam sequence (seqE)
  • Do multicam edits on further segments of the event in that (seqE)
    • Intend later to nest/sequence usable bits of each multicam edit-sequence in a Master sequence.
  • Where there’s a will, there’s a workaround…
    • Still, I expect better of Adobe…
    • I lost about 3 hours to this (including web-searching, waiting for transcoding, and general experimentation).

Further gripes:

  •  God it’s clunky!
    • Every time I stop multicam-preview to tweak the multicam cut timings, then return to multicam editing, I have to remember to activate the multicam monitor, not the timeline (where the tweaks are done).  Unfortunately my reflex is simply to hit the spacebar.  It is a nuisance to have to fight that reflex…
    • Every time I stop multicam-preview, it leaves a cut at the final position of the playhead.  Not useful and simply clutters the timeline, distracting from real cuts.
    • Zoom [+] only affects the Timeline, not the multicam monitor.  As a result, I tend to set the playhead position using the timeline.  Doh! must remember to click (activate) back to the multicam monitor once more…
    • Ranged (duration not zero) markers are great but adjusting their right-hand end can be tricky, since this can change the playhead and/or timeline-display.  Things snatch and interact that shouldn’t (I feel).
    • Sony Vegas is far better in these respects, though not in some others, so I’m sticking with Adobe…
  • Unexpected Preview-Rendering is happening…!?  How come?
    • In principle, that shouldn’t be happening.  I have a state-of-the-art (4-core i7 & GPU) laptop specifically for CS6, no effects applied, just cutting between two cameras and some plain dissolves (between segments of the multicam sequence) – surely the Mercury Engine should take them in its stride?  (Or can’t it cope yet with multicam?)

(more…)

Graph Editor – yEd

Tuesday, January 29th, 2013

I wanted a graph editor where I could define the connectivity on-the-fly, including inserting new nodes partway along existing connections.  Something “fluid” to use.

I found it: yEd, available at http://www.yworks.com/en/products_yed_about.html.  Superbly slick, functional, multi-platform and free (gratuit), even for commercial use.

The main reason for wanting such a thing was to be able to document the media dependencies in a multimedia (e.g. video) project.  For example I might begin with an Adobe Premiere project based on raw footage.  I would document that in a connectivity-graph having a RawFootage node (object) with an arrowed line coming out of it and going into a Project node.  Later on, I might decide to enhance (e.g. CPU-intensively de-noise) the footage and then use that enhanced footage in the project instead (media-replace).  Such an intervention would not have been planned; it would have been an after-thought.  To bring my documentation up-to-date, in my connectivity-graph, I would want to interpose an Enhance node in the existing footage-to-project connection.  Being able to do that in one single step would be great (no need for individual steps such as delete connection, add node, connect source to node, connect node to project).  Having made many such changes and additions over time, the diagram might become untidy and in need of rearrangement.  So ideally the application should offer Auto-Arrangement, to produce, or at least provide a starting-point towards, a tidier arrangement.

The same approach could apply to many things, including general brainstorming/mind-mapping, drama/story-design (prior to screenplays/scripts) and plain down-to-earth production of explanatory graphic diagrams as media themselves for incorporation in multimedia projects.

All this is way beyond the diagramming tool I have most used in the past few years, namely Visio – at least the (old) versions I have encountered.  I have dabbled with GraphViz, which auto-generates/arranges diagrams from formal connectivity (etc.) definitions in geeky formal notation.  GraphViz gets the job done but from my personal experience it is somewhat clunky and slow to use (involving frequent re-experiments and reading of lookup-notes).  I want something slicker, more “GUI”, more intuitive…

Haha!  Such a thing does exist!  I found it!  Not only does it allow the kind of graph-editing flexibility I am looking for, it can also import data from Excel etc. and auto-generate graphs from that.  So if I want I can document my connectivity/dependency information first in Excel or Notepad (say) with a view to generating a diagram from it at a subsequent stage.  And it is multi-platform (based on Java) and it is free (gratuit).

It is called yEd and is available at http://www.yworks.com/en/products_yed_about.html
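
On the auto-generation theme: yEd’s native file format is GraphML (an XML dialect), so the dependency list could even come from a script.  Here is a minimal Python sketch; the node names echo my RawFootage/Enhance/Project example, and node labels may still need adding inside yEd:

    import xml.etree.ElementTree as ET

    edges = [("RawFootage", "Enhance"), ("Enhance", "Project")]  # example dependency list

    root = ET.Element("graphml", xmlns="http://graphml.graphdrawing.org/xmlns")
    graph = ET.SubElement(root, "graph", id="G", edgedefault="directed")
    for n in sorted({n for e in edges for n in e}):
        ET.SubElement(graph, "node", id=n)
    for i, (src, dst) in enumerate(edges):
        ET.SubElement(graph, "edge", id="e%d" % i, source=src, target=dst)
    ET.ElementTree(root).write("dependencies.graphml", xml_declaration=True, encoding="utf-8")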

(more…)

Scripting: Python or Ruby? (& iOS Apps for Text/Script Editing)

Wednesday, January 23rd, 2013

I recently came across a handy script for recording streams.  It was in the scripting language python.  So I went to obtain it.  So far no problem.
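
Not that script, but purely to illustrate the genre, here is a minimal Python sketch of stream-recording, assuming a plain HTTP stream at a hypothetical URL (the requests library does the heavy lifting; stop it with Ctrl+C):

    import requests

    url = "http://example.com/live/stream.mp3"  # hypothetical stream address
    with requests.get(url, stream=True, timeout=10) as r:
        r.raise_for_status()
        with open("recording.mp3", "wb") as f:
            for chunk in r.iter_content(chunk_size=64 * 1024):
                f.write(chunk)  # append each chunk as it arrives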

But that got me thinking: I’d read somewhere, a long time ago, of python, or was it ruby (on rails or otherwise), being used to support broadcast or film digital production workflows.  So first I wanted to confirm that, through web-search.  Then second I wanted to compare the languages, to see which one I felt best about.

Google:[python ruby workflow video production adobe avid]

  • http://devopsanywhere.blogspot.co.uk/2011/09/how-ruby-is-beating-python-in-battle.html
    • This is a really good article; I can see that for me, python is the obvious choice – more readable to me.  Furthermore:
      • <<Ruby’s greatest strength is its amazing flexibility. There is a lot of “magic” in ruby and sometimes it is dark magic. Python intentionally has minimal magic. It’s greatest strengths are the best practices it enforces across its community. These practices make Python very readable across different projects; they ensure high quality documentation; they make the standard library kick ass.>>
      • (For a particular given simple example: <<The python example is far more readable and maintainable.>>)
    • On the other hand (in favour of ruby):
      • <<If ruby reminds of perl, your eyes do not deceive you. In many ways it is the love child of perl and smalltalk.>>
        • In the past I have had a very good experience of using smalltalk.
      • <<every large program should have its own internal DSL suited to the problem space … it seems much easier to create DSL’s (Domain Specific Languages). Ruby certainly spawns DSLs with much greater frequency than python. No single pythonic build tool dominates the problem space like rake does in the ruby community. Most python projects seems to use setup.py for administrative tasks even though that is not its explicit purpose.>>
  • http://en.wikipedia.org/wiki/List_of_Python_software
    • Lists Implementations, Development Environments, Applications etc.
  • http://blog.eltrovemo.com/364/diy-broadcast-how-to-build-your-own-tv-channel-with-open-source-other-goodies/
    • DIY BROADCAST: How to build your own TV Channel with Open Source & other goodies
    • Loads of great links, e.g. for screenwriting (Celtx), multicam recording (Ingex Studio), editing (EditShare LightWorks), archive (BackBlaze) and playout (OpenPlayout, MLT).  Also 3D modelling (Blender), color correction (DaVinci Resolve Lite), Live Graphics (CasparCG), Digital Asset Management (EnterMedia).  And more … but you get the drift…
  • http://doingthatwrong.com/home/2012/10/18/running-scripts-with-textexpander
    • Example scripting and, serendipitously, some recommended iOS (iPhone/iPad) apps for note-taking and html script production, namely Nebulous Notes and TextExpander (which can work together).

G’MIC: Image Processing Pipeline(s) Scripting Language

Saturday, January 19th, 2013

I found this by accident, but it looks really handy for “industrial-scale” image processing.

  • http://gmic.sourceforge.net/
    • G’MIC stands for GREYC’s Magic Image Converter. This project aims to:
      • Define a lightweight but powerful script language (G’MIC) dedicated to the design of image processing operators and pipelines.
      • Provide an interpreter of this language, distributed as a C++ open-source library embeddable in third-party applications.
      • Propose four different user interfaces for this image processing framework:
        • The command-line executable gmic, to use the G’MIC framework from a shell.
          • In this setting, G’MIC may be seen as a serious (and friendly) competitor of the ImageMagick or GraphicsMagick software suites.
        • The interactive and extensible plug-in gmic_gimp, to bring G’MIC capabilities to the image retouching software GIMP.
        • ZArt, a real-time interface for webcam image manipulation.
        • G’MIC Online, a web service allowing users to apply image processing algorithms directly from a web browser.
    • G’MIC is focused on the design of possibly complex pipelines for converting, manipulating, filtering and visualizing generic 1d/2d/3d multi-spectral image datasets.  This includes of course color images, but also more complex data such as image sequences or 3d(+t) volumetric float-valued datasets.
    • G’MIC is an open framework: the default language can be extended with custom G’MIC-written commands, thus defining new available image filters or effects.  By the way, G’MIC already contains a substantial set of pre-defined image processing algorithms and pipelines (more than 1000).
    • G’MIC has been designed with portability in mind and runs on different platforms (Windows, Unix, MacOSX).  It is distributed under the CeCILL license (GPL-compatible).  Since 2008, it has been developed in the Image Team of the GREYC laboratory, in Caen/France, by permanent researchers working in the field of image processing on a daily basis.
    • Main features:
      • G’MIC defines a complete image processing framework (provides interfaces for C++, shell, gimp and web), and can manage generic image data as other image-related tools.  More precisely:
      • It can process a wide variety of image types, including multi-spectral (arbitrary number of channels) and 3d volumetric images, as well as image sequences, or 3d vector objects.  Images with different pixel types are supported, allowing it to process flawlessly images with 8-bit or 16-bit integers per channel, as well as float-valued datasets.
      • It internally works with lists of images.  Image manipulations and interactions can be done either grouped or focused on specific items.
      • It provides small but efficient visualization modules dedicated to the exploration/viewing of 2d/3d multi-spectral images, 3d vector objects (elevation maps, isocurves, isosurfaces, …), or 1d graph plots.
      • It is highly extensible through the importation of custom command files which add new commands that become understood by the language interpreter.
      • It proposes commands to handle custom interactive windows where events can be managed easily by the user.
      • It is based on the latest development versions of the CImg Library, a well-established C++ template image processing toolkit, developed by the same team of developers.
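
To give a flavour of the command-line interface mentioned above, here is a minimal sketch of batch-driving gmic from Python.  The -blur, -sharpen and -o commands are G’MIC’s own, but treat this particular pipeline (and the filenames) as an assumed example rather than tested advice:

    import subprocess

    # Apply a mild blur then a sharpen to each image, writing prefixed copies.
    for name in ["shot01.png", "shot02.png"]:  # hypothetical input files
        subprocess.check_call(["gmic", name, "-blur", "2", "-sharpen", "100",
                               "-o", "processed_" + name])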

Sony EX3 Noise & Bits-Resolution & Green-Screen

Sunday, November 4th, 2012

It has been said (I believe by Alister Chapman) that there are only marginal benefits from recording XDCAM-EX to more than 8 bits, due to the relatively high noise of this camera as compared to more typical broadcast cameras.

In my experience, while it was a wonderful step-up from my Z1, certainly its recordings are noisier than I’d like, leading me to post-process certain footage (using the Neat Video denoising plugin to my NLE).  And as a recent project with reasonably well-lit green-screen illustrated, its noise in shadows can be a particular nuisance (much time in post experimenting to work around this).

So I wondered:

  • Even if marginal, to what extent is 10-bit beneficial to EX3 recording?
  • For the EX3, when recording 10-bit, it is also 4:2:2, surely a benefit to chroma keying and resizing (reframing, stabilising/deshaking/tracking).
  • Could the benefit depend on editing workflow?  For example:
    • What if subsequently de-noising (like I mentioned)?
    • Some NLEs do bit-dithering, hiding the quantisation/banding that would otherwise be apparent from having only 8 bits (the sketch below illustrates the banding arithmetic).
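
As a quick illustration of why 8 bits can band where 10 bits do not, here is a minimal Python/numpy sketch counting the distinct levels a gentle luminance ramp lands on (the counts in the comments are approximate, from the arithmetic):

    import numpy as np

    ramp = np.linspace(0.4, 0.5, 1920)             # subtle gradient across a 1920px frame
    q8  = np.round(ramp * 255).astype(np.uint16)   # 8-bit quantisation
    q10 = np.round(ramp * 1023).astype(np.uint16)  # 10-bit quantisation
    print(len(np.unique(q8)))    # ~27 distinct levels -> bands ~70px wide, potentially visible
    print(len(np.unique(q10)))   # ~104 distinct levels -> steps likely too fine to see

Noise (or deliberate dithering) breaks those bands up, which is exactly why a noisy camera narrows the visible gap between 8-bit and 10-bit.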

I need to do my own experiments, but for now, here (below) are some results from web-searching…

(more…)

Adobe Prelude – Usage in Newsroom Context

Monday, August 27th, 2012

http://tv.adobe.com/watch/cs6-creative-cloud-feature-tour-for-video/how-to-use-prelude-in-broadcast-news-workflows/

I don’t work in or for a newsroom as such, but I do cover live events.

Apple Mac & FCP -> Windows & Adobe

Tuesday, January 10th, 2012

Avid Media Composer: Practical Usage in Productions

Tuesday, January 3rd, 2012

http://www.youtube.com/watch?v=F1nl-LFwziI&feature=related

  • MC 5.0
  • User and Beta-Tester experiences:
    • They like the (fairly new) Smart Tool.
    • AMA is useful for producing on-set rushes and quick edits.
      • They mention a Mac (..Book?) being used on-set, taking footage from a P2 card.
      • They show a card from a DSLR being plugged into a Lexar outboard card-reader.
      • {? I wonder if subsequently they ingest/import it in “traditional” fashion, e.g. to take advantage of media management and to minimise risks of obscure issues down the line ?}
    • [01:50] shows Steven Sprung, ACE Editor (Dispatch, Entourage).  He looks a bit like me.
    • More than one editing-suite scene shows a graphic tablet being used.
    • [02:12] shows some track labels/assignments.
      • It can be instructive to see how others do it.
    • [02:14] et seq: Smart Tool.
    • [02:14] Audio:
      • e.g. level meters on each track.
      • Track-based RTAS effects etc. are useful to help indicate to the sound department approximately what the editor requires artistically.
    • [03:48] Editors can be on set 12-14 hours/day and might also take work home on a laptop.
    • [03:39] Graphic tablet shown as part of edit suite.  Which one is it?  How useful?
    • [03:59] Matrox MXO Mini enables use of a standard TV as monitor, including calibration tools (what kind?).
Training: Den Lennie’s “Music Video” Experience

Thursday, October 27th, 2011

I attended, working on one of the camera units.  Had a great time, learnt lots, at all sorts of levels.  Even how to make good use of the Movie Slate application on my iPhone!  Link: http://www.fstopacademy.com/

iPhone: Email Account Config

Sunday, September 25th, 2011

Set up email accounts on iPhone, to read (not consume) emails to me at various addresses.  Being no expert at this, I record my experiences for possible future reference.

(more…)

Avid MC Workflow – Offline/Online Editing Example

Thursday, September 8th, 2011

I found a great, simple example of a typical Avid workflow; it happens to be for footage from an ARRI ALEXA camera, which can record to ProRes (amongst other things).  Once again, the workflow depends on the clunky/hacky method of taking files offline in order to substitute others (e.g. proxies), in this case by moving them out of the MXF folder.  The full article is at http://digitalfilms.wordpress.com/2010/11/18/arri-alexa-post-part-3/.  The last time I posted on this sort of issue was http://blog.davidesp.com/archives/367.

(more…)