Archive for the ‘interlacing’ Category

RE:Vision’s FieldsKit ReInterlacer

Friday, May 16th, 2014

In Summary:

Purpose of FieldsKit ReInterlacer:

  • Transforms progressive video (e.g. HDp25 frames/sec) into spatio-temporal interlaced video (e.g. SDi50 fields/sec).  It achieves this by estimating the fields that would have been shot (had the original video itself been shot as interlaced) between each frame of the progressive video, via a process of motion estimation.
    • Most NLEs do not use this “perfectionist” method; instead, at best, they simply blend (ghost-blur) successive frames together, with no compensation for motion over time.
    • On an interlaced display, such as an old analog TV or projector,
      • The “NLE-simple” approach may lead to dynamic scenes and objects (i.e. anything moving or otherwise changing) appearing flickery.
      • The “perfectionist” approach will instead typically avoid such flicker.

Configuration of FieldsKit ReInterlacer:

  • Field Order: [Lower First]
  • Output Type: [= Create motion estimated fields]
    • This is not the default (oddly).  But it is the only proper way to get the expected “perfectionist” reinterlacing to happen!  (For the general idea, see the AviSynth sketch just after this list.)
  • Source Layer: [Video 1]
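
By way of illustration only (this is not FieldsKit’s internal algorithm), the same “perfectionist” idea can be sketched in AviSynth, assuming the MVTools2 plugin is installed; the filename and parameter choices below are placeholders:

    # Illustrative sketch: motion-estimated reinterlacing of 25p to 50i (not FieldsKit's actual method).
    src  = AVISource("master_25p.avi")                     # hypothetical progressive 25 fps source
    sup  = MSuper(src, pel=2)                              # analysis clip for MVTools2
    bvec = MAnalyse(sup, isb=true)                         # backward motion vectors
    fvec = MAnalyse(sup, isb=false)                        # forward motion vectors
    dbl  = MFlowFps(src, sup, bvec, fvec, num=50, den=1)   # motion-estimated 50p
    dbl  = AssumeBFF(dbl)                                  # "Lower First", matching the setting above
    flds = SeparateFields(dbl)                             # 100 fields/sec
    flds = SelectEvery(flds, 4, 0, 3)                      # keep alternate fields, in time order
    Weave(flds)                                            # 25 interlaced frames/sec, i.e. 50i

The point of the motion-estimation step is that each woven field then corresponds to its own estimated moment in time, rather than to a blend of neighbouring frames.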

Supplier’s website:

(more…)

Best Workflow for High-resolution Master (e.g. HD or HDV) to Multi-Format Including SD-DVD

Saturday, July 13th, 2013

What is the best workflow for going from high-resolution footage, potentially either progressive or interlaced, possibly through an intermediate Master (definitely in progressive format), to a variety of target/deliverable/product formats, from the maximum resolution down to lower-resolution and/or interlaced formats such as SD-DVD?

Here’s one big fundamental: naively, one might have hoped that long-established professional NLEs such as Premiere would provide high-quality, optical-processing-based downscaling from HD to SD, but my less optimistic intuition about the unlikelihood of that proved correct.  In my post http://blog.davidesp.com/archives/815 I note that the BBC Technical Standards for SD Programmes state: <<Most non linear editing packages do not produce acceptable down conversion and should not be used without the broadcaster’s permission>>.

Having only ever used Adobe (CS5.5 & CS6) for web-based video production, I found that early attempts at producing a number of target/deliverable (product) formats proved more difficult and uncertain than I had imagined…  For a current project, given historical footage shot in HDV (1440×1080, fat pixels), I wanted to generate various products, from various flavors of HD (e.g. 1920×1080i50, 1280×720p50) down to SD-DVD (720×576).  So I embarked on a combination of web research and experimentation.

Ultimately, this is the workflow that worked (and satisfied my demands):

  • Master: Produce a 50 fps (if PAL) progressive Master at the highest resolution consistent with original footage/material.
    • Resolution: The original footage/material could e.g. be HD or HDV resolution.  What resolution should the Master be?
      • One argument, possibly the best one if only making a single format deliverable or if time is no object, might be to retain the original resolution, to avoid any loss of information through scaling.
      • However, I took the view that HDV’s non-standard pixel shape (aspect ratio) was “tempting fate” when it came to reliability, and possibly even quality, in subsequent (downstream in the workflow) stages of scaling down to the various required formats (mostly square-pixel, apart from SD-Wide’s so-called “16:9” pixels of 1.4568 aspect ratio, or other values depending where you read it; see the worked example just below).
      • So the Master resolution would be [1920×1080].
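      • Worked example of why the quoted pixel aspect ratio (PAR) values differ (illustrative figures only, using two commonly quoted PAL-widescreen values, 64/45 ≈ 1.422 and 1.4568):
        • 720 × 64/45 = 1024, and 1024/576 = 16/9 ≈ 1.778, i.e. with that PAR the full 720×576 raster maps exactly to 16:9.
        • 720 × 1.4568 ≈ 1049, and 1049/576 ≈ 1.821, i.e. slightly wider than 16:9.
        • Roughly speaking, the smaller value treats the full 720-pixel raster as 16:9, whereas the larger values treat only the ~702–704-pixel analogue active width as 16:9; hence different references quote different numbers.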
    • Progressive: The original footage/material could e.g. be interlaced or progressive, but the Master (derived from this) must be progressive.
      • If the original footage was interlaced, then the Master should be derived so as to have one full progressive frame for each interlaced field (hence double the original frame rate).
        • The concept of “doubling” the frame rate is a moot point, since interlaced footage doesn’t really have a frame rate, only a field rate, because the fields are each shot at different moments in time.  However, among the various film/video industry/application conventions, some people refer to 50 fields/second interlaced as 50i (or i50) while others refer to it as 25i (or i25).  Context is all-important!
    • Quality-Deinterlacing: The best way to convert from interlaced fields to frames is via motion/pixel/optical-based tools/techniques:
      • I have observed the quality advantage in practice on numerous projects in the distant past, e.g. when going from HDV or SD (both 50i) to a variety of (lower) corporate web-resolutions.
      • This kind of computation is extremely slow and heavy, hence (for my current machines at least) more an overnight job than a real-time effect… In fact for processing continuously recorded live events of one or two hours, I have found 8 cores (fully utilised) to take a couple of 24-hour days or so – for [AviSynth-MultiThread + TDeint plugin] running on a [Mac Pro > Boot Camp > Windows 7].
      • But (as stated) this general technique observably results in the best quality, through least loss of information.
      • There are a number of easily-available software tools with features for achieving this, Adobe and otherwise:
        • e.g. AviSynth+TDeint (free), After Effects, Boris.
        • e.g. FieldsKit is a nice convenient deinterlacing plugin for Adobe (Premiere & After Effects), and is very friendly and useful should you want to convert to a standard progressive video (e.g. 25fps), but (at this time) it can only convert from field-pairs to frames, not from fields to frames.
          • I submitted a Feature Request to FieldsKit’s developers.
    • Intermediate-File Format: A good format for an Intermediate file or a Master file is the “visually lossless” wavelet-based 10-bit 4:2:2 (or better) codec GoPro-Cineform (CFHD) Neo.
      • Visually lossless codecs (such as CFHD) save considerable amounts of space as compared to uncompressed or mathematically lossless codecs like HuffYUV and Lagarith.
      • I like Cineform in particular because:
        • It is application-agnostic.
        • It is available in both VFW [.avi] and QuickTime [.mov] varieties (which is good because I have found that it can be “tempting fate” to give [.mov] files to certain Windows apps, while certain others expect them).  The Windows version of CFHD comes with an [.avi] <-> [.mov] rewrapper (called HDLink).
        • Another advantage is that CFHD can encode/decode not only the standard broadcast formats (and not only HD) but also specialized “off-piste” formats.  I have found that great for corporate work. It’s as if it always had “GoPro spirit”!
        • CFHD Encoder Settings from within Sony Vegas 10:
          • These settings worked for me in the context of this “Sony-Vegas-10-Initially-then-Adobe-CS6-centric” workflow:
    • Technical Production History of a Master for an Actual Project:
      • This is merely for my own reference purposes, to document some “project forensics” (while I still remember them and/or where they’re documented):
      • This was a “Shake-Down” experience, not exactly straightforward, due to an unexpected “hiccup” between Sony Vegas 10 and AviSynth-WAVSource.  Hiccups are definitely worth documenting too…
      • The stages:
        • Sony Vegas Project: An initial HDV 50i (to match the footage) Intermediate file, containing the finished edit, was produced from a Sony Vegas 10 project:
          • [Master 021a (Proj HDV for Render HDV)  (veg10).veg] date:[Created:[2013-07-01 15:30], Modified:[2013-07-03 20:07]]
          • Movie duration was about 12 minutes.
        • Audio & Video Settings:
          • Project Settings:
            • HDV 1440×1080 50i UFF 44.1KHz
              • The audio was 44.1KHz, both for Project and Render, since most of the audio (music purchased from Vimeo shop) was of that nature.
          • Render Settings:
            • I believe I will have used the following Sony Vegas Render preset: [CFHD ProjectSize 50i 44KHz CFHD (by esp)] .
              • Though I think there may have been a bug in Vegas 10, whereby the Preset did not properly set the audio sampling frequency, so it had to be checked & set manually.
            • The CFHD Codec settings panel only offered two parameters, which I set as follows: Encoded format:[YUV 4:2:2], Encoding quality:[High]
          • The result of Rendering from this Project was the file:
            • [Master 021a (Proj HDV for Render HDV)  (veg10).avi] date:[Created:[2013-07-01 15:30], Modified:[2013-07-01 18:58]]
              • Modified date minus creation date is about 3.5 hours, which I guess accounts for the render-time (on a 2-core MacBook Pro of 2009 vintage running Windows 7 under Boot Camp).
        • The next stage of processing was to be by AviSynth.
          • However AviSynth had problems reading the audio out of this file (it sounded like crazy buzzes).
          • To expedite the project, and guessing that Vegas 10 had produced a slightly malformed result (maybe related to the audio setting bug?), and hoping that it was just a container-level “audio framing” issue, I “Mended” it by passing it through VirtualDub, in [Direct Stream Copy] mode, so that it was merely rewrapping the data as opposed to decompressing and recompressing it.  The resulting file was:
            • [Master 021a HDV Mended (VDub).avi], date:[Created:[2013-07-08 18:22], Modified:[2013-07-08 18:30]]
          • Since that time, I have discovered the existence of the Cineform tool CFRepair, via a forum post at DVInfo: http://www.dvinfo.net/forum/cineform-software-showcase/507364-problem-cfrepair.html which itself provided a download link as http://miscdata.com/cineform/CFRepair.zip.
            • Worth trying it out sometime, on this same “broken” file…
        • This was processed into full HD progressive (one frame per field, “double-framerate”) by an AviSynth script as follows, its results being drawn through VirtualDub into a further AVI-CFHD file, constituting the required Master.  (A rough sketch of such a script appears at the end of this “Master” section, below.)
          • AviSynth Script:[HDV to HD 1920×1080.avs] date:[Created:[2013-07-04 18:13], Modified:[2013-07-08 22:05]]
            • I used AvsP to develop the script.  It provides various kinds of assistance and can immediately show the result in its preview-pane.
            • Multi-threaded:
              • To make best use of the multiple cores in my machine, I used the AviSynth-MT variant of AviSynth.  It’s a (much larger) version of the [avisynth.dll] file.  For a system where AviSynth (ordinaire) is already installed, you simply replace the [avisynth.dll] file in the system folder with this one.  Of course it’s sensible to keep the old one as a backup (e.g. rename it as [avisynth.dll.original]).
            • Audio Issue:
              • This particular script, using function [AVISource] to get the video and [WavSource] to get the audio, only gave audio for about the first half of the movie, with silence thereafter.
              • Initially, as a workaround, I went back to VirtualDub and rendered-out the audio as a separate WAV file, then changed the script to read its [WAVSource] from this.
              • That worked fine, “good enough for the job” (that I wanted to expedite)
              • However afterwards I found a cleaner solution: Instead of functions [AVISource] and [WAVSource], use the single function [DirectShowSource].  No audio issues.  So use that in future.  And maybe avoid Vegas 10?
          • The script was processed by “pulling” its output video stream through VirtualDub, which saved it as a video file, again AVI-CFHD.  Since no filters (video processing) were to be applied in VirtualDub, I used it in [Fast Recompress] mode.  In this mode, it leaves the video data in YUV (doesn’t convert it into RGB), making it both fast and information-preserving.  Possibly (not tested) I could simply have rendered straight from AvsP:[Tools > Save to AVI].  When I first tried that, I got audio issues, as reported above, hence I switched to rendering via VirtualDub, but in retrospect (having identified a source, perhaps the only source, of those audio issues) that switch might have been unnecessary.
      • The resulting Master file was [Master 021a HDV 50i to HD 50p 1920×1080 (Avs-VDub).avi] date:[Created:[2013-07-08 21:55], Modified:[2013-07-08 22:47]]
        • “Modified minus created” implies a render-time of just under an hour.  This was on a [MacBook Pro (2009) > Boot Camp > Windows 7] having two cores, fully utilised.
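    • Script Sketch (for reference): I won’t reproduce the exact production script here, but a script of this general shape (a reconstruction, not the original; filenames and parameter values are assumptions) would do the HDV-50i-to-HD-50p job, using [DirectShowSource] as per the cleaner solution noted above:
      # Illustrative sketch only, not the exact production script.  Assumes the TDeint plugin.
      # (AviSynth-MT users would add SetMTMode() calls as per its documentation.)
      src = DirectShowSource("Master 021a HDV Mended (VDub).avi")  # avoids the AVISource/WAVSource audio issue
      src = AssumeTFF(src)                       # HDV 1080i is upper-field-first
      dbl = TDeint(src, mode=1)                  # one progressive frame per interlaced field, i.e. 50p
      dbl = Spline36Resize(dbl, 1920, 1080)      # 1440x1080 "fat pixel" HDV to square-pixel full HD
      return dbl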
  • Quality inspection of Master:
    • Check image quality, e.g. deinterlacing, via VirtualDub.
      • VirtualDub is great in a close-inspection role because its Preview can zoom well beyond 100% and, vitally, it displays the video as-is, with no deinterlacing etc. of its own.
        • e.g. zoom to 200% to make any interlacing comb-teeth easily visible.  There should not be any, since this Master is meant to be progressive.
  • Premiere Project: Make a Premiere project consistent with the Master, and add chapter markers here.
    • Make Premiere Project consistent with the Master, not the Target.
      • …especially when there is more than one target…
    • Don’t directly encode the master (by Adobe Media Encoder), but instead go via Premiere.
      • I have read expert postings on Adobe forums stating that as of Adobe CS6, this is the best route.
      • This appears to be the main kind of workflow the software designers had in mind, hence a CS6 user is well-advised to follow it.
        • It represents a “well-trodden path” (of attention in CS6’s overall development and testing).
        • Consequently, (it is only in this mode that) high-quality (and demanding, hence CUDA-based) algorithms get used for any required scaling.
        • Not knowing the application in detail, hence having to adopt the speculative approach to decision-making, it feels likely that this workflow would have a greater chance of reliability and quality than other, relatively off-piste ones.
    • Premiere is the best stage at which to add Chapter Markers etc.
      • Chapter markers etc. get stored as metadata (XMP, I believe) and are thereby visible to Encore (Adobe’s DVD-Builder).
      • Better to place such markers in Premiere rather than in Encore, since:
        • In Encore, Chapter markers act as if they are properties of Assets, not Timelines.
          • If you delete an asset from a timeline, the chapter markers disappear also.
        • Encore (CS6) Replace Asset has some foibles.
          • In Encore, if you were to put an [.avi] file asset on a timeline, add markers, then try to replace that asset with a [.mpg] file, you would be in for a disappointment; if the file extension differs then the markers disappear. If required, the markers would have to be re-created from scratch. Same again if you subsequently replaced back to a new [.avi] file.
          • The Foibles of Encore (CS6)’s Replace Asset function, in more detail:
            • Good news: If the new asset has the same file extension then any existing markers are retained.
              • This possibly suggests that they are transferred from the old asset to the new one.
            • Bad news: If the new asset file extension differs from the old one, then:
              • You get an error (popup): ???
                • (e.g. it refused my attempt to replace an [.avi] file with a [.m2v] file).
              • Partial-workaround:
                • You can instead delete the existing asset from the timeline, prior to dragging another asset there..
                • ..BUT as a side-effect that deletes any of the old asset’s markers also…
                • …and furthermore Encore has no way to copy a set of markers from one asset to another
                  • …which would otherwise have been a nice work-around for the above side-effect.
  • Premiere Export: Export / Render to Target Format.
    • You may wish to render to a number of formats, e.g. SD-Wide DVD, Blu-Ray Disk (BD), YouTube upload format, mobile phone or tablet.
      • The most efficient strategy is to Queue a number of jobs from Premiere onto Adobe Media Encoder (AME).
        • AME can run some things in parallel (I think).
        • AME has a [Pause] button, very useful for overnight silence or prior to travel (Windows Sleep/Hibernate).
    • Menu:[File > Export > Media]
    • Export Settings:
      • For targets of differing aspect ratio (e.g. SD-Wide derived from HD master):
        • Source Scaling:
          • e.g. for HD -> SD, use [Scale to Fill] since this avoids “pillarboxing” i.e. black bars either side.
      • For DVD Target, use inbuilt preset MPEG2-DVD
        • Ensure [Pixel Aspect Ratio] and interlace sense etc. are as required.
        • The [MPEG2-DVD] preset generates two files:
          • [.m2v] for the video
          • an audio file, as [Dolby Digital], [MPEG] or [PCM]
            • The [PCM] option results in a [.wav] file of 16 bits, 48 kHz (there is no 44.1 kHz option).
      • Maximum Render Quality
        • Use this if scaling, e.g. down from HD Master to SD Target.
      • File Path & Name.
        • Where you want the export/encode result to go.
    • Click the [Queue] button, to send the job to the Adobe Media Encoder (AME)
  • Quality Inspection of Result (intermediate or target file):
    • Check the quality of the encodes via VirtualDub, e.g. for DVD-compatible video media, the correctness of interlacing and for progressive media the quality of deinterlacing.
      • For interlaced downscaled material derived from higher resolution interlaced, the combs should be fine-toothed (one pixel in height).  A poor quality result (as expected for straight downscaling by any typical NLE such as Premiere, from HD interlaced to SD interlaced) would instead exhibit combing with thick blurry teeth.
      • VirtualDub is a great tool for a close-inspection role because its Preview can zoom well beyond 100% and, vitally, it displays the video as-is, with no deinterlacing etc. of its own.
        • In the past I have searched for and experimented with a number of candidate tools, looking for one that is effective and convenient in this role.  VirtualDub was the best I could find.
        • e.g. zoom to 200% to make the teeth easily visible.
      • Plain VirtualDub is unable to read MPEG2 video, but a plugin is available to add that ability:
        • The [mpeg2.vdplugin] plugin by FCCHandler, from http://sourceforge.net/projects/fcchandler/files/Virtualdub%20Mpeg2%20plugin/.
          • It reads straight MPEG2 files, including [.m2v], but not Transport Stream files such as [.m2t] from the Sony Z1.
          • For [.m2v] files, VirtualDub may throw up an audio-related error, since such files contain no audio.  Fix: In VirtualDub, disable audio.
        • Its ReadMe file contains installation instructions.  Don’t just put it in VirtualDub’s existing [plugins] folder.
  • DVD Construction via Adobe Encore.
    • Name the Project according to the disk-label (data) you would like to see for the final product.
      • If you use Encore to actually burn the disk, this is what gets used for that label.
      • Alternative options exist for just burning the disk, e.g. the popular ImgBurn, and this allows you to define your own disk-label (data).
    • Import the following as Assets:
      • Video file, e.g. [.m2v]
      • If Video File was an [.m2v] then also import its associated Audio file – it does not get automatically loaded along with the [.m2v] file.
    • Create required DVD structure
      • This is too big a topic to cover here.
    • Quality Inspection: [Play From Here]
      • Menu:[File > Check Project]
        • Click [Start] button
        • Typical errors are actions [Not Set] on [Remote] or [End Action]
          • I plan to write a separate blog entry on how to fix these.
        • When everything is ok (within the scope of this check), it says (in status bar, not as a message): “No items found”.
          • A worrying choice of phrase, but all it means is “no error-items found”.
    • Menu:[File > Build > Folder]
      • Don’t select [Disk], since:
        • May want to find and fix any remaining problems prior to burning to disk.
        • May want to use an alternative disk burning application, such as ImgBurn.
          • From forums, I see that many Adobe users opt for ImgBurn.
      • Set the destination (path and filename) for the folder in which the DVD structure will be created.
        • At that location it creates a project-named folder and within that the VIDEO_TS folder (but no dummy/empty AUDIO_TS folder).
          • I once came across an ancient DVD player that insisted on both AUDIO_TS and VIDEO_TS folders being present, and also on their being named in upper-case, not lower.
      • Under [Disk Info] there is a colored bar, representing the disk capacity
        • Although the Output is to a folder, the Format is DVD, single-sided, which Encore realizes can hold up to 4.7 GB.
      • The [DVD ROM] option allows you to include non-DVD files, e.g. straight computer-playable files e.g. ([.mp4])
        • These go to the root of the drive, alongside the VIDEO_TS folder.
      • Finally, click the [Build] button.
        • On one occasion, it failed at this stage, with an “Encode Failed” or “Transcode Failed” error (depending where I looked).  Solution: shorten the file name.
          • OK, the name was long-ish, but I didn’t realize Encore would be so intolerant of that.  The idea of shortening it only struck me later (a guess that came courtesy of years of experience with computing etc.).
  • Quality Inspection of the DVD
    • I have found Corel WinDVD to show results representative of a standard TV with a DVD Player.
    • I have found popular media players such as VLC and Windows Media Player (WMP) to behave differently from this, hence not useful for quality-checking.  Problems I found included:
      • False Alarm: Playing went straight to the main video, didn’t stop at the Main Menu (as had been intended).  However it worked fine on a standard physical DVD player.
      • Hidden Problem: In one case I deinterlaced improperly, resulting in “judder” on movements when played on TV (via physical DVD player).  However it appeared fine on both VLC and WMP.
  • Metadata
    • In the case of WMV files, just use Windows Explorer:[aFile >RtClk> Properties > Details] and edit the main items of metadata directly.
    • For DVD generated by Adobe Encore, the Disk label (data) is the same as the Project name.
      • ImgBurn, a popular alternative to Encore as regards actually burning a disk, provides a way of changing this disk-label.

Progressive to Interlaced via Optical Flow

Monday, July 8th, 2013

Suppose you have original footage whose format differs from that of the required product.  For example you have progressive footage and require an interlaced product.  Or perhaps the given footage is interlaced, but at a different resolution to that product.

While it is naively possible to simply “bung whatever footage one shot into an NLE and render the required format”, this will not in all cases provide the optimum quality.  Obtaining a quality interlaced product from progressive footage (e.g. as-shot or intermediate or an animation) requires some more “beyond the box” thinking and processes.

The following article extract (link and bullet-points) explains how to go from Progressive to Interlaced using a video-processing application such as After Effects.

  • The first stage is to derive double-rate progressive footage from the original, specifically via motion-compensated/estimated/optical-flow tools/techniques as opposed to simple frame-blending (which would give rise to unwanted motion-blur artefacts).  This can be achieved via various applications (e.g. as listed in the article).  For such processes, I have traditionally used AviSynth (e.g. QTGMC & MVTools, which I covered at http://blog.davidesp.com/archives/502), but I look forward to evaluating other applications in this regard.
    • For footage that is already interlaced but which is at a different resolution to the required product, I typically use AviSynth’s TDeint plugin, which uses motion/optical methods via which one can derive complete progressive frames corresponding to each field of the given footage.  Then these frames can be resized to the required product resolution, prior to the second stage.
  • The second stage is to derive from this (double-rate progressive footage) the required interlaced footage, by extracting each required field (upper and lower alternating) from each frame in turn.  For this, I have traditionally used Sony Vegas, which does this well.  The article claims After Effects does it well, and better than (the erstwhile) Final Cut Pro, but no mention is made of Adobe Premiere (though it may well perform this task well).  Naturally, AviSynth could also be used for this, either by extending its script or as a separate script (a rough sketch appears just after this list).
    • I queried whether Premiere could do it, on Adobe Premiere forum: http://forums.adobe.com/thread/1250083.
    • One reply said <<Premiere is pretty smart about such matters.  You should have no issues.>>
  • Note that it can be useful to preserve a double-rate intermediate file for other purposes (e.g. downscaling of HD to SD or maybe in future, double-the-current-normal-rate will become the new normal).
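
To make the two stages above concrete, here is a rough AviSynth sketch (illustrative only; it assumes the QTGMC plugin and its dependencies are installed, and the filename, resolution and field order are placeholders):

    src  = AVISource("footage_1080i50.avi")    # hypothetical interlaced source
    src  = AssumeTFF(src)                      # declare the source field order
    dbl  = QTGMC(src, Preset="Slower")         # stage 1: one progressive frame per field (50p)
    dbl  = Spline36Resize(dbl, 720, 576)       # optional: rescale to the product frame size (PAR handling not shown)
    dbl  = AssumeTFF(dbl)                      # stage 2: re-interlace the 50p stream
    flds = SeparateFields(dbl)
    flds = SelectEvery(flds, 4, 0, 3)          # alternate upper/lower fields, in time order
    Weave(flds)                                # 25 interlaced frames/sec, i.e. 50i

The SeparateFields / SelectEvery / Weave pattern at the end is essentially what an NLE does internally when it field-renders a double-rate progressive timeline.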

Steps:

  • http://library.creativecow.net/articles/solorio_marco/interlacing_progressive_footage.php
    • Interlacing Progressive Footage
    • {The following is slightly re-worded/paraphrased from the original}
    • Frame-Doubling:
      • The first step is to double up the literal frame count, resulting in one of the following:
        • Double the duration.
        • Double the frame-rate.
      • In order to do this properly, the new frames need to be interpolated by means of a vector-based pixel warping or morphing algorithm.
      • This can be accomplished by a variety of different applications, including:
        • Motion 3 (by use of the Optical Flow feature)
        • After Effects (by use of Layer > Frame Blending > Pixel Motion)
        • Shake
        • Twixtor plugin (which can be used in Final Cut Pro, After Effects and several other host applications)
        • Boris FX
      • You do NOT want to frame-blend this step.
      • The best way to tell if this step is working correctly is to look at the new frames that have been created. If they have an overlapping ghost look to them, then it’s frame-blending, which you do not want. If the new frames literally look like new frames with no ghosting or overlapping, then you’re on the right track.
    • Interlacing:
      • This can be done in After Effects, Final Cut Pro and pretty much any other video application
        • After Effects renders out a cleaner interlace (actually, a perfect interlace) than does Final Cut Pro
      • In Adobe After Effects:
        • Setup:
          • Select the rendered clip in the Project window and right-click it and select Interpret Footage > Main.
          • Suppose the original clip was “30p”, i.e. 29.97 fps, then the rendered clip will be “60p” i.e. 59.94 fps.
          • In the Frame Rate section, conform the frame-rate to the correct value, namely 59.94 fps, or “60p”.
          • Create a new Comp of “60i”
          • Place the 60p clip in that Comp’s timeline
          • (Even though your timeline is only 29.97 FPS and you can’t see the extra frames when scrubbing frame by frame, don’t fear; when you render the final clip, it will use the extra frames in the 60p clip to create the new fields.)
        • Render:
          • Render this by Menu:[Composition > Make Movie].
          • This should open up the [Render Queue] window with a new comp in the queue. You’ll need to change the Render Settings either by selecting a pulldown option next to it or by clicking the name next to the pulldown option.
          • Ensure you render this clip with [Field Rendering] turned on. You’ll need to select either Upper Field First (UFF) or Lower Field First (LFF), depending on your editing hardware and format of choice.