02 - Live FX User Guide


General

Live FX offers a toolset for live on-set compositing, pre-visualization and virtual production. It includes media and version management, metadata handling and recording options, allowing you to both prepare live setups in advance and create relevant metadata input for post-production of the scenes. Here are some examples of supported workflows:

  • Basic green-screen setup with 2D background – using a live tracker to offset the background image.
  • Use camera / lens dynamic metadata directly in a composite / color grade: e.g. link focal length to a blur parameter of the background image.
  • Use camera tracking to rotate a 360 equirectangular background image.
  • Capture the camera image and tracking and composite additional virtual elements into the image.
  • Capture camera tracking and pass that on to other systems and receive a rendered image back that is combined with the live camera image capture.
  • Use Live FX as a pure keyer solution for other applications, with all layering / masking and plug-in tools available and the option to send out separate RGB and Alpha Video IO channels.
  • Record live composites on-set for instant review without the need for offloading cards from the camera(s). Capture all camera tracking and other dynamic metadata in a sidecar file for post-production/VFX usage.
  • Playback of pre-recorded (non-CGI) media on an LED wall background, with advanced grading and framing tools available, as well as DMX light control to create ambi-light or vfx-light effect solutions, camera tracking for parallax offset display and optionally separate monitors for cueing actors.

Live FX exposes live tracking and other dynamic metadata in a way that compositors, VFX specialists and DITs can work with it in a familiar creative context rather than in a programming style environment.

Live FX is an application within the Assimilate Product Suite installer and can be combined with other Assimilate product licenses.

This part of the user guide covers the Live FX-specific functions and tools. General functions of the Assimilate Product Suite, such as creating a project or specific playback features, are covered in the generic sections / chapters of the product user guide.

Live FX tabs

On opening a project in Live FX, you have 2 main tabs available.


  • Construct: manage media (versions) and your live composites.
  • Live FX: player, grade and composite tools.

Certain functions are only available in Live FX, while at the same time Live FX exposes only a subset of the functions available in other Assimilate Product Suite applications: only functions that are relevant in a live context are available. If you have multiple product licenses activated, you can switch between toolsets using the Toolset selector to the left of the main tabs.

Construct

On the main toolbar on the Construct tab you have the option to load media into your project or to create a new live composite – using the corresponding Live Setup button.


Live Setup

Open the Create Live Composition panel to create a new (live) composition node and optionally include a green screen background and / or camera tracking options.


Start by selecting the type of composition you require from the Model dropdown. Next use the Name, Camera-ID and Scene text entries to set the corresponding shot properties. Note that you can change any of the properties set in this panel at any time later.

From the Capture dropdown select the base live capture channel that you want to use for your composite. Note that it is possible to add multiple live capture nodes to your composite at a later stage. If the channel source is already connected, the (proxy) image of the channel appears.

In the Format options you specify the resolution and framerate of your composite. By default these are set to the project defaults. The Length option specifies the duration of the composite. However, note that playback of the composite is always looped and any recording will continue until explicitly stopped.

In the Background section you can set a background shot for your composite, either by using the Browse option and selecting a shot from disk or by dragging and dropping a shot from the Construct onto the proxy control next to the Browse button. The option to indicate that the shot is an equirectangular image is enabled automatically by scanning the shot metadata, but can also be set manually. The R button clears the background selection.

From the Camera Tracker dropdown you select whether and which type of camera tracking you require for your composite. The tracker is then automatically connected to the virtual camera or to the 360 equirectangular viewing parameters.

Once you hit Create, the software creates a composite node, places it on the first empty slot on the timeline and then opens the shot for playback and further setup, effectively entering the Live FX tab. Because the composite node is placed on the timeline it is part of your project and can be opened directly from the timeline next time. All options that have been set in the Create Live Composition panel can be modified in the Player. There are also additional Quick Paths to add elements to your composite with a minimum number of clicks.

Live FX tab

If you are familiar with any of the other Assimilate Product Suite applications, you will recognize the generic Player layout in the Live FX tab, including the color grading and composite tools. The Live FX toolset also adds some specific tools and behaviors to the Player that differ from the other applications:

  • Playback is always started automatically and in loop mode.
  • No timeline playback, by default only single (live) shot playback. If you require timeline playback then enable the 'Live FX Timeline' option in the User Preferences panel (use the gear-icon in bottom right corner).
  • More prominent display of playback / pause (highlighted in yellow) to warn that the image shown is not live.
  • When entering the Player with a live composite, the Record options are available.
  • When adding a fill/matte on a layer, the playback mode is automatically set to loop (rather than once).
  • An additional Live FX menu is available from the bottom left menu buttons.

Live FX menu

The Live FX menu is meant to provide an overview and shortcuts of Live FX functions.


Live Links

Live Links opens the live links panel to manage Live Link sources that capture camera tracking and other metadata that can be used in your composite setup. Live Links are discussed in more detail later.

Scene-Take

Open the Scene-Take panel to adjust the scene / take metadata for the current shot. The scene / take metadata is included with any recording you make.

Camera Profiles

Opens the Camera Profiles panel to manage camera profiles, calibrate sensor, camera position and lens distortion.

DMX

The DMX button opens the corresponding DMX panel for on-set light control. The DMX functions are discussed later in this guide.

Performance Monitor

Toggle the Performance Monitor displayed center-top in the View Port on/off to check the playback performance and real time abilities of the system.

Quick Paths

The Quick Paths drop down is a quick way to add specific functions to your composite. A quick path automatically adds layers, plug-ins and live link animations to create the desired function. A Quick Path can be undone with a single undo.

Active Components

This section shows you how many Video IO streams are currently active and being used in the current composite, how many Video IO outputs are active and how many Live Links are active in the system. This can give you an indication of how resources are being used on the system and suggest what to adjust if playback is no longer realtime. Every input and output stream requires system resources - even if an input stream is not used in the current composite shot that is playing.

Auto Sync

This is a shortcut to toggle the auto-sync option of each input channel on/off, rather than having to navigate to each live capture node and toggle the setting one by one. The Auto Sync option on input channels allows you to automatically synchronize incoming frames from different feeds based on the associated timecode. See the section about capture nodes for a more elaborate description of this mechanism.

Note that if the current composite contains (multiple) capture nodes with the Auto Sync option enabled, while at the same time there are active input feeds with the option enabled that are not used in the composite, a warning exclamation mark is shown. Since all feeds sync with each other even if not used, an unused feed might influence the timing of the active shot. It is then better to toggle the Auto Sync option off and back on again: when switching Auto Sync on from this menu, only the channels that are used in the active composite shot are enabled.

Shot Length

In most cases the length of a live composite is of little interest, as it plays back in a loop and as such always provides a continuous live image. If, however, you combine the live composite with pre-recorded media or add fixed-time animations, then the length of the composite – and as such the repeat time – is of importance. When enabling the Continuous option, the mini-timeline for scrubbing the play position is no longer visible. You can only set the continuous mode for live capture nodes. For media nodes, you can adjust the in- and out-point of the clip.

Record options

When a live capture node / shot is active in the Player, the main toolbar shows a record button. In the record tab inside the Live FX menu you can set the recording options: the output folder and file naming mask, as well as the format and whether to record the alpha channel and specific audio channels.


Live FX allows you to record in one of three formats: ProRes, H264 or DNx. Depending on the selected format, additional options become available, such as the ProRes sub-codec, H264 quality or DNx bit rate control.
ProRes 4444 and 4444 XQ also allow you to record the created alpha channel of the composite along with the video.

At the bottom of the menu you can enable one or more audio channels to record. In Live FX you can capture the audio with the video input or select a separate audio input through the Audio panel (Capture tab), which is available from the top menu. Optionally you can delay the audio to sync with the captured video.

In addition to the recording format and audio there are a number of options on what should be recorded.

  • Full Shot versus Source Capture. The first option records a single clip with all the sources and grades baked in. The second option records each live capture node in the composition as a separate output, without any grade applied. These source shots are used to create an offline version of the live composition when loaded back into the project. At a later stage you can use these offline compositions to create the online composition with the actual camera raw media. The workflow to create offline and online composites is discussed later in this guide.
  • Write Dynamic Metadata to a sidecar file. All dynamic metadata and live link data, whether actively used in the composite or not, is written to a comma separated (.csv) sidecar file. The file has the same name as the recorded media file and contains the per frame timecodes so you can link any metadata to a specific recorded frame (see below on how to use the metadata from the sidecar file).
  • Auto Load the recorded clip. This option loads the recorded clip as a version shot into the same slot as the composite shot when the recording ends. As such, the clip is immediately available for review from the version stack in the Player.
  • Auto Increment the Scene-Take number of the current (live) shot after recording ends.

Load in Sidecar

The metadata sidecar file that can be created with a recording contains comma separated value (csv) data. You can load and use this data for your online composite with the camera raw media through the Animation editor. If you want to use the metadata for the virtual camera, first enable manual animation in the camera menu and then open the Animation menu. Select the channels you want to animate and then use the Import button to open the csv file. The software will automatically recognize the sidecar file and open the channel mapping dialog to link the metadata to a specific animation channel.

The Sync drop down tells the software from where to apply the animation data:

  • Start at the first frame (in-point) of the shot.
  • Use the current play-position as the start.
  • Start the animation by syncing the shot timecode and the timecode column in the csv. This requires that the csv has a timecode column and that you selected that column from the second drop down.

Next, select the csv column for each selected animation channel using the drop down in the second column of the data grid. Use the options in the Start and End columns to determine the shape of the animation curve at the start and end frame. The last option determines whether each animation point is interpreted as part of a curve or whether the animation points are connected with linear segments.
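
For reference, the sidecar is a plain comma separated file with one row per frame. The sketch below (Python) shows one way to inspect such a file outside the software and look up values by timecode; the column names used here ('Timecode', 'Camera Pan') are purely hypothetical - the actual headers depend on the live links that were active during recording.

    import csv

    # Hypothetical file and column names - actual headers depend on the recorded live links.
    with open("scene01_take03.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    by_timecode = {row["Timecode"]: row for row in rows}
    print(by_timecode["10:24:13:07"]["Camera Pan"])  # metadata recorded for that frame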

Player Controls

When a live capture is playing the main toolbar has a different set of functions than when playing back a media file. Next to the Play/Pause and step frame Forward / Backward there are the Record controls.

    

Auto / Rec

The Auto option is only available in Live FX and when the software is able to recognize and read the record state of a camera from the SDI signal. When the Auto option is enabled, recording will automatically start when the camera starts recording. Alternatively when the Auto option is off or disabled you can click the Rec button at any time to start / stop recording.

Note that you can also use the Remote Control function keys to start/stop recording. The Remote Control is available from the Tools drop down menu in the top toolbar in the Player.

Cue Up

Clicking the Cue Up button will ensure that any fill shots (e.g. used as a background in a green-screen context) and/or frame animations are reset to their start position.

Live Links

Live Links are at the heart of Live FX. They are a source of dynamic metadata that can be used as parameters with composite elements. Live link metadata can also be stored alongside the recording so that it can be used in a post-production context.

There are 2 types of Live Link sources:

  • Sources linked to a specific node. An example is a Live Tracker plug-in node that is placed on a live capture to track a specific element. The tracking data is then available elsewhere in the composite setup to be re-used for other elements. Another example would be the Focal Length dynamic metadata that comes with a live SDI capture of a camera. In these cases the live link data is available throughout the composite, in every (sub) node that is on the same level or downstream from the live link source node.

  • Global sources. Global live links are tied to a source external to the software. This can be a specific camera tracking system (like Mo-Sys) or a generic (Open Sound Control (OSC)) source that sends dynamic metadata to the software. A global Live Link source needs to be explicitly activated, but is then available in any composite node and at any level. Global Live Links are managed from the Live Links panel that is opened from the Live FX menu or the Tools dropdown in the top menu bar.

Manage Global Live Links


The left of the Live Links panel shows all available live links and their state: active / inactive. To activate or to deactivate a Live Link, select it in the list and click the Activate button.

Delay

The Delay setting allows you to delay the data of certain Live Links to be able to sync them with a live image source which might have some built-in latency. The delay is entered in milliseconds. In case the image latency is only known in frames, use the formula [frames * (1 / framerate) * 1000]. A 2 frame delay for a 25fps framerate would amount to: [2 * (1/25) * 1000] = 80 milliseconds.
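
As a quick reference, the same conversion expressed as a small Python helper (purely illustrative; the resulting value is simply entered in the Delay field):

    def frames_to_milliseconds(frames: float, framerate: float) -> float:
        """Convert an image latency in frames to the millisecond delay to enter."""
        return frames * (1.0 / framerate) * 1000.0

    print(frames_to_milliseconds(2, 25))  # 80.0 ms for a 2-frame delay at 25 fps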

Settings Import / Export

All live link settings are stored in the project database. To use all the live links settings from one project in another project, Export the settings using the Settings dropdown and then use the Import option while in the other project.

Live Link Sources

Mo-Sys Tracker

This captures the tracking data from a Mo-Sys tracking system (www.mo-sys.com). Enter the UDP network port that the Mo-Sys system uses to send the tracking data to the system the software is running on. Currently, it captures a maximum of 5 trackers at the same time. Both rotation and xyz-offsets are captured as well as camera focus and zoom values (0.0 – 1.0) if available from the Mo-Sys system. Use the Set Origin button to use the current position as the origin. Ctrl+click the button to clear the origin offset and get the raw values from the tracker.

OSC App Sensor Tracker

This Live Link uses the sensor data from a mobile phone, which is sent by one of the supported apps over the OSC protocol to determine the pan, tilt and roll of the phone (which can be mounted to a camera). It can support up to 3 phone trackers at the same time.

In each of the apps you set the proper (UDP) port number and IP address to send data to, as well as select the appropriate dataset to get the rotation data. Some of the applications allow setting a tag with the data stream. If set, then make sure you also enter that tag into the Live Link settings. Depending on how the phone is mounted to your camera, set the mount option to Flat (phone is mounted with display facing up), Landscape or Portrait (phone is mounted with display towards the camera operator).

The currently supported smartphone apps are: OscHook v2 (Android only), GyrOsc (iOS only) and ZigSim Pro (iOS only for the pro version + ARkit support).

App                    Vectors                   Enable        Runs in Background
GyrOSC (iOS)           Pan, tilt, roll           Gyro          Yes
OscHook v2 (Android)   Pan, tilt, roll           Orientation   Yes
ZigSim Pro (iOS)       Pan, tilt, roll, X, Y, Z  ARKit         No

Make sure that in the Live Links panel you set the same port number and (optional) tag label as you set in the phone app.

The Smooth option applies a filter to the sensor data to get rid of any jitter. Note that the stronger the filter, the more delayed any movement changes may appear. This is inherent to filtering real-time data.

With the Origin option you set the origin (0,0,0) of the rotation data that is passed along in the Live Link. Ctrl+click the button to clear the offset and receive the raw values from the tracker again.

If the phone app also provides xyz positional data, then you can also enter the mount offset from the phone to the camera pivot point (XYZ).

  • The horizontal left/right position of the tracker device in relation to the camera sensor (millimeter)
  • The vertical up/down position of the tracker device in relation to the camera sensor (millimeter)
  • The lateral front/back position of the tracker device in relation to the camera sensor (millimeter)

It is important to measure these offsets as precisely as possible, because the tracker device has a different pivot point than the cinema camera, meaning that a roll of the cinema camera might also result in an offset over the x and y axes for the tracker. With the entered offsets this can be compensated for.

OSC Sender

The OSC Sender Live Link allows you to send dynamic metadata to other systems supporting the OSC protocol. Enter the IP Address and (UDP) port number of the system to send the data to. Optionally add a custom tag for the OSC message. Click Connect to start the sending of live link data.

Which data is sent depends on the selection options for each of the connections.

  • Camera – Send the Virtual Camera XYZ, Pan/Tilt/Roll and Field of View. Note that the Virtual Camera might be linked to an external Live Link (tracker). In that case this option forwards all the data of that tracker.
  • Player – Send the current playhead position (timecode) and player state (play/pause/record).
  • Metadata – Send scene/take data of the current shot.
  • Animation – Use this to send generic animation data. This option sends the xyz-translation, xy-scale and xyz-rotation data of the Canvas of the first layer in the stack. Use an empty layer to send out data that should not influence the local composite but is required on another system.

Note that on the far right of this Live Link, the OSC message formatting is displayed. Use this to properly interpret the data on the receiving third party system.

[The data sent out over OSC can be wrapped in individual packets per value or (by default) as a single bundle. For certain receiving applications this can be important. In the Advanced System Settings you can switch off the Use OSC Bundles option if needed.]
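
As an illustration of what a receiving application could do with this data, the sketch below listens for OSC messages using the python-osc package (an assumption of this example; any OSC-capable framework works). The handler simply prints every incoming address and value; use the message formatting shown in the Live Link panel to map the actual addresses.

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_message(address, *args):
        # Print every incoming OSC message, e.g. camera position or player state.
        print(address, args)

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(on_message)  # catch all addresses

    # Listen on the UDP port that the OSC Sender Live Link sends to.
    server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
    server.serve_forever()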

OSC Source

The OSC Source Live Link captures all OSC data that is sent to the system over the ports specified. You can enter 3 different ports to receive data from multiple sources. Below the port numbers you find the local IP address that you can use in the settings of the source application to send to the correct system. The list next to the ports shows all the tags that are coming in. From that list you can select items to create actual Live Links whose values can be used in your composite. Select an item from the left list and use the > button to add it to the right Live Link list. Use the < button to remove a Live Link item.
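
For testing the OSC Source without a phone or other external application, a minimal sender sketch (again using the python-osc package; the address "/test/value" and the port are arbitrary placeholders) could look like this:

    import time
    from pythonosc.udp_client import SimpleUDPClient

    # IP address of the Live FX machine and one of the ports set in the OSC Source Live Link.
    client = SimpleUDPClient("192.168.1.10", 8000)

    for i in range(100):
        client.send_message("/test/value", i / 100.0)  # "/test/value" appears as a tag in the left list
        time.sleep(0.04)  # roughly 25 messages per second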

OpenVR Tracker

The OpenVR Tracker Live Link uses the HTC Vive headset & trackers and Steam SDK to capture tracking data from an HTC tracker that is mounted to a camera. Read the HTC/Steam documentation to set up the base stations and calibrate the trackers for the base stations. After that you need to calibrate the system inside Live FX to be able to use the tracker data on the (virtual) camera.

The system supports a maximum of 5 trackers at once. To filter out jitter with the trackers, use the Smooth setting. Note that the stronger the filter, the more delayed any movement changes may appear. This is inherent to filtering real-time data.

With each of the (active) trackers you can specify the way the physical tracker is mounted to the camera so the correct translations and rotations can be passed. The XYZ offsets represent:

  • The horizontal left/right position of the tracker device in relation to the camera sensor (millimeter)
  • The vertical up/down position of the tracker device in relation to the camera sensor (millimeter)
  • The lateral front/back position of the tracker device in relation to the camera sensor (millimeter)

If the tracker is mounted perpendicular to the camera, set the mount type to “P”.

To be able to use the tracker data you first need to align the horizontal planes of the tracker grid and the internal virtual camera grid by setting a Rotation (in degrees). You can determine the value automatically by calibrating the system, following these steps:

  • Enable the Calibrate button. The image in the Viewport will show an overlay of the virtual camera grid.
  • Place the physical camera at the start position pointing in a straight 90 degree angle at the scene.
  • Hit the Start button. This will mark the current position as the origin xyz as well as the rotation origin.
  • Now move the camera at least 0.5 meters in a straight line towards the scene. Once you have moved far enough, the Start button will automatically switch off and the Rotation value is calculated.
  • In case you cannot move the camera/tracker towards the scene, enable the “S” (Sideways) button and move the camera 0.5 meters (or until the Start button switches off) to the right, in parallel with the scene.

You need to re-calibrate the system every time the base station/tracker system setup has changed.

During any calibration the origin is set to the start position. However, after calculating the Rotation to align the tracker grid and virtual camera grid, you can set the origin to any other position by clicking the Set Origin button. To remove any origin and see the raw tracker values, use the R (Reset) button next to Set Origin.

Finally, the Reset Overlay option resets the display of the overlay that is shown when in calibration mode. This overlay can be rotated by dragging it in any direction or can be scaled by using Alt+drag. The Reset button sets it back to its original position and scale.

RealSense Tracker

The RealSense Live Link uses the Intel RealSense tracker camera for positional and rotational data. The RealSense tracker camera would be mounted on top of the cinema camera to track its position. The RealSense camera is only available for Windows, and so is this Live Link.

Set Origin

This option resets the camera start tracking point and then uses that initial position as the origin (zero) position. Ctrl+click the button to clear the offset and get the raw values from the tracker again.

Reset

The Reset option only resets the internal zero-origin position, not the camera start position. If a camera is not perfectly horizontally aligned when using the Set Origin option, the function will nevertheless use this as the 0 (zero) rotation point. When resetting the origin, the values will reflect the actual angle of the camera.

XYZ Offsets

The XYZ offsets represent:

  • The horizontal left/right position of the tracker device in relation to the camera sensor (millimeter)
  • The vertical up/down position of the tracker device in relation to the camera sensor (millimeter)
  • The lateral front/back position of the tracker device in relation to the camera sensor (millimeter)

It is important to measure these offsets as precisely as possible, because the tracker camera has a different pivot point than the cinema camera, meaning that a roll of the cinema camera might also result in an offset over the x and y axes for the tracker camera. With the entered offsets this can be compensated for.

Unreal Live Link

The Unreal Live Link forms a bridge from Live FX to the Epic Unreal render engine. The Live Link sends the (virtual) camera positional data as well as scene/take and record state metadata from Live FX to Unreal.

To connect with an Unreal system, enter the correct IP address and (UDP) port to communicate with the software. When running on the same machine, you can use ‘local’ as IP address. Click Connect to start sending data.

To make the camera positional data more directly usable in the Unreal model, you can add a positional offset as well as a scaling factor. Use the XYZ offsets to give the Unreal camera a different origin position. Use the scale factor if the Unreal model does not have the same (meter) scale as the Live FX (virtual) camera.

Download the plugin version that corresponds to your version of Unreal Engine:


Plug-in

To make the interface between Live FX and Unreal Engine easier to manage, a UE plug-in is available that simplifies setting up and using camera tracking data from Live FX. When clicking the UE plug-in link shown with the Live Link, the folder with the plug-in is opened. Follow these steps to include the plug-in in your UE project:

  • Download the plugin version that corresponds to your version of Unreal.
  • Unzip the LiveLink folder that is in the downloaded zip file and place it in the “..\yourUnRealProject\Plugins” folder.
  • (re)Start Unreal, activate the new plug-in and add a new Live Link Source. Make sure the Unreal Live Link has the same TCP port number set as the live link inside Live FX.


Feed-back

There are many different use models for Live FX with Unreal Engine. The Unreal Live Link can forward camera tracking data to Unreal. At the same time, Live FX can capture the image output from Unreal in different ways to composite it into a live camera feed. Live FX can capture the Unreal output through SDI or NDI. In general this causes some latency, as the Unreal signal takes more time than the live camera image. For that reason the live capture node in Live FX has a delay option. By delaying the live camera capture you can sync it perfectly with the Unreal output.

Alternatively, Live FX provides a so-called Spout plug-in, which allows for direct image sharing between UE and Live FX on the GPU. This only works if both applications run on the same machine, but since the image is shared directly on the graphics card, there is no latency. The Spout plug-in is discussed in more detail later.

Wave Generator

The Wave Generator Live Link allows you to generate a repeating signal that can be used for various purposes in a composite, such as using the signal to position a layer or to dynamically alter a grading parameter. With each signal (except for the Random signal) you can set the wave length in frames. The available wave forms are listed below, followed by a short illustrative sketch:

  • Sine wave – values between -1.0 and 1.0.
  • Square wave – values alternating between 0.0 and 1.0.
  • Triangle wave – values between -1.0 and 1.0.
  • Saw tooth wave – values between -1.0 and 1.0.
  • Random – values between 0.0 and 1.0.
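
The sketch below illustrates these wave shapes per frame; it only reproduces the value ranges listed above and is not the actual implementation.

    import math, random

    def wave_value(form: str, frame: int, wave_length: int) -> float:
        """Illustrative wave value for a given frame and wave length (in frames)."""
        t = (frame % wave_length) / wave_length  # phase 0.0 .. 1.0
        if form == "sine":
            return math.sin(2 * math.pi * t)      # -1.0 .. 1.0
        if form == "square":
            return 1.0 if t < 0.5 else 0.0        # alternating 0.0 / 1.0
        if form == "triangle":
            return 1.0 - 4.0 * abs(t - 0.5)       # -1.0 .. 1.0
        if form == "sawtooth":
            return 2.0 * t - 1.0                  # -1.0 .. 1.0
        return random.random()                    # 0.0 .. 1.0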

Node Live Links

There is currently only a single (plug-in) node that generates live link data – the Live Tracker node.

Live Tracker Node

The Live Tracker plug-in is created through the plug-in browser and is placed on a layer in your composite. It allows you to create any number of point trackers to track a specific object in the image. The XY coordinates of each of the trackers are available as live links in your composite. The data of multiple point trackers is combined to calculate scaling and rotational data, which is then also exposed as live link.

   
Add Point Tracker

Add a new point tracker – the overlay of the tracker becomes visible in the image. Drag the tracker to the object that you want to track. The outer box of the tracker represents the search area. The inner box represents the object to search for. You can adjust the sizes of the boxes by dragging the outer borders. Note that the smaller the boxes, the more efficiently the tracker works, but the chance of the tracker ‘losing’ the object increases.

Remove Point Tracker

Remove the currently selected tracker. Select a tracker by clicking its overlay. The overlay tracker number highlights in red for the selected tracker.

Use Relative Position

With this option enabled, the node will pass the relative position to the live link – meaning the offset from the origin position. The origin position is either set when dragging the tracker to a new position, or by clicking the Set Origin button. If the Relative Position is disabled then the node passes the absolute XY coordinates in the image as live links.

Smooth

This applies a smoothing filter over the results of a tracker to filter out any jitter.

Set Origin

Takes the current position of all trackers and sets them as origin. If the Relative Position is enabled then the position of the trackers relative to this origin position is passed as live link.

Track Yaw Pitch Roll

If more than 4 trackers are active, this option tries to calculate a 3-dimensional rotation from the tracker results. This option presumes that the trackers all track objects that have the same relative distance from the camera. This distance is entered as Z Value (meters) in the corresponding control.

ArUco Roll

With this option enabled you indicate that the tracked region contains a so-called ArUco marker. In that case, the Live Tracker will not just track the xy position of the search region within the image but will also try to calculate the angle of the marker. Although it can theoretically determine the rotation of the marker over 3 different axes, currently only the roll angle of the marker is passed as a live link.

Live Animation

Live Link metadata can be used in a composite shot through the Live Animation module. Live Animations are an extension of the standard animation module in the Assimilate Product Suite. In the standard animation module you can create and apply an animation curve to a grade / composite parameter. Through the Live Animation extension you link a Live Link to a grade / composite parameter and optionally apply a transform to the metadata to fit the parameter range. Before you can link a parameter to a Live Link, you first switch on the Live option for an animation channel.


Live

The Live option applies to all parameters in the animation channel. When the Live option is enabled, the Link option becomes available which opens the Animation Editor for the animation channels shown in the current menu.

Link

This opens the Animation Editor for the currently selected animation channel, where you manage live links.

Live Animation Editor

The Live Animation Editor behaves similarly to the standard Animation Editor: in the tree view on the left side you select the channels you want to edit / link.


Live Sources

In the right panel of the editor you link the selected animation channel to a live source by selecting a Live Link from the corresponding Live Source dropdown.

Live Value

After selecting a live source you determine how the live link values are interpreted (a small sketch of the scaling transforms follows the list):

  • Absolute. Use the Live Link value one-on-one as parameter value.
  • Invert. Multiply the Live Link value with -1 to invert the sign of the value.
  • Offset. Add or subtract (negative offset) an amount to the Live Link value and apply the result to the channel parameter.
  • Invert + Offset. Combines the two options above.
  • Factor. Scale the Live Link value by multiplying it with a number.
  • Normalize. Scale the Live Link value to a value between 0.0 and 1.0. Enter the minimum and maximum values that the Live Link can produce.
  • Range. Scale the Live Link value to a pre-defined range. Enter the minimum and maximum values that the Live Link can produce as well as the minimum and maximum range the scaling should produce.
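
The scaling options amount to simple value mappings. A minimal sketch of the Normalize and Range transforms, assuming the minimum and maximum values are entered as described above:

    def normalize(value: float, in_min: float, in_max: float) -> float:
        """Scale a live link value into 0.0 .. 1.0."""
        return (value - in_min) / (in_max - in_min)

    def to_range(value: float, in_min: float, in_max: float, out_min: float, out_max: float) -> float:
        """Scale a live link value into a pre-defined output range."""
        return out_min + normalize(value, in_min, in_max) * (out_max - out_min)

    # Example: map a tracker pan of -90..90 degrees onto a 0..360 rotation parameter.
    print(to_range(45.0, -90.0, 90.0, 0.0, 360.0))  # 270.0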

Virtual Camera

The (virtual) camera in Live FX works to a certain extent the same way as you might be familiar with in other 3D render software.


There are however also differences. You manage the (virtual) camera settings from the Camera menu. The PanZoom panel (start from the top menu bar) shows a schematic view of the virtual camera.

  

The virtual camera is always looking at the main (primary) image.

  • For normal (re)view of media - live captured or pre-recorded - the virtual camera settings do not matter. Only once layers are added to a scene, the virtual camera (settings) become important as that determines how the main image relates to layers which might have content (fill elements, backgrounds, etc.).
  • All grading / compositing layers that are created on top of the primary image are placed in a 3D space, by default in front of the virtual camera when the virtual camera is at its origin position.
  • The virtual camera can also be placed and moved in 3D space but this then only affects how the virtual camera looks at / through the layers that you created, onto the primary image.
  • The virtual camera can move along the xyz-axis with the corresponding controls, as well as rotate around the axis (pan, tilt and roll).
  • Please note that the z-axis is forward/backward in the Live FX model and the y-axis is up/down. This might differ from other 3D render software.
  • Optionally you can set a layer to be positioned relative to the virtual camera position rather than an absolute position in 3D space. Use the Relative option in the Canvas menu to ensure that a layer 'sticks' to the virtual camera position as it changes position.
  • Note that each shot has its own virtual camera. Even a shot that is used inside a composition shot has its own virtual camera - although in most cases you would only use the virtual camera of the main / master shot.

The aim is to have the virtual camera behave the same as the actual camera. For that, a number of settings are important. The diagram below shows the various aspects of the (virtual) camera, which are discussed in more detail below.


Focal Length and Sensor Size

The virtual camera has a Focal Length setting as well as a Sensor type / size. The Focal Length setting should be the same as that of the actual camera you are using. The Focal Length settings can be animated. The Sensor type can be selected from a predefined list or set to custom. For the correct functioning of the (virtual) camera the Sensor Size and Crop value is important. The combination of Focal Length, Sensor Height and Crop determine the (vertical) Field of View of the camera.

(vertical) FoV = 2 * atan((Sensor Height * Crop) / (2 * Focal Length))

The exact Crop value of the sensor by default is 1.0 but can be calibrated through the Camera Profile panel, discussed below.
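
A quick worked example of the formula (the sensor and lens values are purely illustrative):

    import math

    def vertical_fov(sensor_height_mm: float, crop: float, focal_length_mm: float) -> float:
        """Vertical field of view in degrees."""
        return math.degrees(2.0 * math.atan((sensor_height_mm * crop) / (2.0 * focal_length_mm)))

    # e.g. a 24 mm sensor height, crop 1.0 and a 35 mm focal length
    print(vertical_fov(24.0, 1.0, 35.0))  # roughly 37.8 degrees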

Field of View

The (vertical) Field of View determines the default position of a layer. The distance between the default (zero) camera position and the default (zero) canvas position is:

Z Distance = [Image Height] * 0.5 / tan(FoV * 0.5)

This virtual Z distance is what is called the Working Distance for the real camera.

Working Distance

By default all camera and layer positional data is in pixels and is relative to the primary image size. However, when you use an external camera tracker, the positional data is most likely in meters. For the virtual camera to behave similar to the actual camera, the pixels-per-meter ratio is needed. By entering a Working Distance you implicitly set this ratio. Once you have entered the Working Distance, the xyz virtual camera position is interpreted in meters and so are the xyz translate parameters of a layer (Canvas menu). In fact, when the Working Distance is set, the Translate parameters in the Canvas menu will show that the values are interpreted in meters.


The value of the Working Distance is equal to the distance between the camera focal point and the person/object in the scene that you are shooting. The more precisely this is determined, the better the virtual model will behave like the actual camera / scene. The Working Distance can be calibrated from the Camera Profile panel, discussed below.
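
To illustrate how the Z distance formula and the Working Distance combine, the sketch below derives the virtual Z distance and - under the assumption that the pixels-per-meter ratio is simply that Z distance divided by the Working Distance - the implied ratio. All values are illustrative.

    import math

    def z_distance_pixels(image_height_px: float, fov_degrees: float) -> float:
        """Default distance between the virtual camera and the canvas, in pixels."""
        return image_height_px * 0.5 / math.tan(math.radians(fov_degrees) * 0.5)

    image_height = 2160        # primary image height in pixels
    fov = 37.8                 # vertical field of view in degrees
    working_distance = 4.0     # measured camera-to-subject distance in meters

    z_px = z_distance_pixels(image_height, fov)
    pixels_per_meter = z_px / working_distance  # assumption, see lead-in
    print(z_px, pixels_per_meter)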

Ground Level

By default the virtual camera and layer positions are (0,0,0). The actual camera is most likely not at floor level but mounted at a certain height. This absolute height is likely also passed through the camera tracking data, while it is more convenient for the virtual model to start at height 0.

Internally, the Ground Level value is subtracted from the Y position (height) and as such can be used to start at height 0. Use the Set button to copy over the current Y value as Ground Level and thus setting the height origin.

In case the camera tracker passes a relative height position rather than an absolute height position (and as such starts at 0), then leave the Ground Level at zero as otherwise the model would show the virtual camera to be positioned below the ground level.

The Ground Level can be calibrated from the Camera Profile panel, discussed below.

By-Pass

The By-pass option in the Camera menu allows you to ignore the camera position and rotation for rendering of the scene while maintaining a link with an external tracker through a Live Link.

The By-Pass option is useful if you need to adjust certain settings in your composite and need the image to be temporarily free from any motion. It is also useful if you are linking to an external tracker / Live Link and forwarding the tracker information to another system, but do not require the motion in the local composite.

Camera Profiles

A camera profile contains all the information for the Live FX virtual camera to correctly function in a specific context: focal length, sensor size and crop, working distance, ground level, lens distortion parameters and potentially the offset between the tracker device and the camera focal point. Camera profiles are created or applied from the Camera Profiles panel, which can be opened from the Live FX menu or the Camera menu. The Camera Profiles panel also contains tabs for the various calibration steps.


On the first tab of the panel you create, manage and apply profiles. The second tab offers functions to calculate the lens distortion of the physical camera. The third tab offers functions to calculate the working distance and ground level of the physical camera.

Select Profile

Select one of the available camera profiles. If no profiles are available yet, click New. Note that camera profiles are stored in the project database and as such are only available inside the current project. See Import / Export options to move profiles from one project to the next.

New

Create a new camera profile.

Delete

Remove the selected profile. Note that this action cannot be undone.

Profile

Name

Set the camera profile name.

Sensor

Select one of the predefined sensor types from the drop down to match your camera or select Custom and adjust the Width and Height of the sensor (in millimeters). By default the crop value is 1.0. The crop value can be determined by doing a calibration with the functions on the third tab of the panel.

Focal Length

Enter the camera focal length in millimeters. Note that when applying the profile to a virtual camera, this value can be animated. Given that the Sensor data is correct, the internally used Field of View value can be calculated from the Sensor size, crop and camera focal length.

Working Distance

The distance in meters from the camera focal point to the main focus element on-set. The value can be calculated on the third tab of the panel.

Ground Level

The height of the camera position in meters. This value can be calculated on the third tab of the panel. Note that when a camera tracker passes a relative position rather than an absolute position, this value is best left at 0, as otherwise the virtual camera's default position would end up below the virtual ground level.

Lens Distortion Parameters

Five coefficients that together describe the lens distortion. These parameters can be used with the (Un)distort plug-in to create an (un)distorted image. The distortion parameters can be calculated with the functions on the second tab of the panel.
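
The exact meaning of the coefficients is not documented here, but given the (Un)distort node's reference to OpenCV-style algorithms, they presumably correspond to the common (k1, k2, p1, p2, k3) distortion model. For reference only, undistorting an image outside Live FX with such parameters could look like the sketch below; the camera matrix and coefficient values are placeholders.

    import numpy as np
    import cv2

    # Placeholder intrinsics; fx, fy, cx, cy would come from your own calibration.
    camera_matrix = np.array([[1500.0, 0.0, 960.0],
                              [0.0, 1500.0, 540.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.array([0.1, -0.05, 0.001, 0.0005, -0.02])  # k1, k2, p1, p2, k3

    image = cv2.imread("frame.png")
    undistorted = cv2.undistort(image, camera_matrix, dist_coeffs)
    cv2.imwrite("frame_undistorted.png", undistorted)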

Tracker Mount Offsets

A camera tracker is mounted at an offset to the actual focal point of the camera. As such, a rotation of the tracker also includes a small xyz translation that does not apply to the camera focal point. To compensate for this, you can enter offsets with most Live Link camera tracker options. However, it can be useful to also store these values in the camera profile. From the perspective of the camera operator (behind the camera): the Z offset represents the distance the tracker is mounted behind (positive) the focal point, the Y offset how far the tracker is mounted above (positive) the focal point and the X offset how far to the right (positive) of the focal point. All in millimeters. If the current virtual camera is linked to a tracker, then using the Apply or Update functions in the Profile panel will update the offsets in the tracker or in the profile.

Update

Copy the camera settings of the current active shot to the selected profile. Note that if the active shot contains a layer with an Undistort plug-in, the parameters of that plug-in are automatically added to the profile.

Apply

Apply the selected profile on the camera settings of the current active shot. Note that if the profile contains lens distortion parameters, the Apply option will automatically create a layer with an Undistort plug-in, using the parameters from the profile.

Import / Export

Profile settings are stored in the project database. If you want to re-use profiles in a different project, Export the settings to a file on disk from the current project and open that file again in the other project using the Import option. Note that the All option determines whether all profiles are exported to file or only the currently selected profile.

Close

Close the camera profile panel.

Lens Calibration

To determine the lens distortion parameters, you need to capture a checkerboard pattern a number of times from different positions / angles with the camera, so that all captures together cover the full camera image.

Start by specifying the type of checkerboard you are using: this can either be a Charuco board or a plain checkerboard. A Charuco board might be easier to use, as a capture does not require the full board to be in sight but can also work with a partial view. Next, set the properties of the board used: the number of rows and columns of squares, the size of a square (in millimeters) and, in case of a Charuco board, the size of a (square) Aruco marker within a square of the checkerboard. The final preparation step is to set the number of captures you want to do: by default this is set to 20, but in some cases a higher number is required to get accurate results. Next, continue with the following manual steps:

  • Hold up the checkerboard in front of the camera so that the checkerboard is visible.
  • Click the Manual Capture button. If the software detects the checkerboard an orange outline is drawn.
  • Repeat this step the number of times you have set - each time placing the checkerboard at a different position (or keep the board at a fixed position and rotate the camera). The software will indicate the step number.
  • If needed, use the Reset button to start from the beginning.
  • After the last capture, the software calculates the distortion parameters and updates the profile.

As an alternative to the Manual Capture, you can also enable the Auto Capture. Once enabled, this function will try to capture the board every 2 seconds automatically, without the need to explicitly press a button. Another way to make the manual calibration process easier is to use the remote control (web) application that comes with the Assimilate Product Suite. The Remote Control can be started from the Tools menu in the Player. This opens the http server panel with a QR code that you can scan with a phone or tablet to open the application. The application has the standard remote control playback controls but also a series of function controls (F1 – F5). These can be assigned a function in the server panel. One of those functions can be to do a Capture Board for the calibration process.

  

This way you can move the checkerboard to a new position and hit the function key on your mobile phone instead of having to go back to the Live FX computer to click the button in the interface.
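
For comparison only: the same style of calibration can be done with OpenCV and a plain checkerboard, which is one way to understand the distortion coefficients stored in the profile. The sketch below is not the Live FX implementation; the board size, square size and file paths are placeholders.

    import glob
    import numpy as np
    import cv2

    rows, cols = 6, 9                 # inner corners of the checkerboard
    square_size = 25.0                # square size in millimeters

    # 3D reference points of the board corners (Z = 0 plane).
    objp = np.zeros((rows * cols, 3), np.float32)
    objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square_size

    obj_points, img_points = [], []
    for path in glob.glob("captures/*.png"):   # the individual board captures
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, (cols, rows))
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Returns the camera matrix and the distortion coefficients (k1, k2, p1, p2, k3).
    ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print(dist_coeffs)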

Calibrate Camera

Calculate the camera Field of View and from that determine the virtual camera sensor crop value - given its size. Use the second calibration step to determine the camera working distance and ground level - which effectively determine the camera position relative to the origin of your scene.

Checkerboard

Calibrating the Field of View and camera sensor requires a regular checkerboard pattern or a Charuco board. For this calibration there is no preference for one or the other. Make sure the board properties are correctly filled out: row and column count, as well as the exact size of the squares and (optionally) the size of the Aruco marker inside a square.

Field of View / Crop calibration steps

  • Enter the distance between the camera lens and the board in meters.
  • Click the Capture Board button to capture the board. If the board is detected correctly, an orange rectangle is drawn around it.
  • The calculated FoV and derived Crop factor are shown and the profile is updated.

Working Distance / Ground Level calibration steps

  • Place a single (printed) Aruco marker (not a Charuco board) on the ground at the origin of your scene. An example Aruco marker can be opened from the Show Example link in the panel. If the Aruco marker cannot be placed exactly at the origin of the scene, place it at a position closer to the camera and enter the exact offset (in meters) of the marker to the scene.
  • Point the camera (down) to have the Aruco marker in view and click the Capture button.
  • If the marker is detected, an overlay shows the angle under which it was captured, the distance and ground level are calculated, and the profile is updated.

Note that after a calibration only the profile is updated, not the active virtual camera of the current shot. Use the Apply button on the first tab to update the active virtual camera.

Composition Assemble

In the paragraph about Recording the option to record the clean sources (rather than the full graded composition) was briefly discussed. In this paragraph the workflow to start with a live composition, create and tweak an offline composition and ultimately create the final online composition is discussed. The workflow is displayed in the diagram below.

With the Record: Source Capture, Auto Load Clip and Write Metadata Sidecar options set in the Live FX menu, every recording of a live composition shot is automatically transformed into an offline composition shot, where:

  • A full copy of the original (composition) shot is made.
  • Any live capture node inside the composition shot is replaced with the recorded source.
  • Any live link animation is replaced with data from the sidecar file and put in an animation curve for the specific parameter.
  • The shot is tagged as an offline composite.

An Offline Composition can be worked on as any normal composition: e.g. smooth animations or adjust grades. Once the high resolution camera media has been offloaded from the camera and loaded into your project, you can create the Online Composition. To transform an Offline Composition into an Online Composition (or directly a Live Composition into the Online Composition), you use the Online Composition Assembler which is available from the Online Compositions button in the Construct menu.

The panel lists all Live and Offline composition shots in the current timeline. If a shot is selected in the timeline it is enabled in the list, otherwise it is disabled. This way you can quickly transform just a single node. Click the button in the first column of the list or use the Select option below the list to alter the selection. The next step is to point the Assemble function to where it should look for the Online media: either select a specific group or timeline in the project or use the Media drop down to select a specific folder on disk. Then make sure that you point the Assemble function to the correct folder with the sidecar files containing the (animation) metadata. The function will match the correct files with the correct composition.

After pointing the function to the correct locations for media and metadata, start the matching function and tell it where to put the results by using the Target drop down. The Assemble button will start the actual creation of the Online Composition:

  • Copy the Live or Offline composition.
  • Replace live capture nodes or offline recordings with the online high resolution camera media.
  • In case of a live source, replace the live links with the animation data from the sidecar file.
  • Place the new Online Compositions in either a new timeline, as versions in the same slot as the Live / Offline compositions or attached to the pen from where you can drop them anywhere you want.

Note that you can use Create All Takes in one go when going directly from a Live Composition to an Online Composition. However, you do need to make sure that the Online media has the correct cam-id and scene-number metadata to match with the Live Compositions, as a timecode match will not work.

In the right part of the Composition Assembler you can see the matched Online shots that were found for the Live or Offline Composition. If for some reason you find there is a wrong match, then either use the Remove button to remove the individual shot or use the Clear Matches button to clear all matches with all nodes in the list and start a clean new match round.

Live FX nodes

Live FX contains the same media and (effects) plug-in nodes as the Finishing and VR versions of the Assimilate Product Suite. There are however a number of nodes that are only available in Live FX or that are of more importance in the Live FX context.

Capture Node

At the heart of Live FX is the live Capture node, which captures the live feed through the Video IO interface – using SDI, NDI or even USB for certain cameras. To ensure that the node can capture the correct media feeds, first make sure that the Video IO is set up correctly. Go to the System Settings panel from the startup screen and then select the Configure Video IO button in the General tab of the System Settings. In the Video configuration panel make sure the correct devices and input / output channels are enabled, and the correct formats are set.

On the node itself, select the Device and Channel to capture. If an SDI channel is captured from one of the known cameras then select the correct camera so that the metadata from the camera is interpreted correctly.


The Auto TC Sync option allows you to synchronize multiple captured feeds based on their timecodes by buffering frames until all feeds have produced the frame with the same timecode. Note that the setting is linked to the selected capture channel, not to the capture node (you can have multiple capture nodes that all link to the same channel). Also note that if you enable auto-sync on multiple channels and one channel fails to produce real-time frames, this will stall the other channels. If the timecodes of the various feeds differ too much, auto sync is ignored.
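
Conceptually, the auto-sync mechanism buffers frames per feed and only releases a frame once every enabled feed has delivered that timecode. A simplified illustration of the idea (not the actual implementation):

    from collections import defaultdict

    class TimecodeSync:
        """Minimal sketch: hold frames per feed until all feeds delivered the same timecode."""

        def __init__(self, feeds):
            self.feeds = set(feeds)
            self.buffers = defaultdict(dict)  # feed -> {timecode: frame}

        def push(self, feed, timecode, frame):
            self.buffers[feed][timecode] = frame
            # Release only when every feed has buffered this timecode.
            if all(timecode in self.buffers[f] for f in self.feeds):
                return {f: self.buffers[f].pop(timecode) for f in self.feeds}
            return None  # keep buffering; a slow feed stalls the others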

As an alternative to Auto Sync, you can also set a manual delay for an individual feed, in milliseconds. This is useful when you capture multiple feeds from different sources that do not have (similar) timecodes.

If the source device is NDI or USB, use the Configure option to open the source selection panel. There you can select the correct source from a list or search for a source based on an IP address.

Live Tracker

The Live Tracker was discussed in the section about Node Live Links.

(Un)distort Node

The Lens (Un)distort plug-in node can be used to distort or undistort an image based on lens distortion parameters or by directly using a UV-map.


The plug-in has a calibration mode which works similar to the calibration in the Camera Profile lens calibration menu.

To edit the lens distortion parameters directly, first select the Edit option. After adjusting the values, make sure you select the Calculate option. Since creating a new (un)distortion map always takes a bit of time, this cannot be done dynamically while changing the parameters, but needs to be done as a single post process.

You can export the parameters in an xml format, which can be loaded back into another instance of the plug-in or, in some cases, into other software using similar OpenCV algorithms for the (un)distort function. Alternatively you can load / save a UV-map to (un)distort an image.

Note that based on the same parameters you can do both a distort and an undistort, using the Mode setting. However, when loading an external UV Map, you cannot invert that.

Spout GPU Share

The Spout GPU Share plug-in allows using an image from another application directly from the GPU. If the other application supports the Spout framework and is active (on the same machine) then the application will list as a source in the drop down control.

Sharing images directly on the GPU prevents the latency issues that occur when a frame first has to travel over the Video IO output interface of the source application and then the Video IO capture interface of Live FX. Note though that this only works if both applications run on the same system – which in turn might cause performance issues.

RealSense Depth

The Intel RealSense depth camera uses Lidar to generate a depth image. The RealSense depth plug-in captures this image and can generate a matte from it, which can be combined with an RGB camera image. Note that the RealSense depth camera is only available for Windows. Further note that this plug-in is still somewhat experimental: the resolution of the camera is limited, as is the accuracy of the depth information and its handling of reflections on certain surfaces. Nevertheless, the plug-in offers a way to start using new technology in a creative way. The aim of the current implementation is to use it as an overlay on the actual camera image and as such create a dynamic mask based on the depth information, which could potentially replace green-screen keying.


Mode

  • Alpha Mask. Use the Threshold distance to generate a matte with everything further away than the threshold being fully opaque.
  • Depth as Alpha. Put all depth information in the alpha channel of the image. Use the input shot of the plug-in for the RGB part.
  • Depth as RGB. Generate an RGB image based on the depth information.

Max Distance

The max distance determines how the depth range is mapped onto the alpha or RGB shades.

Threshold

Used with the Alpha Mask mode to generate a matte from everything that is closer or further away than the threshold distance. This mimics a green-screen setup, without the green screen.

Invert Mask

Inverts the alpha values / mask related to the distance.

Smooth Radius

Smooths the alpha values to reduce jitter along the edges of objects.

Calibrate

Enable this setting to get an RGB representation of the depth info so it is easier to align the depth image with the actual camera image.

Scale & Offset

Scale and offset the depth image to match the actual camera image.
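For reference, the sketch below shows how a comparable depth-threshold matte could be generated directly with Intel's pyrealsense2 SDK in Python. It is a minimal illustration of the Alpha Mask idea, not the plug-in's implementation, and the threshold value is a placeholder.

import numpy as np
import pyrealsense2 as rs

THRESHOLD_M = 2.0  # placeholder threshold distance in meters

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipeline.start(config)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    depth_m = np.asanyarray(depth_frame.get_data()) * depth_scale  # depth in meters

    # Alpha Mask mode: everything further away than the threshold becomes opaque.
    # Pixels with no valid depth (0) remain transparent in this simple sketch.
    alpha = np.where(depth_m > THRESHOLD_M, 255, 0).astype(np.uint8)
finally:
    pipeline.stop()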

Eq -> 2D Transformer

The Eq -> 2D Transformer plug-in extracts a 2D view from a 360° equirectangular image. You can easily link camera (rotation) tracking Live Links to the yaw/pitch/roll controls of the plug-in and so adjust the 360° background image of a green-screen setup with the camera motion.
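As a rough illustration of what the yaw/pitch/roll controls do in such a transform, the numpy sketch below samples a rectilinear view out of an equirectangular image. It is a simplified model, not the plug-in's algorithm; the field of view, the rotation order and the nearest-neighbour sampling are assumptions.

import numpy as np

def eq_to_2d(equirect, out_w, out_h, fov_deg, yaw, pitch, roll):
    """Sample a rectilinear view (out_w x out_h, horizontal FOV fov_deg)
    from an equirectangular image, rotated by yaw/pitch/roll in degrees."""
    eh, ew = equirect.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)

    # Ray direction for every output pixel (camera looks along +Z).
    x, y = np.meshgrid(np.arange(out_w) - out_w / 2, np.arange(out_h) - out_h / 2)
    dirs = np.stack([x, y, np.full_like(x, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotation matrices (assumed order: roll, then pitch, then yaw).
    cy, sy = np.cos(np.radians(yaw)), np.sin(np.radians(yaw))
    cp, sp = np.cos(np.radians(pitch)), np.sin(np.radians(pitch))
    cr, sr = np.cos(np.radians(roll)), np.sin(np.radians(roll))
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll around Z
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch around X
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw around Y
    dirs = dirs @ (Ry @ Rx @ Rz).T

    # Direction -> longitude/latitude -> equirectangular pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])            # -pi .. pi
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))           # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi) + 0.5) * (ew - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (eh - 1)).astype(int)
    return equirect[v, u]

Linking a live camera rotation to the yaw/pitch/roll arguments of such a function is, in essence, what keeps the extracted 2D background in step with the physical camera.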

NOTCH Blocks

Notch Builder (https://www.notch.one/) allows you to create 3D elements or complete scenes which can be exported as Notch Blocks. Live FX can load such a Notch Block as part of a (live) composition shot, where the various parameters of the Notch Block can be set, animated or linked to a live source and as such be combined with a live camera feed.

A Notch Block can be loaded directly from disk or by first selecting a Notch Block plug-in node through the plug-in browser in the player. When loaded through the plug-in browser, you are asked to select a specific Notch Block *.dfxdll file from disk. To be able to render the content of a Notch Block you need the Notch render engine installed as well as a Notch license. Certain Notch Blocks can take some time to load the first time you open them. Loading a project with Notch Block nodes in it will in itself not take much extra time; however, the first time a Notch Block needs to render an image in the player may again take extra time.

Each Notch Block has its own set of parameters / controls. However, all Notch Blocks share the same three general controls. Height and Width determine the size of the output of the Notch Block. Note that by default the size is the same as the size of the Live FX node, but it can be adjusted to e.g. a smaller size to speed up rendering (you can then upscale the node to the composition resolution). The third general control with each Notch Block is the Layer selection. A Notch Block can contain one or more layers, of which only one can be selected at any one time. Note, however, that you could instantiate multiple Notch nodes in a single composition, each with a different layer selected.

Certain Notch Blocks can contain properties that are exposed as Live Links (similar to the output of the Live Tracker). These Live Links can in turn be used to control and steer other aspects of the composition. Check the Live Animation editor for available Notch Block parameters.

NotchLC

Next to rendering Notch Blocks, Live FX also supports playback of (QuickTime) media encoded with the NotchLC codec. Because NotchLC decodes on the GPU, it helps ensure real-time playback.

LED Wall Back Projection

Using an LED wall on-set for projecting the scene background is becoming more common. The Assimilate Product Suite, and Live FX in particular, has a series of tools to support LED back-projection workflows. Some of the tools are mentioned here, but detailed descriptions are available elsewhere in the user manual. The Live FX-specific functions are described in more detail in this and the next paragraph.

If the background footage is pre-recorded (not dynamically generated CGI), then having a professional player with support for a wide range of formats (including camera raw) is important. The player should have advanced options for scaling and framing as well as playback of non-rectilinear media, to make sure that the background image fits with the actors in the foreground.

Furthermore, since the background is part of the final recording, having a full set of professional grading tools is crucial. The on-set conditions will require color adjustments that are hard to predict beforehand, and grading the background separately in post-production increases cost.

Part of grading / setting the look on-set is regulating the on-set lights: make sure the lights work together with the back projection to set the proper look. Live FX has a DMX module to control and sync lighting with the actual background projection, either to create an ambi-light effect or to create specific light effects. The DMX module is described in more detail later.

In some cases it is easier to have multiple systems playing in parallel, each with its own function: background playback, actor display or light control. The Sync Player function links the playback of multiple instances of the software, with one system acting as the master system.

The SDI output of the Assimilate Product Suite is automatically gen-locked if an external gen-lock source is detected.

Actor Cues and Monitor

Since in an LED wall setup the actors in most cases stand with their backs to the background projection, it can be useful to provide cues that help them sync their actions with what is happening in the background.

Using the multi-display output of the Assimilate Product Suite software, you can easily add a monitor for the actors on which you can display a countdown: the actor cue.

Actor cues can be enabled as part of the Guides function. After enabling this option in the Guides menu and enabling it for a specific output in the Player – Settings – Monitors menu, you create one or more notes at specific points in the shot that is played as background. After creating a note (i.e. at the top of the right-side metadata stack), a countdown is displayed on the actor monitor. The countdown is shown in the color of the note, so you can create countdowns for each actor in their own color.

   

Besides using the Actor Cues for timing actor actions, a dedicated actor monitor can in some cases also be used to give actors info on what is playing in the background, or to provide a separate ‘foreground feed’ so the actor can see what would be in front of them (e.g. a car scene where the actors can see the turn coming). The actor monitor can also provide a more natural reflection on the actor, in line with the scene. Optionally, the Sync Player function can be used to drive the actor monitor from a separate system.

DMX

With the DMX module in Live FX you can control on-set lights, based on the media that is played back on the LED background wall (or any other media upon which the lighting should act). You can start the DMX module from the Live FX menu or from the Tools dropdown menu in the player top menu bar.


Using the DMX functions involves 3 steps:

  • Connect to one or more DMX devices.
  • Fixture setup: select the lights and the channel layout of those lights.
  • Link the light to a specific part of the (back projection) image to adjust its color dynamically during playback.

DMX Devices

The DMX module supports two interfaces for connecting to a DMX device:

  • ArtNet. This uses Ethernet to connect to one or more DMX devices. Such a device might be a light/fixture with the ArtNet interface built in, or it might be a device that in turn connects to the lights/fixtures through a serial connection.
  • USB – supporting Enttec devices which in turn connect to a light/fixture using XLR.

To manage the connections, use the corresponding tabs in the DMX panel. When starting the DMX function, the software automatically scans for Art Net and USB devices. Use the Refresh button to scan again if you connected new devices to the network/computer while the panel was already started.

ArtNet

Local Address

If the system has multiple network connections then select the network to scan for ArtNet devices.

Active Nodes

All ArtNet devices found are listed in the Active Nodes table. That table also shows the IP address of the nodes as well as the DMX Universes the nodes are using.

Sub-Net

An ArtNet device can have a specific sub-net set to be used in addressing the device. If you want to address that device, then the Sub-net settings should have the same value. Usually this is 0. The main Net number for the DMX ArtNet module in the software is currently always 0.

Port Universes

By default the DMX ArtNet implementation has 4 output ports, where each port can address a different DMX Universe. You can alter the DMX Universe to any value in the 1-16 range.
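For context, ArtNet transports the DMX channel data of a universe as ArtDMX packets over UDP (port 6454), with the Net and Sub-Net/Universe values ending up in the packet header. The Python sketch below builds and sends one such packet; it illustrates the protocol rather than Live FX internals, and the target IP address and channel values are placeholders.

import socket

def send_artdmx(ip, universe, channel_values, net=0, sequence=0):
    """Send one ArtDMX packet (up to 512 DMX channel values) to an ArtNet node."""
    data = bytes(channel_values[:512])
    if len(data) % 2:                                   # DMX data length must be even
        data += b"\x00"
    packet = bytearray(b"Art-Net\x00")                  # protocol ID
    packet += (0x5000).to_bytes(2, "little")            # OpCode: ArtDMX
    packet += (14).to_bytes(2, "big")                   # protocol version
    packet += bytes([sequence & 0xFF, 0])               # sequence, physical port
    packet += bytes([universe & 0xFF, net & 0x7F])      # SubUni (sub-net + universe), Net
    packet += len(data).to_bytes(2, "big")              # data length
    packet += data
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, (ip, 6454))
    sock.close()

# Example: set the first three channels (e.g. R, G, B of a fixture) on universe 1.
send_artdmx("10.10.10.1", universe=1, channel_values=[255, 128, 0])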

USB Devices

All devices found are listed in the table where the name and serial number are displayed as well as the DMX Universe that the device is linked to. You can change the DMX Universe by clicking the column and entering a different number [1-16].

Fixture Setup

The first tab on the DMX Controller lists all the available fixtures / lights. The second set of tabs (Fixture, Universe, Channels) contains the settings of the selected fixture.

Add

This opens a new panel with a list of predefined fixtures. The first time you open this list, it will appear empty, but a download will automatically start to retrieve a list of existing fixtures. Once the list is populated, select a manufacturer, the fixture and then the mode it should operate in. This determines the DMX channel layout for the fixture. Click OK to add the selected fixture to your setup.


New

Click New to create a new ‘empty’ fixture for which you will enter the DMX channel layout yourself.

Dupl

Duplicate / make a copy of the currently selected fixture.

Delete

Remove the selected fixture from your setup.

Up / Down

Move the position of the selected fixture in the list of fixtures.

Universe

In the Universe tab you assign the selected fixture its position within a DMX universe. You can have multiple DMX Universes and each DMX Universe contains 512 channels.

Universe

Assign the DMX Universe the fixture is located in. The DMX Universe is tied to a DMX device (ArtNet or USB) that you are connected to.

Start

The start channel of the fixture within the DMX Universe.

Channels

The number of channels the fixture spans.

Repeat

With this option you basically create additional fixtures of the same type and assign them the subsequent DMX channels after the selected fixture. When the repeat count is higher than 1, additional options become available.

Distribute / Duplicate

The Duplicate option means that each instance of the (repeated) fixture uses the same selected image area for its color assignment. The Distribute option means that the selected image area is split into segments and each instance of the (repeated) fixture uses its own segment for its color assignment.

Segments

This setting determines the number of rows used to divide up the selected image area over the instances of the repeated fixture. A value of 1 divides the selected area horizontally into as many segments as there are repeated fixtures.

Inverse

Normally each of the repeated fixtures is assigned a segment of the selected image area going from top left to bottom right. By enabling the Inverse setting, the assignment goes from bottom right to top left. This is useful if you want to invert a certain light effect, e.g. rolling lights that mimic driving through a tunnel.
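As a conceptual sketch of the Distribute / Segments / Inverse combination, the Python snippet below splits a selected image area into equally sized segments for a number of repeated fixtures, optionally in inverse order. The function and parameter names are illustrative only.

def distribute_area(x, y, width, height, count, rows=1, inverse=False):
    """Split a selected image area into `count` segments laid out over `rows`
    rows, one segment per repeated fixture."""
    cols = -(-count // rows)                       # segments per row, rounded up
    seg_w, seg_h = width / cols, height / rows
    segments = []
    for i in range(count):
        r, c = divmod(i, cols)
        segments.append((x + c * seg_w, y + r * seg_h, seg_w, seg_h))
    if inverse:                                    # assign from bottom right to top left
        segments.reverse()
    return segments

# Example: 8 repeated fixtures sharing one horizontal row of a 1920x100 area.
print(distribute_area(0, 0, 1920, 100, count=8))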

Channels

Each function of a fixture is controlled through a DMX channel. The mapping of functions to channels is determined by the fixture itself – either in a fixed way or depending on the mode it operates in. The DMX module of the software only controls a limited set of functions. In the Channels tab you map the fixture channels to the functions supported by the software. Each channel is assigned one of the following functions by selecting it from the dropdown in the ‘Type’ column of the channel table:

  • None. The channel is not linked to a function. You can however still set a constant value for the channel in the value-column of the channel table. If no explicit value is set then the software will send out value 0 for this channel.
  • Dimmer / Dimmer fine. Control the brightness of the fixture. The value is determined by the slider control on the Fixture tab. If the fixture allows for a two channel dimmer function, for more fine grained control, then use both the ‘fine’ and regular option on subsequent channels.
  • Red / Red fine. Control the Red color channel of the fixture. The value comes from the image (area) that the fixture is linked to.
  • Green / Green fine. Control the Green color channel of the fixture.
  • Blue / Blue fine. Control the Blue color channel of the fixture.
  • White. Control the white color channel of the fixture. The value comes from the luminance of the image (area) that the fixture is linked to.

Use the last column in the channel table to add a description to a channel for later reference.
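The regular and ‘fine’ channel pairs together form a 16-bit value, with the regular channel carrying the coarse (high) byte and the fine channel the low byte. A minimal sketch of that split, assuming an input value normalized to the 0.0-1.0 range (the function name is illustrative):

def to_coarse_fine(value):
    """Convert a normalized value (0.0-1.0) to a coarse/fine DMX channel pair."""
    v16 = max(0, min(65535, round(value * 65535)))
    coarse = v16 >> 8      # regular channel (high byte)
    fine = v16 & 0xFF      # 'fine' channel (low byte)
    return coarse, fine

# Example: full intensity maps to (255, 255) on e.g. the Red / Red fine channels.
print(to_coarse_fine(1.0))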

Fixture

On the fixture tab you control the fixture settings – its color – by linking it to the current image.

Name

Enter / change the fixture label.

Active

When de-activating a fixture that is in a universe with other fixtures that are still active, all channels of the deactivated fixture are set to 0. If there is no other active fixture within that universe, no new data is sent to the fixture.

Dimmer

Adjust the brightness of the selected fixture.

Image Area Select

Each of the fixtures in the DMX Fixtures list has a corresponding overlay displayed in the view port that represents the image area it is linked to. On playback, the average color of this area is determined and sent to the fixture.


You can drag the position and size of the overlay. Alternatively use the X and Y, Width and Height controls to adjust the position and size of the selected image area.

Color Scale / Constant

Rather than determining the color for a fixture dynamically from the playback clip, you can also set a constant color for one or more fixtures. For this, enable the Constant option and use the color pot next to it to determine the color for the fixture.

Use the min/max color controls to set a minimum / maximum color that a fixture should receive. When the min/max color is set, then the calculated average color of the image area is scaled within these minimum and maximum colors.
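Conceptually, on each frame the fixture color is the average over its selected image area, remapped between the configured min and max colors. A small numpy sketch of that idea; the function name and the exact remapping are assumptions, not the Live FX implementation.

import numpy as np

def fixture_color(frame, x, y, w, h, min_rgb=(0, 0, 0), max_rgb=(255, 255, 255)):
    """Average the selected image area and scale the result between min/max colors."""
    area = frame[y:y + h, x:x + w].astype(float)
    avg = area.reshape(-1, 3).mean(axis=0) / 255.0              # normalized average RGB
    lo, hi = np.array(min_rgb, float), np.array(max_rgb, float)
    return np.round(lo + avg * (hi - lo)).astype(int)           # scale into [min, max]

# Example: average a 256x256 area of a dummy frame, limited to a warm color range.
frame = np.full((1080, 1920, 3), 200, dtype=np.uint8)
print(fixture_color(frame, 100, 100, 256, 256, min_rgb=(20, 10, 0), max_rgb=(255, 200, 150)))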

General

The DMX module has a number of generic functions to control its behavior.

Per Clip

The general fixture setup is stored per project. The settings on the Fixture tab (Active / Dimmer / Area select / Scale / Constant) for all fixtures are, however, by default stored per clip. If a clip does not have any DMX settings with it, the last known settings are used; any change that is made to the settings is only stored with the currently active clip. Moving back to an earlier clip will restore the settings for that clip. If you switch the Per Clip setting off, settings are no longer stored per clip and only a single set of settings is maintained for all clips.

Pause

(Temporarily) switch off sending any DMX data to the active devices.

Overlay

(Temporarily) switch off drawing the area select overlays in the View Port.

Import / Export

DMX settings are stored in the project database. Use the Export / Import options to save the DMX settings to an external file and load them into another project. Use the All option to export the settings of all fixtures or only the currently selected fixture. This also determines the extension of the file: ‘*.fdmx’ for a single fixture vs. ‘*.admx’ for all fixtures.

LED Wall Control

In most cases when outputting to an LED wall, the grade and look is all done from within Live FX. However, it can be very useful to also have control over the overall brightness and color temperature of the LED wall. The LED Wall Control utility in Live FX, available from the Tools menu in the Player top menu bar, can interface with the Tessera LED wall processors from Brompton Ltd.

From the control panel you can adjust the overall brightness of an LED wall as well as the Intensity Gain and Temperature settings. With the Config option you can define multiple walls, each referencing one or more processors, by just adding lines to the configuration file. Start each line with a name / wall identifier and add one or more IP addresses of the processors associated with that wall, separated by commas.

Wall 1, 10.10.10.1, 10.10.10.2, 10.10.10.3

Wall 2, 10.10.10.25, 10.10.10.26

One of the advantages of adjusting the overall brightness of the LED wall rather than adjusting (the grade on) the outgoing signal is that you can use more of the available (limited) bit depth for the nuance of the grade.

Keyers and Alpha Output

Keyer Pre- and Post-Options

To be able to create a key in a live context without having to create multiple layers and/or use additional plug-ins, the Qualifier menu has been split into a separate color selection tab and a pre- / post-options tab with a set of new functions.

The new functions are:

  • Denoise pre-option, to remove noise in the source image to create a cleaner key.
  • Clip Black / White post-options, which basically work as a lift function on the alpha channel of the key, clipping the alpha at a certain level with minimal effect on the existing edges (see the sketch below).

(all other qualifier functions are described in the main manual)
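As a rough model of the Clip Black / White post-options, the alpha from the key is remapped so that values below the black level go to 0 and values above the white level go to 1, with the range in between rescaled. A minimal numpy sketch, assuming normalized alpha values (the function name is illustrative):

import numpy as np

def clip_black_white(alpha, black=0.1, white=0.9):
    """Lift-style clip on a normalized alpha channel: below `black` -> 0,
    above `white` -> 1, values in between rescaled linearly."""
    return np.clip((alpha - black) / (white - black), 0.0, 1.0)

# Example: low-level noise in the alpha is clipped to 0, strong alpha is pushed to 1.
print(clip_black_white(np.array([0.05, 0.5, 0.95])))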

Alpha output over SDI / Dual head

Live FX offers the option to output the alpha channel of a (live capture) shot as a black-and-white image through an SDI output or through the dual head GPU output. This makes it possible to use the qualifier / keyer function of Live FX – including all layering and garbage mask functions, plug-ins and compositing options – for other systems downstream in the pipeline. The SDI output can be gen-locked and carries the active timecode of the shot / live captured input. To output the black/white matte, just select the option in the Display settings of the Player – Settings – Monitors menu.


 

