# Live Console
The live console lets you type and execute DSL cue expressions in real time during a performance. Any cue the score system supports can be triggered from the console -- synthesis, audio playback, animation, navigation, OSC messaging, and more.
The console can run in two modes: embedded inside the score view, or as a standalone window using the dedicated view system. In standalone mode, commands are sent over WebSocket to the score window and executed there.
## Opening the Console
In the score view: click the >_ button in the top bar, or open it programmatically.
As a standalone window: add ?view=live to the URL:

```
http://localhost:3000/?project=my-score&view=live
```
This opens a lightweight window with only the live console and WebSocket connection -- no score rendering, no audio context, no animation loop.
## Panel Sections
The console has three vertically stacked sections. Drag the bars between them to resize.
- Editor -- a text area for writing DSL expressions. Type a cue expression and press Ctrl+Enter to execute the current line, or Ctrl+Shift+Enter to execute all lines.
- Output -- shows execution results, errors, and (in standalone mode) a browsable list of all cue expressions found in the score.
- Signals -- a live monitor of all ParamBus signals. Shows the current value of every active signal path. Use the filter field to narrow the display.
The panel is draggable (by its header bar) and resizable (all edges and corners).
## Keyboard Shortcuts

| Key | Action |
|---|---|
| Ctrl+Enter | Execute the current line |
| Ctrl+Shift+Enter | Execute all lines |
| Ctrl+J | Enter cue browser (standalone mode) |
| Tab | Insert two spaces |
| Escape | Exit cue browser |
## Target Selection
Some cue types operate on a specific SVG element -- rotate, scale, o2p, color, fade. Before executing these, you need to select a target.
In the score view: click pick, then click an element in the score. Alternatively, type a uid or element ID into the target input and press Enter.
In standalone mode: the pick button is unavailable (no local score). Instead, type into the target input -- it autocompletes from all element IDs in the score. When you insert a cue expression from the browser, the element is targeted automatically since the expression IS the element's ID.
Cue types that do not need an element (synth, audio, speed, nav, osc, stop, etc.) execute without a target.
## Cue Browser
In standalone mode, the console fetches the project SVG on open and lists all DSL cue expressions in the output panel. You can interact with this list to select and insert cues into the editor.
### Keyboard workflow

- Press Ctrl+J to enter browse mode
- Navigate with arrow keys (or j/k)
- Press Enter to insert the selected expression at the cursor
- Focus returns to the editor automatically
- Press Escape to exit without inserting
You can also click any entry to insert it.
## Multi-line Expressions

The editor supports multi-line DSL expressions. Lines with unbalanced parentheses are joined to the next line, so long expressions can be split for readability:

```
synth(
  wave:sin,
  freq:440,
  dur:2,
  amp:0.6
)
```
Ctrl+Enter detects which expression the cursor is inside and executes the whole thing.
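A minimal sketch of how this paren-balance joining could work (illustrative only; the function name and details are assumptions, not the editor's actual implementation):

```javascript
// Join consecutive lines until parentheses balance, yielding one
// string per complete DSL expression.
function joinExpressions(text) {
  const out = [];
  let buf = '';
  let depth = 0;
  for (const line of text.split('\n')) {
    buf += (buf ? ' ' : '') + line.trim();
    for (const ch of line) {
      if (ch === '(') depth++;
      else if (ch === ')') depth--;
    }
    // Balanced again: the buffered lines form one complete expression.
    if (depth <= 0 && buf) {
      out.push(buf);
      buf = '';
      depth = 0;
    }
  }
  if (buf) out.push(buf); // trailing incomplete expression, if any
  return out;
}
```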
## Signal Monitor
The bottom section shows all active ParamBus signals in real time. Each row displays a signal path and its current value, updated at 5 fps.
Use the filter field to narrow the display. For example, typing freq shows only signal paths containing "freq".
This is useful for verifying that control bindings are working -- you can see whether a fader's t value is changing, what frequency a synth is receiving, etc.
## Standalone Mode Details
The standalone ?view=live window is a lightweight client. It connects to the same WebSocket server as the score window but does not load the score, initialise audio, or run the animation loop.
What happens when you execute a command:
- The console sends a livecode_exec message via WebSocket
- The server relays it to all other connected clients
- The score window receives it, resolves the target element, and calls handleCueTrigger
This means multiple performers can have their own live console windows open, all sending commands to the same score.
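As a hypothetical sketch of what a client sends: the livecode_exec message type is documented above, but the payload field names below are assumptions, not the actual wire protocol.

```javascript
// Build a livecode_exec message. `code` and `target` field names are
// assumptions for illustration only.
function buildLivecodeMessage(code, target) {
  return JSON.stringify({
    type: 'livecode_exec',  // documented message type
    code,                   // the DSL expression to execute
    target: target || null, // optional element uid/ID (assumed field)
  });
}

// A standalone console client might send it like this:
// const ws = new WebSocket('ws://localhost:3000');
// ws.onopen = () => ws.send(buildLivecodeMessage('synth(wave:sin, freq:440)'));
```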
## Live Rects: Audio Waveforms and Synth Scopes
When you trigger audio or synth cues from the console, there is no score element to anchor the visual display. The system creates ephemeral SVG rects in the viewport automatically. Audio cues get waveform contours with playback cursors. Synth cues get real-time oscilloscope traces. Both use the same rect system.
Each unique uid gets its own rect. Rects stack vertically from the center of the viewport. They can be dragged by their body and resized by their edges. They persist until the voice stops or the global stop button clears them.
The waveform parameter controls where the display goes:
| Value | Behaviour |
|---|---|
| self (default) | Voice gets its own rect |
| none | No visual display |
| \<uid\> | Display renders into another voice's rect |
By varying uid, poly, and waveform, you can build anything from a single combined display to a multi-rect spatial layout.
## Audio Examples

### Single file with its own waveform

```
audio(src:drone.wav, loop:0, fade:2, uid:pad1)
```
A blue dashed rect appears with the waveform rendered inside and a cursor tracking playback.
### Multiple independent layers

Each uid gets a separate rect. Use this to visually separate different sound roles:

```
audio(src:"noise/rain.wav", amp:0.6, loop:0, uid:rain)
audio(src:"pads/drone.wav", amp:0.3, speed:0.5, uid:drone)
audio(src:"perc/click.wav", amp:0.8, loop:0, uid:clicks)
```
Three separate rects, each with its own waveform and cursor. Drag them apart to organise the display.
### Layering voices on a shared waveform

Use the same uid with poly:0 to stack multiple voices onto one rect. Each voice gets its own coloured peak layer and independent cursor:

```
audio(src:"pads/drone.wav", loop:0, speed:1, uid:mix, poly:0)
audio(src:"pads/drone.wav", loop:0, speed:0.5, uid:mix, poly:0)
audio(src:"pads/drone.wav", loop:0, speed:0.25, uid:mix, poly:0)
```
One rect, three cursors sweeping at different speeds, three peak contours in different colours from the 12-colour palette. Useful for building chordal textures from a single source.
### Hot-updating a running voice

In mono mode (default poly:1), retriggering the same uid updates the running voice without restarting it. Execute these one at a time:

```
audio(src:drone.wav, loop:0, uid:myDrone)
audio(src:drone.wav, speed:0.5, uid:myDrone)
audio(src:drone.wav, amp:0.3, pan:-0.5, uid:myDrone)
audio(src:drone.wav, in:2, out:8, uid:myDrone)
audio(src:other.wav, uid:myDrone)
audio(src:drone.wav, loop:1, uid:myDrone)
```
Speed and gain change immediately via smooth ramps. In/out points and src changes take effect on the next loop iteration. Setting loop:1 finishes after the current iteration.
### Reverse playback

Negative speed plays the file backwards. The waveform mirrors horizontally and the cursor sweeps right-to-left:

```
audio(src:texture.wav, speed:-1, loop:0, fade:2, uid:rev1)
```
### Polyphony with voice cap

Cap overlapping voices. When the limit is reached, the oldest voice stops:

```
audio(src:grain.wav, loop:0, speed:rand(0.3, 2), uid:grains, poly:4)
```
Execute this repeatedly. Each trigger creates a new voice at a random speed. After four voices, the fifth stops the oldest.
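The voice-stealing behaviour can be pictured with a small sketch (illustrative; the class and method names are assumptions, not the engine's internals):

```javascript
// A voice group capped at `poly` voices: triggering past the cap
// stops the oldest voice before adding the new one.
class VoiceGroup {
  constructor(poly) {
    this.poly = poly; // maximum simultaneous voices
    this.voices = []; // ordered oldest-first
  }
  trigger(voice) {
    if (this.voices.length >= this.poly) {
      this.voices.shift().stop(); // steal (stop) the oldest voice
    }
    this.voices.push(voice);
  }
}
```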
## AudioPool Examples

### One-shot from a folder

Each execution selects and plays one file from the pool:

```
audioPool(path:"sfx/birds", mode:shuffle, pan:rand(-1, 1), uid:birds)
```
The waveform redraws on each trigger to show whichever file was selected. Execute repeatedly to walk through the pool.
### Polyphonic pool

Allow multiple pool hits to overlap:

```
audioPool(path:"sfx/wood", mode:rand, amp:rand(0.3, 1), pan:rand(-0.8, 0.8), poly:5, uid:wood)
```
Execute rapidly to build up a dense layered texture with random pan positions.
## AudioImpulse Examples

### Start a texture process

Impulse runs autonomously once triggered -- no need to re-execute:

```
audioImpulse(
  path:"sfx/rain",
  rate:30,
  jitter:0.5,
  amp:rand(0.2, 0.6),
  pan:rand(-1, 1),
  speed:rand(0.8, 1.3),
  fadeout:"40%",
  poly:6,
  lifetime:process,
  uid:rain1
)
```
A rect appears with a base waveform. Coloured peak layers and cursors appear and disappear as hits fire and complete.
### Update a running impulse

Retrigger with the same uid to change parameters on the fly:

```
audioImpulse(path:"sfx/rain", rate:60, jitter:0.8, poly:8, uid:rain1)
```
Rate and jitter update immediately. Subsequent hits use the new values.
### Granular cloud

Push the rate high and the speed range wide for granular-style textures:

```
audioImpulse(
  path:"grains/voice",
  rate:200,
  jitter:0.9,
  amp:rand(0.05, 0.2),
  pan:rand(-1, 1),
  speed:rand(0.1, 4),
  fadein:0.01,
  fadeout:rand(0.01, 0.1),
  poly:12,
  lifetime:process,
  uid:voiceGrain
)
```
## Synth Examples

### Simple tone with scope

A live oscilloscope trace appears in an ephemeral rect:

```
synth(uid:tone, wave:sine, freq:440, amp:0.1)
```
### Chord with envelope

Multiple oscillators rendered as a single scope:

```
synth(uid:pad, wave:sine, freq:[440, 550, 660], env:{a:2, r:3}, amp:0.12)
```
### Hot-updating a synth

In mono mode (default), retriggering the same uid updates the running voice. Wave type, frequency, filter, amp, and pan update smoothly:

```
synth(uid:pad, wave:sine, freq:440, amp:0.1)
synth(uid:pad, wave:triangle, freq:330, amp:0.15)
synth(uid:pad, freq:[220, 330, 440], amp:0.12)
synth(uid:pad, filter:{type:lp, freq:800, q:4})
```
### Random chords

rand() expressions evaluate fresh on each execution. Run the same line repeatedly to get different voicings:

```
synth(
  uid:cluster,
  wave:triangle,
  freq:[rand(200, 800), rand(300, 1000), rand(400, 1200)],
  env:{a:2, r:3},
  amp:0.1
)
```
### Separate scopes for separate roles

Give each synth its own uid to get independent rects. Drag them apart to arrange by function:

```
synth(uid:bass, wave:saw, freq:55, amp:0.1)
synth(uid:mid, wave:triangle, freq:440, amp:0.08)
synth(uid:high, wave:sine, freq:2200, amp:0.05)
```
Three rects appear, each showing the waveform shape of its oscillator.
### Stacking scopes on a shared display

Use waveform:uid to direct one synth's scope into another synth's rect. Both scopes render in the same element with distinct colours:

```
synth(uid:pad1, wave:sine, freq:[440, 550, 660], amp:0.12)
synth(uid:pad2, wave:square, freq:880, waveform:pad1, amp:0.08)
```
pad1 gets a rect with its scope. pad2's scope stacks into pad1's rect in a different colour. Useful for visually grouping related voices.
### Poly overlapping voices

With poly:N, each execution creates a new overlapping voice. All poly voices share a single scope rect:

```
synth(
  uid:swell,
  wave:sine,
  freq:[rand(100, 800), rand(200, 1200)],
  env:{a:4, r:6},
  dur:8,
  amp:0.1,
  poly:12
)
```
Execute repeatedly. Each trigger creates a two-oscillator chord with random frequencies, fading in over 4 seconds, auto-stopping at 8 seconds with a 6-second release tail. Up to 12 voices overlap. All scope traces stack in one rect with distinct colours. The 13th voice steals the oldest.
### Poly cloud building

Combine rand(), dur, env, and poly to build evolving textures by repeated execution:

```
synth(
  uid:cloud,
  wave:triangle,
  freq:[rand(33, 333), rand(33, 533), rand(33, 1333)],
  env:{a:6, r:4},
  dur:12,
  amp:0.08,
  poly:20
)
```
Each execution adds a new three-oscillator chord with random frequencies. Voices fade in over 6 seconds, sustain, then auto-stop at 12 seconds with a 4-second release. One rect holds all scopes.
### Signal-bound synth

Bind synth parameters to controlXY faders or o2p animations. Move the fader to modulate the synth in real time:

```
synth(uid:ctrlDrone, wave:saw, freq:fader1.t[80,800], amp:fader1.y[0,0.2])
```
Check the signal monitor panel to verify values are flowing.
## Mixing Audio and Synth

### Shared display for different source types

Audio waveforms and synth scopes use the same live rect system. Direct a synth scope into an audio rect using waveform:uid:

```
audio(src:drone.wav, loop:0, amp:0.3, uid:layer1)
synth(uid:synPad, wave:sine, freq:[220, 330], waveform:layer1, amp:0.1)
```
The audio waveform and synth scope share the same rect. The synth scope renders on top in a distinct colour.
### Separate rects for separate roles

For clarity during performance, keep different sound roles on separate rects:

```
audio(src:"pads/warm.wav", loop:0, fade:3, uid:backing)
audioImpulse(path:"sfx/rain", rate:20, poly:4, uid:texture)
synth(uid:lead, wave:saw, freq:440, filter:{type:lp, freq:1200}, amp:0.08)
```
Three rects, each with its own visual identity. Drag them to different screen positions to create a spatial performance layout.
### Layered drone setup

Build a complex layered drone from the console, mixing file playback with synthesis:

```
audio(src:"pads/drone.wav", loop:0, speed:0.5, fade:4, uid:droneFile)
audio(src:"pads/drone.wav", loop:0, speed:0.25, fade:4, uid:droneFile, poly:0)
synth(uid:droneSyn, wave:saw, freq:55, filter:{type:lp, freq:400, q:2}, amp:0.06)
synth(uid:droneOvertone, wave:sine, freq:[110, 165, 220], waveform:droneSyn, amp:0.04)
```
The two audio voices share one rect (same uid with poly:0, two peak layers, two cursors). The two synths share another rect (droneOvertone's scope targets droneSyn). Two rects total, four sound sources, each pair visually grouped.
## External Audio Search Path
By default, audio cues resolve files from the project's audio/ directory, falling back to the shared shared/audio/ directory. When working from a blank template or using samples stored elsewhere on the system, you can point the server at an external directory.
### Set a path

```
audiopath(/home/rob/samples)
audiopath("/mnt/audio library/field recordings")
```
The server validates that the path exists and is a directory. On success, the output shows what's available at the root (subdirectories and audio file count). All audio resolution -- single file, pool, and impulse -- now falls back to this directory when files are not found in the project.
### Query the current path

```
audiopath()
audiopath(status)
```
Reports the currently active path, or "not set".
### Clear the path

```
audiopath(clear)
```
Removes the external path. Audio resolution reverts to project + shared only.
### How it works
The server mounts the external directory at /ext-audio/. The fetch chain for all audio cues becomes: project directory, then external path, then shared directory. Pool and impulse file listings also scan the external path when the project directory is empty.
The path is per-server-session (not persisted across restarts). All connected clients benefit from the mount -- if one performer sets the path from a standalone live console, the score window's audio resolution picks it up immediately.
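The fallback order can be sketched as follows. This is illustrative only: /ext-audio/ is the documented mount point, but the project and shared URL prefixes and the function name are assumptions.

```javascript
// Resolve an audio file by trying each location in the documented
// order: project directory, then external mount, then shared.
// `exists` is an async predicate (e.g. a HEAD-request probe).
async function resolveAudio(file, exists) {
  const candidates = [
    `/audio/${file}`,        // project directory (assumed prefix)
    `/ext-audio/${file}`,    // external mount (documented)
    `/shared/audio/${file}`, // shared directory (assumed prefix)
  ];
  for (const url of candidates) {
    if (await exists(url)) return url; // first hit wins
  }
  throw new Error(`audio file not found: ${file}`);
}

// In the browser, `exists` could be:
// (url) => fetch(url, { method: 'HEAD' }).then((r) => r.ok)
```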
### Example workflow

```
// Point at a sample library
audiopath(/home/rob/freesound-pack)

// Now use folders from that library
audioPool(path:rain, mode:shuffle, uid:rain1)
audioImpulse(path:birds, rate:20, jitter:0.6, poly:6, uid:birds1)
audio(src:"drone/low-hum.wav", loop:0, uid:hum)

// Done with external samples -- revert
audiopath(clear)
```
## Spatial Control
The live console can drive all four spatial control modes. Sources that use init:armed(...) support mode ④ — the scored+free hybrid where a source follows a path, pauses for free positioning or preset sequences, then resumes.
### Recall a controlXY preset

```
ui(action:'controlXYRecall', preset:'front_left')
ui(action:'controlXYRecall', preset:'overhead', dur:3, ease:'easeInOutSine')
```
Works on any controlXY handle (mode ③) and on any paused o2p source (mode ④). During pause-drag, o2p sources publish to controlXY:<uid> ParamBus channels, making them targetable by the preset system.
### Run a spatial sequence

```
ui(action:'controlXYDefineSequence', name:'orbit_snap', steps:'front,right,rear,left,front')
ui(action:'controlXYSequence', seq:'orbit_snap', dur:2, ease:'easeInOutSine', loop:true)
ui(action:'controlXYSequenceStop')
```
Define a sequence of saved positions, then play it. Each step tweens to the next preset over dur seconds. The sequence runs on all active controlXY handles and any paused o2p sources.
### Pause and resume an o2p source

The armed lifecycle is accessible via the animation registry. Pause a scored source, run a spatial gesture, then resume from the new position:

```
// Pause src1 (triggers _onPause → enables free drag)
const reg = window.oscillaAnimRegistry.src1;
reg.el.dispatchEvent(new CustomEvent('oscilla-hit', { bubbles: true, detail: { kind: 'o2p', uid: 'src1' } }));
```
After pausing, the signal monitor shows controlXY:src1.x and controlXY:src1.y — the source is now in the preset system. Run preset recalls or sequences, then resume:
```
// Resume src1 (triggers _onResume → nearest-t restart)
reg.el.dispatchEvent(new CustomEvent('oscilla-hit', { bubbles: true, detail: { kind: 'o2p', uid: 'src1' } }));
```
The resume handler finds the nearest point on the path to the current position and restarts the animation from there.
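A nearest-point search of this kind can be sketched by sampling along the path (illustrative only; the real resume handler's sampling strategy is not documented here):

```javascript
// Find the normalised position t (0..1) on a path closest to (x, y)
// by sampling `samples` evenly spaced points.
// `path` needs getTotalLength() and getPointAtLength(), as on an
// SVGGeometryElement.
function nearestT(path, x, y, samples = 200) {
  const len = path.getTotalLength();
  let bestT = 0;
  let bestDist = Infinity;
  for (let i = 0; i <= samples; i++) {
    const t = i / samples;
    const p = path.getPointAtLength(t * len);
    const d = (p.x - x) ** 2 + (p.y - y) ** 2; // squared distance suffices
    if (d < bestDist) { bestDist = d; bestT = t; }
  }
  return bestT; // position to restart the animation from
}
```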
### Composed interlude example

A typical mode ④ workflow from the console — pause src1, tween it through two positions, then resume:

```
// 1. Pause
const el = window.oscillaAnimRegistry.src1.el;
el.dispatchEvent(new CustomEvent('oscilla-hit', { bubbles: true, detail: { kind: 'o2p', uid: 'src1' } }));

// 2. Tween through positions (DSL)
ui(action:'controlXYRecall', preset:'front_left', dur:2, ease:'easeInOutSine')
```
Wait for the tween to complete, then:
```
ui(action:'controlXYRecall', preset:'rear_right', dur:3, ease:'easeOutQuad')

// 3. Resume
el.dispatchEvent(new CustomEvent('oscilla-hit', { bubbles: true, detail: { kind: 'o2p', uid: 'src1' } }));
```
The signal monitor shows the full arc: spatial:src1.azi sweeps through the tween positions, then snaps to the nearest path point on resume.
### Direct tween (skip presets)

Move a controlXY handle to an arbitrary position without saving a preset first:

```
window.controlXYPresets.tweenTo({
  domeMixer: { h_src5: { x: 0.5, y: 0.5 } }
}, { dur: 3, ease: 'easeInOutSine' });
```
### Monitor spatial signals
The signal monitor panel shows all active spatial channels. Filter by spatial: to see AED values, or controlXY: to see normalized positions. During mode ④ transitions, watch the channels switch between animated path values and free-drag values.
## Stopping

### Stop a specific voice

```
stop(uid:rain1)
synthStop(uid:pad, rel:2)
```
stop(uid:X) works for audio voices and impulse processes. synthStop works for synth voices and accepts an optional release time override. For poly synths, synthStop(uid:X) stops all sub-voices in the group.
### Stop everything

The global stop button in the top bar stops all audio, all synth voices, all impulse processes, and clears all live rects. From the console:

```
stop()
```
## Related
- Dev: Live Console -- technical reference
- Spatialisation -- spatial audio system and four control modes
- controlXY -- presets, sequences, and spatial positioning
- synth -- synth cue reference
- audio -- audio cue reference
- audioPool -- pool cue reference
- audioImpulse -- impulse cue reference
- audio_shared -- shared audio features
- Cue System -- overview of all cue types
- Control & Modulation -- signal routing