Up to this point, you’ve learned how audio flows:
- containers feed buffers
- buffers run processors
- processors shape data.
Now we expand the vocabulary of processors themselves. In MayaFlux, mathematics, logic, feedback, and generation are not side features; they are first-class sculpting tools. Polynomials don't just calculate, they sculpt. Logic doesn't just branch, it decides. This tutorial shows how computational expressions become sound-shaping primitives.
- Tutorial: Polynomial Waveshaping{#toc-tutorial-polynomial-waveshaping}
- The Simplest Path{#toc-the-simplest-path}
- Expansion 1: Why Polynomials Shape Sound{#toc-expansion-1-why-polynomials-shape-sound}
- Expansion 2: What `vega.Polynomial()` Creates{#toc-expansion-2-what-vega.polynomial-creates}
- Expansion 3: PolynomialMode::DIRECT{#toc-expansion-3-polynomialmodedirect}
- Expansion 4: What `create_processor()` Does{#toc-expansion-4-what-create_processor-does}
- Try It{#toc-try-it}
- Tutorial: Recursive Polynomials (Filters and Feedback){#toc-tutorial-recursive-polynomials-filters-and-feedback}
- The Next Step{#toc-the-next-step}
- Expansion 1: Why This Is a Filter{#toc-expansion-1-why-this-is-a-filter}
- Expansion 2: The History Buffer{#toc-expansion-2-the-history-buffer}
- Expansion 3: Stability Warning{#toc-expansion-3-stability-warning}
- Expansion 4: Initial Conditions{#toc-expansion-4-initial-conditions}
- Try It{#toc-try-it-1}
- Tutorial: Logic as Decision Maker{#toc-tutorial-logic-as-decision-maker}
- The Simplest Path{#toc-the-simplest-path-1}
- Expansion 1: What Logic Does{#toc-expansion-1-what-logic-does}
- Expansion 2: Logic node needs an input{#toc-expansion-2-logic-node-needs-an-input}
- Expansion 3: LogicOperator Types{#toc-expansion-3-logicoperator-types}
- Expansion 4: ModulationType - Readymade Transformations{#toc-expansion-4-modulationtype—readymade-transformations}
- Try It{#toc-try-it-2}
- Tutorial: Combining Polynomial + Logic{#toc-tutorial-combining-polynomial-logic}
- The Pattern{#toc-the-pattern}
- Expansion 1: Decision Trees in Audio{#toc-expansion-1-decision-trees-in-audio}
- Expansion 2: Chain Order Matters{#toc-expansion-2-chain-order-matters}
- Try It{#toc-try-it-3}
- Tutorial: Processing Chains and Buffer Architecture{#toc-tutorial-processing-chains-and-buffer-architecture}
- Tutorial: Explicit Chain Building{#toc-tutorial-explicit-chain-building}
- The Simplest Path{#toc-the-simplest-path-2}
- Expansion 1: What `create_processor()` Was Doing{#toc-expansion-1-what-create_processor-was-doing}
- Expansion 2: Chain Execution Order{#toc-expansion-2-chain-execution-order}
- Expansion 3: Default Processors vs. Chain Processors{#toc-expansion-3-default-processors-vs.-chain-processors}
- Try It{#toc-try-it-4}
- Tutorial: Various Buffer Types{#toc-tutorial-various-buffer-types}
- Generating from Nodes (NodeBuffer){#toc-generating-from-nodes-nodebuffer}
- The Next Pattern{#toc-the-next-pattern}
- Expansion 1: What NodeBuffer Does{#toc-expansion-1-what-nodebuffer-does}
- Expansion 2: The `clear_before_process` Parameter{#toc-expansion-2-the-clear_before_process-parameter}
- Expansion 3: NodeSourceProcessor Mix Parameter{#toc-expansion-3-nodesourceprocessor-mix-parameter}
- Try It{#toc-try-it-5}
- FeedbackBuffer (Recursive Audio){#toc-feedbackbuffer-recursive-audio}
- The Pattern{#toc-the-pattern-1}
- Expansion 1: What FeedbackBuffer Does{#toc-expansion-1-what-feedbackbuffer-does}
- Expansion 2: FeedbackBuffer Limitations{#toc-expansion-2-feedbackbuffer-limitations}
- Expansion 3: When to Use FeedbackBuffer{#toc-expansion-3-when-to-use-feedbackbuffer}
- Try It{#toc-try-it-6}
- StreamWriteProcessor (Capturing Audio){#toc-streamwriteprocessor-capturing-audio}
- The Pattern{#toc-the-pattern-2}
- Expansion 1: What StreamWriteProcessor Does{#toc-expansion-1-what-streamwriteprocessor-does}
- Expansion 2: Channel-Aware Writing{#toc-expansion-2-channel-aware-writing}
- Expansion 3: Position Management{#toc-expansion-3-position-management}
- Expansion 4: Circular Mode{#toc-expansion-4-circular-mode}
- Try It{#toc-try-it-7}
- Closing: The Buffer Ecosystem{#toc-closing-the-buffer-ecosystem}
- Tutorial: Audio Input, Routing, and Multi-Channel Distribution{#toc-tutorial-audio-input-routing-and-multi-channel-distribution}
- Tutorial: Capturing Audio Input{#toc-tutorial-capturing-audio-input}
- The Simplest Path{#toc-the-simplest-path-3}
- Expansion 1: What `create_input_listener_buffer()` Does{#toc-expansion-1-what-create_input_listener_buffer-does}
- Expansion 2: Manual Input Registration{#toc-expansion-2-manual-input-registration}
- Expansion 3: Input Without Playback{#toc-expansion-3-input-without-playback}
- Try It{#toc-try-it-8}
- Tutorial: Buffer Supply (Routing to Multiple Channels){#toc-tutorial-buffer-supply-routing-to-multiple-channels}
- The Pattern{#toc-the-pattern-3}
- Expansion 1: What "Supply" Means{#toc-expansion-1-what-supply-means}
- Expansion 2: Mix Levels{#toc-expansion-2-mix-levels}
- Expansion 3: Removing Supply{#toc-expansion-3-removing-supply}
- Try It{#toc-try-it-9}
- Tutorial: Buffer Cloning{#toc-tutorial-buffer-cloning}
- The Pattern{#toc-the-pattern-4}
- Expansion 1: Clone vs. Supply{#toc-expansion-1-clone-vs.-supply}
- Expansion 2: Cloning Preserves Structure{#toc-expansion-2-cloning-preserves-structure}
- Expansion 3: Post-Clone Modification{#toc-expansion-3-post-clone-modification}
- Try It{#toc-try-it-10}
- Closing: The Routing Ecosystem{#toc-closing-the-routing-ecosystem}
Tutorial: Polynomial Waveshaping
The Simplest Path
Run this code. Your file plays with harmonic distortion.
```cpp
void compose() {
    auto sound = vega.read_audio("path/to/file.wav") | Audio;
    // read_audio creates container buffers; fetch them from the Creator
    auto buffers = MayaFlux::get_last_created_container_buffers();

    auto poly = vega.Polynomial([](double x) { return x * x; });
    auto processor = MayaFlux::create_processor<PolynomialProcessor>(buffers[0], poly);
}
```
Replace "path/to/file.wav" with an actual path.
The audio sounds richer, warmer—subtle saturation. That's harmonic content added by the squaring function.
Expansion 1: Why Polynomials Shape Sound
Click to expand: Transfer Functions as Geometry
When you write x * x, you're not "squaring numbers." You're defining a transfer curve:
- Input -1.0 → Output 1.0
- Input 0.5 → Output 0.25 (quieter)
- Input 1.0 → Output 1.0 (same)
This asymmetry adds harmonics. The waveform's shape **bends**—its geometry changes.
Analog distortion (tubes, tape) works this way: input voltage doesn't map linearly to output. The circuit's response curve adds character.
Polynomials let you design that curve digitally. x * x is gentle. x * x * x adds different harmonics (odd instead of even). std::tanh(x) mimics tube saturation.
You're sculpting frequency response through function shape.
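The transfer curves above can be sketched in plain C++, independent of any MayaFlux types. Each shaper is just a pure function from input sample to output sample:

```cpp
#include <cmath>

// Transfer curves as pure functions (concept sketch, not the MayaFlux API)
double square_shaper(double x) { return x * x; }        // adds even harmonics
double cubic_shaper(double x)  { return x * x * x; }    // adds odd harmonics
double tanh_shaper(double x)   { return std::tanh(x); } // soft tube-like saturation
```

Feed any of these a sample in [-1, 1] and the bent output geometry is the harmonic content you hear.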
Expansion 2: What <tt>vega.Polynomial()</tt> Creates
Click to expand: Nodes vs. Processors
vega.Polynomial([](double x) { return x * x; }) creates a **Polynomial node**—a mathematical expression that processes one sample at a time.
By itself, the node doesn't touch your audio. You wrap it in a PolynomialProcessor:
auto processor = MayaFlux::create_processor<PolynomialProcessor>(buffers[0], poly);
Why this separation?
- Node: The math itself—reusable, chainable, inspectable
- Processor: The attachment mechanism—knows how to apply the node to a buffer
Same node, different processors → different results. You'll see this pattern everywhere in MayaFlux.
The node is the idea. The processor is the application.
Expansion 3: PolynomialMode::DIRECT
Click to expand: Three Processing Modes
Polynomials have three modes:
- DIRECT: `f(x)` where x is the current sample (what you just used)
- RECURSIVE: `f(y[n-1], y[n-2], ...)` where output depends on previous outputs
- FEEDFORWARD: `f(x[n], x[n-1], ...)` where output depends on input history
Right now you're using DIRECT mode—each sample transformed independently. This is memoryless waveshaping.
Later sections explore time-aware modes. RECURSIVE creates filters and feedback. FEEDFORWARD creates delay-based effects.
For now: DIRECT mode = instant transformation. No memory. No delay.
Expansion 4: What <tt>create_processor()</tt> Does
Click to expand: Attaching to Buffers
When you call:
auto processor = MayaFlux::create_processor<PolynomialProcessor>(buffers[0], poly);
MayaFlux does this:
- Creates a PolynomialProcessor wrapping your polynomial node
- Gets buffers[0]'s processing chain (every buffer has one)
- Adds the processor to that chain
- Returns the processor handle
The buffer now runs your polynomial on every cycle:
- 512 samples arrive from the Container
- Your polynomial processes each sample: `y = x * x`
- Transformed samples continue to speakers
The processor is now part of the buffer's flow. It runs automatically every cycle until removed.
Try It
```cpp
// Cubic: odd harmonics
auto poly = vega.Polynomial([](double x) { return x * x * x; });

// Chebyshev-style curve
auto poly = vega.Polynomial([](double x) { return 2*x*x - 1; });

// Soft saturation
auto poly = vega.Polynomial([](double x) {
    return x / (1.0 + std::abs(x));
});

// Wavefolding
auto poly = vega.Polynomial([](double x) {
    return std::sin(x * 5.0);
});
```
Listen to each. Same structure, different curves. Each curve generates different harmonic content.
You're not "processing audio"—you're sculpting the transfer function.
Tutorial: Recursive Polynomials (Filters and Feedback)
The Next Step
You have memoryless waveshaping. Now add memory.
```cpp
void compose() {
    auto sound = vega.read_audio("path/to/file.wav") | Audio;
    auto buffers = MayaFlux::get_last_created_container_buffers();

    auto recursive = vega.Polynomial(
        [](const std::deque<double>& history) {
            return 0.5 * history[0] + 0.3 * history[1];
        },
        PolynomialMode::RECURSIVE,
        2
    );
    auto processor = MayaFlux::create_processor<PolynomialProcessor>(buffers[0], recursive);
}
```
Run this. You hear echo/resonance—the signal feeds back into itself.
Expansion 1: Why This Is a Filter
Click to expand: IIR Filters Are Recursive Polynomials
Classic IIR filter equation:
y[n] = b0*x[n] + a1*y[n-1] + a2*y[n-2]
Your recursive polynomial is that filter—just written as a lambda:
```cpp
[](const std::deque<double>& history) {
    return 0.5 * history[0] + 0.3 * history[1];
}
```
Difference: You can write nonlinear feedback:
```cpp
[](const std::deque<double>& history) {
    return history[0] * std::sin(history[1]);
}
```
Traditional DSP libraries can't do this. Fixed coefficients only.
Polynomials let you design arbitrary recursive functions—not just linear filters.
Expansion 2: The History Buffer
Click to expand: How RECURSIVE Mode Works
When you write:
PolynomialMode::RECURSIVE, 2
The polynomial maintains a buffer of previous outputs:
history[0] = y[n-1] (last output)
history[1] = y[n-2] (two samples ago)
Each cycle:
- Your lambda reads from `history`
- Computes new output
- Polynomial pushes output into `history` (shifts everything down)
- Loop repeats
The buffer size determines how far back you can look. Larger buffers = longer memory.
For a 100-sample buffer at 48 kHz:
100 samples ÷ 48000 Hz ≈ 2 ms of history
This is how you build delays, reverbs, resonant filters—anything that needs temporal memory.
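The push-and-shift cycle described above can be sketched in plain C++ (a concept illustration, not MayaFlux internals): the newest output becomes `history[0]`, the oldest falls off the back.

```cpp
#include <deque>
#include <functional>
#include <vector>

// Evaluate a recursive lambda n_samples times, maintaining a history
// of previous outputs exactly as RECURSIVE mode is described to do.
std::vector<double> run_recursive(
    const std::function<double(const std::deque<double>&)>& f,
    std::deque<double> history,   // seeded initial conditions
    int n_samples)
{
    std::vector<double> out;
    for (int n = 0; n < n_samples; ++n) {
        double y = f(history);
        history.pop_back();       // drop the oldest output
        history.push_front(y);    // newest output becomes history[0]
        out.push_back(y);
    }
    return out;
}
```

Seeding `history` with non-zero values is what "initial conditions" means in the next expansion.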
Expansion 3: Stability Warning
Click to expand: Recursive Systems Can Explode
Critical rule: Keep the magnitudes of your feedback coefficients summing to less than 1.0 for guaranteed stability.
Safe: `return 0.6*history[0] + 0.3*history[1];`
Dangerous: `return 1.2*history[0];`
Why? Each cycle multiplies the previous output by 1.2. Exponential growth. Your speakers won't thank you.
MayaFlux won't stop you—this is a creative tool, not a safety guard. Instability can be interesting (briefly). Controlled feedback explosion creates chaotic textures.
But for stable filters: keep gain < 1.0.
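The growth argument is easy to verify with a one-tap feedback loop in plain C++: gain below 1.0 decays toward silence, gain above 1.0 explodes.

```cpp
// Iterate y[n] = gain * y[n-1] from a seed value and return the result
// after `cycles` iterations (concept sketch, not MayaFlux code).
double feedback_after(double gain, double seed, int cycles) {
    double y = seed;
    for (int i = 0; i < cycles; ++i)
        y = gain * y;             // each cycle scales the previous output
    return y;
}
```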
Expansion 4: Initial Conditions
Click to expand: Seeding the History Buffer
Recursive polynomials need starting values. Default: [0.0, 0.0, ...]
You can seed them:
recursive->set_initial_conditions({0.5, -0.3, 0.1});
Why?
- Impulse responses: Inject energy without external input. The filter "pings" on its own.
- Self-oscillation: Non-zero initial conditions + feedback gain ≥ 1.0 = continuous tone.
- Warm start: Resume from previous state instead of cold-starting at zero.
Example (resonant ping):
```cpp
auto resonator = vega.Polynomial(
    [](const std::deque<double>& history) {
        return 0.99 * history[0] - 0.5 * history[1];
    },
    PolynomialMode::RECURSIVE,
    2
);
resonator->set_initial_conditions({1.0, 0.0});
```
Try It
```cpp
// Karplus-Strong style plucked string: averaging filter over a
// noise-seeded 100-sample loop
auto string = vega.Polynomial(
    [](const std::deque<double>& history) {
        return 0.996 * (history[0] + history[1]) / 2.0;
    },
    PolynomialMode::RECURSIVE,
    100
);
string->set_initial_conditions(std::vector<double>(100, vega.Random(-1.0, 1.0)));

// Saturated single-tap feedback
auto saturated = vega.Polynomial(
    [](const std::deque<double>& history) {
        double fb = 0.8 * history[0];
        return std::tanh(fb * 3.0);
    },
    PolynomialMode::RECURSIVE,
    1
);

// Comb-style delay: mixes in the oldest sample of a 50-sample history
auto comb = vega.Polynomial(
    [](const std::deque<double>& history) {
        return history[0] + 0.5 * history[49];
    },
    PolynomialMode::RECURSIVE,
    50
);
```
Tutorial: Logic as Decision Maker
The Simplest Path
Run this code. You'll hear rhythmic pulses.
```cpp
void compose() {
    // An empty AudioBuffer for the logic output; the argument pattern
    // (channel, size) is assumed to match the NodeBuffer examples below
    auto buffer = vega.AudioBuffer(0, 512)[0] | Audio;

    auto logic = vega.Logic(LogicOperator::THRESHOLD, 0.0);
    auto processor = MayaFlux::create_processor<LogicProcessor>(
        buffer,
        logic
    );
    processor->set_modulation_type(LogicProcessor::ModulationType::REPLACE);

    auto sine = vega.Sine(2.0);
    logic->set_input_node(sine);
}
```
What you hear: 2 Hz pulse train—beeps every half second.
The sine wave crosses zero twice per cycle. Logic detects the crossings. Output becomes binary: 1.0 (high) or 0.0 (low).
Expansion 1: What Logic Does
Click to expand: Continuous → Discrete Conversion
LogicProcessor makes binary decisions about audio.
Every sample asks: _"Is this value TRUE or FALSE?"_ (based on threshold)
Output: 0.0 or 1.0.
Uses:
- Gate: Silence audio below threshold (noise reduction)
- Trigger: Fire events when signal crosses boundary (drums, envelopes)
- Rhythm: Convert continuous modulation into discrete beats
Example: Feed a slow LFO (0.5 Hz sine) into logic → square wave clock.
Digital doesn't care what the input "means"—just whether it passes the test.
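The LFO-to-clock conversion can be sketched in plain C++ (an illustration of the idea, not the MayaFlux implementation):

```cpp
#include <cmath>

// THRESHOLD logic: every sample becomes a binary decision.
double threshold_logic(double x, double threshold) {
    return x > threshold ? 1.0 : 0.0;
}

// A slow sine LFO thresholded at 0.0 becomes a square-wave clock:
// high for the first half of each cycle, low for the second half.
double lfo_clock(double freq_hz, double t_seconds) {
    const double pi = std::acos(-1.0);
    return threshold_logic(std::sin(2.0 * pi * freq_hz * t_seconds), 0.0);
}
```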
Expansion 2: Logic node needs an input
Click to expand: Continuous → input signal
Logic nodes need an input signal to evaluate. This is also true for other nodes like Polynomial. So far, you did not have to manually set inputs because you used ContainerBuffer which automatically feeds audio into processors.
So, instead of creating an AudioBuffer, you can load a file:
```cpp
auto sound = vega.read_audio("path/to/file.wav") | Audio;
auto buffers = MayaFlux::get_last_created_container_buffers();

auto logic = vega.Logic(LogicOperator::THRESHOLD, 0.0);
auto processor = MayaFlux::create_processor<LogicProcessor>(
    buffers[0],
    logic
);
processor->set_modulation_type(LogicProcessor::ModulationType::REPLACE);
```
The audio from the file is automatically fed into the logic node. Since every previous example relied on file content, and the nature of rhythmic pulses doesn't exploit the richness of an audio file, the main example uses a sine wave as the logic node's input instead.
Expansion 3: LogicOperator Types
Click to expand: Binary Operations
LogicOperator defines the test:
- THRESHOLD: `x > threshold → 1.0, else 0.0`
- HYSTERESIS: Two thresholds (open/close) to avoid flutter
- EDGE: Trigger on transitions (0→1 or 1→0)
- AND/OR/XOR/NOT: Boolean algebra on current vs. previous sample
- CUSTOM: Your function
Right now you're using THRESHOLD—the simplest test.
Example (hysteresis gate for noisy signals):
```cpp
auto gate = vega.Logic(LogicOperator::HYSTERESIS);
gate->set_hysteresis_thresholds(0.1, 0.3);
```
The signal must exceed 0.3 to open the gate, then drop below 0.1 to close it. This prevents rapid on/off flickering.
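The open/close behavior can be sketched as a small stateful object in plain C++ (concept only, not the MayaFlux implementation):

```cpp
// Hysteresis gate: opens above `high`, closes below `low`.
// Between the two thresholds, the previous state holds.
struct HysteresisGate {
    double low, high;
    bool open = false;
    double process(double x) {
        if (x > high) open = true;
        else if (x < low) open = false;
        return open ? 1.0 : 0.0;
    }
};
```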
Expansion 4: ModulationType - Readymade Transformations
Click to expand: Creative Logic Applications
ModulationType provides readymade ways to apply binary logic to audio:
Basic Operations:
- REPLACE: Audio becomes 0.0 or 1.0 (bit reduction)
- MULTIPLY: Audio × logic (standard gate - preserves timbre)
- ADD: Audio + logic (adds impulse on logic high)
Creative Operations:
- INVERT_ON_TRUE: Phase flip when logic high (ring mod effect)
- HOLD_ON_FALSE: Freeze audio when logic low (granular stutter)
- ZERO_ON_FALSE: Hard silence when logic low (noise gate)
- CROSSFADE: Smooth fade based on logic (dynamic blending)
- THRESHOLD_REMAP: Binary amplitude switch (tremolo from logic)
- SAMPLE_AND_HOLD: Freeze on logic changes (glitch/stutter)
- CUSTOM: Your function
Example (granular freeze effect):
processor->set_modulation_type(LogicProcessor::ModulationType::HOLD_ON_FALSE);
Example (amplitude tremolo):
processor->set_modulation_type(LogicProcessor::ModulationType::THRESHOLD_REMAP);
processor->set_threshold_remap_values(1.0, 0.2);
Logic becomes a compositional control for transforming audio in musical ways.
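A few of the modulation behaviors above can be written out as plain per-sample functions (a sketch of the semantics, not the MayaFlux implementation):

```cpp
// `logic` is the 0.0/1.0 decision, `x` the audio sample.
double mod_replace(double /*x*/, double logic) {   // REPLACE: audio becomes the logic signal
    return logic;
}
double mod_multiply(double x, double logic) {      // MULTIPLY: standard gate
    return x * logic;
}
double mod_remap(double logic, double hi, double lo) { // THRESHOLD_REMAP: binary amplitude switch
    return logic > 0.5 ? hi : lo;
}
```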
Try It
```cpp
// Noise gate: hard silence below threshold
auto gate = vega.Logic(LogicOperator::THRESHOLD, 0.2);
auto proc = MayaFlux::create_processor<LogicProcessor>(buffer, gate);
proc->set_modulation_type(LogicProcessor::ModulationType::ZERO_ON_FALSE);

// Granular freeze: hold audio when signal dips below threshold
auto freeze = vega.Logic(LogicOperator::THRESHOLD, 0.3);
auto proc = MayaFlux::create_processor<LogicProcessor>(buffer, freeze);
proc->set_modulation_type(LogicProcessor::ModulationType::HOLD_ON_FALSE);

// One-bit crusher: audio collapses to 0.0/1.0
auto crusher = vega.Logic(LogicOperator::THRESHOLD, 0.0);
auto proc = MayaFlux::create_processor<LogicProcessor>(buffer, crusher);
proc->set_modulation_type(LogicProcessor::ModulationType::REPLACE);

// LFO tremolo via threshold remap
auto lfo = vega.Sine(4.0);
auto trem_logic = vega.Logic(LogicOperator::THRESHOLD, 0.0);
trem_logic->set_input_node(lfo);
auto proc = MayaFlux::create_processor<LogicProcessor>(buffer, trem_logic);
proc->set_modulation_type(LogicProcessor::ModulationType::THRESHOLD_REMAP);
proc->set_threshold_remap_values(1.0, 0.3);
```
Tutorial: Combining Polynomial + Logic
The Pattern
Load a file. Detect transients with logic. Apply polynomial only when transient detected.
```cpp
void compose() {
    auto sound = vega.read_audio("drums.wav") | Audio;
    auto buffers = MayaFlux::get_last_created_container_buffers();

    auto bitcrush = vega.Logic(LogicOperator::THRESHOLD, 0.0);
    auto crush_proc = std::make_shared<LogicProcessor>(bitcrush);
    crush_proc->set_modulation_type(LogicProcessor::ModulationType::REPLACE);

    auto clock = vega.Sine(4.0);
    auto freeze_logic = vega.Logic(LogicOperator::THRESHOLD, 0.0);
    freeze_logic->set_input_node(clock);
    auto freeze_proc = std::make_shared<LogicProcessor>(freeze_logic);
    freeze_proc->set_modulation_type(LogicProcessor::ModulationType::HOLD_ON_FALSE);

    auto destroyer = std::make_shared<Polynomial>([](double x) {
        return std::copysign(1.0, x) * std::pow(std::abs(x), 0.3);
    });
    auto poly_proc = std::make_shared<PolynomialProcessor>(destroyer);

    // Creates a new processing chain for the default engine
    auto chain = MayaFlux::create_processing_chain();
    chain->add_processor(crush_proc, buffers[0]);
    chain->add_processor(freeze_proc, buffers[0]);
    chain->add_processor(poly_proc, buffers[0]);
    buffers[0]->set_processing_chain(chain);
}
```
Or, if you prefer the fluent API, which creates each processor and attaches it to the buffer's chain in one call:
```cpp
auto sound = vega.read_audio("drums.wav") | Audio;
auto buffers = MayaFlux::get_last_created_container_buffers();

auto bitcrush = vega.Logic(LogicOperator::THRESHOLD, 0.0);
auto crush_proc = MayaFlux::create_processor<LogicProcessor>(buffers[0], bitcrush);
crush_proc->set_modulation_type(LogicProcessor::ModulationType::REPLACE);

auto clock = vega.Sine(4.0);
auto freeze_logic = vega.Logic(LogicOperator::THRESHOLD, 0.0);
freeze_logic->set_input_node(clock);
auto freeze_proc = MayaFlux::create_processor<LogicProcessor>(buffers[0], freeze_logic);
freeze_proc->set_modulation_type(LogicProcessor::ModulationType::HOLD_ON_FALSE);

auto destroyer = std::make_shared<Polynomial>([](double x) {
    return std::copysign(1.0, x) * std::pow(std::abs(x), 0.3);
});
auto poly_proc = MayaFlux::create_processor<PolynomialProcessor>(buffers[0], destroyer);
```
Expansion 1: Processing Chains as Transformation Pipelines
Click to expand: Sequential Audio Surgery
You just built a transformation pipeline:
bitcrush → freeze → destroy
Each processor transforms the output of the previous one. This is **compositional signal processing**—you build complex effects by chaining simple operations.
The power comes from order dependency:
gate → distort // Clean transients, heavy saturation
distort → gate // Distorted everything, then choppy
Swap the order = completely different sound.
Extend it:
detect transients → sample-and-hold → bitcrush → wavefold → compress
Traditional plugins give you "distortion with 3 knobs." You compose the distortion algorithm itself.
Every processor is a building block. Chain them to create effects that don't exist as plugins:
- Bitcrush → Freeze → Invert = Glitch stutterer
- Remap → Fold → Gate = Rhythmic harmonizer
- Threshold → Hold → Distort = Transient emphasizer
Logic + Polynomial + Chains = programmable audio transformation system.
Expansion 2: Chain Order Matters
Click to expand: Non-Commutative Processing
Swap the order of logic and polynomial → different result:
Logic → Polynomial // Detect, then distort
Polynomial → Logic // Distort, then detect
Processors are non-commutative. Audio math doesn't follow algebra rules.
Order determines signal flow. You're building a graph, not an equation.
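Non-commutativity is easy to demonstrate with two plain per-sample functions (a concept sketch, not MayaFlux code): the same input produces different outputs depending on composition order.

```cpp
#include <cmath>

double gate(double x)    { return std::abs(x) > 0.2 ? x : 0.0; }  // silence quiet samples
double distort(double x) { return std::tanh(4.0 * x); }           // saturate

// Same two stages, opposite orders:
double gate_then_distort(double x) { return distort(gate(x)); }
double distort_then_gate(double x) { return gate(distort(x)); }
```

A quiet sample (0.1) dies at the gate in the first chain, but survives the second because distortion boosts it above the gate threshold first.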
Try It
```cpp
auto logic = vega.Logic(LogicOperator::THRESHOLD, 0.3);
auto poly_compress = vega.Polynomial([](double x) { return x * 2.0; });
auto poly_expand = vega.Polynomial([](double x) { return x * 0.5; });
```
Tutorial: Processing Chains and Buffer Architecture
Tutorial: Explicit Chain Building
The Simplest Path
You've been adding processors one at a time. Now control their order explicitly.
```cpp
void compose() {
    auto sound = vega.read_audio("path/to/file.wav") | Audio;
    auto buffer = MayaFlux::get_last_created_container_buffers()[0];

    auto distortion = vega.Polynomial([](double x) { return std::tanh(x * 2.0); });
    auto gate = vega.Logic(LogicOperator::THRESHOLD, 0.1);
    auto compression = vega.Polynomial([](double x) { return x / (1.0 + std::abs(x)); });

    auto chain = MayaFlux::create_processing_chain();
    chain->add_processor(std::make_shared<PolynomialProcessor>(distortion), buffer);
    chain->add_processor(std::make_shared<LogicProcessor>(gate), buffer);
    chain->add_processor(std::make_shared<PolynomialProcessor>(compression), buffer);
    buffer->set_processing_chain(chain);
}
```
Run this. You hear: clean audio → saturated → gated (silence below threshold) → compressed (controlled peaks).
Swap the order:
```cpp
chain->add_processor(std::make_shared<LogicProcessor>(gate), buffer);
chain->add_processor(std::make_shared<PolynomialProcessor>(distortion), buffer);
chain->add_processor(std::make_shared<PolynomialProcessor>(compression), buffer);
```
Different sound. Order matters.
Expansion 1: What <tt>create_processor()</tt> Was Doing
Click to expand: Implicit vs. Explicit Chain Management
Previously, when you wrote:
auto processor = MayaFlux::create_processor<PolynomialProcessor>(buffer, poly);
MayaFlux did this behind the scenes:
- Created the processor
- Got the buffer's existing processing chain
- Automatically added the processor to that chain
- Returned the processor
You didn't see this because it was implicit. The processor was silently appended to whatever chain existed.
Now you're building chains explicitly:
```cpp
chain->add_processor(proc1);
chain->add_processor(proc2);
buffer->set_processing_chain(chain);
```
When to use explicit chains:
- You need precise order control
- You're building reusable processor "presets"
- You want to swap entire chains dynamically (e.g., switch between clean/distorted modes)
- You're debugging processor interactions
When implicit is fine:
- Simple cases (1-2 processors)
- Order doesn't matter (parallel-like effects)
- Rapid prototyping
Expansion 2: Chain Execution Order
Click to expand: Sequential Data Flow
Chains execute like a for-loop over processors:
```cpp
for (auto& processor : chain->get_processors()) {
    processor->process(buffer);
}
```
Data flows sequentially:
```
Container → Buffer (512 samples)
    ↓
Processor₁: Distortion (modifies samples in-place)
    ↓
Processor₂: Gate (zeroes out quiet samples)
    ↓
Processor₃: Compression (reduces peaks)
    ↓
Speakers
```
Each processor sees the output of the previous processor.
This is not parallel processing. No branches. No simultaneous paths. Pure sequential transformation.
(Parallel routing requires BufferPipeline—covered in a later tutorial.)
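The sequential model above can be sketched as a list of per-sample transforms applied in insertion order (a concept illustration, not the actual chain class):

```cpp
#include <functional>
#include <vector>

using Stage = std::function<double(double)>;

// Run one sample through every stage, in order; each stage sees the
// output of the previous one, exactly like the for-loop above.
double run_chain(const std::vector<Stage>& chain, double x) {
    for (const auto& stage : chain)
        x = stage(x);
    return x;
}
```

Reordering the stages changes the result, which is the point of the previous expansion.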
Expansion 3: Default Processors vs. Chain Processors
Click to expand: The Two-Stage Processing Model
Every buffer has two processing stages:
Stage 1: Default Processor (runs first, always)
- Defined by buffer type
- Handles data acquisition or generation
- Examples:
  - `ContainerBuffer`: reads from file/stream
  - `NodeBuffer`: evaluates a node
  - `FeedbackBuffer`: mixes current + previous buffer
  - `AudioBuffer`: none (generic accumulator)
Stage 2: Processing Chain (runs second)
- Your custom processors
- Handles data transformation
- Examples: filters, waveshaping, logic, etc.
Execution flow:
1. Buffer's default processor runs (fills buffer with data)
2. Processing chain runs (transforms that data)
3. Result goes to speakers
When you add processors via create_processor(), they go into Stage 2 (the chain).
The default processor is fixed per buffer type. You can replace it, but usually you don't need to—the chain is where creativity happens.
Try It
```cpp
// Stacked saturation stages
auto light = vega.Polynomial([](double x) { return std::tanh(x * 1.5); });
auto heavy = vega.Polynomial([](double x) { return std::tanh(x * 5.0); });
auto fold  = vega.Polynomial([](double x) { return std::sin(x * 3.0); });

auto chain = MayaFlux::create_processing_chain();
chain->add_processor(std::make_shared<PolynomialProcessor>(light), buffer);
chain->add_processor(std::make_shared<PolynomialProcessor>(heavy), buffer);
chain->add_processor(std::make_shared<PolynomialProcessor>(fold), buffer);

// Or: a gate between the two saturation stages
auto gate = vega.Logic(LogicOperator::THRESHOLD, 0.2);
chain->add_processor(std::make_shared<PolynomialProcessor>(light), buffer);
chain->add_processor(std::make_shared<LogicProcessor>(gate), buffer);
chain->add_processor(std::make_shared<PolynomialProcessor>(heavy), buffer);
buffer->set_processing_chain(chain);
```
Tutorial: Various Buffer Types
Generating from Nodes (NodeBuffer)
The Next Pattern
So far: buffers read from files, nodes affect buffer processing. Now: buffers generate from nodes.
```cpp
void compose() {
    auto sine = vega.Sine(440.0);
    auto node_buffer = vega.NodeBuffer(0, 512, sine)[0] | Audio;

    auto distortion = vega.Polynomial([](double x) { return x * x * x; });
    MayaFlux::create_processor<PolynomialProcessor>(node_buffer, distortion);
}
```
Run this. You hear a 440 Hz sine wave with cubic distortion.
No file loaded. The buffer generates audio by evaluating the node 512 times per cycle.
Expansion 1: What NodeBuffer Does
Click to expand: Nodes → Buffers Bridge
NodeBuffer connects the node system (sample-by-sample evaluation) to the buffer system (block-based processing).
Default processor: NodeSourceProcessor
Each cycle:
- Node is evaluated 512 times: `node->process_sample()`
- Results fill the buffer
- Processing chain runs (your custom processors)
- Buffer outputs to speakers
Why this matters:
Nodes are mathematical expressions—infinite generators. Buffers are temporal accumulators—finite chunks.
NodeBuffer bridges the two: continuous expression → discrete blocks.
Without NodeBuffer, you'd manually call node->process_sample() 512 times and copy results into a buffer. NodeBuffer automates this.
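That manual fill loop looks like this in plain C++ (a sketch of what NodeSourceProcessor automates, using std::function in place of a real node type):

```cpp
#include <functional>
#include <vector>

// Call a per-sample generator block_size times and collect the results,
// bridging a continuous generator into a discrete block.
std::vector<double> fill_from_node(const std::function<double()>& process_sample,
                                   std::size_t block_size) {
    std::vector<double> buffer(block_size);
    for (auto& s : buffer)
        s = process_sample();   // one node evaluation per buffer slot
    return buffer;
}
```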
Expansion 2: The <tt>clear_before_process</tt> Parameter
Click to expand: Accumulation vs. Replacement
NodeBuffer has a flag: clear_before_process
```cpp
auto node_buffer = vega.NodeBuffer(0, 512, sine, true);
```
- `true` (default): Buffer is zeroed, then filled with node output
- `false`: Node output is added to existing buffer content (result: node output + previous buffer state)
Why use false?
- Layering: Multiple nodes contributing to the same buffer
- Feedback: Previous cycle's output influences current cycle
- Additive synthesis: Mix multiple generators
Example (layering):
```cpp
auto sine = vega.Sine(440.0);
auto buffer = vega.NodeBuffer(0, 512, sine, true)[0] | Audio;

auto noise = vega.Noise();   // assumed noise generator node; exact constructor may differ
auto noise_buffer = vega.NodeBuffer(0, 512, noise, false)[0] | Audio;
```
Result: sine + noise.
Expansion 3: NodeSourceProcessor Mix Parameter
Click to expand: Interpolation Between Existing and Incoming Data
NodeSourceProcessor has a mix parameter (default: 0.5):
auto processor = std::make_shared<NodeSourceProcessor>(node, 0.7f);
- Mix = 0.0: Preserve existing buffer content (node output ignored)
- Mix = 0.5: Equal blend of existing + node output
- Mix = 1.0: Replace with node output (existing content overwritten)
This is a cross-fade between what's in the buffer and what the node generates.
Use case: Smoothly transition between sources, or create feedback loops where node output gradually replaces decaying buffer content.
Most of the time, you'll use the default (1.0 via clear_before_process=true). But for creative effects, mix is powerful.
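The cross-fade itself is a one-line linear interpolation, sketched here per sample (concept only, not the processor's actual code):

```cpp
// mix = 0.0 keeps the existing sample; mix = 1.0 replaces it entirely.
double apply_mix(double existing, double incoming, double mix) {
    return existing * (1.0 - mix) + incoming * mix;
}
```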
Try It
```cpp
// Additive stack: fundamental plus harmonics into the same channel
// (layering pattern from Expansion 2; the harmonic buffers are reconstructed)
auto fund  = vega.Sine(220.0);
auto harm2 = vega.Sine(440.0);
auto harm3 = vega.Sine(660.0);
auto buffer = vega.NodeBuffer(0, 512, fund, true)[0] | Audio;
vega.NodeBuffer(0, 512, harm2, false)[0] | Audio;   // adds instead of replacing
vega.NodeBuffer(0, 512, harm3, false)[0] | Audio;

// Heavy waveshaping on a generated sine (channel 1)
auto sine = vega.Sine(110.0);
auto buffer2 = vega.NodeBuffer(0, 512, sine)[1] | Audio;
auto waveshape = vega.Polynomial([](double x) { return std::tanh(x * 10.0); });
MayaFlux::create_processor<PolynomialProcessor>(buffer2, waveshape);
```
FeedbackBuffer (Recursive Audio)
The Pattern
Buffers that remember their previous state.
```cpp
void compose() {
    auto feedback_buf = vega.FeedbackBuffer(0, 512, 0.7f, 512)[0] | Audio;

    // Excitation: an impulse generator feeding the same channel
    // (constructor argument assumed)
    auto impulse = vega.Impulse(1.0);
    vega.NodeBuffer(0, 512, impulse, false)[0] | Audio;
}
```
Run this. You hear: repeating echoes, each 70% of the previous amplitude.
The buffer **feeds back into itself**—output becomes input next cycle.
Expansion 1: What FeedbackBuffer Does
Click to expand: Recursive Temporal Processing
Default processor: FeedbackProcessor
Each cycle:
- Current buffer content: `buffer[n]`
- Previous buffer content: `previous_buffer[n-1]`
- Output: `buffer[n] + (feedback_amount * previous_buffer[n-1])`
- Store output as next cycle's "previous"
This is a simple delay line with feedback.
Parameters:
feedback_amount: 0.0–1.0 (how much previous state contributes)
feed_samples: Delay length in samples
Example: FeedbackBuffer(0, 512, 0.7, 512) creates:
- 512-sample delay (~10.6 ms at 48 kHz)
- 70% feedback (echoes decay to 0.7 → 0.49 → 0.343 → ...)
Stability: Keep feedback_amount < 1.0 or output will grow unbounded.
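One cycle of that recurrence can be sketched in plain C++ (an illustration of the described algorithm, not the FeedbackProcessor source):

```cpp
#include <vector>

// out[n] = in[n] + feedback * previous[n]; the returned block becomes
// the next cycle's `previous`.
std::vector<double> feedback_cycle(const std::vector<double>& input,
                                   const std::vector<double>& previous,
                                   double feedback) {
    std::vector<double> out(input.size());
    for (std::size_t n = 0; n < input.size(); ++n)
        out[n] = input[n] + feedback * previous[n];
    return out;
}
```

Feed an impulse in, then silence: each subsequent cycle echoes at `feedback` times the last amplitude.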
Expansion 2: FeedbackBuffer Limitations
Click to expand: What FeedbackBuffer Cannot Do
FeedbackBuffer is simple—intentionally. It implements one specific recursive algorithm: linear feedback delay.
Limitations:
- Fixed feedback coefficient: Can't modulate feedback amount per sample (it's buffer-wide)
- No filtering in loop: Can't insert lowpass/highpass in the feedback path
- No cross-channel feedback: Single-channel only
- No time-varying delay: Delay length is fixed at creation
Why these limitations?
FeedbackBuffer is a building block, not a complete reverb/delay effect.
For complex feedback systems:
- Use `PolynomialProcessor` in RECURSIVE mode (per-sample nonlinear feedback)
- Use `BufferPipeline` to route buffers back to themselves with processing
- Build custom feedback networks with multiple buffers
FeedbackBuffer is for **simple echoes and resonances**—quick and efficient.
Expansion 3: When to Use FeedbackBuffer
Click to expand: Use Cases and Alternatives
Use FeedbackBuffer when:
- You need a simple delay line with fixed feedback
- Building Karplus-Strong string synthesis
- Creating rhythmic echoes
- Implementing comb filters
Use PolynomialProcessor(RECURSIVE) when:
- You need nonlinear feedback (saturation, distortion in loop)
- Feedback amount varies per sample
- Building filters with arbitrary feedback functions
Use BufferPipeline when:
- You need complex routing (buffer A → process → buffer B → back to A)
- Multi-buffer feedback networks
- Cross-channel feedback
Example: Filtered feedback, using a recursive polynomial as a crude lowpass inside the loop:
auto filtered_fb = vega.Polynomial(
    [](const std::deque<double>& history) {
        double fb = 0.7 * history[0];        // feedback from the previous output
        return fb * 0.5 + history[1] * 0.5;  // average with the sample before it
    },
    PolynomialMode::RECURSIVE,
    2  // history depth: two previous samples
);
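To make the per-sample evaluation concrete, here is a self-contained model of the assumed RECURSIVE loop: the engine calls your lambda with a deque of recent outputs, then pushes the new output onto the front. `run_recursive` is an illustrative model, not the actual PolynomialProcessor internals:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <functional>
#include <vector>

// Model of per-sample recursive evaluation: history[0] is the previous
// output, history[1] the one before it. Each return value becomes the
// newest history entry. Zero initial conditions, as a filter would assume.
std::vector<double> run_recursive(
    const std::function<double(const std::deque<double>&)>& fn,
    std::size_t order, std::size_t num_samples) {
    std::deque<double> history(order, 0.0);
    std::vector<double> out;
    for (std::size_t n = 0; n < num_samples; ++n) {
        double y = fn(history);
        history.push_front(y);  // newest output first
        history.pop_back();     // keep exactly `order` entries
        out.push_back(y);
    }
    return out;
}
```

A lambda like `0.5 * history[0] + 1.0` then produces 1.0, 1.5, 1.75, 1.875, ..., converging toward 2.0, the fixed point of the recurrence.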
Try It
auto feedback_buf = vega.FeedbackBuffer(0, 512, 0.996f, 100)[0] | Audio;
auto feedback_buf2 = vega.FeedbackBuffer(0, 512, 0.95f, 50)[1] | Audio;
auto input = vega.Sine(220.0);
StreamWriteProcessor (Capturing Audio)
The Pattern
Processors that write buffer data somewhere (instead of transforming it).
void compose() {
auto buffer = vega.read_audio("path/to/file.wav") | Audio;
auto capture_stream = std::make_shared<DynamicSoundStream>(48000, 2);
auto writer = std::make_shared<StreamWriteProcessor>(capture_stream);
auto chain = buffer->get_processing_chain();
chain->add_processor(writer);
}
Run this. The file plays and is written to capture_stream every cycle.
After playback, capture_stream contains a copy of the entire file (processed through any other processors in the chain before the writer).
Expansion 1: What StreamWriteProcessor Does
Click to expand: Buffers → Containers Bridge
StreamWriteProcessor is the inverse of ContainerBuffer:
- ContainerBuffer: reads from container → fills buffer (source)
- StreamWriteProcessor: reads from buffer → writes to container (sink)
Each cycle:
- Extract 512 samples from the buffer
- Write them to the DynamicSoundStream at the current write position
- Increment the write position by 512
The stream grows dynamically as data arrives. No pre-allocation needed (though you can for performance).
Use cases:
- Record processed audio to memory
- Capture intermediate processing stages for analysis
- Build delay lines / loopers
- Create feedback paths (buffer → stream → buffer)
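The sink behavior can be modeled in a few lines of standalone C++. `GrowingStream` is an illustrative stand-in for DynamicSoundStream, not the real class:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of the sink behavior described above: each cycle a block is
// written at the current position and the position auto-increments.
struct GrowingStream {
    std::vector<double> data;
    std::uint64_t write_pos = 0;

    void write_block(const std::vector<double>& block) {
        if (write_pos + block.size() > data.size())
            data.resize(write_pos + block.size(), 0.0);  // grow on demand
        std::copy(block.begin(), block.end(),
                  data.begin() + static_cast<std::ptrdiff_t>(write_pos));
        write_pos += block.size();  // append-at-end default
    }
};
```

Two 512-sample writes leave the stream 1024 samples long with the write position at the end, ready for the next cycle.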
Expansion 2: Channel-Aware Writing
Click to expand: Multi-Channel Capture
StreamWriteProcessor respects buffer channel IDs:
auto left_buffer = buffers[0];
auto right_buffer = buffers[1];
auto stream = std::make_shared<DynamicSoundStream>(48000, 2);
auto writer_L = std::make_shared<StreamWriteProcessor>(stream);
auto writer_R = std::make_shared<StreamWriteProcessor>(stream);
left_buffer->get_processing_chain()->add_processor(writer_L);
right_buffer->get_processing_chain()->add_processor(writer_R);
Result: Stereo file captured to stereo stream—channels preserved.
Critical: Buffer's channel_id determines which stream channel receives data. Mismatch = warning + skip.
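The channel check can be modeled like this. `write_to_channel` and the flat per-channel vectors are illustrative, not the MayaFlux layout:

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Model of channel-aware writing: a block lands in the stream channel
// matching the buffer's channel_id; an out-of-range id is skipped with
// a warning, mirroring the behavior described above.
bool write_to_channel(std::vector<std::vector<double>>& stream_channels,
                      std::uint32_t channel_id,
                      const std::vector<double>& block) {
    if (channel_id >= stream_channels.size()) {
        std::cerr << "channel_id " << channel_id << " out of range; skipping\n";
        return false;  // mismatch = warning + skip
    }
    auto& ch = stream_channels[channel_id];
    ch.insert(ch.end(), block.begin(), block.end());
    return true;
}
```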
Expansion 3: Position Management
Click to expand: Write Position Control
StreamWriteProcessor tracks where it's writing:
writer->set_write_position(0);         // jump back to the start (sample index)
writer->set_write_position(48000);     // jump to 1 second at 48 kHz
writer->reset_position();              // back to position 0
writer->set_write_position_time(2.5);  // position in seconds instead of samples
uint64_t pos = writer->get_write_position();      // current position in samples
double time = writer->get_write_position_time();  // current position in seconds
Why control position?
- Overdubbing: Write new audio over existing content
- Looping: Reset position to create cyclic recording
- Multi-pass recording: Capture different takes at different positions
Default behavior: append at end. Position auto-increments.
Expansion 4: Circular Mode
Click to expand: Fixed-Size Circular Buffers
DynamicSoundStream can operate in circular mode:
auto stream = std::make_shared<DynamicSoundStream>(48000, 2);
stream->enable_circular_buffer(48000);
auto writer = std::make_shared<StreamWriteProcessor>(stream);
Behavior:
When write position reaches capacity, it wraps to 0. Old data is overwritten.
Use cases:
- Delay lines: Fixed-length delays for effects
- Loopers: Record N seconds, then loop
- Rolling analysis: Keep only the most recent N seconds
Without circular mode, the stream grows unbounded—useful for full recording, problematic for long-running systems.
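The wrap-around behavior can be sketched in standalone C++. `CircularStream` is illustrative, not the real DynamicSoundStream:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Model of circular-mode writing: the write position wraps modulo the
// fixed capacity, so the oldest data is overwritten once the stream fills.
struct CircularStream {
    std::vector<double> data;
    std::uint64_t write_pos = 0;

    explicit CircularStream(std::size_t capacity) : data(capacity, 0.0) {}

    void write_block(const std::vector<double>& block) {
        for (double sample : block) {
            data[write_pos % data.size()] = sample;  // wrap to 0 at capacity
            ++write_pos;
        }
    }
};
```

Writing six samples into a four-sample capacity overwrites the first two: the stream holds only the most recent data, exactly what a delay line or rolling-analysis window needs.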
Try It
auto buffer = vega.read_audio("path/to/file.wav") | Audio;
auto stream = std::make_shared<DynamicSoundStream>(48000, 1);
stream->ensure_capacity(48000 * 5);
auto writer = std::make_shared<StreamWriteProcessor>(stream);
buffer->get_processing_chain()->add_processor(writer, buffer);
auto stream2 = std::make_shared<DynamicSoundStream>(48000, 1);
stream2->enable_circular_buffer(48000);
auto writer2 = std::make_shared<StreamWriteProcessor>(stream2);
buffer->get_processing_chain()->add_processor(writer2, buffer);
Closing: The Buffer Ecosystem
You now understand:
Buffer Types:
AudioBuffer: Generic accumulator
ContainerBuffer: Reads from files/streams (default: ContainerToBufferAdapter)
NodeBuffer: Generates from nodes (default: NodeSourceProcessor)
FeedbackBuffer: Recursive delay (default: FeedbackProcessor)
Processor Types:
PolynomialProcessor: Waveshaping, filters, recursive math
LogicProcessor: Decisions, gates, triggers
StreamWriteProcessor: Capture to containers
Processing Flow:
Default Processor (acquire/generate data)
↓
Processing Chain (transform data)
↓
Output (speakers/containers/other buffers)
Next: Buffer routing, cloning, and supply mechanics—how to send processed buffers to multiple channels/domains.
Tutorial: Audio Input, Routing, and Multi-Channel Distribution
Tutorial: Capturing Audio Input
The Simplest Path
So far: buffers read from files or generate from nodes. Now: capture from your microphone.
void settings() {
stream.input.enabled = true;
stream.input.channels = 1;
}
void compose() {
auto mic_buffer = create_input_listener_buffer(0, true);
auto distortion = vega.Polynomial([](double x) { return std::tanh(x * 3.0); });
MayaFlux::create_processor<PolynomialProcessor>(mic_buffer, distortion);
}
Core::GlobalStreamInfo & get_global_stream_info()
Gets the stream configuration from the default engine.
std::shared_ptr< Buffers::AudioBuffer > create_input_listener_buffer(uint32_t channel, bool add_to_output)
Creates a new AudioBuffer for input listening.
Run this. Speak into your microphone. You hear yourself with distortion applied in real-time.
Expansion 1: What `create_input_listener_buffer()` Does
Click to expand: Input System Architecture
MayaFlux has a dedicated input subsystem parallel to the output system.
Architecture:
Hardware (Microphone)
↓
Audio Driver (RtAudio)
↓
BufferManager::process_input()
↓
InputAudioBuffer (per input channel)
↓
InputAccessProcessor (dispatches to listeners)
↓
Your listener buffers
When you call create_input_listener_buffer(channel, add_to_output):
- Creates a new AudioBuffer
- Registers it with InputAudioBuffer[channel] as a listener
- If add_to_output=true: also registers it with the output channel (so it plays back)
Each audio cycle:
- Driver captures microphone data
- InputAudioBuffer receives it
- InputAccessProcessor copies data to all registered listeners
- Your buffer gets fresh input every cycle
Key insight: InputAudioBuffer is a hub. Multiple buffers can listen to the same input channel simultaneously.
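The hub pattern reduces to a one-to-many copy each cycle. `InputHub` below is an illustrative model, not the MayaFlux internals:

```cpp
#include <vector>

// Model of the hub pattern: one captured input block is copied to every
// registered listener each cycle, so several buffers can observe the same
// input channel without interfering with each other.
struct InputHub {
    std::vector<std::vector<double>*> listeners;

    void register_listener(std::vector<double>* buf) {
        listeners.push_back(buf);
    }

    void dispatch(const std::vector<double>& input_block) {
        for (auto* buf : listeners)
            *buf = input_block;  // each listener receives its own copy
    }
};
```

Because each listener gets a copy, processing one listener buffer (distortion, gating, analysis) never affects the others.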
Expansion 2: Manual Input Registration
Click to expand: Fine-Grained Control
create_input_listener_buffer() is convenience. You can do it manually:
void detach_from_audio_input(const std::shared_ptr< Buffers::AudioBuffer > &buffer, uint32_t channel)
Stops reading audio data from the default input source.
void read_from_audio_input(const std::shared_ptr< Buffers::AudioBuffer > &buffer, uint32_t channel)
Reads audio data from the default input source into a buffer.
When to use manual registration:
- You already have a buffer (don't want to create a new one)
- You want to dynamically start/stop listening (e.g., record button)
- You need finer control over buffer lifecycle
Example: A record button can call read_from_audio_input() when recording starts and detach_from_audio_input() when it stops. The buffer continues to exist and process, but stops receiving new input.
Expansion 3: Input Without Playback
Click to expand: Silent Capture
Often you want to capture input without playing it back:
auto mic_capture = create_input_listener_buffer(0, false);  // add_to_output = false: no playback
auto stream = std::make_shared<DynamicSoundStream>(48000, 1);
auto writer = std::make_shared<StreamWriteProcessor>(stream);
mic_capture->get_processing_chain()->add_processor(writer);
Result: Microphone data is captured to stream, but you don't hear it.
Use cases:
- Recording without monitoring
- Voice analysis (pitch detection, speech recognition)
- Trigger detection (clap to start/stop)
- Level metering / VU display
Try It
auto gain = vega.Polynomial([](double x) { return x * 1.5; });  // amplitude boost
auto gate = vega.Logic(LogicOperator::THRESHOLD, 0.05);
MayaFlux::create_processor<PolynomialProcessor>(mic, gain);
MayaFlux::create_processor<LogicProcessor>(mic, gate);
auto stream = std::make_shared<DynamicSoundStream>(48000, 1);
auto writer = std::make_shared<StreamWriteProcessor>(stream);
mic->get_processing_chain()->add_processor(writer, mic);
auto trigger = vega.Logic(LogicOperator::EDGE);
trigger->set_edge_detection(EdgeType::RISING, 0.3);
auto trigger_proc = MayaFlux::create_processor<LogicProcessor>(mic_silent, trigger);
Tutorial: Buffer Supply (Routing to Multiple Channels)
The Pattern
One buffer, multiple output channels.
void compose() {
auto sine = vega.Sine(440.0);
auto buffer = vega.NodeBuffer(0, 512, sine)[0] | Audio;
supply_buffer_to_channels(buffer, {1, 2}, 1.0);
}
void supply_buffer_to_channels(const std::shared_ptr< Buffers::AudioBuffer > &buffer, const std::vector< uint32_t > &channels, double mix)
Supplies a buffer to multiple channels with mixing.
Run this. You hear the same 440 Hz sine on all three channels (left, center, right in surround setup).
The buffer processes once, but outputs to three channels.
Expansion 1: What "Supply" Means
Click to expand: The Difference Between Registration and Supply
Registration (vega.AudioBuffer()[0] | Audio):
- Adds buffer as a child of RootAudioBuffer[0]
- Buffer processes during channel 0's cycle
- Output accumulates into channel 0
Supply (supply_buffer_to_channels):
- Adds buffer's output to other channels
- Buffer still processes in its original channel
- Output is copied to supplied channels
Analogy:
- Registration = "This buffer lives in channel 0"
- Supply = "After processing in channel 0, send copies to channels 1 and 2"
Architecture:
Buffer processes in channel 0
↓
Output goes to RootAudioBuffer[0]
↓
MixProcessor copies output to RootAudioBuffer[1]
↓
MixProcessor copies output to RootAudioBuffer[2]
Key: The buffer only processes once. Supply is a routing operation, not a duplication of processing.
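Supply as a routing step can be modeled as scaled accumulation into target channels. Names here are illustrative, not the MixProcessor API:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Model of supply as routing: the already-processed output is accumulated
// into each target channel, scaled by a per-send mix level. Processing
// ran once; only the scaled copy repeats per send.
void supply(const std::vector<double>& processed_output,
            std::vector<std::vector<double>>& channels,
            const std::vector<std::pair<std::size_t, double>>& sends) {
    for (const auto& [ch, mix] : sends)
        for (std::size_t i = 0; i < processed_output.size(); ++i)
            channels[ch][i] += mix * processed_output[i];  // additive mix
}
```

Note the `+=`: supply adds to whatever the channel already contains, which is exactly the "mix is additive" behavior described in the next expansion.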
Expansion 2: Mix Levels
Click to expand: Controlling Supply Amplitude
The mix parameter controls how much of the buffer's output is sent:
void supply_buffer_to_channel(const std::shared_ptr< Buffers::AudioBuffer > &buffer, uint32_t channel, double mix)
Supplies a buffer to a single channel with mixing.
Use case: Stereo width control
auto mono_source = vega.Sine(440.0);
auto buffer = vega.NodeBuffer(0, 512, mono_source)[0] | Audio;
supply_buffer_to_channel(buffer, 1, 0.7);  // partial-level copy to the other channel narrows the image
Use case: Send effects
auto dry = vega.NodeBuffer(0, 512, sine)[0] | Audio;
supply_buffer_to_channel(dry, 1, 0.3);  // 30% send level
Mix is additive. If channel already has content, supply adds to it.
Expansion 3: Removing Supply
Click to expand: Dynamic Routing Changes
You can remove supply relationships:
auto buffer = vega.NodeBuffer(0, 512, sine)[0] | Audio;
void remove_supplied_buffer_from_channels(const std::shared_ptr< Buffers::AudioBuffer > &buffer, const std::vector< uint32_t > &channels)
Removes a supplied buffer from multiple channels.
void remove_supplied_buffer_from_channel(const std::shared_ptr< Buffers::AudioBuffer > &buffer, const uint32_t channel)
Removes a supplied buffer from a single channel.
Use case: Mute individual sends
- Buffer still processes
- Output still goes to its registered channel
- Supplied channels no longer receive it
Use case: Dynamic routing matrices
if (user_pressed_button_A) {
supply_buffer_to_channel(buffer, 1, 1.0);
} else {
remove_supplied_buffer_from_channel(buffer, 1);
}
Try It
auto source = vega.Sine(220.0);
auto buffer = vega.NodeBuffer(0, 512, source)[0] | Audio;
auto guitar = vega.NodeBuffer(0, 512, source)[0] | Audio;
Tutorial: Buffer Cloning
The Pattern
One buffer specification, multiple independent instances.
void compose() {
auto sine = vega.Sine(440.0);
auto buffer = vega.NodeBuffer(0, 512, sine)[0] | Audio;
auto clones = clone_buffer_to_channels(buffer, {1, 2});
}
std::vector< std::shared_ptr< Buffers::AudioBuffer > > clone_buffer_to_channels(const std::shared_ptr< Buffers::AudioBuffer > &buffer, const std::vector< uint32_t > &channels)
Clones a buffer to multiple channels.
Run this. You hear three independent sine waves on three channels.
Each clone processes **independently**—they don't share data.
Expansion 1: Clone vs. Supply
Click to expand: When to Use Each
Supply:
- One buffer processes once
- Output is copied to multiple channels
- Processing cost: 1× processing
- Memory: One buffer
- Use when: Same signal needs to go to multiple places
Clone:
- Multiple buffers process independently
- Each has its own data, state, processing chain
- Processing cost: N× processing (N = number of clones)
- Memory: N buffers
- Use when: Similar buffers need independent processing
Example: Supply use case
Example: Clone use case
Expansion 2: Cloning Preserves Structure
Click to expand: What Gets Cloned
When you clone a buffer, each clone receives:
- Same buffer type (NodeBuffer, FeedbackBuffer, etc.)
- Same default processor configuration
- Same processing chain (all added processors)
- Independent data (not shared—each clone has its own samples)
- Independent state (feedback buffers have separate history)
Example: Clone a processed buffer
auto sine = vega.Sine(440.0);
auto buffer = vega.NodeBuffer(0, 512, sine)[0] | Audio;
auto distortion = vega.Polynomial([](double x) { return std::tanh(x * 2.0); });
MayaFlux::create_processor<PolynomialProcessor>(buffer, distortion);
auto clones = clone_buffer_to_channels(buffer, {1, 2});
Each clone has its own instance of the distortion processor. They don't share state.
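Independent state is the crux, and it can be demonstrated with plain value semantics. `TanhFeedback` is an illustrative stand-in for a stateful processor, not a MayaFlux class:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Model of independent clone state: cloning makes value copies of a
// stateful processor, so driving one copy never disturbs the others.
struct TanhFeedback {
    double prev = 0.0;  // per-instance state (like a feedback history)

    double process(double x) {
        double y = std::tanh(x + 0.5 * prev);
        prev = y;  // state advances only in this instance
        return y;
    }
};

std::vector<TanhFeedback> clone_processor(const TanhFeedback& proto,
                                          std::size_t n) {
    return std::vector<TanhFeedback>(n, proto);  // independent value copies
}
```

Processing a sample through one clone advances only that clone's history; its siblings still sit at their initial conditions.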
Expansion 3: Post-Clone Modification
Click to expand: Differentiating Clones After Creation
After cloning, you can modify individual clones:
std::vector<double> coeffs_a_1 = { 0.3, 0.2, 0.1 };  // illustrative values
std::vector<double> coeffs_b_1 = { 1.0, -0.5 };
std::vector<double> coeffs_a_2 = { 0.2, 0.3, 0.2 };
std::vector<double> coeffs_b_2 = { 1.0, -0.7 };
auto filter1 = vega.IIR(coeffs_a_1, coeffs_b_1);
auto filter2 = vega.IIR(coeffs_a_2, coeffs_b_2);
MayaFlux::create_processor<FilterProcessor>(cloned_buffers[0], filter1);
MayaFlux::create_processor<FilterProcessor>(cloned_buffers[1], filter2);
auto IIR(Args &&... args) -> CreationHandle< MayaFlux::Nodes::Filters::IIR >
Use case: Stereo decorrelation (same source, slightly different processing per channel)
Try It
auto lfo = vega.Sine(0.5);
Closing: The Routing Ecosystem
You now understand:
Input Capture:
InputAudioBuffer: Hardware input hub
InputAccessProcessor: Dispatches to listeners
create_input_listener_buffer(): Quick setup
read_from_audio_input() / detach_from_audio_input(): Manual control
Buffer Supply:
supply_buffer_to_channel() / supply_buffer_to_channels(): Route one buffer to additional outputs
- Mix levels: Control send amounts
- Efficiency: Process once, output many times
remove_supplied_buffer_from_channel(): Dynamic routing changes
Buffer Cloning:
clone_buffer_to_channels(): Create independent copies
- Preserves structure: Type, processors, chains
- Independent state: Each clone processes separately
- Post-clone modification: Differentiate behavior after creation
Mental Model:
Input (Microphone)
↓
InputAudioBuffer → Listener buffers (capture)
↓
Processing chains (transform)
↓
Supply (route to multiple channels)
OR
Clone (create independent instances)
↓
RootAudioBuffer (mix per channel)
↓
Output (Speakers)
Next: BufferPipeline (declarative multi-stage workflows with temporal control)