In MayaFlux, data isn’t something you use; it’s something you shape. Sound, light, numbers: all share the same substrate. You don’t generate a waveform; you sculpt a pattern of information and let it move. These tutorials begin with the smallest gesture (loading a file) and expand until you can construct entire temporal architectures. The point is not playback. The point is agency over time.
Every example you run produces real sound, but the goal is not sound itself — the goal is to understand the movement of information.
Each section in this series introduces one idea:
Together, they form the foundation of digital composition — not in the musical sense, but in the computational one.
What you’ll do here:
Eventually, build declarative pipelines that describe complete computational events
What you won’t do here:
Everything here is real code: The same logic that runs inside the MayaFlux engine. You’ll read it, modify it, and run it directly.
Each step is designed to teach you how the system thinks, so that later, when you invent something new, you can do so fluently without waiting for someone else to provide the building blocks.
Run this code. The file is loaded into memory.
// In your src/user_project.hpp compose() function:
void compose() {
    auto sound_container = vega.read_audio("path/to/your/file.wav");
}

Replace "path/to/your/file.wav" with an actual path to a .wav file.
Run the program. You’ll see console output showing what loaded:
✓ Loaded: path/to/your/file.wav
Channels: 2
Frames: 2304000
Sample Rate: 48000 Hz
Nothing plays yet. That’s intentional—and important. The rest of this section shows you what just happened.
You have a loaded file in memory. Ready. Waiting.
When you call vega.read_audio(), you’re not just reading bytes from disk and forgetting them. You’re creating a Container—a structure that holds the decoded samples along with their shape: channels, frames, sample rate, and metadata.
The difference: A file is inert. A Container is active creative material. It knows its own shape. It can tell you about regions within itself. It can be queried, transformed, integrated with other Containers.
When vega.read_audio("file.wav") runs, MayaFlux:
- Creates a SoundFileReader and initializes FFmpeg
- Creates a SoundFileContainer object
- Sets up a ContiguousAccessProcessor (the Container’s default processor, which knows how to feed data to buffers chunk-by-chunk)

The Container is now your interface to that audio data. It’s ready to be routed, processed, analyzed, transformed.
As you know, raw audio data can be large. MayaFlux allocates and manages it safely through smart pointers.
At a lower, machine-oriented level (in programming parlance), the user is expected to manage memory manually: instantiate objects, bind them, handle transfers, and delete them when done. Any misalignment among these steps can cause crashes or undefined behavior. MayaFlux doesn’t expect you to handle these manually—unless you choose to.
MayaFlux uses smart pointers—a C++11 feature that automatically tracks how many parts of your program are using a Container. When the last reference disappears, the memory is freed automatically.
When you write:
auto sound_container = vega.read_audio("file.wav");

What’s actually happening is:

std::shared_ptr<MayaFlux::Kakshya::SoundFileContainer> sound_container =
    /* vega.read_audio() internally creates and returns a shared_ptr */;

You don’t see std::shared_ptr. You see auto. But MayaFlux is using it. This means you never call delete on the Container. It handles itself. This is why vega.read_audio() is safe. The complexity of memory management exists—it’s just not your problem.
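If you want to see the reference counting in action, here is a minimal standalone sketch. It uses a placeholder DemoContainer type (not the real MayaFlux class) purely to show how std::shared_ptr frees the object when the last reference disappears:

```cpp
#include <iostream>
#include <memory>

// Placeholder stand-in for a Container; not the MayaFlux class.
struct DemoContainer {
    ~DemoContainer() { std::cout << "Container freed\n"; }
};

int main() {
    auto a = std::make_shared<DemoContainer>();   // use_count == 1
    {
        auto b = a;                               // use_count == 2
        std::cout << a.use_count() << "\n";       // prints 2
    }                                             // b destroyed, count drops back to 1
    std::cout << a.use_count() << "\n";           // prints 1
    return 0;                                     // a destroyed -> "Container freed"
}
```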
What is vega? vega is a fluent interface—a convenience layer that takes MayaFlux’s power and hides the verbosity without hiding the machinery. Grappling with complexity generally yields expressive, and often well-reasoned, implementations. However, many find it hard to parse the wall of code that results from such grappling, partly because machine-level languages tend to prioritize other aspects of coding over user experience (UX).
Making complex logic less verbose can be a good way to encourage more people to explore.
If you didn’t have vega, loading a file would look like
this:
// Without vega - explicit, showing every step
auto reader = std::make_unique<MayaFlux::IO::SoundFileReader>();
MayaFlux::IO::SoundFileReader::initialize_ffmpeg();
if (!reader->can_read("file.wav")) {
std::cerr << "Cannot read file\n";
return;
}
reader->set_target_sample_rate(MayaFlux::Config::get_sample_rate());
reader->set_target_bit_depth(64);
reader->set_audio_options(MayaFlux::IO::AudioReadOptions::DEINTERLEAVE);
MayaFlux::IO::FileReadOptions options = MayaFlux::IO::FileReadOptions::EXTRACT_METADATA;
if (!reader->open("file.wav", options)) {
MF_ERROR(Journal::Component::API, Journal::Context::FileIO, "Failed to open file: {}", reader->get_last_error());
return;
}
auto container = reader->create_container();
auto sound_container = std::dynamic_pointer_cast<Kakshya::SoundFileContainer>(container);
if (!reader->load_into_container(sound_container)) {
MF_ERROR(Journal::Component::API, Journal::Context::Runtime, "Failed to load audio data: {}", reader->get_last_error());
return;
}
auto processor = std::dynamic_pointer_cast<Kakshya::ContiguousAccessProcessor>(
sound_container->get_default_processor());
if (processor) {
std::vector<uint64_t> output_shape = {
MayaFlux::Config::get_buffer_size(),
sound_container->get_num_channels()
};
processor->set_output_size(output_shape);
processor->set_auto_advance(true);
}
// Now you have sound_container

Depending on your exposure to programming, this can either feel complex or liberating. Lacking the facilities to be explicit about memory management or allocation can be limiting. However, the above code snippet is verbose for something so simple.
vega says: “You just want to load a file? Say so.”
auto sound_container = vega.read_audio("file.wav");

Same machinery underneath. Same FFmpeg integration. Same resampling. Same deinterleaving. Same processor setup. Same safety.
What vega does: hides the verbosity. What vega doesn’t do: hide the machinery or take away your access to it.
The short syntax is convenience. The long syntax is control. MayaFlux gives you both.
Use vega because you value fluency, not because you fear
the machinery.
The Container you just created isn’t just a data holder. It has a default processor—a piece of machinery attached to it that knows how to feed data to buffers.
This processor (ContiguousAccessProcessor) does crucial
work:
When you later connect this Container to buffers (in the next section), the processor is what actually feeds the data—it’s the active mechanism.
vega.read_audio() configures this processor
automatically:
This is why StreamContainers (that
SoundFileContainer inherits from) are more than
data—they’re active, with built-in logic for how they should be
consumed.
What .read_audio() Does NOT Do

This is important: .read_audio() does NOT route anything to your speakers or start playback. What it DOES do is load the file into a Container. The Container sits in memory, ready to be used. But “ready to be used” means you decide what happens next: process it, analyze it, route it to output or visual processing, feed it into a machine-learning pipeline, anything.
That’s the power of this design: loading is separate from routing. You can load a file and immediately send it to hardware, or spend the next 20 lines building a complex processing pipeline before ever playing it.
In the next section, we’ll connect this Container to buffers and route it to your speakers. And you’ll see why this two-step design—load, then connect—is more powerful than one-step automatic playback.
You have a Container loaded. Now you need to send it somewhere.
auto sound_container = vega.read_audio("path/to/file.wav");
auto buffers = MayaFlux::hook_sound_container_to_buffers(sound_container);

Run this code. Your file plays.
The Container + the hook call—together they form the path from disk to speakers. This section shows you what that connection does.
A Buffer is a temporal accumulator—a space where data gathers until it’s ready to be released, then it resets and gathers again.
Buffers don’t store your entire file. They store chunks. At your project’s sample rate (48 kHz), a typical buffer might hold 512 or 4096 samples: a handful of milliseconds of audio.
Here’s why this matters:
Your audio interface (speakers, headphones) has a fixed callback rate. It says: “Give me 512 samples of audio, and do it every 10 milliseconds. Repeat forever until playback stops.”
Buffers are the industry standard method to meet this demand.
This cycle repeats thousands of times per minute. Buffers make that possible.
Without buffers, you’d have to manually manage these chunks yourself. With buffers, MayaFlux handles the cycle. Your Container’s processor feeds data into them. The buffers exhale it to your ears.
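To make that demand concrete, here is a schematic sketch in plain C++ (not MayaFlux’s actual callback API) of the cycle described above: the hardware repeatedly asks for a fixed-size chunk, and the buffer fills it from whatever source feeds it:

```cpp
#include <array>
#include <cstddef>
#include <functional>

constexpr std::size_t kBufferSize = 512; // samples per callback, as in the text

// The "source" stands in for a Container's processor feeding the buffer.
using ChunkSource = std::function<void(std::array<double, kBufferSize>&)>;

// One callback cycle: gather the next chunk, then hand it to the hardware.
void audio_callback(const ChunkSource& source) {
    std::array<double, kBufferSize> chunk{};
    source(chunk);   // gather the next 512 samples
    // ...hand `chunk` to the audio interface; the cycle repeats roughly every 10 ms
}
```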
A stereo file has 2 channels. A multichannel file might have 4 or 8 channels. MayaFlux doesn’t merge them into one buffer.
Instead, it creates one buffer per channel.
Why? Because channels are independent processing domains. A stereo file’s left channel and right channel:
When you hook a stereo Container to buffers, MayaFlux creates:
Each buffer:
This per-channel design is why you can later insert processing on a per-channel basis. Insert a filter on channel 0? The first channel gets filtered. Leave channel 1 alone? The second channel plays unprocessed. This flexibility is only possible because channels are architecturally separate at the buffer level.
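As a small illustration of what per-channel separation means at the data level, here is a generic deinterleaving sketch (not MayaFlux’s internals): interleaved stereo samples L R L R ... become independent channel vectors, which is the shape each per-channel buffer works with:

```cpp
#include <cstddef>
#include <vector>

// Split interleaved frames (L R L R ...) into independent channels.
std::vector<std::vector<double>> deinterleave(const std::vector<double>& interleaved,
                                              std::size_t num_channels) {
    std::vector<std::vector<double>> channels(num_channels);
    for (std::size_t i = 0; i < interleaved.size(); ++i)
        channels[i % num_channels].push_back(interleaved[i]);
    return channels;
}
// channels[0] can now be filtered while channels[1] stays untouched.
```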
MayaFlux has a buffer manager—a central system that creates, tracks, and coordinates all buffers in your program.
When you call hook_sound_container_to_buffers(), here’s
what happens:
auto buffer_manager = MayaFlux::get_buffer_manager();
uint32_t num_channels = container->get_num_channels();
for (uint32_t channel = 0; channel < num_channels; ++channel) {
auto container_buffer = buffer_manager->create_audio_buffer<ContainerBuffer>(
ProcessingToken::AUDIO_BACKEND,
channel,
container,
channel);
container_buffer->initialize();
}

Step by step:

- For each channel, creates a ContainerBuffer (a buffer that reads from a Container)
- Registers it under the token AUDIO_BACKEND (more on this in Expansion 5)
- Initializes it

Now the buffer manager knows:
When the audio callback fires (every 10ms at 48 kHz), the buffer
manager wakes up all its AUDIO_BACKEND buffers and says:
“Time for the next chunk. Fill yourselves.”
Each buffer asks its Container’s processor: “Give me 512 samples from your channel.”
The processor pulls from the Container, advances its position, and hands back a chunk.
The buffer receives it and passes it to the audio interface.
Repeat forever.
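Here is a conceptual sketch of that request/advance exchange. The ChunkProvider type is a hypothetical stand-in for the Container’s processor; the real ContiguousAccessProcessor interface may differ:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a Container's processor: holds one channel's samples
// and a read position that advances with every request.
struct ChunkProvider {
    std::vector<double> samples;   // the channel's full data
    std::size_t position = 0;      // where the next chunk starts

    std::vector<double> next_chunk(std::size_t count) {
        std::size_t end = std::min(position + count, samples.size());
        std::vector<double> chunk(samples.begin() + position, samples.begin() + end);
        chunk.resize(count, 0.0);  // pad with silence past the end of the file
        position = end;            // advance for the next cycle
        return chunk;
    }
};
// Each cycle the buffer calls provider.next_chunk(512) and relays the result.
```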
You created a ContainerBuffer, not just a generic
Buffer. Why the distinction?
A Buffer is abstract—it’s a temporal accumulator. But abstract things don’t know where their data comes from.
A ContainerBuffer is specific—it’s a buffer that knows:
When the callback fires, the ContainerBuffer doesn’t generate samples. It asks: “Container, give me the next 512 samples from your channel 0.”
The Container’s processor (remember
ContiguousAccessProcessor from Section 1?) handles this.
It:
The ContainerBuffer receives it. Done.
This is the architecture: Buffers don’t generate or transform. They request and relay. The Container’s processor does the work. The buffer coordinates timing with hardware.
Later, when you add processing nodes or attach processing chains, you’ll insert them between the Container’s output and the buffer’s input. The buffer still doesn’t transform—it still just relays. But what it relays will have been processed first.
In the buffer creation code:
auto container_buffer = buffer_manager->create_audio_buffer<ContainerBuffer>(
ProcessingToken::AUDIO_BACKEND,
channel,
container,
channel);

Notice ProcessingToken::AUDIO_BACKEND. This is a token—a semantic marker that tells MayaFlux which processing domain this buffer belongs to.
Tokens are how MayaFlux coordinates different processing domains without confusion. Later, you might have:
- AUDIO_BACKEND buffers - connected to speakers (hardware real-time)
- AUDIO_PARALLEL buffers - internal processing (process chains, analysis, etc.)
- GRAPHICS_BACKEND buffers - visual domain (frame-rate, not sample-rate)

Each token tells the system what timing, synchronization, and backend this buffer belongs to.
For now: AUDIO_BACKEND means “this buffer is feeding
your ears directly. It must keep real-time pace with the audio
interface.”
When you call vega.read_audio() | Audio, MayaFlux
creates the buffers internally. But now, with the ability to get those
buffers back, you have access to them:
auto sound_container = vega.read_audio("path/to/file.wav");
auto buffers = MayaFlux::get_last_created_container_buffers();
// Now you have the buffers as a vector:
// buffers[0] → channel 0
// buffers[1] → channel 1 (if stereo)
// etc.

Why is this useful? Because buffers own processing chains. And processing chains are where you’ll insert processes, analysis, transformations - everything that turns passive playback into active processing.
Each buffer has a method:
auto chain = buffers[0]->get_processing_chain();

This gives you access to the chain that currently handles that buffer’s data. Right now, the chain just reads from the Container and writes to the hardware. But you can modify that chain.
This is the foundation for Section 3. You load a file, get the buffers, access their chains, and inject processing into those chains.
vega.read_audio("path/to/file.wav") | Audio;

This single line does all of the above: creates a Container, creates per-channel buffers, hooks them to the audio hardware, and starts playback. No file plays until the | Audio operator, which is when the connection happens.
auto sound_container = vega.read_audio("path/to/file.wav");
auto buffers = MayaFlux::get_last_created_container_buffers();
// File is loaded, buffers exist, but no connection to hardware yet
// Buffers have chains, but nothing is using them
// To actually play, you'd need to ensure they're registered
// (vega.read_audio() | Audio does this automatically)

Understanding the difference: the | Audio operator is what triggers buffer creation and hardware connection.

void compose() {
    vega.read_audio("path/to/your/file.wav") | Audio;
    // File plays
}

Replace "path/to/your/file.wav" with an actual path.
You have a playing file, with no code running during playback—just the callback cycle doing its work, thousands of times per minute.
In the next section, we’ll modify these buffers’ processing chains. We’ll insert a filter processor and hear how it changes the sound. This is where MayaFlux’s power truly shines—transforming passive playback into active, real-time audio processing.
You have buffers. You can modify what flows through them.
auto sound_container = vega.read_audio("path/to/file.wav") | Audio;
auto buffers = MayaFlux::get_last_created_container_buffers();
auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});
auto filter_processor = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);

Run this code. Your file plays with a low-pass filter applied.
The filter smooths the audio—reduces high frequencies. Listen to the difference.
That’s it. A few lines of code: load, get the buffers, create a filter, insert it. The rest of this section shows you what just happened.
What is vega.IIR()? vega.IIR() creates a filter node—a computation unit that processes audio samples one at a time.
An IIR filter (Infinite Impulse Response) is a mathematical operation that transforms each sample using weighted combinations of past inputs and past outputs (feedback). The two parameters are:

- {0.1, 0.2, 0.1} - how the current and past input samples contribute
- {1.0, -0.6} - how past output samples contribute

You don’t need to understand the math. Just know: this creates a filter that smooths audio.
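If you do want to peek at the math, those two coefficient lists define a standard IIR difference equation. A minimal sketch in plain C++ (independent of MayaFlux’s node classes) applies it sample by sample:

```cpp
#include <cstddef>
#include <vector>

// y[n] = 0.1*x[n] + 0.2*x[n-1] + 0.1*x[n-2] + 0.6*y[n-1]
// (b = {0.1, 0.2, 0.1}, a = {1.0, -0.6}; a[0] is the normalizing 1.0)
std::vector<double> apply_iir(const std::vector<double>& x) {
    std::vector<double> y(x.size(), 0.0);
    double x1 = 0.0, x2 = 0.0, y1 = 0.0;   // one- and two-sample delays
    for (std::size_t n = 0; n < x.size(); ++n) {
        y[n] = 0.1 * x[n] + 0.2 * x1 + 0.1 * x2 + 0.6 * y1;
        x2 = x1; x1 = x[n]; y1 = y[n];
    }
    return y;
}
```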
vega is the fluent interface—it absorbs the verbosity.
Without it:
// Without vega - explicit
auto filter = std::make_shared<Nodes::Filters::IIR>(
std::vector<double>{0.1, 0.2, 0.1},
std::vector<double>{1.0, -0.6}
);

With vega:

auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});

Same filter. Same capabilities. Vega just hides the verbosity.
What is MayaFlux::create_processor()? A node (like vega.IIR()) is a computational unit—it processes one sample at a time.
This processor is a buffer-aware wrapper around that node. It knows:
create_processor() wraps your filter node in a processor
and attaches it to a buffer’s processing chain.
auto filter_processor = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);

What this does:

- Creates a FilterProcessor that knows how to apply that node to buffer data
- Adds it to buffers[0]’s processing chain (implicit—this happens automatically)

The buffer now has this processor in its chain. Each cycle, the buffer runs the processor, which applies the filter to all samples in that cycle.
Each buffer owns a processing chain—an ordered sequence of processors that transform data.
Your buffer’s default chain simply read from the Container and passed the data to the hardware. When create_processor() adds your FilterProcessor, the chain gains a filtering step. Each cycle, data flows: Container → [filtered] → Speakers.

The chain is ordered. Processors run in sequence. The output of one becomes the input to the next.
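Conceptually, an ordered chain is just a sequence of stages applied one after another. This sketch uses std::function stages rather than the real ProcessingChain class, only to show why insertion order matters:

```cpp
#include <functional>
#include <vector>

using Stage = std::function<void(std::vector<double>&)>; // transforms a chunk in place

// Run every stage in insertion order; each one sees the previous stage's output.
void run_chain(const std::vector<Stage>& chain, std::vector<double>& chunk) {
    for (const auto& stage : chain)
        stage(chunk);
}
// chain = { read_from_container, filter, hand_to_hardware } mirrors the text's flow.
```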
Your stereo file has two channels. Right now, only channel 0 is filtered.
You can add the same processor to channel 1:
auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});
auto fp0 = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);
auto fp1 = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[1], filter);

Or more simply, add the existing processor to another buffer:
auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});
auto filter_processor = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);
MayaFlux::add_processor(filter_processor, buffers[1], MayaFlux::Buffers::ProcessingToken::AUDIO_BACKEND);

add_processor() adds an existing processor to a buffer’s
chain.
create_processor() creates a processor and adds it
implicitly.
Both do the same underlying thing—they add the processor to the
buffer’s chain. create_processor() just combines creation
and addition in one call.
Now both channels are filtered by the same IIR node. Different channel buffers can share the same processor or have independent ones—your choice.
When you call:
auto filter_processor = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);

MayaFlux does this:
// 1. Create a new FilterProcessor wrapping your filter node
auto processor = std::make_shared<MayaFlux::Buffers::FilterProcessor>(filter);
// 2. Get the buffer's processing chain
auto chain = buffers[0]->get_processing_chain();
// 3. Add the processor to the chain
chain->add_processor(processor, buffers[0]);
// 4. Return the processor
return processor;

When add_processor() is called separately:
MayaFlux::add_processor(filter_processor, buffers[1], MayaFlux::Buffers::ProcessingToken::AUDIO_BACKEND);

MayaFlux does this:
// Get the buffer manager
auto buffer_manager = MayaFlux::get_buffer_manager();
// Get channel 1's buffer for AUDIO_BACKEND token
auto buffer = buffer_manager->get_buffer(ProcessingToken::AUDIO_BACKEND, 1);
// Get its processing chain
auto chain = buffer->get_processing_chain();
// Add the processor
chain->add_processor(processor, buffer);

The machinery is consistent: processors are added to chains, chains are owned by buffers, buffers execute chains each cycle.
You don’t need to write this explicitly—the convenience functions handle it. But this is what’s happening.
A processor is a building block. Once created, it can be:
Example: two channels with the same filter:
auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});
auto processor = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);
MayaFlux::add_processor(processor, buffers[1]);

Example: stacking processors (requires understanding of chains, shown later):
auto filter1 = vega.IIR(...);
auto fp1 = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter1);
auto filter2 = vega.IIR(...); // Different filter
auto fp2 = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter2);

Now buffers[0] has two FilterProcessors in its chain. Data flows through both sequentially.
Processors are the creative atoms of MayaFlux. Everything builds from them.
void compose() {
auto sound_container = vega.read_audio("path/to/your/file.wav") | Audio;
auto buffers = MayaFlux::get_last_created_container_buffers();
auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});
auto filter_processor = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);
}

Replace "path/to/your/file.wav" with an actual path.
Run the program. Listen. The audio is filtered.
Now try modifying the coefficients:
auto filter = vega.IIR({0.05, 0.3, 0.05}, {1.0, -0.8});

Listen again. Different sound. You’re sculpting the filter response.
You’ve just inserted a processor into a buffer’s chain and heard the result. That’s the foundation for everything that follows.
In the next section, we’ll interrupt this passive playback. We’ll insert a processing node between the Container and the buffers. And you’ll see why this architecture—buffers as relays, not generators—enables powerful real-time transformation.
For a comprehensive tutorial on buffer processors and related concepts, visit the Buffer Processors Tutorial.
What you’ve done so far is simple and powerful:
auto sound_container = vega.read_audio("path/to/file.wav") | Audio;
auto buffers = MayaFlux::get_last_created_container_buffers();
auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});
auto fp = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);

This flow is designed for full-file playback:
Clean. Direct. No timing control.
That’s intentional.
There are other features—region looping, seeking, playback control—but they don’t fit this tutorial. These sections are purely for: file → buffers → output, uninterrupted.
Here’s what the next section enables:
auto pipeline = MayaFlux::create_buffer_pipeline();
pipeline->with_strategy(ExecutionStrategy::PHASED); // Execute each phase fully before next op
pipeline
>> BufferOperation::capture_file_from("path/to/file.wav", 0) // From channel 0
.for_cycles(20) // Process 20 buffer cycles
>> BufferOperation::transform([](auto& data, uint32_t cycle) {
// Data now has 20 buffer cycles of audio from the file
// i.e 20 x 512 samples if buffer size is 512
auto zero_crossings = MayaFlux::zero_crossings(data);
std::cout << "Zero crossings at indices:\n";
for (const auto& sample : zero_crossings) {
std::cout << sample << "\t";
}
std::cout << "\n";
return data;
});
pipeline->execute_buffer_rate(); // Schedule and run

This processes exactly 20 buffer cycles from the file (with any process you want), accumulates the result in a stream, and executes the pipeline.
The file isn’t playing to speakers. It’s being captured, processed, and stored in a stream. Timing is under your control: you decide how many buffer cycles to process. This section builds the foundation for buffer pipelines. Understanding the architecture below explains why the code snippet works.
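For reference, the zero-crossing scan used in that transform is conceptually just a sign-change test between consecutive samples. This is a generic sketch of the idea, not the actual signature of MayaFlux::zero_crossings:

```cpp
#include <cstdint>
#include <vector>

// Indices where the signal changes sign between consecutive samples.
std::vector<uint64_t> find_zero_crossings(const std::vector<double>& data) {
    std::vector<uint64_t> indices;
    for (uint64_t i = 1; i < data.size(); ++i)
        if ((data[i - 1] < 0.0 && data[i] >= 0.0) ||
            (data[i - 1] >= 0.0 && data[i] < 0.0))
            indices.push_back(i);
    return indices;
}
```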
In this section, we introduce the machinery for everything beyond simple playback. We’re not writing code that produces audio yet. We’re establishing the architecture that enables timing control, streaming, capture, and composition.
A Container (like SoundFileContainer) holds all data upfront:
This works perfectly for “play the whole file”. It also works for as-yet-unexplored controls over the same timeline, such as looping, seeking positions, jumping to regions, and so on.
But it doesn’t work for:
For these use cases, you need a different data structure.
A DynamicSoundStream is a child class of SignalSourceContainer, much like the SoundFileContainer we have been using. It has the same interface as SoundFileContainer (channels, frames, metadata, regions). But it has different semantics:
Think of it as:
DynamicSoundStream has powerful capabilities:
You don’t create DynamicSoundStream directly (yet). It’s managed implicitly by other systems. But understanding what it is explains everything that follows.
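To build intuition for those semantics, here is a toy accumulator (not the real DynamicSoundStream API) with the two behaviors described above: it grows as writes arrive, and it can wrap around when given a fixed capacity (circular mode):

```cpp
#include <cstddef>
#include <vector>

// Toy illustration only: a stream that grows with each write, or wraps when
// a fixed capacity is set (circular mode).
struct ToyStream {
    std::vector<double> data;
    std::size_t capacity = 0;      // 0 = grow without bound
    std::size_t write_pos = 0;

    void write(const std::vector<double>& chunk) {
        for (double sample : chunk) {
            if (capacity == 0) {
                data.push_back(sample);            // dynamic growth
            } else {
                if (data.size() < capacity) data.resize(capacity, 0.0);
                data[write_pos] = sample;          // circular overwrite
                write_pos = (write_pos + 1) % capacity;
            }
        }
    }
};
```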
You’ve seen BufferProcessors like
FilterProcessor that transform data in place.
But StreamWriteProcessor is more general. It can write buffer data to any DynamicSoundStream, not just locally to attached buffers. (It can also capture data arriving from hardware, via the hitherto unexplored InputListenerProcessor.)
When a processor runs each buffer cycle:

- earlier processors in the chain transform the samples (a FilterProcessor, for example)
- StreamWriteProcessor writes the (now-processed) samples to a DynamicSoundStream

The DynamicSoundStream accumulates these writes. After N cycles, the DynamicSoundStream contains N × 512 samples of processed audio.
This is how you capture buffer data: not by sampling the buffer once, but by continuously writing it to a stream through a processor.
StreamWriteProcessor is the bridge between buffers
(which live in real-time) and streams (which accumulate).
FileBridgeBuffer is a specialized buffer that orchestrates reading from a file and writing to a stream, with timing control through buffer cycles.
Internally, FileBridgeBuffer creates a processing chain:
SoundFileContainer (source file)
↓
ContainerToBufferAdapter (reads from file, advances position)
↓
[Your processors here: filters, etc.]
↓
StreamWriteProcessor (writes to internal DynamicSoundStream)
↓
DynamicSoundStream (accumulates output)
The key difference from your simple load/play flow:
FileBridgeBuffer represents: “Read from this file, process through this chain, accumulate result in this stream, for exactly this many cycles.”
This gives you timing control. You don’t play the whole file. You process exactly N cycles, then stop.
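Stripped of the class machinery, the bridge boils down to a bounded loop. This conceptual sketch passes the three roles in as callables; they stand in for the adapter, your processors, and the stream writer, and are not the actual FileBridgeBuffer interface:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Conceptual only: read -> process -> accumulate, for exactly N cycles.
void run_bridge(std::size_t num_cycles, std::size_t buffer_size,
                std::function<std::vector<double>(std::size_t)> read_chunk,
                std::function<void(std::vector<double>&)> process,
                std::function<void(const std::vector<double>&)> write_to_stream) {
    for (std::size_t cycle = 0; cycle < num_cycles; ++cycle) {
        auto chunk = read_chunk(buffer_size); // ContainerToBufferAdapter step
        process(chunk);                       // your processors (filters, etc.)
        write_to_stream(chunk);               // StreamWriteProcessor step
    }
    // The stream now holds num_cycles * buffer_size samples.
}
```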
The architecture separates concerns:
Each layer is independent:
This is why FileBridgeBuffer is powerful: it composes these layers without forcing you to wire them manually.
And it’s why understanding this section matters: the next tutorial (BufferOperation) builds on top of this composition, adding temporal coordination and pipeline semantics.
A cycle is one complete buffer processing round:
At 48 kHz, one cycle is 512 ÷ 48000 ≈ 10.67 milliseconds of audio.
When you say “process this file for 20 cycles,” you mean:
Timing control is expressed in cycles, not time. This is intentional:
FileBridgeBuffer lets you say: “Process this file for exactly N cycles,” then accumulate the result in a stream.
This is the foundation for everything BufferOperation does—it extends this cycle-based thinking to composition and coordination.
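Translating between cycles and wall-clock time is simple arithmetic. A small helper, assuming the 512-sample buffers and 48 kHz rate used throughout this tutorial:

```cpp
#include <cmath>
#include <cstdint>

// How many cycles cover a given duration? (assumes fixed buffer size and rate)
uint64_t cycles_for_seconds(double seconds, double sample_rate = 48000.0,
                            uint64_t buffer_size = 512) {
    return static_cast<uint64_t>(std::ceil(seconds * sample_rate / buffer_size));
}
// Example: 20 cycles of 512 samples at 48 kHz is 20 * 512 / 48000 ≈ 0.213 s of audio.
```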
At this point, understand:
DynamicSoundStream: A container that grows dynamically, can operate in circular mode, designed to accumulate data from processors
StreamWriteProcessor: The processor that writes buffer data sequentially to a DynamicSoundStream
FileBridgeBuffer: A buffer that creates a chain (reader → your processors → writer), and lets you control how many buffer cycles run
These three concepts enable timing control. You’re no longer at the mercy of real-time callbacks. You can process exactly N cycles, accumulate results, and move on.
This is intentional. The concepts here are essential, and they expose the architecture behind everything that follows. It is also a hint that modal output is not the only use case for MayaFlux.
The next tutorial introduces BufferOperation, which wraps these concepts into high-level, composable patterns:
- BufferOperation::capture_file() - wrap FileBridgeBuffer, accumulate N cycles, return the stream
- BufferOperation::file_to_stream() - connect file reading to stream writing, with cycle control
- BufferOperation::route_to_container() - send processor output to a stream

Once you understand FileBridgeBuffer, DynamicSoundStream, and cycle-based timing, BufferOperation will feel natural. It’s just syntactic sugar on top of this architecture.
For now: internalize the architecture. The next section shows how to use it.
This is the mental model for everything that follows. Pipelines, capture, routing—they all build on this foundation.
Everything you’ve learned so far processes data in isolation: load a file, add a processor, output to hardware.
But what if you want to:
That’s what buffer pipelines do.
void compose() {
// Create an empty audio buffer (will hold captured data)
auto capture_buffer = vega.AudioBuffer()[1] | Audio;
// Create a pipeline
auto pipeline = MayaFlux::create_buffer_pipeline();
// Set strategy to streaming (process as data arrives)
pipeline->with_strategy(ExecutionStrategy::STREAMING);
// Declare the flow:
pipeline
>> BufferOperation::capture_file_from("path/to/audio/.wav", 0)
.for_cycles(1) // Essential for streaming
>> BufferOperation::route_to_buffer(capture_buffer) // Route captured data to our buffer
>> BufferOperation::modify_buffer(capture_buffer, [](std::shared_ptr<AudioBuffer> buffer) {
for (auto& sample : buffer->get_data()) {
sample *= MayaFlux::get_uniform_random(-0.5, 0.5); // random "texture" factor between -0.5 and 0.5
}
});
// Execute: runs continuously at buffer rate
pipeline->execute_buffer_rate();
}

Run this. You’ll hear the file play back with a noisy texture. But the file never played to speakers directly: it was captured, processed, accumulated, then routed.
A pipeline is a declarative sequence of buffer operations that compose to form a complete computational event.
Unlike the previous sections where you manually:
…a pipeline lets you describe the entire flow in one statement:
pipeline
>> Operation1
>> Operation2
>> Operation3;

The >> operator chains operations. The pipeline
executes them in order, handling all the machinery (cycles, buffer
management, timing) invisibly.
This is why you’ve been learning the foundation first: pipelines are just syntactic sugar over FileBridgeBuffer, DynamicSoundStream, StreamWriteProcessor, and buffer cycles.
Understanding the previous sections makes this section obvious. You’re not learning new concepts—you’re composing concepts you already understand.
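The >> chaining itself is ordinary C++ operator overloading. A minimal sketch of the pattern (not MayaFlux’s actual pipeline implementation) records operations when chained and runs them on execute:

```cpp
#include <functional>
#include <vector>

// Minimal illustration of the >> chaining pattern, not the real pipeline class.
struct ToyPipeline {
    std::vector<std::function<void()>> operations;

    ToyPipeline& operator>>(std::function<void()> op) {
        operations.push_back(std::move(op)); // record, don't run yet (declarative)
        return *this;                        // returning *this enables chaining
    }

    void execute() {
        for (auto& op : operations) op();    // run in declared order
    }
};
// Usage: ToyPipeline p; p >> step1 >> step2; p.execute();
```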
BufferOperation is a toolkit. Common operations:
Each operation is a building block. Pipeline chains them together.
The full set of operations is the subject of its own tutorial. This section just shows the pattern.
The on_capture_processing Pattern

Notice in the example:
>> BufferOperation::modify([](auto& data, uint32_t cycle) {
// Called every cycle as data accumulates
for (auto& sample : data) {
sample *= 0.5;
}
})

The modify operation runs each cycle—meaning your callback is invoked on every chunk of data as it accumulates.
This is on_capture_processing: your custom logic runs as
data arrives, not automated by external managers.
Automatic mode simply expects the buffer manager to handle the processing of attached processors. On-demand mode expects users to provide callback timing logic.
For now: understand that pipelines let you hook custom logic into the capture/process/route flow.
Before pipelines, your workflow was:
With pipelines, your workflow is:
The key difference: determinism. You know exactly what will happen because you’ve declared the entire flow.
This is the foundation for everything beyond this tutorial:
All of it starts with this pattern: declare → execute → observe.
The full Buffer Pipelines tutorial is its own comprehensive guide. It covers:
This section is just the proof-of-concept: “Here’s what becomes possible when everything you’ve learned composes.”
The code above will run if you have a .wav file at "path/to/file.wav". If you want to experiment, use a real file path and run it.
But the main point is: understand what’s happening, not just make it work.
This is real composition. Not playback. Not presets. Declarative data transformation.
You’ve now seen the complete stack:
Each layer builds on the previous. None is magic. All are composable.
This is how MayaFlux thinks about computation: as layered, declarative, composable building blocks.
Pipelines are where that thinking becomes powerful. They’re not a special feature—they’re just the final layer of composition.
When you’re ready, the standalone “Buffer Pipelines” tutorial dives deep into:
For now: you’ve seen the teaser. Everything you’ve learned so far is the foundation for that depth.
You understand how information flows. Pipelines just let you declare that flow elegantly.