MayaFlux Tutorial: Sculpting Data Part I

In MayaFlux, data isn’t something you use; it’s something you shape. Sound, light, numbers: all share the same substrate. You don’t generate a waveform; you sculpt a pattern of information and let it move. These tutorials begin with the smallest gesture (loading a file) and expand until you can construct entire temporal architectures. The point is not playback. The point is agency over time.

Every example you run produces real sound, but the goal is not sound itself — the goal is to understand the movement of information.

Each section in this series introduces one idea:

Together, they form the foundation of digital composition — not in the musical sense, but in the computational one.

What you’ll do here:

Eventually, build declarative pipelines that describe complete computational events

What you won’t do here:

Everything here is real code: The same logic that runs inside the MayaFlux engine. You’ll read it, modify it, and run it directly.

Each step is designed to teach you how the system thinks, so that later, when you invent something new, you can do so fluently without waiting for someone else to provide the building blocks.

The Simplest First Step

Run this code. The file is loaded into memory.

// In your src/user_project.hpp compose() function:

void compose() {
    auto sound_container = vega.read_audio("path/to/your/file.wav");
}

Replace "path/to/your/file.wav" with an actual path to a .wav file.

Run the program. You’ll see console output showing what loaded:

✓ Loaded: path/to/your/file.wav
  Channels: 2
  Frames: 2304000
  Sample Rate: 48000 Hz

Nothing plays yet. That’s intentional—and important. The rest of this section shows you what just happened.

You have:

The file is loaded. Ready. Waiting.


Expansion 1: What Is a Container?

When you call vega.read_audio(), you’re not just reading bytes from disk and forgetting them. You’re creating a Container—a structure that holds:

The difference: A file is inert. A Container is active creative material. It knows its own shape. It can tell you about regions within itself. It can be queried, transformed, integrated with other Containers.

When vega.read_audio("file.wav") runs, MayaFlux:

  1. Creates a SoundFileReader and initializes FFmpeg
  2. Checks if the file is readable
  3. Resamples to your project’s sample rate (configurable)
  4. Converts to 64-bit depth (high precision)
  5. Deinterleaves the audio (separates channels into independent arrays—more efficient for processing)
  6. Creates a SoundFileContainer object
  7. Loads all the audio data into memory
  8. Configures a ContiguousAccessProcessor (the Container’s default processor, which knows how to feed data to buffers chunk-by-chunk)
  9. Returns the Container to you

The Container is now your interface to that audio data. It’s ready to be routed, processed, analyzed, transformed.
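For example, once the Container exists you can ask it about its own shape. A minimal sketch follows; get_num_channels() is the only accessor used elsewhere in this series, so treat any other query you might add as something to confirm in the headers:

// Ask the Container about itself (sketch; only get_num_channels() appears
// elsewhere in this series - confirm other accessors in the headers)
auto sound_container = vega.read_audio("path/to/your/file.wav");
uint32_t channels = sound_container->get_num_channels();
std::cout << "This Container holds " << channels << " channel(s)\n";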


Expansion 2: Memory, Ownership, and Smart Pointers

As you know, raw audio data can be large. MayaFlux allocates and manages it safely through smart pointers.

At a lower, more machine-oriented level (in programming parlance), the user is expected to manage memory manually: instantiate objects, bind them, handle transfers, and delete them when done. Any misalignment among these steps can cause crashes or undefined behavior. MayaFlux doesn’t expect you to handle these steps manually—unless you choose to.

MayaFlux uses smart pointers—a C++11 feature that automatically tracks how many parts of your program are using a Container. When the last reference disappears, the memory is freed automatically.

When you write:

auto sound_container = vega.read_audio("file.wav");

What’s actually happening is:

std::shared_ptr<MayaFlux::Kakshya::SoundFileContainer> sound_container =
    /* vega.read_audio() internally creates and returns a shared_ptr */;

You don’t see std::shared_ptr. You see auto. But MayaFlux is using it. This means:

This is why vega.read_audio() is safe. The complexity of memory management exists—it’s just not your problem.
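If you want to see the mechanism in isolation, here is a small standalone C++ sketch (no MayaFlux involved) showing how std::shared_ptr frees memory only when the last reference disappears:

#include <iostream>
#include <memory>
#include <vector>

int main() {
    std::shared_ptr<std::vector<double>> samples;
    {
        // First owner: stand-in for freshly loaded audio data
        auto loaded = std::make_shared<std::vector<double>>(2304000, 0.0);
        samples = loaded;                          // second owner
        std::cout << samples.use_count() << "\n";  // prints 2
    }                                              // 'loaded' goes out of scope
    std::cout << samples.use_count() << "\n";      // prints 1 - data still alive
    samples.reset();                               // last reference released; memory freed here
}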


Expansion 3: What is vega?

vega is a fluent interface—a convenience layer that takes MayaFlux’s power and hides the verbosity without hiding the machinery. Grappling with complexity generally yields expressive, and often well-reasoned, implementations. However, many find it hard to parse the wall of code that results from such grappling, partly because machine-level languages tend to prioritize other aspects of coding over user experience (UX).

Making complex logic less verbose can be a good way to encourage more people to explore.

If you didn’t have vega, loading a file would look like this:

// Without vega - explicit, showing every step
auto reader = std::make_unique<MayaFlux::IO::SoundFileReader>();
MayaFlux::IO::SoundFileReader::initialize_ffmpeg();

if (!reader->can_read("file.wav")) {
    std::cerr << "Cannot read file\n";
    return;
}

reader->set_target_sample_rate(MayaFlux::Config::get_sample_rate());
reader->set_target_bit_depth(64);
reader->set_audio_options(MayaFlux::IO::AudioReadOptions::DEINTERLEAVE);

MayaFlux::IO::FileReadOptions options = MayaFlux::IO::FileReadOptions::EXTRACT_METADATA;
if (!reader->open("file.wav", options)) {
    MF_ERROR(Journal::Component::API, Journal::Context::FileIO, "Failed to open file: {}", reader->get_last_error());
    return;
}

auto container = reader->create_container();
auto sound_container = std::dynamic_pointer_cast<Kakshya::SoundFileContainer>(container);

if (!reader->load_into_container(sound_container)) {
    MF_ERROR(Journal::Component::API, Journal::Context::Runtime, "Failed to load audio data: {}", reader->get_last_error());
    return;
}

auto processor = std::dynamic_pointer_cast<Kakshya::ContiguousAccessProcessor>(
    sound_container->get_default_processor());
if (processor) {
    std::vector<uint64_t> output_shape = {
        MayaFlux::Config::get_buffer_size(),
        sound_container->get_num_channels()
    };
    processor->set_output_size(output_shape);
    processor->set_auto_advance(true);
}

// Now you have sound_container

Depending on your exposure to programming, this can either feel complex or liberating. Lacking the facilities to be explicit about memory management or allocation can be limiting:

However, the above code snippet is verbose for something so simple.

vega says: “You just want to load a file? Say so.”

auto sound_container = vega.read_audio("file.wav");

Same machinery underneath. Same FFmpeg integration. Same resampling. Same deinterleaving. Same processor setup. Same safety.

What vega does:

What vega doesn’t do:

The short syntax is convenience. The long syntax is control. MayaFlux gives you both.

Use vega because you value fluency, not because you fear the machinery.


Expansion 4: The Container’s Processor

The Container you just created isn’t just a data holder. It has a default processor—a piece of machinery attached to it that knows how to feed data to buffers.

This processor (ContiguousAccessProcessor) does crucial work:

  1. Understands the memory layout - how the Container’s audio data is organized
  2. Knows the buffer size - how many samples to chunk at a time (typically 512 or 4096)
  3. Tracks position - where in the file you are (auto-advance means it moves forward each time data is requested)
  4. Deinterleaves access - gives channels separately (crucial for processing, as you can transform each channel independently)

When you later connect this Container to buffers (in the next section), the processor is what actually feeds the data—it’s the active mechanism.

vega.read_audio() configures this processor automatically.
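In rough terms, it repeats the configuration from the explicit snippet in Expansion 3: the output shape and the auto-advance behavior.

// What vega configures for you (mirrors the explicit example above)
std::vector<uint64_t> output_shape = {
    MayaFlux::Config::get_buffer_size(),        // samples per chunk
    sound_container->get_num_channels()         // one lane per channel
};
processor->set_output_size(output_shape);
processor->set_auto_advance(true);              // move forward each time data is requested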

This is why StreamContainers (which SoundFileContainer inherits from) are more than data—they’re active, with built-in logic for how they should be consumed.


Expansion 5: What .read_audio() Does NOT Do

This is important:

.read_audio() does NOT:

.read_audio() DOES:

The Container sits in memory, ready to be used. But “ready to be used” means you decide what happens next: process it, analyze it, route it to output or visual processing, feed it into a machine-learning pipeline, anything.

That’s the power of this design: loading is separate from routing. You can load a file and immediately send it to hardware, or spend the next 20 lines building a complex processing pipeline before ever playing it.
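Concretely, the two paths look like this. Both calls appear in the next sections of this series; read the snippet as a sketch of intent rather than a required order:

// Path A: load and send straight to hardware (fluent form, introduced later)
vega.read_audio("path/to/your/file.wav") | Audio;

// Path B: load now, build processing, route later
auto sound_container = vega.read_audio("path/to/your/file.wav");
// ... construct your processing here ...
auto buffers = MayaFlux::hook_sound_container_to_buffers(sound_container);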


In the next section, we’ll connect this Container to buffers and route it to your speakers. And you’ll see why this two-step design—load, then connect—is more powerful than one-step automatic playback.


Tutorial: Connect to Buffers

The Next Step

You have a Container loaded. Now you need to send it somewhere.

auto sound_container = vega.read_audio("path/to/file.wav");
auto buffers = MayaFlux::hook_sound_container_to_buffers(sound_container);

Run this code. Your file plays.

The Container + the hook call—together they form the path from disk to speakers. This section shows you what that connection does.


Expansion 1: What Are Buffers?

Click to expand: Understanding Buffers

A Buffer is a temporal accumulator—a space where data gathers until it’s ready to be released, then it resets and gathers again.

Buffers don’t store your entire file. They store chunks. At your project’s sample rate (48 kHz), a typical buffer might hold 512 or 4096 samples: a handful of milliseconds of audio.

Here’s why this matters:

Your audio interface (speakers, headphones) has a fixed callback rate. It says: “Give me 512 samples of audio, and do it every 10 milliseconds. Repeat forever until playback stops.”

Buffers are the industry-standard method of meeting this demand. Each buffer:

  1. Gathers - accumulates samples from your Container (via its processor)
  2. Holds - keeps those samples temporarily
  3. Releases - sends them to hardware
  4. Resets - becomes empty and ready for the next chunk

This cycle repeats thousands of times per minute. Buffers make that possible.

Without buffers, you’d have to manually manage these chunks yourself. With buffers, MayaFlux handles the cycle. Your Container’s processor feeds data into them. The buffers exhale it to your ears.
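To make the numbers concrete, here is a quick back-of-the-envelope calculation, assuming a 512-sample buffer at 48 kHz as in the figures above:

#include <iostream>

int main() {
    const double sample_rate = 48000.0;   // samples per second
    const double buffer_size = 512.0;     // samples per buffer cycle

    const double seconds_per_cycle = buffer_size / sample_rate;   // ~0.01067 s
    const double cycles_per_second = sample_rate / buffer_size;   // 93.75
    const double cycles_per_minute = cycles_per_second * 60.0;    // 5625

    std::cout << seconds_per_cycle * 1000.0 << " ms per cycle\n";
    std::cout << cycles_per_minute << " cycles per minute\n";
}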


Expansion 2: Why Per-Channel Buffers?

Click to expand: Stereo, Mono, and Channel Architecture

A stereo file has 2 channels. A multichannel file might have 4 or 8 channels. MayaFlux doesn’t merge them into one buffer.

Instead, it creates one buffer per channel.

Why? Because channels are independent processing domains. A stereo file’s left channel and right channel:

When you hook a stereo Container to buffers, MayaFlux creates two buffers: one for channel 0 and one for channel 1.

Each buffer reads only its own channel and owns its own processing chain.

This per-channel design is why you can later insert processing on a per-channel basis. Insert a filter on channel 0? The first channel gets filtered. Leave channel 1 alone? The second channel plays unprocessed. This flexibility is only possible because channels are architecturally separate at the buffer level.
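In practice, that choice looks like this. The calls used here are introduced in the "Buffers Own Chains" tutorial below, so read it as a preview:

// Preview: filter only channel 0, leave channel 1 untouched
auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});
auto fp0 = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);
// buffers[1] keeps its default chain and plays unprocessed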


Expansion 3: The Buffer Manager and Buffer Lifecycle

Click to expand: How Buffers Are Created and Managed

MayaFlux has a buffer manager—a central system that creates, tracks, and coordinates all buffers in your program.

When you call hook_sound_container_to_buffers(), here’s what happens:

auto buffer_manager = MayaFlux::get_buffer_manager();
uint32_t num_channels = container->get_num_channels();

for (uint32_t channel = 0; channel < num_channels; ++channel) {
    auto container_buffer = buffer_manager->create_audio_buffer<ContainerBuffer>(
        ProcessingToken::AUDIO_BACKEND,
        channel,
        container,
        channel);
    container_buffer->initialize();
}

Step by step:

  1. Get the buffer manager - a global system that owns all buffers
  2. Ask the Container: how many channels? - determines the loop count
  3. For each channel:
    • Create an audio buffer of type ContainerBuffer (a buffer that reads from a Container)
    • Tag it with AUDIO_BACKEND (more on this in Expansion 5)
    • Tell it which channel matrix the buffer should belong to
    • Tell it which channel in the Container to read from
    • Initialize it (prepare it for the callback cycle)

Now the buffer manager knows:

When the audio callback fires (every 10ms at 48 kHz), the buffer manager wakes up all its AUDIO_BACKEND buffers and says: “Time for the next chunk. Fill yourselves.”

Each buffer asks its Container’s processor: “Give me 512 samples from your channel.”

The processor pulls from the Container, advances its position, and hands back a chunk.

The buffer receives it and passes it to the audio interface.

Repeat forever.
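As a mental model (plain, standalone C++, not MayaFlux internals), the pull cycle behaves roughly like this:

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // Conceptual model of the pull cycle - not MayaFlux code.
    std::vector<double> container_data(4096, 0.5);   // pretend one channel of file data
    const std::size_t buffer_size = 512;
    std::size_t position = 0;                         // the processor's read position

    while (position + buffer_size <= container_data.size()) {
        // "Give me 512 samples from your channel"
        std::vector<double> chunk(container_data.begin() + position,
                                  container_data.begin() + position + buffer_size);
        position += buffer_size;                      // auto-advance
        // In MayaFlux, this chunk would now be handed to the audio interface.
        std::cout << "cycle filled with " << chunk.size() << " samples\n";
    }
}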


Expansion 4: ContainerBuffer—The Bridge

Click to expand: How Buffers Know Their Source

You created a ContainerBuffer, not just a generic Buffer. Why the distinction?

A Buffer is abstract—it’s a temporal accumulator. But abstract things don’t know where their data comes from.

A ContainerBuffer is specific—it’s a buffer that knows which Container it reads from and which channel within that Container it is responsible for.

When the callback fires, the ContainerBuffer doesn’t generate samples. It asks: “Container, give me the next 512 samples from your channel 0.”

The Container’s processor (remember ContiguousAccessProcessor from Section 1?) handles this: it pulls the samples from the Container, advances its position, and hands back the chunk.

The ContainerBuffer receives it. Done.

This is the architecture: Buffers don’t generate or transform. They request and relay. The Container’s processor does the work. The buffer coordinates timing with hardware.

Later, when you add processing nodes or attach processing chains, you’ll insert them between the Container’s output and the buffer’s input. The buffer still doesn’t transform—it still just relays. But what it relays will have been processed first.


Expansion 5: Processing Token—AUDIO_BACKEND

Click to expand: Tokens, Domains, and Hardware Destinations

In the buffer creation code:

auto container_buffer = buffer_manager->create_audio_buffer<ContainerBuffer>(
    ProcessingToken::AUDIO_BACKEND,
    channel,
    container,
    channel);

Notice ProcessingToken::AUDIO_BACKEND. This is a token—a semantic marker that tells MayaFlux which processing domain this buffer belongs to and which backend should consume it.

Tokens are how MayaFlux coordinates different processing domains without confusion. Later, you might have:

Each token tells the system what timing, synchronization, and backend this buffer belongs to.

For now: AUDIO_BACKEND means “this buffer is feeding your ears directly. It must keep real-time pace with the audio interface.”


Expansion 6: Accessing the Buffers

Click to expand: What You Can Do With the Buffers

When you call vega.read_audio() | Audio, MayaFlux creates the buffers internally. You can then retrieve those buffers and work with them directly:

auto sound_container = vega.read_audio("path/to/file.wav");
auto buffers = MayaFlux::get_last_created_container_buffers();

// Now you have the buffers as a vector:
// buffers[0] → channel 0
// buffers[1] → channel 1 (if stereo)
// etc.

Why is this useful? Because buffers own processing chains. And processing chains are where you’ll insert processes, analysis, transformations - everything that turns passive playback into active processing.

Each buffer has a method:

auto chain = buffers[0]->get_processing_chain();

This gives you access to the chain that currently handles that buffer’s data. Right now, the chain just reads from the Container and writes to the hardware. But you can modify that chain.

This is the foundation for Section 3. You load a file, get the buffers, access their chains, and inject processing into those chains.


The Fluent vs. Explicit Comparison

Fluent (What happens behind the scenes)

vega.read_audio("path/to/file.wav") | Audio;

This single line does all of the above: it creates a Container, creates per-channel buffers, hooks them to the audio hardware, and starts playback. Nothing plays until the | Audio operator runs; that is the moment the connection to hardware happens.

Explicit (What’s actually happening)

auto sound_container = vega.read_audio("path/to/file.wav");
auto buffers = MayaFlux::get_last_created_container_buffers();
// File is loaded, buffers exist, but no connection to hardware yet
// Buffers have chains, but nothing is using them

// To actually play, you'd need to ensure they're registered
// (vega.read_audio() | Audio does this automatically)

Understanding the difference:


Try It

void compose() {
    vega.read_audio("path/to/your/file.wav") | Audio;

    // File plays
}

Replace "path/to/your/file.wav" with an actual path.

You have:

No code running during playback—just the callback cycle doing its work, thousands of times per minute.

In the next section, we’ll modify these buffers’ processing chains. We’ll insert a filter processor and hear how it changes the sound. This is where MayaFlux’s power truly shines—transforming passive playback into active, real-time audio processing.

Tutorial: Buffers Own Chains

The Simplest Path

You have buffers. You can modify what flows through them.

auto sound_container = vega.read_audio("path/to/file.wav") | Audio;
auto buffers = MayaFlux::get_last_created_container_buffers();

auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});
auto filter_processor = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);

Run this code. Your file plays with a low-pass filter applied.

The filter smooths the audio—reduces high frequencies. Listen to the difference.

That’s it. Four lines of code: load, get the buffers, create a filter, insert it. The rest of this section shows you what just happened.


Expansion 1: What Is vega.IIR()?

Click to expand: Creating Filter Nodes

vega.IIR() creates a filter node—a computation unit that processes audio samples one at a time.

An IIR filter (Infinite Impulse Response) is a mathematical operation that transforms samples using feedforward and feedback coefficients. The two parameters are the feedforward coefficients (applied to the current and previous input samples) and the feedback coefficients (applied to previous output samples).

You don’t need to understand the math. Just know: this creates a filter that smooths audio.

vega is the fluent interface—it absorbs the verbosity. Without it:

// Without vega - explicit
auto filter = std::make_shared<Nodes::Filters::IIR>(
    std::vector<double>{0.1, 0.2, 0.1},
    std::vector<double>{1.0, -0.6}
);

With vega:

auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});

Same filter. Same capabilities. Vega just hides the verbosity.
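If you do want to see the math, the standalone sketch below applies the kind of difference equation such a filter computes. It assumes the common convention that the first list holds feedforward coefficients and the second holds feedback coefficients with a leading 1.0; the MayaFlux headers are the authority on the exact convention:

#include <iostream>
#include <vector>

int main() {
    // Direct-form difference equation, assuming the common convention:
    // y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1]   (a0 == 1.0)
    const std::vector<double> b = {0.1, 0.2, 0.1};   // feedforward coefficients
    const std::vector<double> a = {1.0, -0.6};       // feedback coefficients

    const std::vector<double> x = {1.0, 0.0, 0.0, 0.0, 0.0, 0.0};  // an impulse
    double x1 = 0.0, x2 = 0.0, y1 = 0.0;             // filter state (previous samples)

    for (double xn : x) {
        const double yn = b[0]*xn + b[1]*x1 + b[2]*x2 - a[1]*y1;
        std::cout << yn << " ";                      // slowly decaying response = smoothing
        x2 = x1; x1 = xn; y1 = yn;
    }
    std::cout << "\n";
}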


Expansion 2: What Is MayaFlux::create_processor()?

Click to expand: Wrapping Nodes in Processors

A node (like vega.IIR()) is a computational unit—it processes one sample at a time.

A processor is a buffer-aware wrapper around that node. It knows:

create_processor() wraps your filter node in a processor and attaches it to a buffer’s processing chain.

auto filter_processor = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);

What this does:

  1. Takes your filter node
  2. Creates a FilterProcessor that knows how to apply that node to buffer data
  3. Adds the processor to buffers[0]’s processing chain (implicit—this happens automatically)
  4. Returns the processor so you can reference it later if needed

The buffer now has this processor in its chain. Each cycle, the buffer runs the processor, which applies the filter to all samples in that cycle.


Expansion 3: What Is a Processing Chain?

Click to expand: How Buffers Execute Processors

Each buffer owns a processing chain—an ordered sequence of processors that transform data.

Your buffer’s chain started with just its default processor: a ContainerToBufferAdapter that reads from the Container.

When create_processor() adds your FilterProcessor, the chain becomes:

  1. Default processor: ContainerToBufferAdapter (reads from Container)
  2. FilterProcessor (applies your filter) ← You just added this
  3. Other processors you might add later (e.g., Writer to send to hardware)

Each cycle, the buffer runs this chain in order.

Data flows: Container → [filtered] → Speakers

The chain is ordered. Processors run in sequence. Output of one becomes input to next.
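As a language-level analogy (plain C++, not MayaFlux code), an ordered chain is simply a sequence of functions applied to the same block of samples, in order:

#include <functional>
#include <iostream>
#include <vector>

int main() {
    using Block = std::vector<double>;
    using Processor = std::function<void(Block&)>;

    // A "chain": each processor transforms the block in place, in order.
    std::vector<Processor> chain = {
        [](Block& data) { for (auto& s : data) s *= 0.5; },      // e.g. attenuate
        [](Block& data) { for (auto& s : data) s = s * s * s; }  // e.g. soft shaping
    };

    Block block(512, 0.8);            // one buffer cycle's worth of samples
    for (auto& processor : chain)     // output of one step is the input to the next
        processor(block);

    std::cout << block[0] << "\n";    // 0.8 -> 0.4 -> 0.064
}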


Expansion 4: Adding Processor to Another Channel (Optional)

Click to expand: Multi-Channel Processing

Your stereo file has two channels. Right now, only channel 0 is filtered.

You can add the same processor to channel 1:

auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});
auto fp0 = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);
auto fp1 = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[1], filter);

Or more simply, add the existing processor to another buffer:

auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});
auto filter_processor = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);
MayaFlux::add_processor(filter_processor, buffers[1], MayaFlux::Buffers::ProcessingToken::AUDIO_BACKEND);

add_processor() adds an existing processor to a buffer’s chain.

create_processor() creates a processor and adds it implicitly.

Both do the same underlying thing—they add the processor to the buffer’s chain. create_processor() just combines creation and addition in one call.

Now both channels are filtered by the same IIR node. Different channel buffers can share the same processor or have independent ones—your choice.


Expansion 5: What Happens Inside

Click to expand: The Machinery Under the Hood

When you call:

auto filter_processor = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);

MayaFlux does this:

// 1. Create a new FilterProcessor wrapping your filter node
auto processor = std::make_shared<MayaFlux::Buffers::FilterProcessor>(filter);

// 2. Get the buffer's processing chain
auto chain = buffers[0]->get_processing_chain();

// 3. Add the processor to the chain
chain->add_processor(processor, buffers[0]);

// 4. Return the processor
return processor;

When add_processor() is called separately:

MayaFlux::add_processor(filter_processor, buffers[1], MayaFlux::Buffers::ProcessingToken::AUDIO_BACKEND);

MayaFlux does this:

// Get the buffer manager
auto buffer_manager = MayaFlux::get_buffer_manager();

// Get channel 1's buffer for AUDIO_BACKEND token
auto buffer = buffer_manager->get_buffer(ProcessingToken::AUDIO_BACKEND, 1);

// Get its processing chain
auto chain = buffer->get_processing_chain();

// Add the processor
chain->add_processor(processor, buffer);

The machinery is consistent: processors are added to chains, chains are owned by buffers, buffers execute chains each cycle.

You don’t need to write this explicitly—the convenience functions handle it. But this is what’s happening.


Expansion 6: Processors Are Reusable Building Blocks

Click to expand: Composition and Flexibility

A processor is a building block. Once created, it can be:

Example: two channels with the same filter:

auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});
auto processor = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);
MayaFlux::add_processor(processor, buffers[1]);

Example: stacking processors (requires understanding of chains, shown later):

auto filter1 = vega.IIR(...);
auto fp1 = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter1);

auto filter2 = vega.IIR(...); // Different filter
auto fp2 = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter2);

Now buffers[0] has two FilterProcessors in its chain. Data flows through both sequentially.

Processors are the creative atoms of MayaFlux. Everything builds from them.


Try It

void compose() {
    auto sound_container = vega.read_audio("path/to/your/file.wav") | Audio;
    auto buffers = MayaFlux::get_last_created_container_buffers();

    auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});
    auto filter_processor = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);
}

Replace "path/to/your/file.wav" with an actual path.

Run the program. Listen. The audio is filtered.

Now try modifying the coefficients:

auto filter = vega.IIR({0.05, 0.3, 0.05}, {1.0, -0.8});

Listen again. Different sound. You’re sculpting the filter response.

You’ve just inserted a processor into a buffer’s chain and heard the result. That’s the foundation for everything that follows.

In the next section, we’ll interrupt this passive playback. We’ll insert a processing node between the Container and the buffers. And you’ll see why this architecture—buffers as relays, not generators—enables powerful real-time transformation.

For a comprehensive tutorial on buffer processors and related concepts, visit the Buffer Processors Tutorial.

Tutorial: Timing, Streams, and Bridges

The Current Continuous Flow

What you’ve done so far is simple and powerful:

auto sound_container = vega.read_audio("path/to/file.wav") | Audio;
auto buffers = MayaFlux::get_last_created_container_buffers();
auto filter = vega.IIR({0.1, 0.2, 0.1}, {1.0, -0.6});
auto fp = MayaFlux::create_processor<MayaFlux::Buffers::FilterProcessor>(buffers[0], filter);

This flow is designed for full-file playback:

Clean. Direct. No timing control.

That’s intentional.

There are other features (region looping, seeking, playback control), but they don’t fit this tutorial. These sections are purely about one flow: file → buffers → output, uninterrupted.


Where We’re Going

Here’s what the next section enables:

auto pipeline = MayaFlux::create_buffer_pipeline();
pipeline->with_strategy(ExecutionStrategy::PHASED); // Execute each phase fully before next op

pipeline
    >> BufferOperation::capture_file_from("path/to/file.wav", 0)  // From channel 0
    .for_cycles(20)  // Process 20 buffer cycles
    >> BufferOperation::transform([](auto& data, uint32_t cycle) {
        // Data now has 20 buffer cycles of audio from the file
        // i.e. 20 x 512 samples if the buffer size is 512
        auto zero_crossings = MayaFlux::zero_crossings(data);

        std::cout << "Zero crossings at indices:\n";
        for (const auto& sample : zero_crossings) {
            std::cout << sample << "\t";
        }
        std::cout << "\n";

        return data;
    });

pipeline->execute_buffer_rate();  // Schedule and run

This processes exactly 20 buffer cycles from the file (with any process you want), accumulates the result in a stream, and executes the pipeline.

The file isn’t playing to speakers. It’s being captured, processed, and stored in a stream. Timing is under your control: you decide how many buffer cycles to process. This section builds the foundation for buffer pipelines; understanding the architecture below explains why the code snippet works.

In this section, we introduce the machinery for everything beyond that simple flow. We’re not writing code that produces audio yet. We’re establishing the architecture that enables timing control, streaming, capture, and composition.


Expansion 1: The Architecture of Containers

Click to expand: Why We Need Something Else

A Container (like SoundFileContainer) holds all data upfront:

This works perfectly for “play the whole file”. It also works for as-yet-unexplored controls over the same timeline, such as looping, seeking to positions, and jumping to regions.

But it doesn’t work for:

For these use cases, you need a different data structure.


Expansion 2: Enter DynamicSoundStream

Click to expand: A Container That Grows

A DynamicSoundStream is a child class of SignalSourceContainer, much like the SoundFileContainer we have been using. It has the same interface as SoundFileContainer (channels, frames, metadata, regions), but different semantics:

Think of it as:

DynamicSoundStream has powerful capabilities:

You don’t create DynamicSoundStream directly (yet). It’s managed implicitly by other systems. But understanding what it is explains everything that follows.


Expansion 3: StreamWriteProcessor

Click to expand: Writing Buffer Data to Streams

You’ve seen BufferProcessors like FilterProcessor that transform data in place.

But StreamWriteProcessor is more general. It can write buffer data to any DynamicSoundStream, not just locally to the buffers it is attached to (and the data can even come from hardware, via the as-yet-unexplored InputListenerProcessor).

When a processor runs each buffer cycle:

  1. Buffer gets filled with 512 samples (from Container or elsewhere)
  2. Processors run (your FilterProcessor, for example)
  3. StreamWriteProcessor writes the (now-processed) samples to a DynamicSoundStream

The DynamicSoundStream accumulates these writes:

After N cycles, the DynamicSoundStream contains N × 512 samples of processed audio.

This is how you capture buffer data: not by sampling the buffer once, but by continuously writing it to a stream through a processor.

StreamWriteProcessor is the bridge between buffers (which live in real-time) and streams (which accumulate).


Expansion 4: FileBridgeBuffer—Controlled Flow

Click to expand: The Reading-Writing Bridge

FileBridgeBuffer is a specialized buffer that orchestrates reading from a file and writing to a stream, with timing control through buffer cycles.

Internally, FileBridgeBuffer creates a processing chain:

SoundFileContainer (source file)
    ↓
ContainerToBufferAdapter (reads from file, advances position)
    ↓
[Your processors here: filters, etc.]
    ↓
StreamWriteProcessor (writes to internal DynamicSoundStream)
    ↓
DynamicSoundStream (accumulates output)

The key difference from your simple load/play flow:

FileBridgeBuffer represents: “Read from this file, process through this chain, accumulate result in this stream, for exactly this many cycles.”

This gives you timing control. You don’t play the whole file. You process exactly N cycles, then stop.


Expansion 5: Why This Architecture?

Click to expand: Decoupling Reading, Processing, and Output

The architecture separates concerns:

Each layer is independent:

This is why FileBridgeBuffer is powerful: it composes these layers without forcing you to wire them manually.

And it’s why understanding this section matters: the next tutorial (BufferOperation) builds on top of this composition, adding temporal coordination and pipeline semantics.


Expansion 6: From File to Cycle

Click to expand: “Cycles” as Timing Units

A cycle is one complete buffer processing round: the buffer fills with one chunk of samples, its processors run, and the result is handed onward.

At 48 kHz, one cycle is 512 ÷ 48000 ≈ 10.67 milliseconds of audio.

When you say “process this file for 20 cycles,” you mean: read 20 consecutive chunks from the file (20 × 512 samples), process each one, then stop.

Timing control is expressed in cycles, not time. This is intentional:

FileBridgeBuffer lets you say: “Process this file for exactly N cycles,” then accumulate the result in a stream.

This is the foundation for everything BufferOperation does—it extends this cycle-based thinking to composition and coordination.
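A quick conversion sketch, assuming 512-sample buffers at 48 kHz as elsewhere in this series:

#include <iostream>

int main() {
    const double sample_rate = 48000.0;
    const double buffer_size = 512.0;
    const int cycles = 20;

    const double samples = cycles * buffer_size;     // 10240 samples
    const double seconds = samples / sample_rate;    // ~0.213 seconds of audio
    std::cout << samples << " samples, about " << seconds * 1000.0 << " ms\n";
}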


The Three Key Concepts

At this point, understand:

  1. DynamicSoundStream: A container that grows dynamically, can operate in circular mode, designed to accumulate data from processors

  2. StreamWriteProcessor: The processor that writes buffer data sequentially to a DynamicSoundStream

  3. FileBridgeBuffer: A buffer that creates a chain (reader → your processors → writer), and lets you control how many buffer cycles run

These three concepts enable timing control. You’re no longer at the mercy of real-time callbacks. You can process exactly N cycles, accumulate results, and move on.


Why This Section Has No Audio Code

This is intentional. The concepts here are essential; they expose the architecture behind everything that follows. They also hint that audible output is not the only use case for MayaFlux.

The next tutorial introduces BufferOperation, which wraps these concepts into high-level, composable patterns:

Once you understand FileBridgeBuffer, DynamicSoundStream, and cycle-based timing, BufferOperation will feel natural. It’s just syntactic sugar on top of this architecture.

For now: internalize the architecture. The next section shows how to use it.


What You Should Internalize

This is the mental model for everything that follows. Pipelines, capture, routing—they all build on this foundation.

Tutorial: Buffer Pipelines (Teaser)

The Next Level

Everything you’ve learned so far processes data in isolation: load a file, add a processor, output to hardware.

But what if you want to:

That’s what buffer pipelines do.


A Taste

void compose() {
    // Create an empty audio buffer (will hold captured data)
    auto capture_buffer = vega.AudioBuffer()[1] | Audio;
    // Create a pipeline
    auto pipeline = MayaFlux::create_buffer_pipeline();
    // Set strategy to streaming (process as data arrives)
    pipeline->with_strategy(ExecutionStrategy::STREAMING);

    // Declare the flow:
    pipeline
        >> BufferOperation::capture_file_from("path/to/audio/.wav", 0)
               .for_cycles(1) // Essential for streaming
        >> BufferOperation::route_to_buffer(capture_buffer) // Route captured data to our buffer
        >> BufferOperation::modify_buffer(capture_buffer, [](std::shared_ptr<AudioBuffer> buffer) {
            for (auto& sample : buffer->get_data()) {
                sample *= MayaFlux::get_uniform_random(-0.5, 0.5); // random "texture" between -0.5 and 0.5
            }
        });

    // Execute: runs continuously at buffer rate
    pipeline->execute_buffer_rate();
}

Run this. You’ll hear the file play back with a noisy texture. But the file never played to speakers directly: it was captured, processed, accumulated, then routed.


Expansion 1: What Is a Pipeline?

Click to expand: Declarative Processing Chains

A pipeline is a declarative sequence of buffer operations that compose to form a complete computational event.

Unlike the previous sections where you manually:

  1. Load a file
  2. Get buffers
  3. Create processors
  4. Add to chains

…a pipeline lets you describe the entire flow in one statement:

pipeline
    >> Operation1
    >> Operation2
    >> Operation3;

The >> operator chains operations. The pipeline executes them in order, handling all the machinery (cycles, buffer management, timing) invisibly.

This is why you’ve been learning the foundation first: pipelines are just syntactic sugar over FileBridgeBuffer, DynamicSoundStream, StreamWriteProcessor, and buffer cycles.

Understanding the previous sections makes this section obvious. You’re not learning new concepts—you’re composing concepts you already understand.


Expansion 2: BufferOperation Types

Click to expand: What Operations Exist

BufferOperation is a toolkit. Common operations include capture_file_from, transform, route_to_buffer, and modify_buffer, each of which appears in the snippets in this series.

Each operation is a building block. Pipeline chains them together.

The full set of operations is the subject of its own tutorial. This section just shows the pattern.


Expansion 3: The on_capture_processing Pattern

Click to expand: Processing Each Cycle

Notice in the example:

>> BufferOperation::modify([](auto& data, uint32_t cycle) {
    // Called every cycle as data accumulates
    for (auto& sample : data) {
        sample *= 0.5;
    }
})

The modify operation runs each cycle—meaning:

This is on_capture_processing: your custom logic runs as data arrives, rather than being driven by external managers.

Automatic mode simply expects the buffer manager to handle the processing of attached processors. On-demand mode expects users to provide the callback timing logic.

For now: understand that pipelines let you hook custom logic into the capture/process/route flow.


Expansion 4: Why This Matters

Click to expand: Composability and Control

Before pipelines, your workflow was:

  1. Load file (Container)
  2. Get buffers
  3. Add processors to buffers
  4. Play to hardware
  5. Everything was real-time

With pipelines, your workflow is:

  1. Declare capture (file, cycle count)
  2. Declare processing (what to do each cycle)
  3. Declare output (where result goes)
  4. Execute (all at once, deterministic, no real-time constraints)

The key difference: determinism. You know exactly what will happen because you’ve declared the entire flow.

This is the foundation for everything beyond this tutorial:

All of it starts with this pattern: declare → execute → observe.


What Happens Next

The full Buffer Pipelines tutorial is its own comprehensive guide. It covers:

This section is just the proof-of-concept: “Here’s what becomes possible when everything you’ve learned composes.”


Try It (Optional)

The code above will run if you have:

If you want to experiment, use a real file path and run it.

But the main point is: understand what’s happening, not just make it work.

This is real composition. Not playback. Not presets. Declarative data transformation.


The Philosophy

You’ve now seen the complete stack:

  1. Containers hold data (load files)
  2. Buffers coordinate cycles (chunk processing)
  3. Processors transform data (effects, analysis)
  4. Chains order processors (sequence operations)
  5. Pipelines compose chains (declare complete flows)

Each layer builds on the previous. None is magic. All are composable.

This is how MayaFlux thinks about computation: as layered, declarative, composable building blocks.

Pipelines are where that thinking becomes powerful. They’re not a special feature—they’re just the final layer of composition.


Next: The Full Pipeline Tutorial

When you’re ready, the standalone “Buffer Pipelines” tutorial dives deep into:

For now: you’ve seen the teaser. Everything you’ve learned so far is the foundation for that depth.

You understand how information flows. Pipelines just let you declare that flow elegantly.