MayaFlux Technical Documentation

ADC25 Virtual Poster Presentation - Technical Deep Dive | Independent Developer | Early-Stage Architectural Research

Ranjith Hegde

October 2025

Copyright © 2025 Ranjith Hegde / MayaFlux Project
Licensed under GPL-3.0 | View License

Note: This document describes architectural patterns developed over 6 months of research. Implementation details are simplified for clarity. Full source code available at github.com/MayaFlux/MayaFlux (private during development).


Abstract

MayaFlux demonstrates C++20-enabled unified multimedia processing through complete architectural composability. The framework implements lock-free atomic synchronization, coroutine-based temporal coordination, grammar-driven computation, and N-dimensional data abstractions that treat audio, video/graphics, and arbitrary data streams as unified numerical transformations rather than separate domains constrained by analog metaphors.

This document presents the architectural foundations, implementation strategies, and paradigm shifts that enable truly digital-first creative computation.


Table of Contents

  1. The Digital-First Paradigm
  2. Core Architectural Foundations
  3. Lock-Free Processing Architecture
  4. Coroutine Temporal Coordination
  5. Clock Systems: Passive and Active Temporal Drivers
  6. ComputationGrammar: Declarative Operation Matching
  7. Yantra Pipeline: Mathematical and Temporal Transformations
  8. NDData: Unified Cross-Modal Processing
  9. Window Management as Coroutines
  10. Domain Composition and Processing Tokens
  11. Current Implementation Status
  12. Future Vision and Expansion

The Digital-First Paradigm

Moving Beyond Analog Metaphors

Traditional audio software treats digital processing as a simulation of analog hardware. This creates artificial constraints:

MayaFlux embraces true digital paradigms:

Core Philosophy: Data Transformation as Creative Medium

Rather than separating “programming” from “composing,” MayaFlux treats data transformation as the fundamental creative act. Mathematical relationships become creative decisions. Temporal coordination becomes compositional structure. Multi-dimensional data access becomes creative material selection.

This isn’t just about efficiency—it’s about enabling creative workflows that cannot exist in analog-inspired systems.


Core Architectural Foundations

MayaFlux is built on five interconnected paradigms:

1. Nodes: Unit-by-Unit Transformation Precision

Nodes provide single-sample transformation where mathematical relationships become creative decisions. Each node operates at unit precision with lock-free atomic registration.

Key characteristics:

2. Buffers: Temporal Gathering Spaces

Buffers accumulate individual moments into collective expressions. Unlike traditional buffers that “store” data, MayaFlux buffers are transient collectors that gather → release → await.

Key characteristics:

3. Coroutines: Time as Compositional Material

C++20 coroutines transform time into creative material, enabling complex temporal coordination impossible with traditional callbacks.

Key characteristics:

4. Containers: Multi-Dimensional Data Architecture

Containers organize data as compositional material through region-based access with metadata organization.

Key characteristics:

5. Compute Matrix: Declarative and Composable Computation Engine

The Compute Matrix allows sequencing and composing different ComputeOperations.

Key characteristics:


Lock-Free Processing Architecture

The Challenge

Real-time multimedia processing requires:

The Solution: Atomic State Guards

MayaFlux implements wait-free registration and atomic accumulation patterns:

RootNode Lock-Free Coordination

Each processing domain (per channel, per token) has a RootNode that acts as the central coordinator:

// RootNode provides lock-free node registration
class RootNode {
    std::atomic<bool> m_is_processing{false};
    std::vector<std::shared_ptr<Node>> m_Nodes;

    // Fixed pool of deferred registrations claimed via compare-exchange
    struct PendingOp {
        std::atomic<bool> active{false};
        std::shared_ptr<Node> node;
    };
    std::array<PendingOp, 32> m_pending_ops;  // pool size illustrative
    std::atomic<uint32_t> m_pending_count{0};

    bool register_node(std::shared_ptr<Node> node) {
        if (m_is_processing.load(std::memory_order_acquire)) {
            // Node already registered: just reactivate it
            if (m_Nodes.end() != std::ranges::find(m_Nodes, node)) {
                uint32_t state = node->m_state.load();
                if (state & Utils::NodeState::INACTIVE) {
                    atomic_remove_flag(node->m_state, Utils::NodeState::INACTIVE);
                    atomic_add_flag(node->m_state, Utils::NodeState::ACTIVE);
                }
                return true;
            }

            // Defer registration by claiming a pending slot without locking
            for (auto& pending_op : m_pending_ops) {
                bool expected = false;
                if (pending_op.active.compare_exchange_strong(
                        expected, true,
                        std::memory_order_acquire,
                        std::memory_order_relaxed)) {
                    pending_op.node = node;
                    atomic_remove_flag(node->m_state, Utils::NodeState::ACTIVE);
                    atomic_add_flag(node->m_state, Utils::NodeState::INACTIVE);
                    m_pending_count.fetch_add(1, std::memory_order_relaxed);
                    return true;
                }
            }

            // No free slot: wait for the current processing pass to finish
            while (m_is_processing.load(std::memory_order_acquire)) {
                m_is_processing.wait(true, std::memory_order_acquire);
            }
        }

        m_Nodes.push_back(node);
        atomic_add_flag(node->m_state, Utils::NodeState::ACTIVE);
        return true;
    }

    double process_sample() {
        if (!preprocess())
            return 0.;

        auto sample = 0.;

        for (auto& node : m_Nodes) {
            uint32_t state = node->m_state.load();
            if (!(state & Utils::NodeState::PROCESSED)) {
                auto generator = std::dynamic_pointer_cast<Nodes::Generator::Generator>(node);
                if (generator && generator->should_mock_process()) {
                    generator->process_sample(0.);
                } else {
                    sample += node->process_sample(0.);
                }
                atomic_add_flag(node->m_state, Utils::NodeState::PROCESSED);
            } else {
                sample += node->get_last_output();
            }
        }

        postprocess();

        return sample;
    }
};
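
The sketch above relies on small atomic flag helpers (atomic_add_flag, atomic_remove_flag). A minimal sketch of how they might be implemented, assuming the node state is a std::atomic<uint32_t>; the exact signatures are assumptions:

// Set flag bits with a single read-modify-write
inline void atomic_add_flag(std::atomic<uint32_t>& state, uint32_t flag) {
    state.fetch_or(flag, std::memory_order_acq_rel);
}

// Clear flag bits with a single read-modify-write
inline void atomic_remove_flag(std::atomic<uint32_t>& state, uint32_t flag) {
    state.fetch_and(~flag, std::memory_order_acq_rel);
}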

Key insights:

NodeGraphManager Coordination

The NodeGraphManager coordinates multiple RootNodes across channels and tokens:

// Each token/channel combination has its own RootNode
std::unordered_map<ProcessingToken, std::vector<std::unique_ptr<RootNode>>> m_root_nodes;

// Concurrent registration across domains
void register_node_to_channel(shared_ptr<Node> node, uint32_t channel) {
    auto token = node->get_processing_token();
    auto& root = m_root_nodes[token][channel];

    // Atomic registration - defers if currently processing
    while (!root->register_node(node)) {
        std::this_thread::yield();
    }

    // Update channel bitmask atomically (single read-modify-write)
    node->m_channel_mask.fetch_or(1u << channel, std::memory_order_acq_rel);
}

This pattern extends to buffers (BufferManager) and coroutines (TaskScheduler), providing consistent lock-free coordination across all processing systems.


Coroutine Temporal Coordination

Vruta: Scheduling Infrastructure

Vruta provides the foundational scheduling system for coroutine coordination:

TaskScheduler Architecture

class TaskScheduler {
    // Clock systems for different temporal domains
    std::unordered_map<ProcessingToken, std::unique_ptr<Clock>> m_clocks;

    // Task hierarchies per domain
    std::unordered_map<ProcessingToken, std::vector<shared_ptr<Routine>>> m_tasks;

    void add_task(shared_ptr<Routine> routine) {
        // Extract token from awaiter type
        auto token = routine->get_processing_token();

        // Register with appropriate clock
        m_tasks[token].push_back(routine);
        m_clocks[token]->register_listener(routine);
    }

    void process_token(ProcessingToken token, uint64_t units) {
        // Advance clock
        m_clocks[token]->tick(units);

        // Process suspended coroutines
        for (auto& task : m_tasks[token]) {
            if (task->should_resume()) {
                task->resume();
            }
        }
    }
};

Sample-Accurate Coordination

// Sample-accurate metro pattern
auto metro_routine = Kriya::metro(*scheduler, 0.25, []() {
    trigger_event();
});

// Internally uses SampleDelay awaiter:
struct SampleDelay {
    uint64_t samples_remaining;

    bool await_ready() { return samples_remaining == 0; }

    void await_suspend(std::coroutine_handle<> handle) {
        // Register with the SampleClock via the scheduler
        MayaFlux::get_scheduler()->register_delay(this, samples_remaining);
    }

    void await_resume() {}
};
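
Hand-written routines can await the same primitive directly. A minimal sketch, assuming the Routine coroutine type and the Utils::seconds_to_samples helper shown elsewhere in this document; trigger_event is a user-supplied callable as in the metro example above:

// Fire an action every 0.25 seconds by awaiting SampleDelay directly
Routine pulse_every_quarter_second() {
    while (true) {
        co_await SampleDelay{Utils::seconds_to_samples(0.25)};
        trigger_event();
    }
}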

Kriya: Creative Temporal Patterns

Kriya builds expressive temporal constructs on Vruta’s foundation:

EventChains for Sequential Composition

auto event_chain = MayaFlux::create_event_chain()
    .then([]() { start_process(); }, 0.0)
    .then([]() { modulate_filter(); }, 0.125)
    .then([]() { trigger_release(); }, 0.5);

event_chain.start();

// Internally creates a coroutine with timed suspensions
// (different awaiter types shown for illustration):
Routine execute_chain() {
    co_await SampleDelay(Utils::seconds_to_samples(0.125));
    start_process();

    co_await BufferDelay(4);
    modulate_filter();

    co_await FrameDelay(60);
    trigger_release();
}

Buffer Capture Mechanisms

void batch_accumulation_pipeline() {
    auto scheduler = MayaFlux::get_scheduler();
    auto buffer_manager = scheduler->get_buffer_manager();

    auto pipeline = Kriya::BufferPipeline::create(*scheduler, buffer_manager)
        ->with_strategy(Kriya::ExecutionStrategy::PHASED);

    *pipeline
        >> Kriya::BufferOperation::capture_from(audio_buffer)
            .for_cycles(20)
        >> Kriya::BufferOperation::transform([](Kakshya::DataVariant& data, uint32_t cycle) {
            const auto& accumulated = std::get<std::vector<double>>(data);
            return process_batch(accumulated);
        })
        >> Kriya::BufferOperation::route_to_container(output_stream);

    pipeline->execute_buffer_rate(10);
}

void streaming_buffer_modification() {
    auto scheduler = MayaFlux::get_scheduler();
    auto buffer_manager = scheduler->get_buffer_manager();

    auto pipeline = Kriya::BufferPipeline::create(*scheduler, buffer_manager)
        ->with_strategy(Kriya::ExecutionStrategy::STREAMING);

    *pipeline
        >> Kriya::BufferOperation::capture_from(audio_buffer)
            .for_cycles(1)
        >> Kriya::BufferOperation::modify_buffer(audio_buffer, [](std::shared_ptr<Buffers::AudioBuffer> buf) {
            auto& samples = buf->get_data();
            for (auto& s : samples) {
                s *= 0.9;  // Simple gain reduction
            }
        }).as_streaming();

    pipeline->execute_buffer_rate();
}

Timer Operations

// One-shot timer
Timer timer(*scheduler);
timer.schedule(2.0, []() {
    std::cout << "Two seconds elapsed" << std::endl;
});

// Node temporal control
NodeTimer node_timer(*scheduler, *graph_manager);
node_timer.play_for(sine_node, 2.0);  // Play for 2 seconds

// Timed action with start/stop
TimedAction action(*scheduler);
action.execute(
    []() { std::cout << "Starting" << std::endl; },
    []() { std::cout << "Ending" << std::endl; },
    3.0
);

Clock Systems: Passive and Active Temporal Drivers

MayaFlux implements two fundamental clock types with different coordination philosophies:

SampleClock: Passive Temporal Tracking

The SampleClock is passive—it doesn’t drive processing, but is updated by the audio backend and notifies listeners:

class SampleClock : public Clock {
    std::atomic<uint64_t> m_current_sample{0};
    uint64_t m_sample_rate;

public:
    // Called by audio backend during callback
    void tick(uint64_t samples) {
        m_current_sample.fetch_add(samples, std::memory_order_release);
        notify_listeners(samples);
    }

    uint64_t current_position() const {
        return m_current_sample.load(std::memory_order_acquire);
    }

    double current_time() const {
        return static_cast<double>(m_current_sample.load(std::memory_order_acquire)) / m_sample_rate;
    }
};

Key characteristics:

Usage pattern:

// Audio backend updates SampleClock
void audio_callback(float* buffer, uint32_t frames) {
    // Process nodes/buffers
    process_audio_graph(buffer, frames);

    // Update temporal state
    sample_clock->tick(frames);

    // Coroutines suspended on SampleDelay are notified
}

FrameClock: Active Temporal Driver

The FrameClock is active—it drives the GPU/Vulkan processing thread at target FPS:

class FrameClock : public Clock {
    std::atomic<uint64_t> m_current_frame{0};
    uint32_t m_target_fps;
    std::chrono::steady_clock::time_point m_next_frame_time;
    std::chrono::nanoseconds m_frame_duration;

public:
    // Drives graphics rendering loop
    void tick(uint64_t forced_frames = 0) {
        auto now = std::chrono::steady_clock::now();

        uint64_t frames_to_advance = forced_frames > 0 ? forced_frames : calculate_elapsed_frames(now);

        if (frames_to_advance > 0) {
            m_current_frame.fetch_add(frames_to_advance, std::memory_order_release);
            update_fps_measurement(now);
            m_last_tick_time = now;
            m_next_frame_time = now + m_frame_duration;
            notify_listeners(frames_to_advance);
        }
    }

    std::chrono::nanoseconds time_until_next_frame() const {
        auto now = std::chrono::steady_clock::now();
        auto until_next = m_next_frame_time - now;

        if (until_next.count() < 0) {
            return std::chrono::nanoseconds(0);
        }

        return std::chrono::duration_cast<std::chrono::nanoseconds>(until_next);
    }

    bool is_frame_late() const {
        return std::chrono::steady_clock::now() > m_next_frame_time;
    }
};

Key characteristics:

Usage pattern:

// Graphics thread driven by FrameClock
void graphics_loop() {
    while (running) {
        frame_clock->tick();

        // Process visual nodes
        process_visual_graph();

        // Render frame
        render_to_swapchain();

        // Sleep until next frame
        auto sleep_duration = frame_clock->time_until_next_frame();
        std::this_thread::sleep_for(sleep_duration);
    }
}

Clock Coordination Philosophy

This dual-clock architecture reflects a fundamental design insight:

By implementing clocks with different ownership models, MayaFlux enables natural coordination between these temporal domains while maintaining their distinct characteristics.

Coroutines registered to SAMPLE_ACCURATE tokens listen to SampleClock updates, while FRAME_ACCURATE coroutines listen to FrameClock. This allows audio-visual synchronization through coordinated temporal notification without forcing one domain to match the other’s timing model.
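
As an illustration, a single routine registered under a multi-rate token could interleave awaiters from both clocks. A minimal sketch, assuming the Routine type and the SampleDelay/FrameDelay awaiters shown earlier; trigger_audio_accent and flash_visual_marker are hypothetical user callbacks:

Routine audio_visual_pulse() {
    while (true) {
        // Suspend until the SampleClock has advanced half a second of samples
        co_await SampleDelay{Utils::seconds_to_samples(0.5)};
        trigger_audio_accent();

        // Then suspend until the FrameClock reports the next rendered frame
        co_await FrameDelay{1};
        flash_visual_marker();
    }
}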


ComputationGrammar: Declarative Operation Matching

The Problem

Traditional DSP frameworks require explicit operation selection:

// Traditional approach - manual operation selection
if (is_audio_data(input)) {
    apply_fft(input);
} else if (is_spectral_data(input)) {
    apply_ifft(input);
}

This becomes unwieldy for complex pipelines with multiple data modalities, contexts, and transformation requirements.

The Solution: Rule-Based Computation

ComputationGrammar enables declarative operation matching based on input characteristics:

class ComputationGrammar {
    struct Rule {
        std::string name;
        UniversalMatcher matcher;
        ExecutionFunction executor;
        ComputationContext context;
        int priority;
    };

    std::vector<Rule> m_rules;

public:
    // Define rule declaratively
    RuleBuilder create_rule(const std::string& name) {
        return RuleBuilder(*this, name);
    }

    // Execute matching rule
    std::any execute(const std::any& input, const ExecutionContext& ctx) {
        for (auto& rule : m_rules) {
            if (rule.matcher.matches(input, ctx) && rule.context == ctx.context) {
                return rule.executor(input, ctx);
            }
        }
        throw std::runtime_error("No matching rule found");
    }
};

Rule Definition Examples

Simple Type-Based Matching

grammar.create_rule("normalize_audio")
    .with_context(ComputationContext::MATHEMATICAL)
    .with_priority(100)
    .matches_type<std::vector<double>>()
    .executes([](const std::any& input, const ExecutionContext& ctx) {
        auto data = std::any_cast<std::vector<double>>(input);
        double max_val = *std::max_element(data.begin(), data.end());

        for (auto& sample : data) {
            sample /= max_val;
        }

        return data;
    })
    .build();

Complex Matcher Combinations

auto spectral_matcher = UniversalMatcher::combine_and({
    UniversalMatcher::create_type_matcher<std::vector<DataVariant>>(),
    UniversalMatcher::create_context_matcher(ComputationContext::SPECTRAL),
    UniversalMatcher::create_parameter_matcher("frequency_range", "audio")
});

grammar.create_rule("spectral_filter")
    .matches_custom(spectral_matcher)
    .executes([](const std::any& input, const ExecutionContext& ctx) {
        // Apply frequency-domain filtering
        auto data = std::any_cast<std::vector<DataVariant>>(input);
        return apply_spectral_filter(data, ctx);
    })
    .build();

Automatic Transformer Integration

// Integrate existing transformers as grammar rules
grammar.add_operation_rule<MathematicalTransformer<>>(
    "auto_normalize",
    ComputationContext::MATHEMATICAL,
    UniversalMatcher::create_type_matcher<std::vector<DataVariant>>(),
    {{"operation", "normalize"}, {"target_peak", 1.0}},
    75  // priority
);

Adaptive Pipeline Construction

ComputationGrammar enables context-aware operation chains:

auto pipeline = ComputationPipeline(grammar);

// Pipeline automatically selects operations based on input
auto result = pipeline
    .add_input(audio_data)
    .with_context(ComputationContext::TEMPORAL)
    .with_parameter("window_size", 1024)
    .execute();

The grammar system matches rules in priority order, enabling hierarchical decision-making and exception handling patterns.
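
For example, a low-priority catch-all rule can act as the fallback branch of that hierarchy. A minimal sketch following the rule-builder calls shown above; the pass-through behaviour is an assumption:

grammar.create_rule("passthrough_fallback")
    .with_context(ComputationContext::MATHEMATICAL)
    .with_priority(0)  // consulted only after every higher-priority rule fails to match
    .matches_type<std::vector<double>>()
    .executes([](const std::any& input, const ExecutionContext& ctx) {
        return input;  // leave the data untouched
    })
    .build();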


Yantra Pipeline: Mathematical and Temporal Transformations

Yantra provides the transformation infrastructure that works with ComputationGrammar. While the specific algorithms are still being developed, the architecture is solid and enables declarative composition.

Transformer Hierarchy

// Base transformer interface
template <ComputeData InputType, ComputeData OutputType>
class UniversalTransformer {
public:
    virtual TransformationType get_transformation_type() const = 0;
    virtual IO<OutputType> apply_operation(const IO<InputType>& input) = 0;

    // Chainable composition
    template <typename NextTransformer>
    auto chain(NextTransformer&& next) {
        return ComposedTransformer(*this, std::forward<NextTransformer>(next));
    }
};

// Concrete implementations
enum class TransformationType {
    MATHEMATICAL,  // Normalization, scaling, polynomial
    TEMPORAL,      // Reversal, time-stretch, delay
    SPECTRAL,      // FFT, filtering, phase manipulation
    STRUCTURAL     // Reshaping, dimensional transforms
};

Mathematical Transformations

class MathematicalTransformer : public UniversalTransformer<...> {
    enum class MathOperation {
        NORMALIZE, SCALE, CLAMP, POLYNOMIAL,
        INTERPOLATE, SMOOTH
    };

    IO<OutputType> apply_operation(const IO<InputType>& input) override {
        switch (m_operation) {
            case MathOperation::NORMALIZE:
                return normalize_data(input);
            case MathOperation::SCALE:
                return scale_data(input, m_params["scale_factor"]);
            // ... other operations
        }
    }
};

// Usage
auto normalizer = MathematicalTransformer(MathOperation::NORMALIZE);
auto scaled = normalizer.apply_operation(audio_input);

Temporal Transformations

class TemporalTransformer : public UniversalTransformer<...> {
    enum class TemporalOperation {
        TIME_REVERSE, TIME_STRETCH, DELAY,
        FADE_IN_OUT, SLICE, INTERPOLATE
    };

    IO<OutputType> apply_operation(const IO<InputType>& input) override {
        switch (m_operation) {
            case TemporalOperation::TIME_REVERSE:
                return reverse_temporal_order(input);
            case TemporalOperation::TIME_STRETCH:
                return stretch_time(input, m_params["stretch_factor"]);
            // ... other operations
        }
    }
};

Helper Functions with C++20 Ranges

Yantra leverages C++20 ranges (and C++23's std::ranges::to, where shown) for efficient data manipulation:

// Time reversal using ranges (in-place)
template <OperationReadyData DataType>
DataType transform_time_reverse(DataType& input) {
    auto [target_data, structure_info] = OperationHelper::extract_structured_double(input);

    for (auto& span : target_data) {
        std::ranges::reverse(span);
    }

    auto reconstructed = target_data
        | std::views::transform([](const auto& span) {
              return std::vector<double>(span.begin(), span.end());
          })
        | std::ranges::to<std::vector>();

    return OperationHelper::reconstruct_from_double<DataType>(reconstructed, structure_info);
}

// Overlap-add processing with ranges
template <typename TransformFunc>
std::vector<double> process_overlap_add(
    const std::span<const double>& data,
    uint32_t window_size,
    uint32_t hop_size,
    TransformFunc transform_func)
{
    const size_t num_windows = (data.size() - window_size) / hop_size + 1;
    std::vector<double> output(data.size(), 0.0);

    std::ranges::for_each(std::views::iota(size_t{0}, num_windows), [&](size_t win) {
        size_t start_idx = win * hop_size;
        auto window_data = data.subspan(start_idx,
            std::min<size_t>(window_size, data.size() - start_idx));

        auto transformed = transform_func(window_data);

        for (size_t i = 0; i < transformed.size(); ++i) {
            output[start_idx + i] += transformed[i];
        }
    });

    return output;
}
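
A hedged usage sketch of the helper above: with a 50% hop the overlapping halves sum back together, so a per-window gain of 0.5 roughly reconstructs the original level in the interior. The input values are arbitrary:

std::vector<double> input(48000, 0.25);

auto result = process_overlap_add(
    std::span<const double>(input), /*window_size=*/1024, /*hop_size=*/512,
    [](std::span<const double> window) {
        // Copy the window and apply a per-window gain
        std::vector<double> out(window.begin(), window.end());
        for (auto& s : out) {
            s *= 0.5;
        }
        return out;
    });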

Declarative Composition

Transformers compose naturally:

// Chain multiple transformations
auto pipeline = normalizer
    .chain(time_reverser)
    .chain(spectral_filter)
    .chain(fade_envelope);

auto result = pipeline.apply_operation(input_data);

// Or use grammar-based automatic selection
auto adaptive_pipeline = ComputationPipeline(grammar)
    .add_input(input_data)
    .with_context(ComputationContext::TEMPORAL)
    .execute();

The architecture enables adding new transformers without modifying existing code, maintaining the open/closed principle while providing declarative composition.
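
As a sketch of that extensibility, a user-defined transformer only needs to implement the UniversalTransformer interface shown above; the IO<> member access and construction below are assumptions:

// Hypothetical transformer: quantize samples to a fixed step size
class BitcrushTransformer
    : public UniversalTransformer<std::vector<double>, std::vector<double>> {
    double m_step = 1.0 / 16.0;  // quantization step

public:
    TransformationType get_transformation_type() const override {
        return TransformationType::MATHEMATICAL;
    }

    IO<std::vector<double>> apply_operation(const IO<std::vector<double>>& input) override {
        auto data = input.data;  // assumed accessor on IO<>
        for (auto& s : data) {
            s = std::round(s / m_step) * m_step;  // quantize each sample
        }
        return IO<std::vector<double>>{data};
    }
};

// Chains like any built-in transformer:
// auto pipeline = normalizer.chain(BitcrushTransformer{});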


NDData: Unified Cross-Modal Processing

The Problem

Traditional multimedia systems treat different data types as separate:

Each requires different APIs, different processing pipelines, different transformation logic.

The Solution: Dimensional Abstraction

NDData provides unified dimensional access regardless of data modality:

namespace Kakshya {

// Unified data type
using DataVariant = std::variant<
    std::vector<double>,
    std::vector<uint32_t>,
    std::vector<glm::vec3>
>;

// Dimensional descriptor
struct DataDimension {
    std::string name;           // "time", "frequency", "x", "y", "channel"
    uint64_t size;              // Number of elements
    uint64_t stride;            // Memory stride
    DimensionType type;         // TEMPORAL, SPATIAL, SPECTRAL, CHANNEL
    std::optional<double> scale; // Physical units conversion
};

}

Data Modality Definitions

enum class DataModality {
    AUDIO_1D,              // Single channel audio
    AUDIO_MULTICHANNEL,    // Multi-channel audio
    SPECTRAL_2D,           // Frequency x Time
    IMAGE_2D,              // Height x Width
    IMAGE_RGB,             // Height x Width x 3
    VIDEO_3D,              // Height x Width x Time
    TENSOR_ND              // Arbitrary dimensions
};

Creating Dimensional Data

// Audio: 48000 samples
auto audio_module = NDData::create_audio_1d<double>(48000);

// Multi-channel audio: 48000 samples x 2 channels
auto stereo_module = NDData::create_audio_multichannel<double>(48000, 2);

// Spectral data: 128 time windows x 1024 frequency bins
auto spectral_module = NDData::create_spectral_2d<double>(128, 1024);

// Image: 1920x1080 pixels
auto image_module = NDData::create_image_2d<double>(1080, 1920);

// Generic N-dimensional tensor
auto tensor_module = NDData::create_for_modality<double>(
    DataModality::TENSOR_ND,
    {256, 256, 64},  // shape
    0.0,             // default value
    MemoryLayout::ROW_MAJOR
);

Unified Transformation Interface

The same transformation operations work across modalities:

// Mathematical operations work on any modality
auto normalized = MathematicalTransformer(MathOperation::NORMALIZE)
    .apply_operation(audio_data);

auto normalized_image = MathematicalTransformer(MathOperation::NORMALIZE)
    .apply_operation(image_data);

// Temporal operations on audio
auto reversed_audio = TemporalTransformer(TemporalOperation::TIME_REVERSE)
    .apply_operation(audio_data);

// Temporal operations on video (time dimension)
auto reversed_video = TemporalTransformer(TemporalOperation::TIME_REVERSE)
    .apply_operation(video_data);

Region-Based Access

Regions enable precise data selection across dimensions:

// Temporal region (audio)
auto intro = Region::audio_span(0, 6000, 0);  // First 0.125s, channel 0

// Spatial region (image)
auto top_left = Region::image_rect(0, 0, 100, 100);  // 100x100 pixel region

// Spectral region
auto bass_range = Region::spectral_range(0, 100, 0, 512);  // Low frequencies

// Multi-dimensional region (video)
auto clip = Region::video_clip(0, 100, 50, 150, 0, 30);  // Spatial crop + temporal clip

Container Integration

NDimensionalContainers use regions for creative data organization:

class NDimensionalContainer {
public:
    // Store region with metadata
    void add_region(const std::string& name,
                   const Region& region,
                   const RegionMetadata& metadata);

    // Extract data from region
    DataVariant get_region_data(const Region& region);

    // Modify region in place
    void set_region_data(const Region& region, const DataVariant& data);

    // Query regions by attributes
    std::vector<Region> find_regions_with_attribute(
        const std::string& key,
        const std::any& value);
};
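
A hedged usage sketch of this interface; the RegionMetadata attribute map and the Region::audio_span arguments follow the examples in this section but are otherwise assumptions:

NDimensionalContainer container;

// Tag one second of channel 0 as a "chorus" region
RegionMetadata chorus_meta;
chorus_meta.attributes["section"] = std::string("chorus");
container.add_region("chorus_1", Region::audio_span(48000, 96000, 0), chorus_meta);

// Later: collect every region tagged as a chorus and pull its data
auto choruses = container.find_regions_with_attribute("section", std::string("chorus"));
for (const auto& region : choruses) {
    auto data = container.get_region_data(region);
    // ... feed into a transformer or buffer pipeline
}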

Cross-Modal Workflow Example

// Load audio file
auto container = SoundFileContainer::from_file("audio.wav");

// Define analysis region
auto intro = Region::audio_span(0, 12000, 0, 1);  // Both channels

// Extract spectral content
auto spectral_transformer = SpectralTransformer(SpectralOperation::FFT);
auto spectral_data = spectral_transformer.apply_operation(
    container->get_region_data(intro)
);

// Use spectral data to modulate visual parameters
auto visual_modulator = [&](const SpectralData& spectrum) {
    // Map frequency energy to visual parameters
    double bass_energy = spectrum.energy_in_range(20, 200);
    double mid_energy = spectrum.energy_in_range(200, 2000);

    // Control visual buffer
    visual_buffer->set_parameter("hue_shift", bass_energy * 360.0);
    visual_buffer->set_parameter("brightness", mid_energy * 2.0);
};

// Coordinate via coroutine
auto coordination = Kriya::metro(*scheduler, 0.1, [&]() {
    auto current_spectrum = analyze_current_frame(audio_buffer);
    visual_modulator(current_spectrum);
});

This demonstrates true cross-modal coordination—audio analysis directly controlling visual parameters through unified data abstractions.


Window Management as Coroutines

MayaFlux reimagines windowing events as temporal coroutine patterns rather than traditional callback spaghetti.

Traditional Windowing: Callback Hell

// Traditional GLFW approach - fragmented callback hell
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods) {
    if (action == GLFW_PRESS) {
        handle_key_press(key);
    }
}

void mouse_callback(GLFWwindow* window, double xpos, double ypos) {
    handle_mouse_motion(xpos, ypos);
}

void resize_callback(GLFWwindow* window, int width, int height) {
    handle_resize(width, height);
}

// Separate callback registration
glfwSetKeyCallback(window, key_callback);
glfwSetCursorPosCallback(window, mouse_callback);
glfwSetWindowSizeCallback(window, resize_callback);

Problems:

MayaFlux Approach: Window Events as EventSource

Each window has an EventSource that signals events to coroutines:

class GlfwWindow : public IWindow {
    Vruta::EventSource m_event_source;  // Coroutine event signaling
    WindowEventCallback m_event_callback;  // Traditional callback (optional)

    // GLFW callbacks signal EventSource
    static void glfw_key_callback(GLFWwindow* window, int key, int scancode,
                                   int action, int mods) {
        auto* win = static_cast<GlfwWindow*>(glfwGetWindowUserPointer(window));

        WindowEvent event;
        event.type = (action == GLFW_PRESS) ? WindowEventType::KEY_PRESSED
                                            : WindowEventType::KEY_RELEASED;
        event.timestamp = glfwGetTime();
        event.data = WindowEvent::KeyData{key, scancode, mods};

        // Signal coroutines waiting on this event
        win->m_event_source.signal(event);

        // Optional: call traditional callback
        if (win->m_event_callback) {
            win->m_event_callback(event);
        }
    }
};

Coroutine Event Patterns

Awaiting Specific Events

// Wait for specific window event
Event handle_user_input(GlfwWindow& window) {
    while (true) {
        // Await next keyboard event
        auto event = co_await window.event_source().next_event(
            WindowEventType::KEY_PRESSED
        );

        auto key_data = std::get<WindowEvent::KeyData>(event.data);

        if (key_data.key == GLFW_KEY_SPACE) {
            trigger_audio_event();
        } else if (key_data.key == GLFW_KEY_ESCAPE) {
            break;
        }
    }
}

Event Sequencing

// Wait for event sequence
Event gesture_recognition(GlfwWindow& window) {
    // Wait for mouse button press
    auto press_event = co_await window.event_source().next_event(
        WindowEventType::MOUSE_BUTTON_PRESSED
    );

    auto start_time = press_event.timestamp;
    auto start_pos = std::get<WindowEvent::MouseButtonData>(press_event.data);

    // Track mouse motion
    std::vector<Vector2> gesture_path;

    while (true) {
        auto event = co_await window.event_source().next_event();

        if (event.type == WindowEventType::MOUSE_MOTION) {
            auto pos = std::get<WindowEvent::MousePosData>(event.data);
            gesture_path.push_back({pos.x, pos.y});
        }
        else if (event.type == WindowEventType::MOUSE_BUTTON_RELEASED) {
            // Analyze complete gesture
            recognize_gesture(gesture_path);
            break;
        }
    }
}

Event-Driven Audio-Visual Coordination

// Coordinate audio triggering with window focus
Event focus_aware_processing(GlfwWindow& window) {
    while (true) {
        auto event = co_await window.event_source().next_event();

        if (event.type == WindowEventType::WINDOW_FOCUS_GAINED) {
            // Resume audio processing
            audio_graph_manager->set_processing_enabled(true);
            visual_renderer->set_render_quality(RenderQuality::HIGH);
        }
        else if (event.type == WindowEventType::WINDOW_FOCUS_LOST) {
            // Pause audio processing
            audio_graph_manager->set_processing_enabled(false);
            visual_renderer->set_render_quality(RenderQuality::LOW);
        }
        else if (event.type == WindowEventType::WINDOW_RESIZED) {
            auto resize = std::get<WindowEvent::ResizeData>(event.data);
            handle_viewport_resize(resize.width, resize.height);
        }
    }
}

Multi-Window Coordination

// Coordinate events across multiple windows
GraphicsRoutine multi_window_sync(GlfwWindow& main_window, GlfwWindow& control_window) {
    while (true) {
        // Wait for events from either window
        auto event = co_await EventSource::any_of({
            main_window.event_source(),
            control_window.event_source()
        });

        // Synchronize state across windows
        if (event.type == WindowEventType::KEY_PRESSED) {
            // Broadcast key event to both visual contexts
            main_window.dispatch_event(event);
            control_window.dispatch_event(event);
        }
    }
}

EventSource Architecture

namespace Vruta {

class EventSource {
    std::queue<WindowEvent> m_event_queue;
    std::vector<std::coroutine_handle<>> m_waiting_coroutines;
    std::mutex m_mutex;

public:
    // Signal event from callback
    void signal(const WindowEvent& event) {
        std::vector<std::coroutine_handle<>> to_resume;
        {
            std::lock_guard lock(m_mutex);
            m_event_queue.push(event);
            to_resume.swap(m_waiting_coroutines);
        }

        // Resume waiting coroutines outside the lock
        for (auto& handle : to_resume) {
            handle.resume();
        }
    }

    // Awaitable for next event
    auto next_event(std::optional<WindowEventType> filter = std::nullopt) {
        struct EventAwaiter {
            EventSource& source;
            std::optional<WindowEventType> filter;
            std::optional<WindowEvent> result;

            // Take the front event if it exists and matches the filter
            bool try_take() {
                std::lock_guard lock(source.m_mutex);
                if (source.m_event_queue.empty()) {
                    return false;
                }
                if (filter && source.m_event_queue.front().type != *filter) {
                    return false;
                }
                result = source.m_event_queue.front();
                source.m_event_queue.pop();
                return true;
            }

            bool await_ready() { return try_take(); }

            void await_suspend(std::coroutine_handle<> handle) {
                std::lock_guard lock(source.m_mutex);
                source.m_waiting_coroutines.push_back(handle);
            }

            WindowEvent await_resume() {
                // If we suspended, signal() queued an event before resuming us
                if (!result) {
                    try_take();
                }
                return result.value_or(WindowEvent{});
            }
        };

        return EventAwaiter{*this, filter, std::nullopt};
    }
};

}

Window Processing Token

Windows have their own processing token for domain coordination:

// Domain definition for windowing, same as graphics
Domain::GRAPHICS =
    (Nodes::ProcessingToken::VISUAL_RATE << 32) |
    (Buffers::ProcessingToken::GRAPHICS_BACKEND << 16) |
    (Vruta::ProcessingToken::FRAME_ACCURATE);

This architecture transforms window management from callback spaghetti into expressive temporal coordination, enabling patterns like gesture recognition, multi-event sequences, and cross-window synchronization through natural coroutine control flow.


Domain Composition and Processing Tokens

The Token System

MayaFlux uses bitfield-composed tokens to specify processing characteristics:

// Node processing tokens
namespace Nodes {
    enum class ProcessingToken {
        AUDIO_RATE,    // Process at audio sample rate
        VISUAL_RATE,   // Process at visual frame rate
        CUSTOM_RATE    // User-defined processing rate
    };
}

// Buffer processing tokens (bitfield)
namespace Buffers {
    enum ProcessingToken : uint32_t {
        // Rate tokens
        SAMPLE_RATE = 1 << 0,
        FRAME_RATE = 1 << 1,

        // Device tokens
        CPU_PROCESS = 1 << 8,
        GPU_PROCESS = 1 << 9,

        // Concurrency tokens
        SEQUENTIAL = 1 << 16,
        PARALLEL = 1 << 17,

        // Backend combinations
        AUDIO_BACKEND = SAMPLE_RATE | CPU_PROCESS | SEQUENTIAL,
        GRAPHICS_BACKEND = FRAME_RATE | GPU_PROCESS | PARALLEL,
        AUDIO_PARALLEL = SAMPLE_RATE | GPU_PROCESS | PARALLEL,
        WINDOW_EVENTS = FRAME_RATE | CPU_PROCESS | SEQUENTIAL
    };
}

// Coroutine processing tokens
namespace Vruta {
    enum class ProcessingToken {
        SAMPLE_ACCURATE,  // Sample-level temporal precision
        FRAME_ACCURATE,   // Frame-level temporal precision
        EVENT_DRIVEN,     // Sporadic event processing
        MULTI_RATE,       // Adapt between multiple rates
        ON_DEMAND,        // Execute when explicitly called
        CUSTOM
    };
}

Domain Composition

Domains combine tokens into unified computational contexts:

enum class Domain : uint64_t {
    // Audio processing with sample-accurate coordination
    AUDIO = (Nodes::ProcessingToken::AUDIO_RATE << 32) |
            (Buffers::ProcessingToken::AUDIO_BACKEND << 16) |
            (Vruta::ProcessingToken::SAMPLE_ACCURATE),

    // Graphics with frame-accurate synchronization
    GRAPHICS = (Nodes::ProcessingToken::VISUAL_RATE << 32) |
               (Buffers::ProcessingToken::GRAPHICS_BACKEND << 16) |
               (Vruta::ProcessingToken::FRAME_ACCURATE),

    // GPU-accelerated audio processing
    AUDIO_GPU = (Nodes::ProcessingToken::AUDIO_RATE << 32) |
                (Buffers::ProcessingToken::AUDIO_PARALLEL << 16) |
                (Vruta::ProcessingToken::MULTI_RATE),

    // Windowing without rendering
    WINDOWING = (Nodes::ProcessingToken::VISUAL_RATE << 32) |
                (Buffers::ProcessingToken::WINDOW_EVENTS << 16) |
                (Vruta::ProcessingToken::FRAME_ACCURATE),

    // Input event processing
    INPUT = (Nodes::ProcessingToken::VISUAL_RATE << 32) |
            (Buffers::ProcessingToken::FRAME_RATE << 16) |
            (Vruta::ProcessingToken::EVENT_DRIVEN)
};

Domain Decomposition

Extract individual tokens from composed domains:

// Helper functions for token extraction
inline Nodes::ProcessingToken get_node_token(Domain domain) {
    return static_cast<Nodes::ProcessingToken>((uint64_t)domain >> 32);
}

inline Buffers::ProcessingToken get_buffer_token(Domain domain) {
    return static_cast<Buffers::ProcessingToken>(((uint64_t)domain >> 16) & 0xFFFF);
}

inline Vruta::ProcessingToken get_task_token(Domain domain) {
    return static_cast<Vruta::ProcessingToken>((uint64_t)domain & 0xFFFF);
}

// Create custom domain from individual tokens
Domain compose_domain(Nodes::ProcessingToken node_token,
                     Buffers::ProcessingToken buffer_token,
                     Vruta::ProcessingToken task_token) {
    return static_cast<Domain>(
        (static_cast<uint64_t>(node_token) << 32) |
        (static_cast<uint64_t>(buffer_token) << 16) |
        static_cast<uint64_t>(task_token)
    );
}
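
A brief usage sketch of compose_domain and the extraction helpers above; the specific token combination is illustrative:

// Compose a custom domain for GPU-parallel audio driven by sporadic events
auto gpu_audio_events = compose_domain(
    Nodes::ProcessingToken::AUDIO_RATE,
    Buffers::ProcessingToken::AUDIO_PARALLEL,
    Vruta::ProcessingToken::EVENT_DRIVEN);

// Recover the individual tokens when dispatching to subsystems
auto node_token   = get_node_token(gpu_audio_events);
auto buffer_token = get_buffer_token(gpu_audio_events);
auto task_token   = get_task_token(gpu_audio_events);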

Cross-Domain Coordination

// Audio-visual synchronization domain
auto sync_domain = Domain::AUDIO_VISUAL_SYNC;

// Create nodes in synchronized domain
auto spectral_node = vega.fft() | sync_domain;
auto visual_buffer = graphics_buffer | sync_domain;

// Data flows between temporal contexts with coordination
spectral_node >> visual_buffer;

// Coroutine coordinates across domains
auto coordination_routine = Kriya::metro(*scheduler, 0.1, [&]() {
    auto spectral_data = spectral_node->get_current_output();
    visual_buffer->apply_modulation(spectral_data);
});

Processing Handle Access

SubsystemManager provides token-scoped access:

class SubsystemManager {
    // Get processing handle for specific domain
    SubsystemProcessingHandle get_processing_handle(Domain domain) {
        auto node_token = get_node_token(domain);
        auto buffer_token = get_buffer_token(domain);
        auto task_token = get_task_token(domain);

        return SubsystemProcessingHandle{
            .nodes = m_node_manager->get_token_processor(node_token),
            .buffers = m_buffer_manager->get_token_processor(buffer_token),
            .tasks = m_scheduler->get_token_processor(task_token)
        };
    }
};

// Usage
auto handle = subsystem_manager->get_processing_handle(Domain::AUDIO);
handle.nodes.process_channel(0, 512);  // Process channel 0, 512 samples
handle.buffers.process_all_buffers(512);
handle.tasks.process(512);

This token composition architecture enables flexible domain creation while maintaining type-safe processing coordination across nodes, buffers, and coroutines.


Current Implementation Status

Fully Functional & Tested

Lock-Free Processing

Coroutine Infrastructure

Audio Backend

Windowing System

Node System

Buffer System

ComputationGrammar Foundation

NDData Abstractions

In Active Development

Graphics Pipeline

ComputationGrammar Stress Testing

Yantra Transformers

Inter-Component Integration

Planned Expansions

Live Coding Integration

Game Engine Plugins


Future Vision and Expansion

The Pluggable Philosophy

MayaFlux is designed for complete extensibility:

Beyond Traditional DSP

MayaFlux enables computational patterns impossible in analog-inspired systems:

Recursive Processing

// Recursive feedback that modifies its own structure
std::shared_ptr<Node> recursive_node;
recursive_node = vega.custom([&](double input, NodeState& state) {
    if (state.recursion_depth < 5) {
        return recursive_node->process_sample(input) * 0.5 + input;
    }
    return input;
});

Grammar-Defined Pipelines

// Adaptive pipeline based on input characteristics
auto adaptive = ComputationPipeline(grammar)
    .add_input(unknown_data)
    .auto_select_operations()  // Grammar determines optimal path
    .execute();

Ahead-of-Time Transformations

// Pre-calculate complex transformations
auto precomputed = container->transform_region(
    large_audio_region,
    [](auto data) {
        return apply_expensive_convolution(data);
    }
);

// Use precomputed results in real-time
realtime_node->set_lookup_table(precomputed);

Cross-Modal Synthesis

// Visual parameters generate audio
auto visual_to_audio = vega.custom([&](double input) {
    auto pixel_brightness = visual_buffer->sample_at(x, y);
    auto frequency = map_range(pixel_brightness, 0, 255, 20, 2000);
    return sine_wave(frequency);
});

// Audio parameters generate visuals
auto audio_to_visual = [&](const SpectralData& spectrum) {
    for (int freq_bin = 0; freq_bin < 1024; ++freq_bin) {
        auto energy = spectrum[freq_bin];
        visual_buffer->set_pixel_color(freq_bin, map_to_color(energy));
    }
};

Research Questions

MayaFlux is presented as an architectural experiment seeking community validation:

  1. Does C++20 enable genuinely unified multimedia processing?
  2. Is grammar-based computation viable for real-time creative work?
  3. Can cross-modal abstractions remain performant?
  4. What creative workflows emerge from digital-first thinking?

Seeking: Adversarial Testing and Community Validation

This project needs:

MayaFlux is not presented as a finished tool, but as a paradigm proposal seeking validation or refutation through community scrutiny.

The core systems work. The tests pass. But 6 months of solo development cannot validate real-world usage patterns, edge cases, or creative applicability.

This is an honest call for collaboration - to either prove this paradigm has merit or expose its fundamental limitations through adversarial testing and creative exploration.


Contact & Resources

ADC25 Virtual Presentation

Project Status

Seeking


MayaFlux represents not just a new framework, but a fundamental rethinking of creative computation - moving from analog simulation to true digital-first paradigms where data transformation becomes the primary creative medium.