MayaFlux

Digital-First Multimedia Processing Architecture – ADC25 Poster Summary

MayaFlux proposes a unified model for digital media computation built in modern C++20. It treats all media streams as numerical transformations within a single composable architecture. The system demonstrates how digital processes, rather than analog metaphors, can become the foundation for creative practice.

Core Idea

Instead of viewing audio, video, and control data as separate domains, MayaFlux defines them as interchangeable dimensions of computation. Every signal, whether temporal, spatial, or spectral, occupies the same numerical field. This allows cross-modal interaction without translation layers or clock mismatches: sound can drive image transformations as directly as one arithmetic operation feeds another. Every component (nodes, buffers, schedulers, backends) remains composable while operating concurrently.
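
To make the idea concrete, the following self-contained sketch (plain C++20, not MayaFlux's actual API; `rms` and `apply_gain` are illustrative helpers) treats an audio buffer and an image buffer as the same numerical material: a statistic derived from sound scales pixel values in a single arithmetic step, with no translation layer between the domains.

```cpp
#include <cmath>
#include <cstdio>
#include <span>
#include <vector>

// Both media are just numbers: any float span is a valid signal.
float rms(std::span<const float> signal) {
    float acc = 0.0f;
    for (float s : signal) acc += s * s;
    return signal.empty() ? 0.0f : std::sqrt(acc / signal.size());
}

// The same transformation applies to audio samples or pixel intensities.
void apply_gain(std::span<float> signal, float gain) {
    for (float& s : signal) s *= gain;
}

int main() {
    std::vector<float> audio(512);
    for (std::size_t i = 0; i < audio.size(); ++i)
        audio[i] = std::sin(0.1f * static_cast<float>(i));  // a test tone

    std::vector<float> image(64 * 64, 0.5f);                 // mid-grey frame

    // Sound drives image: loudness scales brightness, one arithmetic step.
    apply_gain(image, rms(audio));

    std::printf("audio rms = %f, first pixel = %f\n", rms(audio), image[0]);
}
```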

Grammar-Driven Computation

The ComputationGrammar framework introduces rule-based matching for operations. Instead of selecting algorithms manually, developers declare transformations in terms of data characteristics and context; pipelines then assemble themselves to match, yielding context-sensitive computation that remains fully deterministic.
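
A minimal sketch of the rule-matching idea, assuming nothing about ComputationGrammar's real API (`Rule`, `DataDescriptor`, and the predicates below are hypothetical): each rule pairs a condition on data characteristics with the transformation it selects, and the first match determines the pipeline stage, so selection is data-driven yet deterministic.

```cpp
#include <cstdio>
#include <functional>
#include <span>
#include <vector>

// Hypothetical descriptor of a stream's characteristics.
struct DataDescriptor {
    std::size_t channels;
    bool spectral;  // frequency-domain data?
};

// A rule pairs a match condition with the transformation it selects.
struct Rule {
    std::function<bool(const DataDescriptor&)> matches;
    std::function<void(std::span<float>)> transform;
};

// First matching rule wins: selection is adaptive but deterministic.
void run(const std::vector<Rule>& grammar, const DataDescriptor& d,
         std::span<float> data) {
    for (const auto& rule : grammar)
        if (rule.matches(d)) { rule.transform(data); return; }
}

int main() {
    std::vector<Rule> grammar = {
        { [](const DataDescriptor& d) { return d.spectral; },
          [](std::span<float> x) { for (float& v : x) v *= 0.5f; } },
        { [](const DataDescriptor& d) { return d.channels == 2; },
          [](std::span<float> x) { for (float& v : x) v = -v; } },
    };

    std::vector<float> block(8, 1.0f);
    run(grammar, {.channels = 2, .spectral = false}, block);
    std::printf("first sample after matched transform: %f\n", block[0]);
}
```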

Current Implementation

The framework presently includes lock-free node graphs, coroutine schedulers, a complete audio backend (RtAudio), region-based data containers, and over 700 component tests. A fully functional live coding environment (Lila), backed by LLVM 21, compiles and executes C++20 code within a single buffer cycle. Ongoing development extends to Vulkan integration, grammar stress testing, and embedding (Lua via sol2, WASM, Java FFI planned). The system is already capable of sample-accurate cross-modal processing on CPU, with GPU pipelines in active development.

Architectural Principles

  • Nodes: Sample-accurate, lock-free transformation units whose mathematical relationships form the creative structure.
  • Buffers: Temporal collectors that gather, release, and await data without allocation or blocking.
  • Coroutines: C++20 primitives that turn time into compositional material through suspension and resumption.
  • Containers (NDData): Multi-dimensional data abstractions unifying audio, video, and tensor forms.
  • Compute Matrix: Declarative operation pipelines (mathematical, temporal, spectral analysis, extractors, transforms) that compose and chain across data modalities; a minimal sketch follows this list.
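
The composition and chaining described in the Compute Matrix bullet can be sketched with the standard library alone (`Stage` and the operations below are illustrative, not MayaFlux names): stages compose into one ordered pipeline that runs unchanged over sample data and pixel data.

```cpp
#include <cmath>
#include <cstdio>
#include <functional>
#include <span>
#include <vector>

// One pipeline stage: an in-place operation over raw numeric data.
using Stage = std::function<void(std::span<float>)>;

// Compose stages in declaration order into a single pipeline.
Stage chain(std::vector<Stage> stages) {
    return [stages = std::move(stages)](std::span<float> data) {
        for (const auto& stage : stages) stage(data);
    };
}

int main() {
    // A pipeline: scale, then soft-saturate.
    Stage pipeline = chain({
        [](std::span<float> x) { for (float& v : x) v *= 2.0f; },
        [](std::span<float> x) { for (float& v : x) v = std::tanh(v); },
    });

    std::vector<float> audio(256, 0.25f);    // sample data
    std::vector<float> frame(16 * 16, 0.8f); // pixel data

    pipeline(audio);   // identical code path for both modalities
    pipeline(frame);

    std::printf("audio[0] = %f, frame[0] = %f\n", audio[0], frame[0]);
}
```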

Together these systems establish complete architectural composability: nodes, buffers, and coroutines operate concurrently yet remain substitutable at any level. Processing domains are defined by bit-field tokens that describe rate, backend, and temporal behavior, enabling type-safe cross-domain coordination.
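
One plausible shape for such tokens, with flag names that are illustrative rather than MayaFlux's actual definitions: a bit field packs rate, backend, and temporal behavior into a single value, and a mask test rejects incompatible connections, even at compile time.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical domain token: each bit group describes one property.
enum DomainToken : std::uint32_t {
    RATE_SAMPLE  = 1u << 0,   // per-sample processing
    RATE_FRAME   = 1u << 1,   // per-frame processing
    BACKEND_CPU  = 1u << 8,
    BACKEND_GPU  = 1u << 9,
    TIME_PASSIVE = 1u << 16,  // driven by an external callback
    TIME_ACTIVE  = 1u << 17,  // self-timed
};

// Components may connect only if they share a rate bit and a backend bit.
constexpr bool compatible(std::uint32_t a, std::uint32_t b) {
    constexpr std::uint32_t rate_mask    = 0x000000FFu;
    constexpr std::uint32_t backend_mask = 0x0000FF00u;
    return (a & b & rate_mask) != 0u && (a & b & backend_mask) != 0u;
}

int main() {
    constexpr std::uint32_t audio_node = RATE_SAMPLE | BACKEND_CPU | TIME_PASSIVE;
    constexpr std::uint32_t video_node = RATE_FRAME  | BACKEND_GPU | TIME_ACTIVE;

    static_assert(!compatible(audio_node, video_node),
                  "mismatched domains are rejected at compile time");

    std::printf("audio/video compatible: %d\n",
                compatible(audio_node, video_node));
}
```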

Temporal Coordination

Audio and visual domains are synchronized through dual clock systems: the SampleClock (passive, callback-driven) and the FrameClock (active, self-timed). Coroutines subscribe to these clocks, achieving sample- or frame-level precision without external synchronization threads. This allows truly unified real-time behaviour across sound, image, and interaction.
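
The subscription mechanism can be compressed into a standalone C++20 sketch (`Clock` and `Task` below are illustrative stand-ins for SampleClock/FrameClock, and cleanup of the never-finishing coroutine is omitted for brevity): awaiting the clock parks a coroutine, and each tick, here driven by a plain loop standing in for the audio callback, resumes every subscriber at the new time.

```cpp
#include <coroutine>
#include <cstdio>
#include <vector>

// Minimal fire-and-forget coroutine type.
struct Task {
    struct promise_type {
        Task get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

// A passive clock: awaiting tick() parks the coroutine until advance().
struct Clock {
    std::vector<std::coroutine_handle<>> waiters;
    long now = 0;

    auto tick() {
        struct Awaiter {
            Clock& clock;
            bool await_ready() const noexcept { return false; }
            void await_suspend(std::coroutine_handle<> h) {
                clock.waiters.push_back(h);
            }
            long await_resume() const noexcept { return clock.now; }
        };
        return Awaiter{*this};
    }

    void advance() {                        // called by the driving callback
        ++now;
        auto resumed = std::move(waiters);  // re-registrations start fresh
        waiters.clear();
        for (auto h : resumed) h.resume();
    }
};

Task metronome(Clock& clock) {
    for (;;) {
        long t = co_await clock.tick();     // sample-accurate suspension
        if (t % 4 == 0) std::printf("tick at %ld\n", t);
    }
}

int main() {
    Clock clock;
    metronome(clock);                       // subscribes to the clock
    for (int i = 0; i < 12; ++i)            // stands in for the audio callback
        clock.advance();
}
```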

Research Context

MayaFlux asks whether modern C++ can support a truly digital model of multimedia computation—one that no longer imitates hardware but composes time, data, and transformation as primary creative material. The live coding environment validates this paradigm by enabling real-time algorithmic composition—not module patching but recursive coroutine authorship within buffer cycles.

Treating computation as primary creative material requires that code itself become malleable. MayaFlux demonstrates this through JIT compilation that completes within a single buffer cycle and through coroutine-driven temporal coordination. The work seeks community validation through live experimentation and adversarial testing.

The Digital-First Paradigm

Existing creative coding frameworks choose between real-time audio processing and flexible visual control, never both with shared temporal precision. MayaFlux eliminates this trade-off. Sample-accurate audio coordinates directly with GPU compute shaders. Recursive algorithms operate across modalities. Grammar-defined pipelines adapt to data characteristics at runtime. The live coding environment demonstrates that these patterns are inexpressible when code is static and domains are architecturally isolated, yet achievable when composition becomes computational discourse itself.