MayaFlux proposes a unified model for digital media computation built in modern C++20. It treats all media streams as numerical transformations within a single composable architecture. The system demonstrates how digital processes, rather than analog metaphors, can become the foundation for creative practice.
Instead of viewing audio, video, and control data as separate domains, MayaFlux defines them as interchangeable dimensions of computation. Every signal—temporal, spatial, or spectral—occupies the same numerical field. This allows cross-modal interaction without translation layers or clock mismatches: sound may drive image transformations as directly as arithmetic operations. Every component—nodes, buffers, schedulers, backends—remains composable while maintaining concurrent operation.
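A minimal sketch of what this uniformity implies (the buffers and parameter names below are illustrative, not MayaFlux API): because audio and image data are both plain numeric sequences, an audio-rate signal can drive an image parameter with ordinary arithmetic, with no translation layer in between.

```cpp
#include <cmath>
#include <cstddef>
#include <iostream>
#include <numbers>
#include <vector>

// Illustrative only: both "domains" are plain numeric buffers, so an
// audio signal can modulate a pixel parameter with ordinary arithmetic.
int main() {
    constexpr double sample_rate = 48000.0;
    std::vector<double> audio(64);
    std::vector<double> brightness(64);

    // Audio-rate sine wave.
    for (std::size_t i = 0; i < audio.size(); ++i)
        audio[i] = std::sin(2.0 * std::numbers::pi * 440.0 * i / sample_rate);

    // The same samples drive an image parameter directly: no clock translation.
    for (std::size_t i = 0; i < brightness.size(); ++i)
        brightness[i] = 0.5 + 0.5 * std::abs(audio[i]);

    std::cout << brightness.front() << " ... " << brightness.back() << '\n';
}
```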
The ComputationGrammar framework introduces rule-based matching for operations. Instead of selecting algorithms manually, transformations are declared in terms of data characteristics and context. Pipelines assemble themselves from these declarations, producing context-sensitive computation that nevertheless remains fully deterministic.
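A hypothetical sketch of this idea, not the actual ComputationGrammar interface: each rule pairs a predicate over data characteristics with the operation it selects, and selection is deterministic because rules are tried in declaration order.

```cpp
#include <cstddef>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical rule-based operation selection (names are illustrative).
struct Characteristics {
    std::size_t channels;
    double sample_rate;
    bool spectral;
};

struct Rule {
    std::function<bool(const Characteristics&)> matches;
    std::string operation;   // stand-in for a real transformation node
};

std::string select(const std::vector<Rule>& grammar, const Characteristics& c) {
    for (const auto& rule : grammar)
        if (rule.matches(c)) return rule.operation;   // first match wins: deterministic
    return "passthrough";
}

int main() {
    std::vector<Rule> grammar = {
        { [](const Characteristics& c) { return c.spectral; },     "spectral_filter" },
        { [](const Characteristics& c) { return c.channels > 1; }, "multichannel_mix" },
    };
    std::cout << select(grammar, {2, 48000.0, false}) << '\n';   // prints "multichannel_mix"
}
```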
The framework presently includes lock-free node graphs, coroutine schedulers, a complete audio backend (RtAudio), region-based data containers, and over 700 component tests. A fully functional LLVM21-backed live coding environment (Lila) enables C++20 code compilation and execution within a single buffer cycle. Ongoing development extends to Vulkan integration, grammar stress testing, and embedding (Lua via sol2, WASM, Java FFI planned). The system is already capable of sample-accurate cross-modal processing on CPU, with GPU pipelines in active development.
Together these systems establish complete architectural composability: nodes, buffers, and coroutines operate concurrently yet remain substitutable at any level. Processing domains are defined by bit-field tokens that describe rate, backend, and temporal behavior, enabling type-safe cross-domain coordination.
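A minimal sketch of such tokens, assuming a flat 32-bit layout that the real MayaFlux tokens may not share: flags for rate, backend, and temporal behavior combine into one value, and a node can assert its requirements at compile time.

```cpp
#include <cstdint>

// Hedged sketch of bit-field domain tokens (the actual token layout may differ):
// one word encodes rate, backend, and temporal behavior.
enum class Domain : std::uint32_t {
    AudioRate   = 1u << 0,   // per-sample processing
    FrameRate   = 1u << 1,   // per-video-frame processing
    ControlRate = 1u << 2,   // sparse, event-driven updates
    CpuBackend  = 1u << 8,
    GpuBackend  = 1u << 9,
    Streaming   = 1u << 16,  // continuous temporal behavior
    OneShot     = 1u << 17,  // fire-once temporal behavior
};

constexpr Domain operator|(Domain a, Domain b) {
    return static_cast<Domain>(static_cast<std::uint32_t>(a) | static_cast<std::uint32_t>(b));
}
constexpr bool has(Domain set, Domain flag) {
    return (static_cast<std::uint32_t>(set) & static_cast<std::uint32_t>(flag)) != 0;
}

// A node template can constrain, at compile time, which domains it accepts.
template <Domain D>
struct Node {
    static_assert(has(D, Domain::AudioRate) || has(D, Domain::FrameRate),
                  "node must declare a processing rate");
};

int main() {
    [[maybe_unused]] Node<Domain::AudioRate | Domain::CpuBackend | Domain::Streaming> osc;
}
```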
Audio and visual domains are synchronized through a dual-clock design: the SampleClock (passive, callback-driven) and the FrameClock (active, self-timed). Coroutines subscribe to these clocks, achieving sample- or frame-level precision without external synchronization threads. This allows truly unified real-time behavior across sound, image, and interaction.
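The subscription pattern can be sketched as follows; the clock and coroutine types here are simplified stand-ins, not MayaFlux's SampleClock API. A coroutine co_awaits the clock, and each tick, driven here by a loop standing in for the audio callback, resumes every subscriber exactly once.

```cpp
#include <coroutine>
#include <iostream>
#include <vector>

// Illustrative sketch: a passive clock that coroutines subscribe to by
// co_awaiting it; each tick() resumes all currently waiting subscribers.
struct SampleClock {
    std::vector<std::coroutine_handle<>> waiting;

    struct Awaiter {
        SampleClock& clock;
        bool await_ready() const noexcept { return false; }
        void await_suspend(std::coroutine_handle<> h) { clock.waiting.push_back(h); }
        void await_resume() const noexcept {}
    };

    Awaiter tick_arrived() { return Awaiter{*this}; }

    // Called once per sample (here, from main) to advance subscribers.
    void tick() {
        std::vector<std::coroutine_handle<>> ready;
        ready.swap(waiting);
        for (auto h : ready) h.resume();
    }
};

// Minimal fire-and-forget coroutine type.
struct Task {
    struct promise_type {
        Task get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

Task sample_process(SampleClock& clock) {
    for (int n = 0; n < 3; ++n) {
        co_await clock.tick_arrived();                  // sample-accurate suspension point
        std::cout << "processed sample " << n << '\n';
    }
}

int main() {
    SampleClock clock;
    sample_process(clock);          // runs until its first co_await, then subscribes
    for (int i = 0; i < 3; ++i)
        clock.tick();               // stand-in for the audio callback driving the clock
}
```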
MayaFlux asks whether modern C++ can support a truly digital model of multimedia computation—one that no longer imitates hardware but composes time, data, and transformation as primary creative material. The live coding environment validates this paradigm by enabling real-time algorithmic composition—not module patching but recursive coroutine authorship within buffer cycles.
Treating computation as primary creative material requires that code itself become malleable. MayaFlux demonstrates this through sub-buffer JIT latency and coroutine-driven temporal coordination. The work seeks community validation through live experimentation and adversarial testing.