MayaFlux

Unified Multimedia DSP: Moving Beyond Audio-Centric Thinking
The Problem: Every creative coding framework forces a choice between real-time audio precision and flexible visual programming; none offers both with unified timing. Audio tools ignore graphics. Graphics frameworks sacrifice audio accuracy. MayaFlux eliminates this false choice.

What Becomes Possible

When audio, visual, and control data share a single computational substrate:

  • Direct cross-modal flow: Audio features feed compute shaders and render pipelines without translation layers.
  • Live algorithmic authorship: Modify audio and visual algorithms while they run, with sub-buffer latency, via LLVM 21 JIT live coding.
  • Recursive composition: Treat time as creative material via C++20 coroutines (impossible in traditional DSP); see the sketch after this list.
  • Sample-accurate coordination: Audio ticks at sample rate, graphics at frame rate, both within unified scheduling with fully lock-free synchronization.
  • Adaptive pipelines: Algorithms self-configure based on data characteristics at runtime.
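
To make the coroutine point concrete, here is a minimal, self-contained C++20 sketch. The names (SampleTask, wait_samples) and the toy scheduler are assumptions for illustration, not MayaFlux's actual API; they only show the general pattern of a routine suspending on sample-counted time and being resumed by a scheduler tick.

```cpp
// Illustrative sketch only: SampleTask, wait_samples, and the scheduler loop are
// hypothetical names, not MayaFlux's API. They show a coroutine suspending on
// sample-counted time and being resumed when a sample clock reaches its wake point.
#include <coroutine>
#include <cstdint>
#include <cstdio>
#include <exception>

struct SampleTask {
    struct promise_type {
        std::uint64_t wake_at = 0;   // sample index at which to resume
        SampleTask get_return_object() {
            return SampleTask{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() { std::terminate(); }
    };
    std::coroutine_handle<promise_type> handle;
};

// Awaitable that parks the coroutine for a given number of samples.
struct wait_samples {
    std::uint64_t count;
    bool await_ready() const noexcept { return count == 0; }
    void await_suspend(std::coroutine_handle<SampleTask::promise_type> h) const noexcept {
        h.promise().wake_at += count;   // next wake-up, relative to the previous one
    }
    void await_resume() const noexcept {}
};

// A "score" written directly in time: one event every quarter second at 48 kHz.
SampleTask pulse() {
    for (int beat = 0; beat < 4; ++beat) {
        std::printf("event at beat %d\n", beat);
        co_await wait_samples{12000};   // 0.25 s of sample-accurate time
    }
}

int main() {
    SampleTask task = pulse();
    auto h = task.handle;
    // Toy single-threaded scheduler: advance a sample clock, resume when due.
    for (std::uint64_t now = 0; !h.done(); ++now) {
        if (now >= h.promise().wake_at) h.resume();
    }
    h.destroy();
}
```

Because the wait is expressed in samples rather than in wall-clock callbacks, the "score" reads as ordinary sequential code while remaining sample-accurate.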

Architecture: Not Feature Lists, But Principles

Five composable paradigms replace analog-inspired thinking:

  • Nodes: Unit-by-unit transformation precision, maintaining mathematical relationships as creative decisions.
  • Buffers: Temporal gathering spaces that accumulate data without blocking allocation.
  • Coroutines: Schedulable routines that treat time as creative material, enabling recursive composition.
  • Containers: Multi-dimensional data unifying audio, visual, and tensor representations.
  • Compute Matrix: Composable, expressive semantic pipelines to analyze, sort, extract, and transform NDData (a minimal pipeline sketch follows this list).
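
A minimal sketch of the pipeline idea, assuming a toy stand-in for NDData (flat storage plus a shape) and plain std::function composition; MayaFlux's real NDData and Compute Matrix interfaces are not shown here.

```cpp
// Illustrative sketch only: this NDData and Stage composition are hypothetical,
// modeling "composable semantic pipelines" over one container shared by domains.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <functional>
#include <vector>

// Toy stand-in for an N-dimensional container: flat storage plus a shape.
struct NDData {
    std::vector<float> values;
    std::vector<std::size_t> shape;   // e.g. {channels, frames} or {height, width}
};

// A pipeline stage is simply a transformation over the container.
using Stage = std::function<NDData(NDData)>;

// Compose stages left to right into a single callable pipeline.
Stage compose(std::vector<Stage> stages) {
    return [stages = std::move(stages)](NDData data) {
        for (const auto& stage : stages) data = stage(std::move(data));
        return data;
    };
}

int main() {
    // "Analyze": find the peak amplitude; "transform": normalize by it.
    Stage normalize = [](NDData d) {
        float peak = 0.0f;
        for (float v : d.values) peak = std::max(peak, std::fabs(v));
        if (peak > 0.0f)
            for (float& v : d.values) v /= peak;
        return d;
    };
    // "Extract": keep only the first row (one audio channel, one scanline, ...).
    Stage first_row = [](NDData d) {
        std::size_t cols = d.shape.back();
        d.values.resize(cols);
        d.shape = {1, cols};
        return d;
    };

    NDData block{{0.1f, -0.4f, 0.2f, 0.8f}, {2, 2}};
    NDData out = compose({normalize, first_row})(block);
    std::printf("%zu values, first = %f\n", out.values.size(), out.values.front());
}
```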

All components remain composable and concurrent. Processing domains are encoded via bit-field tokens, enabling type-safe cross-modal coordination.
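
A small sketch of the bit-field token pattern; the domain names and bit layout here are assumptions, not MayaFlux's actual encoding.

```cpp
// Illustrative sketch only: each processing domain occupies one bit, so a component
// can declare the domains it touches and connections can be checked before routing.
#include <cstdint>
#include <cstdio>

enum class Domain : std::uint8_t {
    None    = 0,
    Audio   = 1u << 0,
    Visual  = 1u << 1,
    Control = 1u << 2,
};

constexpr Domain operator|(Domain a, Domain b) {
    return static_cast<Domain>(static_cast<std::uint8_t>(a) | static_cast<std::uint8_t>(b));
}
constexpr bool has(Domain set, Domain flag) {
    return (static_cast<std::uint8_t>(set) & static_cast<std::uint8_t>(flag)) != 0;
}

int main() {
    // A node that consumes audio features and drives a visual parameter.
    constexpr Domain node_domains = Domain::Audio | Domain::Visual;

    // The scheduler (or a compile-time check) can verify a cross-modal connection.
    if (has(node_domains, Domain::Visual))
        std::printf("node participates in the visual domain\n");
    static_assert(has(node_domains, Domain::Audio), "audio input required");
}
```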

Current Implementation Status

✓ Production-Ready: live-coding C++20 JIT compilation (Lila, LLVM 21), lock-free sample-accurate audio node graphs, comprehensive testing (700+ tests), 100,000+ lines of infrastructure, developed independently since March 2025.
✓ Proof-of-Concept: Vulkan graphics pipeline (CPU → GPU unified data flow), cross-domain synchronization, NDData containers feeding GPU rendering.
→ In Development: GPU compute shader integration, complex ND visual pipelines, full audio-visual feedback loops, advanced scheduling.

The system already demonstrates the paradigm at audio scale. Graphics POC validates that the architecture scales across domains.

Why This Matters

Existing tools inherited assumptions from analog hardware: separate clocks, translation layers between domains, UI-first rather than computation-first.

MayaFlux asks: What if we started digital? Not simulating hardware, but embracing computational possibilities that only exist in the digital realm: recursion, data-driven pipelines, real-time code modification, unified cross-modal processing.

This isn't iteration on existing paradigms. It's a different computational substrate.

For Researchers & Developers

If you're interested in:

  • How to actually implement real-time DSP without sacrificing flexibility
  • Why coroutines enable new compositional paradigms
  • What happens when you treat GPU and CPU as unified processing
  • Building production infrastructure for algorithmic composition

This is the foundational implementation. Everything is open source.

The Paradigm Shift in One Sentence

Instead of asking "how do I optimize this for audio?" or "how do I make graphics precise?", MayaFlux asks: "What if audio, graphics, and algorithmic composition were just different modalities of the same computational material?"
📄 Technical Docs · 🧩 Source Repo (pre-alpha)