MayaFlux 0.1.0
Digital-First Multimedia Processing Framework
MayaFlux represents a fundamental rethinking of creative computation—moving beyond analog-inspired metaphors and file-based workflows into true digital-first paradigms. Rather than simulating hardware or accommodating legacy constraints, MayaFlux treats data transformation as the primary creative medium.
This shift requires new ways of thinking about creative processes. Instead of "programming" versus "composing," MayaFlux recognizes that data transformation is creative expression. Mathematical relationships become creative decisions, temporal coordination becomes compositional structure, and multi-dimensional data access becomes creative material selection.
Critically, MayaFlux moves beyond the artificial disciplinary separation of audio and visual processing that dominates contemporary creative software. Audio samples at 48 kHz, pixels at 60 Hz, and compute results all flow through identical transformation primitives—because they are all just data. The only meaningful difference is the timing context (domain) in which they're processed. A node that transforms one unit at a time operates identically whether that unit is a float sample, a pixel, or a compute result. A buffer that accumulates moments until release works identically for audio, visual, or spectral data. The paradigms that follow are domain-agnostic by design.
The architecture emerges from four foundational paradigms that work compositionally rather than separately: Nodes, Buffers, Coroutines, and Containers.
Each paradigm operates with computational precision while remaining expressively flexible. They compose naturally across audio, visual, and compute domains, creating complex multi-modal creative workflows from simple, well-defined transformation primitives.
This document explores each paradigm through practical examples that demonstrate both their individual capabilities and their compositional relationships across computational domains. The focus remains on digital thinking—embracing computational possibilities that have no analog equivalent, and treating all data (whether it sounds, displays, or computes) through unified transformation infrastructure.
Nodes represent moments of transformation in the digital reality—points where information becomes something new. Each node processes one unit at a time, creating precise, intentional change in the flow of data.
Think of nodes not as devices or tools, but as creative decisions made manifest. They can craft new information (generators) or reshape existing streams (processors). Any concept that can be expressed as unit-by-unit transformation can become a Node.
Nodes embody mathematical relationships through polynomials, make logical decisions through boolean operations, channel chaos through stochastic patterns, sculpt existing flows through filtering, or manifest familiar synthesis paradigms like sine waves, phasors, and impulses.
The rate of processing, i.e. how often individual units evaluate and transition to the next, is governed by the concept of Domains. A node can operate in the Audio domain (processing at sample rate), the Graphics domain (processing at frame rate), the Compute domain (processing at GPU compute rate), or Custom domains (user-defined rates). The fundamental transformation logic remains identical; only the timing context changes.
Nodes can be defined using the fluent mechanism, API-level convenience wrappers, or by directly creating modern C++ shared pointers.
Each path serves different creative moments: fluid thinking, structured creation, or precise manipulation. The key insight: the node definition syntax is identical across audio, visual, and compute domains; only the semantic content (what the node does) changes.
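A minimal sketch of the three creation paths follows. Only vega.Random(UNIFORM) appears elsewhere in this document; the other identifiers (MayaFlux::create_sine, SineNode) are placeholders, not confirmed API.

```cpp
// Fluent mechanism (vega.Random appears later in this document).
auto noise = vega.Random(UNIFORM);

// Hypothetical API-level convenience wrapper.
auto sine = MayaFlux::create_sine(440.0f);

// Explicit creation via a modern C++ shared pointer; SineNode is a placeholder class name.
auto explicit_sine = std::make_shared<SineNode>(440.0f);
```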
Nodes connect through >>, creating streams of transformation. Each connection point represents a decision about how information should evolve. Critically, these streams work across domains:
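A sketch of stream composition with >>. The filter and gain factories here are assumed names used purely for illustration; per the paragraph above, the same operator would chain graphics or compute nodes identically.

```cpp
auto noise  = vega.Random(UNIFORM);
auto filter = vega.LowPass(1200.0f);   // hypothetical filter node
auto gain   = vega.Gain(0.5f);         // hypothetical gain node

// Each >> is a decision about how information evolves as it flows downstream.
noise >> filter >> gain;
```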
The flow doesn't just pass data—it creates relationships where each transformation influences and is influenced by its neighbors.
Since nodes operate with unit-by-unit precision, they become temporal anchors, places where you can attach other creative processes to the exact rhythm of transformation:
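For example, a hypothetical per-unit hook (the on_process callback name is an assumption, not confirmed API) could attach another creative process to a node's exact processing rhythm.

```cpp
auto pulse = vega.Impulse(1.0f);   // placeholder impulse node

// Hypothetical hook: react at the precise moment each unit is produced.
pulse->on_process([](double sample) {
    if (sample > 0.5) {
        // trigger a visual or control event exactly when the impulse fires
    }
});
```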
This turns computational precision into creative opportunity: every processing moment becomes a potential point of creative intervention.
Buffers are temporal gatherers – they don't store data, they accumulate moments until they reach fullness, then release their contents and await the next collection.
Think of them as breathing spaces in the data flow, where individual units gather into collective expressions. Unlike nodes that transform unit-by-unit, buffers work with accumulated time – they collect until they have enough information to pass along as a temporal block.
A buffer's life cycle is simple: gather → release → await → gather. This cycle creates the temporal chunking that makes certain kinds of transformation possible - operations that need to see patterns across time rather than individual moments. This lifecycle is identical whether the buffer collects audio samples, pixels, or compute results.
The transient-collector paradigm allows buffers to work with more than just float or long float (double) data types. They can work with images, textures, text (strings), and basically any simple data type that C++ accommodates by default. However, we will focus only on AudioBuffers in this document.
The default audio buffer works with high-precision decimals (doubles). MayaFlux also provides other buffer types derived from AudioBuffers, and much like nodes, buffers can be created in multiple ways.
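A sketch of two creation paths. The factory names (create_audio_buffer, create_node_buffer) are assumptions; the NodeBuffer behavior they illustrate is described in the default-processor discussion below.

```cpp
// A plain audio buffer that accumulates 512 double-precision samples per cycle.
auto buffer = MayaFlux::create_audio_buffer(512);

// A NodeBuffer that fills itself by evaluating a node unit by unit
// until the buffer size is reached.
auto noise       = vega.Random(UNIFORM);
auto node_buffer = MayaFlux::create_node_buffer(noise, 512);
```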
Unlike nodes, which are essentially mathematical expressions evaluated unit by unit, buffers are simply temporal accumulators. However, there is little expressive or computational value in simply holding on to data for a short period, doing nothing, and passing it along.
Buffers work with BufferProcessors that are attached to them to handle operations on the accumulated data.
It might help to visualize buffers as facilitators: processing happens to them rather than being performed by them.
Every buffer carries a default processor: its natural way of handling accumulated data. While this is an external tool applied to the buffer, it defines the buffer's inherent behavior for each cycle. The NodeBuffer from the earlier example uses a NodeProcessor that evaluates the specified node unit by unit until the buffer size is complete. The stream buffer has a default AccessProcessor that "acquires" 512 consecutive samples from the stream's source container. The basic audio buffer has no default processor, but it exposes methods to attach one.
The default processor defines the buffer's natural expression—what it does with accumulated data when left to its own behavior, or how it accumulates data to begin with.
BufferProcessors are standalone features that require a buffer to process, but they are not specific to a single buffer. Each processor can be attached to multiple buffers, whether as the same processor instance or as several instances of the same class attached to different buffers. More information on buffers and processors can be found here.
On the other hand, buffers themselves accommodate multiple processors using a BufferProcessingChain.
It is important to note that execution within a processing chain is sequential, determined by the order of registration.
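A sketch of a chain with two processors. The class and method names here are assumptions, shown only to illustrate the registration order.

```cpp
// buffer: an AudioBuffer created earlier.
BufferProcessingChain chain;
chain.add(std::make_shared<NormalizeProcessor>());  // hypothetical processor, runs first
chain.add(std::make_shared<WindowProcessor>());     // hypothetical processor, runs second
buffer->set_processing_chain(chain);                // hypothetical attachment method
```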
Apart from the available list of default processors, it is straightforward to create your own.
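As a minimal sketch, assuming BufferProcessor exposes a virtual process hook over a buffer (the actual base-class interface may differ):

```cpp
// Hypothetical signature; only get_data() is grounded in the examples below.
class HalveProcessor : public BufferProcessor {
public:
    void process(AudioBuffer& buffer) override {
        for (auto& sample : buffer.get_data()) {
            sample *= 0.5;   // attenuate every accumulated sample
        }
    }
};
```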
If you are unfamiliar with creating C++ classes, or your overall exposure to programming does not yet accommodate such an explicit creation paradigm, MayaFlux offers several convenience methods to achieve the same.
Quick processors
A quick process can be any function that accepts a buffer as input and applies any desired operation to the buffer's data. Quick processes also allow using any previously available value or data inside the function. The specifics of how this works require an understanding of lambdas in programming, but suffice it to say that the [...] capture brackets allow you to mention any data you want to use:
```cpp
auto noise = vega.Random(UNIFORM);
auto my_num = 3.16f;

MayaFlux::attach_quick_process([noise, my_num](auto buffer) {
    // Actual processing: scale every accumulated sample by a noise value
    auto& data = buffer->get_data();
    for (auto& sample : data) {
        sample *= noise->process_sample(my_num);
    }
});
```
Quick processes can be attached to individual buffers, to the RootAudioBuffer (i.e., all buffers in a single channel), or to all channels via pre/post process globals:
```cpp
MayaFlux::attach_quick_process(func, buffer);
MayaFlux::attach_quick_process_to_channel(func, channel_id);
MayaFlux::register_process_hook(func, PREPROCESS);
```
Coroutines represent a fundamental shift in how we think about time in digital creation. They're not "sequencers" that trigger events, but temporal coordination primitives that let different processes develop their own relationship with time while remaining connected to the larger creative flow.
Think of coroutines as living temporal expressions: each one can pause, wait, coordinate with others, and resume according to its own creative logic. They transform time from a linear constraint into a malleable creative material that you can shape, stretch, layer, and weave together.
In MayaFlux, coroutines exist in two interconnected namespaces: Vruta (the scheduling infrastructure) and Kriya (the creative temporal patterns). Together, they create a system where time becomes compositional.
Unlike traditional sequencing, coroutines let each process maintain its own sense of time while coordinating with others:
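A sketch of two coexisting temporal processes, assuming a Vruta task type and Kriya time awaiters; every identifier here is a placeholder, not confirmed API.

```cpp
Vruta::Task slow_swell() {
    while (true) {
        raise_filter_cutoff();              // hypothetical creative action
        co_await Kriya::wait_seconds(2.0);  // this process breathes in seconds
    }
}

Vruta::Task fast_flicker() {
    while (true) {
        flash_texture();                    // hypothetical creative action
        co_await Kriya::wait_frames(3);     // this one thinks in graphics frames
    }
}
```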
Each coroutine co_awaits different temporal conditions, creating multimodal time where multiple temporal flows coexist and interact.
Much like buffers and nodes, coroutines have fluent flow methods, a global convenience API, and explicit definition support. The primary difference is that vega has no coroutine representation, because vega enables simple, non-verbose creation rather than time manipulation at creation.
Coroutines coordinate across computational domains through specialized awaiters that understand each context's timing characteristics:
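A sketch of a single coroutine touching three domains; the awaiter names are assumptions that illustrate the idea of each domain exposing its own timing context.

```cpp
Vruta::Task cross_domain_gesture() {
    co_await Kriya::audio_samples(480);    // wait 480 samples in the Audio domain
    co_await Kriya::graphics_frames(1);    // wait one frame in the Graphics domain
    co_await Kriya::compute_dispatch();    // wait for the next Compute-domain pass
}
```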
Kriya provides creative temporal primitives that capture common temporal relationships.
EventChains create temporal choreography through fluent composition:
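A sketch of the fluent style, with assumed method names and placeholder creative actions.

```cpp
auto chain = Kriya::EventChain()
    .then([] { start_texture_sweep(); })   // hypothetical creative action
    .wait_seconds(1.5)
    .then([] { release_filter_gate(); })   // hypothetical creative action
    .loop(4);                              // repeat the choreography four times
```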
Kriya coordinates with buffer processing for temporal data capture and analysis.
This creates temporal architecture where time becomes malleable creative material, enabling coordination across nodes, buffers, and computational domains through sample-accurate timing and compositional temporal relationships.
Containers represent a paradigm shift in creative data management—moving beyond traditional file-based workflows into dynamic, multi-dimensional data structures that treat information as compositional material. Unlike conventional audio tools that separate "files" from "processing," containers unify data storage, transformation, and access into coherent creative units.
This requires forethought into creative data management because existing digital tools rarely provide infrastructure for general creative data categorization. Containers in MayaFlux recognize that creative data exists in multiple modalities (audio, visual, spectral), dimensions (time, space, frequency), and organizational structures (regions, groups, attributes) that should be accessible through unified interfaces rather than disparate format-specific tools.
Containers work with **DataVariant** (unified type storage), **DataDimensions** (structural descriptors), **Regions** (spatial/temporal selections), and **RegionGroups** (organized collections) to create a coherent data architecture that scales from simple audio files to complex multi-modal creative datasets.
Containers organize data through dimensional thinking rather than format constraints:
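For illustration, assuming DataDimensions can be described directly (the constructor arguments shown are assumptions, not confirmed API):

```cpp
// One second of stereo audio: two channels of 48000 frames.
DataDimensions audio_dims{2, 48000};

// A single RGBA video frame: width, height, and four color components.
DataDimensions image_dims{1920, 1080, 4};
```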
Containers automatically detect and adapt to different data modalities based on dimensional structure.
SoundFileContainer demonstrates the container paradigm through practical audio file handling:
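A sketch of loading a file into a container; the constructor arguments, file name, and accessor shown are assumptions.

```cpp
// Hypothetical usage: the file path and accessor names are placeholders.
auto recording = std::make_shared<SoundFileContainer>("field_recording.wav");
auto dims      = recording->get_dimensions();   // assumed accessor returning DataDimensions
```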
Regions provide precise, multi-dimensional data selection without copying entire datasets:
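Continuing the sketch above, a region might select two seconds of the recording starting one second in; the selection method shown is an assumption.

```cpp
// Select frames [48000, 48000 + 96000) without copying the underlying data.
Region verse_one = recording->select_region(48000, 96000);   // hypothetical method
```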
RegionGroups organize multiple regions with rich metadata for creative workflow management:
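And a sketch of grouping regions with metadata, again with assumed method names.

```cpp
RegionGroup verses("verses");
verses.add(verse_one);                    // the region selected above
verses.set_attribute("mood", "sparse");   // hypothetical metadata attribute
```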
This creates creative data architecture where information becomes compositional material that can be selected, transformed, and organized through precise multi-dimensional access patterns, enabling workflows that treat data as a creative medium rather than static files.
These four paradigms—Nodes, Buffers, Coroutines, and Containers—form the foundational data transformation architecture of MayaFlux. Together, they create a unified computational environment where information flows through precise transformations rather than being constrained by analog metaphors or format boundaries.
**Nodes** provide unit-by-unit transformation precision, turning mathematical relationships into creative decisions. **Buffers** create temporal gathering spaces where individual moments accumulate into collective expressions. **Coroutines** transform time itself into compositional material, enabling complex temporal coordination across multiple processing domains. **Containers** organize multi-dimensional data as creative material, supporting everything from simple audio files to complex cross-modal datasets.
The power emerges from their compositional relationships:
This architecture enables digital-first creative thinking where computational precision becomes creative opportunity. Unit-by-unit transformations coordinate with temporal accumulation, while coroutines orchestrate complex timing relationships across multi-dimensional data structures.
Rather than separating "programming" from "composing," MayaFlux treats data transformation as the fundamental creative act. Mathematical relationships, temporal coordination, and dimensional data access become unified aspects of a single creative expression.
The next stage of this digital paradigm involves Domains and Control — how these transformation primitives coordinate across different computational contexts and how creative control emerges from the interaction between precise computational timing and expressive creative intent.