MayaFlux 0.1.0
Digital-First Multimedia Processing Framework
Digital creative systems require more than individual transformation units—they need computational contexts that coordinate timing, resource allocation, and execution strategies across different processing requirements. MayaFlux introduces Domains as unified computational environments where Nodes, Buffers, and Coroutines operate with shared understanding of temporal precision, execution location, and coordination patterns.
Rather than forcing all processes into a single temporal framework, Domains enable multi-modal computational thinking where audio-rate precision, visual-frame coordination, and custom temporal patterns coexist and interact naturally. Each Domain represents a complete processing configuration that spans all three subsystems, creating coherent computational environments for different creative requirements.
Each subsystem defines its processing characteristics through ProcessingTokens: computational identities that specify how information should be handled within that domain:
Considering the unit-by-unit processing nature of Nodes, their domains pertain to the rate at which each unit is processed. Hence, Nodes support the following ProcessingTokens:
As the processors attached to buffers operate on the entire data collection, the domain system for Buffers requires different methodologies and accommodates additional features. It is not limited to the rate of processing; it also covers the device onto which the processing frame can be offloaded. Batch processing also affords features such as sequential vs. parallel execution. Buffers::ProcessingToken contains the following set of bitfield components to specify execution characteristics:
The following combined tokens are available:
SAMPLE_RATE + CPU_PROCESS + SEQUENTIAL
FRAME_RATE + GPU_PROCESS + PARALLEL
SAMPLE_RATE + GPU_PROCESS + PARALLEL
Coroutines need processing tokens similar to Nodes, i.e. tick-rate accuracy. Coroutines also benefit from being able to suspend, resume, or restart on demand.
Routines created via Vruta (and the Scheduler) can be configured to use the following tokens:
Domains combine these tokens into coherent computational contexts using bitfield composition. Each Domain represents a complete processing configuration:
This composition enables domain decomposition where complex computational requirements can be broken into constituent processing characteristics and recombined as needed:
Domains enable cross-modal coordination where different temporal patterns interact naturally:
MayaFlux operates on a philosophy of default automation with expressive override. The engine provides intelligent automation for common creative workflows while enabling precise user control when specific computational patterns are required. This is tightly coupled with the philosophy that every practical aspect of the API should yield to override, substitution, or disabling.
By default, operations use the engine's managed systems for optimal performance and coordination. This pertains to Nodes, Buffers, and Coroutines. Containers, due to their non-cyclical nature, do not have enforced engine defaults, but can opt in to them.
Engine management happens through multiple systems and managers for each paradigm, and at every step they require explicit domain specification (often automated to defaults via API wrappers).
Here is a breakdown of each component flow in engine management and examples for overriding with user control.
The Engine class, which functions as the default coordinator and lifecycle manager for Backends and Subsystems, also manages the central node coordinator called NodeGraphManager.
While the aforementioned backends, subsystems and Engine itself can be untangled from central management and replaced with different systems, that is a conversation for a different time.
The engine's NodeGraphManager automatically:
When initializing via vega, e.g. vega.Sine()[0] | Audio, the instruction is: create the node -> get the default NodeGraphManager from the engine -> register it to Channel 0's root -> at Domain::Audio, which resolves to Nodes::ProcessingToken::AUDIO_RATE.
The same node can be registered directly with NodeGraphManager::add_to_root(shared_ptr(node), ProcessingToken, channel)
Calling MayaFlux::create_node is functionally identical to vega, except the Domain is implicitly initialized to Audio by default.
Every aspect of Node management can be controlled explicitly for precise computational patterns:
The control is not limited to NodeGraphManager internals. It is possible to replace the Engine's default node graph manager: get_context()->get_node_graph_manager() = std::make_shared<Nodes::NodeGraphManager>(args)
When a node is registered to a channel in NodeGraphManager, it is added to a RootNode. There is only one root node per processing token per channel, as it acts as the central registry and lock-free processing-stage manager for nodes.
RootNode exposes process_sample() and process_batch(num_samples), which can be called externally. The process callback checks the channel-registration and processing state of each node, handles processing of each node, requests a node state reset for the channel the RootNode is operating on, and sums all samples.
RootNode itself does not operate based on ProcessingTokens, but one is required at construction to facilitate Engine integration. When NodeGraphManager is initialized by the Engine, it automatically sets up RootNodes based on Token and number of channels.
Root Nodes can be used outside of the channel context (or outside of the NodeGraphManager context -> Engine context), as RootNode still provides the most optimal and lock-free way of coordinating process() across multiple nodes.
Use RootNode::register_node(shared_ptr node) to add a node to Root. The registration triggers a guarded atomic operation that checks for current processing state, and adds the node only when it is safe. RootNode::unregister_node behaves the same for removing a node.
Note: As RootNode only handles its own graph, it is unaware of registration across channels beyond processing state check. So, adding or removing from root does not update the channel registration status (bitmask) of the node.
Nodes need not be added to RootNode or NodeGraphManager to enable processing. Calling node->process_sample() or node->process_batch(num_samples) evaluates the same as any automated procedure.
The examples shown previously, node1 >> node2 or node1 * node2, are fluent methods for chaining, and Engine registration occurs implicitly.
The first example is facilitated by a type of node called ChainNode. When the >> overload is used, ChainNode::initialize() is called, which registers the nodes with Engine methods.
The second example creates a type of node called BinaryOpNode that applies a binary operation to the nodes' outputs as registered by a callback handle. Like ChainNode, the fluent * or + calls BinaryOpNode::initialize() for engine registration.
Similar to NodeGraphManager, the Engine also handles lifecycle and visibility management of buffers via BufferManager. The role of the buffer manager is to:
When using the fluent structure vega.AudioBuffer[0] | Parallel, the instruction is: create the AudioBuffer -> set it to channel 0 -> get the default BufferManager from the engine -> register it to the AUDIO_PARALLEL token.
Using MayaFlux::create_buffer (or any other buffer-namespace creation method) internally evaluates to creating the specified entity and handling the default registration procedure with the Engine-controlled BufferManager.
Direct creation methods for the above:
Much like NodeGraphManager, it is possible to create custom processing functions
Similar to RootNode in nodes, when adding a buffer to a channel in BufferManager, it is added to that channel's RootBuffer. As this document focuses on audio, RootAudioBuffer will be used as the exploration point.
When BufferManager is initialized, it automatically creates one RootAudioBuffer per audio channel per token. RootAudioBuffer works much the same way as RootNode where:
Buffers can be directly added to a RootAudioBuffer via manager->get_root_audio_buffer(token, channel)->add_child_buffer(buffer). Similar to RootNode, a buffer is registered only when it is safe.
The default processor of the RootAudioBuffer handles most of the features listed above, whereas the FinalProcessor handles limiting and normalizing.
Buffers and Processors can exist outside of the BufferManager context. Buffer is an interface class that AudioBuffer inherits from. auto buffer = std::make_shared<AudioBuffer>(0, 512);
The only default property of concern is default_processor, which was introduced in the previous document. But that can be overridden with AudioBuffer::set_default_processor()
Buffers also accept a BufferProcessingChain, which allows attaching a series of BufferProcessors that evaluate in order of processor registration.
Sharing data between buffers can still be accommodated outside of BufferManager.
The methods for extending processors themselves were introduced in the previous document, so they are skipped here.
Temporal coordination in MayaFlux operates through two interconnected namespaces: Vruta (scheduling infrastructure) and Kriya (creative temporal patterns). Engine manages coroutine coordination through TaskScheduler, similar to how it handles nodes and buffers.
The Engine provides central lifecycle management for coroutines via TaskScheduler, which coordinates temporal processing across different domains.
TaskScheduler's responsibilities:
When using MayaFlux::schedule_metro, the internal awaiter SampleDelay is used to construct a Vruta::SoundRoutine frame -> store it in a shared_ptr -> call TaskScheduler::add_task, which extracts the token based on the awaiter and adds it to the graph.
When using temporal fluent operations like node >> Time(2.f), the instruction creates a coroutine -> gets the default TaskScheduler from the engine -> registers it for the appropriate domain -> implicitly creates a Kriya::NodeTimer and registers a one-shot time operation.
The Kriya namespace contains a variety of coroutine designs for fluent and expressive usage of coroutines, beyond simple timing orchestration.
Kriya::metro, Kriya::schedule, Kriya::pattern and Kriya::line have already been introduced previously, which need not be created using API wrappers such as MayaFlux::schedule_metro.
However, unlike Nodes and Buffers, they have no process callback; their procedure and state management are orchestrated by the internal clock mechanism (Read, for more information).
Kriya also exposes one-shot timers, timed events, and timed data-capture mechanisms.
Each of these operations allows expressive routing not just of data but also of procedure. The nature of MayaFlux's coroutine frame allows wrapping any coroutine inside recursive coroutines. Each of the methods already wraps different coroutines based on the chained operation.
When a coroutine is registered to a domain in TaskScheduler, it operates within that domain's Clock system. The actual clock implementations are:
Clock systems expose tick(units), current_position(), current_time(), rate(), and reset(). The TaskScheduler's process_token() method handles temporal state advancement, processing unit calculation, and coroutine suspension/resumption coordination.
Clocks are automatically created when a processing token is first used through the ensure_domain() method:
Coroutines can be created directly using the SoundRoutine API and managed through the TaskScheduler:
The actual awaiter implementations available for coroutine control:
This architecture enables computational thinking as creative expression—where the choice between automatic coordination and explicit control becomes part of the creative decision-making process.
Domain composition allows creators to think in terms of unified computational environments while maintaining the flexibility to optimize for specific creative requirements.
For an advanced, architecture-level presentation of the same topic, please refer to Advanced Context Control.