ADC25 Virtual Poster Presentation - Technical Deep Dive | Independent Developer | Early-Stage Architectural Research
November 2025
Copyright © 2025 Ranjith Hegde / MayaFlux Project
Licensed under GPL-3.0 | View License
Note: This document describes architectural patterns developed over 8 months of research. Implementation details are simplified for clarity. Full source code available at github.com/MayaFlux/MayaFlux (pre-alpha state until January 2026).
Explore the first MayaFlux tutorial here: Sculpting Data Part I
MayaFlux demonstrates C++20-enabled unified multimedia processing through complete architectural composability. The framework implements lock-free atomic synchronization, coroutine-based temporal coordination, grammar-driven computation, and N-dimensional data abstractions that treat audio, graphics, and arbitrary data streams as unified numerical transformations rather than separate domains constrained by analog metaphors.
This document presents the architectural foundations, implementation strategies, and paradigm shifts that enable truly digital-first creative computation across both audio and visual processing domains.
Traditional multimedia software treats digital processing as a simulation of analog hardware, which creates artificial constraints.
MayaFlux embraces true digital paradigms:
Rather than separating “programming” from “composing” or “designing,” MayaFlux treats data transformation as the fundamental creative act. Mathematical relationships become creative decisions. Temporal coordination becomes compositional structure. Multi-dimensional data access becomes creative material selection. GPU compute shaders process audio spectra. Audio analysis drives visual shader parameters.
This isn’t just about efficiency—it’s about enabling creative workflows that cannot exist in analog-inspired systems. When a spectral analysis from audio processing directly feeds a compute shader that modulates a visual buffer’s texture coordinates, with sample-accurate synchronization via coroutines—that’s a creative pattern analog metaphors cannot express.
MayaFlux is built on five interconnected paradigms that apply equally to audio and graphics:
Nodes provide single-sample or single-frame transformation where mathematical relationships become creative decisions. Each node operates at unit precision with lock-free atomic registration.
Key characteristics:
Fluent operators (>>, *, +)
Single-unit processing via process_sample() or process_frame()
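A minimal sketch of this idea, assuming a Node base class with an overridable process_sample() as used throughout this document (the Gain node itself is hypothetical):
// Hypothetical node: one sample in, one sample out
class Gain : public Node {
public:
    explicit Gain(double amount) : m_amount(amount) {}

    // Unit-precision transformation - the mathematical relationship is the creative decision
    double process_sample(double input) override { return input * m_amount; }

private:
    double m_amount;
};
Such nodes then compose through the fluent operators listed above (for example, chaining a generator into a gain stage with >>), as the audio infrastructure section later describes.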
Buffers accumulate individual moments into collective expressions. Unlike traditional buffers that “store” data, MayaFlux buffers are transient collectors that gather → release → await. This applies to both AudioBuffers and VKBuffers (GPU buffers).
Key characteristics:
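As a hedged illustration of that gather → release → await cycle, the Kriya::BufferPipeline interface detailed in the coroutine section below can be read as exactly this lifecycle (scheduler, buffer_manager, audio_buffer, and output_stream are assumed handles):
// gather: collect N processing cycles of data
// release: hand the accumulated batch to a transformation
// await: the buffer returns to collecting for the next cycle
auto pipeline = Kriya::BufferPipeline::create(*scheduler, buffer_manager)
    ->with_strategy(Kriya::ExecutionStrategy::PHASED);
*pipeline
    >> Kriya::BufferOperation::capture_from(audio_buffer).for_cycles(4)
    >> Kriya::BufferOperation::transform([](Kakshya::DataVariant& data, uint32_t cycle) {
        return data; // pass-through here; any batch transformation goes in this step
    })
    >> Kriya::BufferOperation::route_to_container(output_stream);
pipeline->execute_buffer_rate();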
C++20 coroutines transform time into creative material, enabling complex temporal coordination impossible with traditional callbacks. Sample-accurate audio coordination and frame-accurate graphics coordination use the same underlying coroutine infrastructure.
Key characteristics:
Containers organize data as compositional material through region-based access with metadata organization. The same abstractions work for audio samples, spectral data, pixel arrays, and arbitrary tensors.
Key characteristics:
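A hedged sketch using the Region and NDimensionalContainer interfaces detailed later in this document (the container handle and intro_metadata are illustrative):
// Region-based access: selecting data is selecting creative material
auto intro = Region::audio_span(0, 48000, 0);            // first second at 48 kHz, channel 0
container->add_region("intro", intro, intro_metadata);   // attach organizational metadata
auto material = container->get_region_data(intro);       // DataVariant usable by any transformer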
Compute Matrix allows sequencing or composing different ComputeOperations that work on any data modality through unified interfaces (a short sketch follows the characteristics list below).
Key characteristics:
Analyzers, Extractors, Sorters, and Transformers
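A hedged sketch of such sequencing through the ComputationPipeline interface shown later in this document (any_modality_data is illustrative):
// The same pipeline interface regardless of data modality
auto result = ComputationPipeline(grammar)
    .add_input(any_modality_data)                    // audio, spectral, image, tensor...
    .with_context(ComputationContext::MATHEMATICAL)
    .execute();                                      // grammar selects the matching ComputeOperations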
Real-time multimedia processing requires coordination that never blocks the audio callback or the render loop.
MayaFlux implements wait-free registration and atomic accumulation patterns across both audio and graphics domains:
Each processing domain (per channel, per token) has a RootNode that acts as the central coordinator:
// RootNode provides lock-free node registration
class RootNode {
std::atomic<bool> m_is_processing{false};
std::vector<std::shared_ptr<Node>> m_Nodes;
// Fixed-size slots for registrations deferred during processing (size illustrative)
struct PendingOp { std::atomic<bool> active{false}; std::shared_ptr<Node> node; };
std::array<PendingOp, 16> m_pending_ops;
std::atomic<uint32_t> m_pending_count{0};
bool register_node(shared_ptr<Node> node) {
if (m_is_processing.load(std::memory_order_acquire)) {
if (m_Nodes.end() != std::ranges::find(m_Nodes, node)) {
uint32_t state = node->m_state.load();
if (state & Utils::NodeState::INACTIVE) {
atomic_remove_flag(node->m_state, Utils::NodeState::INACTIVE);
atomic_add_flag(node->m_state, Utils::NodeState::ACTIVE);
}
return true; // already registered - just reactivated
}
for (auto& m_pending_op : m_pending_ops) {
bool expected = false;
if (m_pending_op.active.compare_exchange_strong(
expected, true,
std::memory_order_acquire,
std::memory_order_relaxed)) {
m_pending_op.node = node;
atomic_remove_flag(node->m_state, Utils::NodeState::ACTIVE);
atomic_add_flag(node->m_state, Utils::NodeState::INACTIVE);
m_pending_count.fetch_add(1, std::memory_order_relaxed);
return true; // deferred until the current processing pass completes
}
}
while (m_is_processing.load(std::memory_order_acquire)) {
m_is_processing.wait(true, std::memory_order_acquire);
}
}
m_Nodes.push_back(node);
atomic_add_flag(node->m_state, Utils::NodeState::ACTIVE);
return true;
}
double process_sample() {
if (!preprocess())
return 0.;
auto sample = 0.;
for (auto& node : m_Nodes) {
uint32_t state = node->m_state.load();
if (!(state & Utils::NodeState::PROCESSED)) {
auto generator = std::dynamic_pointer_cast<Nodes::Generator::Generator>(node);
if (generator && generator->should_mock_process()) {
generator->process_sample(0.);
} else {
sample += node->process_sample(0.);
}
atomic_add_flag(node->m_state, Utils::NodeState::PROCESSED);
} else {
sample += node->get_last_output();
}
}
postprocess();
return sample;
}
};
Key insights:
The NodeGraphManager coordinates multiple RootNodes across channels and tokens:
// Each token/channel combination has its own RootNode
std::unordered_map<ProcessingToken, std::vector<std::unique_ptr<RootNode>>> m_root_nodes;
// Concurrent registration across domains
void register_node_to_channel(shared_ptr<Node> node, uint32_t channel) {
auto token = node->get_processing_token();
auto& root = m_root_nodes[token][channel];
// Atomic registration - defers if currently processing
while (!root->register_node(node)) {
std::this_thread::yield();
}
// Update channel bitmask atomically
node->m_channel_mask.fetch_or(1u << channel);
}
This pattern extends to buffers (BufferManager with RootAudioBuffer and RootGraphicsBuffer) and coroutines (TaskScheduler), providing consistent lock-free coordination across all processing systems—audio and graphics alike.
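A hedged sketch of that symmetry (handles are assumed; the calls mirror examples elsewhere in this document):
// Same registration pattern, different managers and domains
buffer_manager->add_buffer(audio_buffer,  ProcessingToken::AUDIO_BACKEND);     // RootAudioBuffer path
buffer_manager->add_buffer(vertex_buffer, ProcessingToken::GRAPHICS_BACKEND);  // RootGraphicsBuffer path
scheduler->add_task(metro_routine);                                            // TaskScheduler path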
Vruta provides the foundational scheduling system for coroutine coordination:
class TaskScheduler {
// Clock systems for different temporal domains
std::unordered_map<ProcessingToken, std::unique_ptr<Clock>> m_clocks;
// Task hierarchies per domain
std::unordered_map<ProcessingToken, std::vector<shared_ptr<Routine>>> m_tasks;
void add_task(shared_ptr<Routine> routine) {
// Extract token from awaiter type
auto token = routine->get_processing_token();
// Register with appropriate clock
m_tasks[token].push_back(routine);
m_clocks[token]->register_listener(routine);
}
void process_token(ProcessingToken token, uint64_t units) {
// Advance clock
m_clocks[token]->tick(units);
// Process suspended coroutines
for (auto& task : m_tasks[token]) {
if (task->should_resume()) {
task->resume();
}
}
}
};
// Sample-accurate metro pattern
auto metro_routine = Kriya::metro(*scheduler, 0.25, []() {
trigger_event();
});
// Internally uses SampleDelay awaiter:
struct SampleDelay {
uint64_t samples_remaining;
bool await_ready() { return samples_remaining == 0; }
void await_suspend(coroutine_handle<> handle) {
// Register with SampleClock
scheduler->register_delay(this, samples_remaining);
}
};
Kriya builds expressive temporal constructs on Vruta’s foundation:
auto event_chain = MayaFlux::create_event_chain()
.then([]() { start_process(); }, 0.0)
.then([]() { modulate_filter(); }, 0.125)
.then([]() { trigger_release(); }, 0.5);
event_chain.start();
// Internally creates coroutine with timed suspension:
Routine execute_chain() {
co_await SampleDelay{ Utils::seconds_to_samples(0.125) };
start_process();
co_await BufferDelay{ 4 };
modulate_filter();
co_await FrameDelay{ 60 };
trigger_release();
}
void batch_accumulation_pipeline() {
auto scheduler = MayaFlux::get_scheduler();
auto buffer_manager = scheduler->get_buffer_manager();
auto pipeline = Kriya::BufferPipeline::create(*scheduler, buffer_manager)
->with_strategy(Kriya::ExecutionStrategy::PHASED);
*pipeline
>> Kriya::BufferOperation::capture_from(audio_buffer)
.for_cycles(20)
>> Kriya::BufferOperation::transform([](Kakshya::DataVariant& data, uint32_t cycle) {
const auto& accumulated = std::get<std::vector<double>>(data);
return process_batch(accumulated);
})
>> Kriya::BufferOperation::route_to_container(output_stream);
pipeline->execute_buffer_rate(10);
}
void streaming_buffer_modification() {
auto scheduler = MayaFlux::get_scheduler();
auto buffer_manager = scheduler->get_buffer_manager();
auto pipeline = Kriya::BufferPipeline::create(*scheduler, buffer_manager)
->with_strategy(Kriya::ExecutionStrategy::STREAMING);
*pipeline
>> Kriya::BufferOperation::capture_from(audio_buffer)
.for_cycles(1)
>> Kriya::BufferOperation::modify_buffer(audio_buffer, [](std::shared_ptr<Buffers::AudioBuffer> buf) {
auto& samples = buf->get_data();
for (auto& s : samples) {
s *= 0.9; // Simple gain reduction
}
}).as_streaming();
pipeline->execute_buffer_rate();
}
// One-shot timer
Timer timer(*scheduler);
timer.schedule(2.0, []() {
std::cout << "Two seconds elapsed" << std::endl;
});
// Node temporal control
NodeTimer node_timer(*scheduler, *graph_manager);
node_timer.play_for(sine_node, 2.0); // Play for 2 seconds
// Timed action with start/stop
TimedAction action(*scheduler);
action.execute(
[]() { std::cout << "Starting" << std::endl; },
[]() { std::cout << "Ending" << std::endl; },
3.0
);
MayaFlux implements two fundamental clock types with different coordination philosophies:
The SampleClock is passive—it doesn’t drive processing, but is updated by the audio backend and notifies listeners:
class SampleClock : public Clock {
std::atomic<uint64_t> m_current_sample{0};
uint64_t m_sample_rate;
public:
// Called by audio backend during callback
void tick(uint64_t samples) {
m_current_sample.fetch_add(samples, std::memory_order_release);
notify_listeners(samples);
}
uint64_t current_position() const {
return m_current_sample.load(std::memory_order_acquire);
}
double current_time() const {
return static_cast<double>(m_current_sample) / m_sample_rate;
}
};
Key characteristics:
Usage pattern:
// Audio backend updates SampleClock
void audio_callback(float* buffer, uint32_t frames) {
// Process nodes/buffers
process_audio_graph(buffer, frames);
// Update temporal state
sample_clock->tick(frames);
// Coroutines suspended on SampleDelay are notified
}
The FrameClock is active—it drives the GPU/Vulkan processing thread at target FPS:
class FrameClock : public Clock {
std::atomic<uint64_t> m_current_frame{0};
uint32_t m_target_fps;
std::chrono::steady_clock::time_point m_next_frame_time;
std::chrono::nanoseconds m_frame_duration;
public:
// Drives graphics rendering loop
void tick(uint64_t forced_frames = 0) {
auto now = std::chrono::steady_clock::now();
uint64_t frames_to_advance = forced_frames > 0 ? forced_frames : calculate_elapsed_frames(now);
if (frames_to_advance > 0) {
m_current_frame.fetch_add(frames_to_advance, std::memory_order_release);
update_fps_measurement(now);
m_last_tick_time = now;
m_next_frame_time = now + m_frame_duration;
notify_listeners(frames_to_advance);
}
}
std::chrono::nanoseconds time_until_next_frame() const {
auto now = std::chrono::steady_clock::now();
auto until_next = m_next_frame_time - now;
if (until_next.count() < 0) {
return std::chrono::nanoseconds(0);
}
return std::chrono::duration_cast<std::chrono::nanoseconds>(until_next);
}
bool is_frame_late() const {
return std::chrono::steady_clock::now() > m_next_frame_time;
}
};
Key characteristics:
Usage pattern:
// Graphics thread driven by FrameClock
void graphics_loop() {
while (running) {
frame_clock->tick();
// Process visual buffers
process_graphics_buffers();
// Render frame
render_to_swapchain();
// Sleep until next frame
auto sleep_duration = frame_clock->time_until_next_frame();
std::this_thread::sleep_for(sleep_duration);
}
}
This dual-clock architecture reflects a fundamental design insight: audio time is externally driven (the backend dictates when samples advance), while visual time is internally driven (the application paces frames toward a target FPS).
By implementing clocks with different ownership models, MayaFlux enables natural coordination between these temporal domains while maintaining their distinct characteristics.
Coroutines registered to SAMPLE_ACCURATE tokens listen to SampleClock updates, while FRAME_ACCURATE coroutines listen to FrameClock. This allows audio-visual synchronization through coordinated temporal notification without forcing one domain to match the other’s timing model.
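A hedged sketch of that coordination, using the Routine type and the SampleDelay/FrameDelay awaiters shown elsewhere in this document (the routine bodies are illustrative):
// Sample-accurate: resumed by SampleClock as the audio backend advances time
Routine audio_pulse() {
    while (true) {
        co_await SampleDelay{ Utils::seconds_to_samples(0.5) };
        trigger_audio_event();      // illustrative
    }
}

// Frame-accurate: resumed by FrameClock as the graphics loop advances frames
Routine visual_pulse() {
    while (true) {
        co_await FrameDelay{ 30 };  // every 30 frames
        update_visual_parameter();  // illustrative
    }
}
// Each routine is registered with the TaskScheduler under its own token (see add_task above)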
Traditional DSP frameworks require explicit operation selection:
// Traditional approach - manual operation selection
if (is_audio_data(input)) {
apply_fft(input);
} else if (is_spectral_data(input)) {
apply_ifft(input);
}
This becomes unwieldy for complex pipelines with multiple data modalities, contexts, and transformation requirements.
ComputationGrammar enables declarative operation matching based on input characteristics:
class ComputationGrammar {
struct Rule {
std::string name;
UniversalMatcher matcher;
ExecutionFunction executor;
ComputationContext context;
int priority;
};
std::vector<Rule> m_rules;
public:
// Define rule declaratively
RuleBuilder create_rule(const std::string& name) {
return RuleBuilder(*this, name);
}
// Execute matching rule
std::any execute(const std::any& input, const ExecutionContext& ctx) {
for (auto& rule : m_rules) {
if (rule.matcher.matches(input, ctx) && rule.context == ctx.context) {
return rule.executor(input, ctx);
}
}
throw std::runtime_error("No matching rule found");
}
};
grammar.create_rule("normalize_audio")
.with_context(ComputationContext::MATHEMATICAL)
.with_priority(100)
.matches_type<std::vector<double>>()
.executes([](const std::any& input, const ExecutionContext& ctx) {
auto data = safe_any_cast<std::vector<double>>(input);
double max_val = *std::max_element(data.begin(), data.end());
for (auto& sample : data) {
sample /= max_val;
}
return data;
})
.build();
auto spectral_matcher = UniversalMatcher::combine_and({
UniversalMatcher::create_type_matcher<std::vector<DataVariant>>(),
UniversalMatcher::create_context_matcher(ComputationContext::SPECTRAL),
UniversalMatcher::create_parameter_matcher("frequency_range", "audio")
});
grammar.create_rule("spectral_filter")
.matches_custom(spectral_matcher)
.executes([](const std::any& input, const ExecutionContext& ctx) {
// Apply frequency-domain filtering
auto data = std::any_cast<std::vector<DataVariant>>(input);
return apply_spectral_filter(data, ctx);
})
.build();
ComputationGrammar enables context-aware operation chains:
auto pipeline = ComputationPipeline(grammar);
// Pipeline automatically selects operations based on input
auto result = pipeline
.add_input(audio_data)
.with_context(ComputationContext::TEMPORAL)
.with_parameter("window_size", 1024)
.execute();
The grammar system matches rules in priority order, enabling hierarchical decision-making and exception handling patterns. This declarative approach is central to MayaFlux’s expressive power—transformations are specified by intent and characteristics rather than explicit procedural logic.
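For example, a hedged sketch of a specialized high-priority rule backed by a low-priority fallback (rule names and bodies are illustrative):
// Specialized rule: wins whenever its matcher and context apply
grammar.create_rule("clamped_normalize")
    .with_context(ComputationContext::MATHEMATICAL)
    .with_priority(200)
    .matches_type<std::vector<double>>()
    .executes([](const std::any& input, const ExecutionContext& ctx) {
        auto data = safe_any_cast<std::vector<double>>(input);
        // ...specialized handling...
        return data;
    })
    .build();

// Fallback rule: catches anything the specialized rules do not claim
grammar.create_rule("passthrough")
    .with_context(ComputationContext::MATHEMATICAL)
    .with_priority(0)
    .matches_type<std::vector<double>>()
    .executes([](const std::any& input, const ExecutionContext&) { return input; })
    .build();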
Real-time graphics and audio processing have historically been separate domains with incompatible architectures:
MayaFlux solves this through unified buffer abstractions, domain tokens, and Portal coordination layers that provide seamless CPU ↔︎ GPU data flow while maintaining the performance characteristics of each domain.
The Vulkan backend provides low-level GPU resource management:
class VulkanBackend {
vk::Instance m_instance;
vk::PhysicalDevice m_physical_device;
vk::Device m_device;
// Queue families for different workloads
vk::Queue m_graphics_queue;
vk::Queue m_compute_queue;
vk::Queue m_transfer_queue;
// Resource management
std::unique_ptr<VKMemoryAllocator> m_allocator;
std::unique_ptr<VKCommandPoolManager> m_command_pools;
std::unique_ptr<VKDescriptorManager> m_descriptors;
};
Key responsibilities:
VKBuffer is a first-class member of the Buffer processing chain, parallel to AudioBuffer:
class VKBuffer : public Buffer {
public:
enum class Usage {
STAGING, // Host-visible staging buffer (CPU-writable)
DEVICE, // Device-local GPU-only buffer
COMPUTE, // Storage buffer for compute shaders
VERTEX, // Vertex buffer
INDEX, // Index buffer
UNIFORM // Uniform buffer (host-visible when requested)
};
VKBuffer(size_t size_bytes, Usage usage, Kakshya::DataModality modality);
// Inherited from Buffer - integrates with processing chains
void set_default_processor(std::shared_ptr<BufferProcessor> processor) override;
std::shared_ptr<BufferProcessingChain> get_processing_chain() override;
// GPU-specific access
vk::Buffer get_vulkan_buffer() const;
void* map_memory();
void unmap_memory();
// Semantic layout for vertex data
void set_vertex_layout(const Kakshya::VertexLayout& layout);
std::optional<Kakshya::VertexLayout> get_vertex_layout() const;
};
Key insight: VKBuffer carries semantic metadata (modality, dimensions, vertex attributes) alongside Vulkan handles. This enables grammar-based processors to inspect buffer characteristics and select appropriate operations—just like audio buffers.
Usage pattern:
// Create vertex buffer with semantic layout
auto vertex_buffer = std::make_shared<VKBuffer>(
vertex_count * sizeof(Vertex),
VKBuffer::Usage::VERTEX,
Kakshya::DataModality::VERTEX_BUFFER
);
Kakshya::VertexLayout layout;
layout.add_attribute("position", Kakshya::AttributeFormat::FLOAT3);
layout.add_attribute("color", Kakshya::AttributeFormat::FLOAT4);
layout.vertex_count = vertex_count;
vertex_buffer->set_vertex_layout(layout);
// Register with graphics domain
buffer_manager->add_buffer(vertex_buffer, ProcessingToken::GRAPHICS_BACKEND);
// Attach processor for rendering
auto render_processor = std::make_shared<RenderProcessor>(config);
vertex_buffer->get_processing_chain()->add_processor(render_processor);
The Portal namespace provides high-level coordination between core Vulkan infrastructure and MayaFlux abstractions. It’s the glue layer that makes GPU processing feel like audio processing—composable, declarative, and integrated with the broader system.
ShaderFoundry manages shader lifecycle, compilation, and caching:
class ShaderFoundry {
public:
// Load and compile shader
ShaderID load_shader(
const std::string& filepath,
ShaderStage stage = ShaderStage::AUTO_DETECT,
const std::string& entry_point = "main"
);
// Compile from source code
ShaderID compile_from_source(
const std::string& source,
ShaderStage stage,
const std::string& entry_point = "main"
);
// Get compiled shader module
std::shared_ptr<VKShaderModule> get_shader(ShaderID id);
// Hot-reload support
void watch_shader(ShaderID id);
void reload_shader(ShaderID id);
// Reflection data extraction
ShaderReflectionInfo get_reflection_info(ShaderID id);
// Command buffer management (used by processors)
CommandBufferID begin_commands(CommandBufferType type);
vk::CommandBuffer get_command_buffer(CommandBufferID id);
void submit_commands(CommandBufferID id);
void submit_and_present(CommandBufferID id);
};
Key capabilities:
Shader stage auto-detection from file extension (.comp, .vert, .frag)
Usage example:
auto& foundry = Portal::Graphics::get_shader_foundry();
// Load compute shader
auto shader_id = foundry.load_shader("shaders/audio_fft.comp");
// Get reflection info
auto reflection = foundry.get_reflection_info(shader_id);
std::cout << "Workgroup size: "
<< reflection.workgroup_size->at(0) << "x"
<< reflection.workgroup_size->at(1) << "x"
<< reflection.workgroup_size->at(2) << std::endl;
ComputePress coordinates compute pipeline creation and dispatch:
class ComputePress {
public:
// Create compute pipeline
ComputePipelineID create_pipeline(const ComputePipelineConfig& config);
// Dispatch compute work
void dispatch(
CommandBufferID cmd_id,
ComputePipelineID pipeline_id,
uint32_t group_count_x,
uint32_t group_count_y = 1,
uint32_t group_count_z = 1
);
// Bind resources
void bind_descriptor_sets(
CommandBufferID cmd_id,
ComputePipelineID pipeline_id,
const std::vector<DescriptorSetID>& sets
);
// Update push constants
void push_constants(
CommandBufferID cmd_id,
ComputePipelineID pipeline_id,
const void* data,
size_t size
);
};
Usage in ComputeProcessor:
class ComputeProcessor : public ShaderProcessor {
void processing_function(std::shared_ptr<Buffer> buffer) override {
auto vk_buffer = std::dynamic_pointer_cast<VKBuffer>(buffer);
auto& foundry = Portal::Graphics::get_shader_foundry();
auto& compute = Portal::Graphics::get_compute_press();
// Begin command recording
auto cmd_id = foundry.begin_commands(CommandBufferType::COMPUTE);
// Bind pipeline and resources
compute.bind_pipeline(cmd_id, m_pipeline_id);
compute.bind_descriptor_sets(cmd_id, m_pipeline_id, m_descriptor_sets);
// Push constants (if any)
if (m_has_push_constants) {
compute.push_constants(cmd_id, m_pipeline_id, &m_push_data, sizeof(m_push_data));
}
// Dispatch compute work
uint32_t workgroup_count = (vk_buffer->size() + m_workgroup_size - 1) / m_workgroup_size;
compute.dispatch(cmd_id, m_pipeline_id, workgroup_count, 1, 1);
// Submit to GPU
foundry.submit_commands(cmd_id);
}
};
RenderFlow coordinates graphics (not compute) pipeline creation and rendering:
class RenderFlow {
public:
// Create render pass
RenderPassID create_render_pass(const RenderPassConfig& config);
RenderPassID create_simple_render_pass(); // Default color+depth
// Create graphics pipeline
RenderPipelineID create_pipeline(const RenderPipelineConfig& config);
// Begin/end render pass
void begin_render_pass(CommandBufferID cmd_id, const std::shared_ptr<Window>& window);
void end_render_pass(CommandBufferID cmd_id);
// Bind pipeline and resources
void bind_pipeline(CommandBufferID cmd_id, RenderPipelineID pipeline_id);
void bind_vertex_buffers(CommandBufferID cmd_id, const std::vector<std::shared_ptr<VKBuffer>>& buffers);
void bind_index_buffer(CommandBufferID cmd_id, std::shared_ptr<VKBuffer> buffer);
// Draw commands
void draw(CommandBufferID cmd_id, uint32_t vertex_count, uint32_t instance_count = 1);
void draw_indexed(CommandBufferID cmd_id, uint32_t index_count);
// Window integration
void register_window_for_rendering(std::shared_ptr<Window> window, RenderPassID render_pass);
void present_rendered_image(CommandBufferID cmd_id, std::shared_ptr<Window> window);
};
RenderProcessor usage:
class RenderProcessor : public ShaderProcessor {
void processing_function(std::shared_ptr<Buffer> buffer) override {
auto vk_buffer = std::dynamic_pointer_cast<VKBuffer>(buffer);
auto& foundry = Portal::Graphics::get_shader_foundry();
auto& flow = Portal::Graphics::get_render_flow();
// Begin command recording
auto cmd_id = foundry.begin_commands(CommandBufferType::GRAPHICS);
// Begin render pass (automatically gets framebuffer from window)
flow.begin_render_pass(cmd_id, m_target_window);
// Set viewport/scissor
uint32_t width, height;
get_swapchain_extent(m_target_window, width, height);
set_viewport(cmd_id, width, height);
// Bind pipeline and vertex buffer
flow.bind_pipeline(cmd_id, m_render_pipeline_id);
flow.bind_vertex_buffers(cmd_id, {vk_buffer});
// Draw
auto vertex_layout = vk_buffer->get_vertex_layout();
flow.draw(cmd_id, vertex_layout->vertex_count);
// End render pass
flow.end_render_pass(cmd_id);
// Store command buffer for presentation
vk_buffer->set_pipeline_command(m_render_pipeline_id, cmd_id);
}
};
Key architectural insight: ComputePress and RenderFlow provide parallel abstractions for compute vs graphics workloads, but both integrate seamlessly with the same VKBuffer and BufferProcessingChain infrastructure.
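A hedged sketch of that parallelism; the handles, sizes, and processor configurations below are illustrative but follow the buffer and processor APIs shown above:
// Compute workload: a storage buffer processed by a compute shader
auto compute_buffer = std::make_shared<VKBuffer>(data_bytes, VKBuffer::Usage::COMPUTE,
                                                 Kakshya::DataModality::SPECTRAL_2D);
compute_buffer->get_processing_chain()->add_processor(
    std::make_shared<ComputeProcessor>(ShaderProcessorConfig{.shader_path = "shaders/audio_fft.comp"}));

// Graphics workload: a vertex buffer drawn by a render pipeline
auto draw_buffer = std::make_shared<VKBuffer>(vertex_bytes, VKBuffer::Usage::VERTEX,
                                              Kakshya::DataModality::VERTEX_BUFFER);
draw_buffer->get_processing_chain()->add_processor(
    std::make_shared<RenderProcessor>(ShaderProcessorConfig{.shader_path = "shaders/simple.vert"}));

// To the BufferManager, both are simply buffers in the graphics domain
buffer_manager->add_buffer(compute_buffer, ProcessingToken::GRAPHICS_BACKEND);
buffer_manager->add_buffer(draw_buffer,    ProcessingToken::GRAPHICS_BACKEND);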
MayaFlux supports the full graphics pipeline with multiple shader stages:
// Create multi-stage graphics pipeline
RenderPipelineConfig config;
config.vertex_shader = foundry.load_shader("shaders/vertex.vert");
config.fragment_shader = foundry.load_shader("shaders/fragment.frag");
config.geometry_shader = foundry.load_shader("shaders/geometry.geom"); // Optional
config.tess_control_shader = foundry.load_shader("shaders/tess_ctrl.tesc"); // Optional
config.tess_eval_shader = foundry.load_shader("shaders/tess_eval.tese"); // Optional
// Vertex input from semantic layout
config.vertex_bindings = {{0, sizeof(Vertex), vk::VertexInputRate::eVertex}};
config.vertex_attributes = {
{0, 0, vk::Format::eR32G32B32Sfloat, offsetof(Vertex, position)},
{1, 0, vk::Format::eR32G32B32A32Sfloat, offsetof(Vertex, color)},
{2, 0, vk::Format::eR32G32Sfloat, offsetof(Vertex, texcoord)}
};
// Rasterization state
config.topology = vk::PrimitiveTopology::eTriangleList;
config.polygon_mode = vk::PolygonMode::eFill;
config.cull_mode = vk::CullModeFlagBits::eBack;
// Blending
config.enable_alpha_blending();
auto pipeline_id = flow.create_pipeline(config);
Or using RenderProcessor fluent API:
auto render_proc = std::make_shared<RenderProcessor>(
ShaderProcessorConfig{.shader_path = "shaders/vertex.vert"}
);
render_proc->set_fragment_shader("shaders/fragment.frag");
render_proc->set_geometry_shader("shaders/geometry.geom");
render_proc->set_render_pass(render_pass_id);
render_proc->set_target_window(my_window);
vertex_buffer->get_processing_chain()->add_processor(render_proc);
Windows are first-class rendering targets integrated with the buffer system:
// Create window
auto window = std::make_shared<GlfwWindow>(WindowCreateInfo{
.width = 1920,
.height = 1080,
.title = "MayaFlux Visual Output"
});
// Register window with graphics system
graphics_subsystem->register_window(window);
// Create render pass for this window
auto render_pass = flow.create_simple_render_pass();
flow.register_window_for_rendering(window, render_pass);
// Attach RenderProcessor to buffer with this window as target
render_processor->set_target_window(window);
render_processor->set_render_pass(render_pass);
DisplayService (in the Registry) handles swapchain management, framebuffer creation, and presentation:
class DisplayService {
public:
// Register window for graphics
bool register_window(std::shared_ptr<Window> window);
// Attach render pass to window (creates framebuffers)
bool attach_render_pass(std::shared_ptr<Window> window,
std::shared_ptr<VKRenderPass> render_pass);
// Get swapchain extent for viewport setup
void get_swapchain_extent(std::shared_ptr<Window> window,
uint32_t& width, uint32_t& height);
// Present frame (called by RenderFlow)
void present_frame(std::shared_ptr<Window> window, uint64_t cmd_buffer_bits);
};
Here’s a full working example of graphics processing in MayaFlux:
// Setup
auto& foundry = Portal::Graphics::get_shader_foundry();
auto& flow = Portal::Graphics::get_render_flow();
auto buffer_manager = MayaFlux::get_buffer_manager();
// Create window and render pass
auto window = std::make_shared<GlfwWindow>(WindowCreateInfo{
.width = 1280, .height = 720, .title = "Graphics Demo"
});
auto render_pass = flow.create_simple_render_pass();
flow.register_window_for_rendering(window, render_pass);
// Create vertex buffer with semantic layout
std::vector<Vertex> vertices = generate_triangle_vertices();
auto vertex_buffer = std::make_shared<VKBuffer>(
vertices.size() * sizeof(Vertex),
VKBuffer::Usage::VERTEX,
Kakshya::DataModality::VERTEX_BUFFER
);
Kakshya::VertexLayout layout;
layout.add_attribute("position", Kakshya::AttributeFormat::FLOAT3);
layout.add_attribute("color", Kakshya::AttributeFormat::FLOAT4);
layout.vertex_count = vertices.size();
vertex_buffer->set_vertex_layout(layout);
// Upload vertex data
void* mapped = vertex_buffer->map_memory();
std::memcpy(mapped, vertices.data(), vertices.size() * sizeof(Vertex));
vertex_buffer->unmap_memory();
// Create render processor
auto render_proc = std::make_shared<RenderProcessor>(
ShaderProcessorConfig{.shader_path = "shaders/simple.vert"}
);
render_proc->set_fragment_shader("shaders/simple.frag");
render_proc->set_render_pass(render_pass);
render_proc->set_target_window(window);
// Register buffer with processor
buffer_manager->add_buffer(vertex_buffer, ProcessingToken::GRAPHICS_BACKEND);
vertex_buffer->get_processing_chain()->add_processor(render_proc);
// Graphics loop (simplified)
while (window->is_open()) {
// Process graphics buffers (executes render processor)
buffer_manager->process_graphics_buffers(ProcessingToken::GRAPHICS_BACKEND);
// Present (done inside PresentProcessor after all rendering)
// flow.present_rendered_image() called automatically
}
While full audio-visual coordination workflows are still being developed, the architecture enables patterns like:
// Audio analysis feeding compute shader (conceptual)
auto spectral_analyzer = std::make_shared<SpectralAnalyzer>();
audio_buffer->get_processing_chain()->add_processor(spectral_analyzer);
auto visual_modulator = Kriya::metro(*scheduler, 0.016, [&]() { // ~60 FPS
// Get spectral data from audio processing
auto spectrum = spectral_analyzer->get_current_spectrum();
// Update compute shader push constants
ComputeShaderParams params;
params.bass_energy = spectrum.energy_in_range(20, 200);
params.mid_energy = spectrum.energy_in_range(200, 2000);
// Compute shader processes visual buffer based on audio features
compute_processor->update_push_constants(&params, sizeof(params));
});
The infrastructure for this exists—unified buffer abstractions, processing chains, coroutine coordination—but concrete production examples are still being developed.
Yantra provides the transformation infrastructure that works with ComputationGrammar. The architecture is complete and proven—transformer hierarchy, declarative composition, integration with grammar-based pipelines. Specific algorithmic implementations are in active development.
// Base transformer interface
template <ComputeData InputType, ComputeData OutputType>
class UniversalTransformer {
public:
virtual TransformationType get_transformation_type() const = 0;
virtual IO<OutputType> apply_operation(const IO<InputType>& input) = 0;
// Chainable composition
template <typename NextTransformer>
auto chain(NextTransformer&& next) {
return ComposedTransformer(*this, std::forward<NextTransformer>(next));
}
};
// Concrete implementations
enum class TransformationType {
MATHEMATICAL, // Normalization, scaling, polynomial
TEMPORAL, // Reversal, time-stretch, delay
SPECTRAL, // FFT, filtering, phase manipulation
STRUCTURAL // Reshaping, dimensional transforms
};
The core expressive power of Yantra is declarative pipeline construction:
// Manual chaining
auto pipeline = normalizer
.chain(time_reverser)
.chain(spectral_filter)
.chain(fade_envelope);
auto result = pipeline.apply_operation(input_data);
// Grammar-based automatic selection
auto adaptive_pipeline = ComputationPipeline(grammar)
.add_input(input_data)
.with_context(ComputationContext::TEMPORAL)
.with_parameter("window_size", 1024)
.execute();
// Operator-based fluent syntax
auto result = input_data
| normalize()
| reverse_time()
| apply_filter("lowpass", 2000.0);
This demonstrates the declarative operation matching that makes Yantra powerful:
// Define transformation rules
grammar.create_rule("normalize_audio")
.with_context(ComputationContext::MATHEMATICAL)
.matches_type<std::vector<double>>()
.executes([](const std::any& input, const ExecutionContext& ctx) {
auto data = std::any_cast<std::vector<double>>(input);
auto transformer = MathematicalTransformer(MathOperation::NORMALIZE);
return transformer.apply_operation(IO<std::vector<double>>{data});
})
.build();
grammar.create_rule("spectral_process")
.with_context(ComputationContext::SPECTRAL)
.matches_custom(spectral_data_matcher)
.executes([](const std::any& input, const ExecutionContext& ctx) {
auto data = std::any_cast<SpectralData>(input);
auto transformer = SpectralTransformer(SpectralOperation::FILTER);
return transformer.apply_operation(IO<SpectralData>{data});
})
.build();
// Pipeline automatically selects appropriate transformers
auto pipeline = ComputationPipeline(grammar);
auto processed = pipeline
.add_input(unknown_data) // Could be audio, spectral, visual
.with_context(ComputationContext::AUTO_DETECT)
.execute(); // Grammar matches rules based on data type and context
Architecture: ✓ Complete and tested - Transformer hierarchy with type-safe composition - Grammar integration with rule-based selection - Fluent operator syntax - Pipeline abstraction with context awareness
Algorithms: ⚙ In development - Mathematical operations: Functional (normalize, scale, clamp) - Temporal operations: Functional (reverse, basic time-stretch) - Spectral operations: Basic FFT functional, advanced processing planned - Advanced algorithms: Phase vocoder, granular synthesis, adaptive filtering in development
The design pattern is proven—transformers compose declaratively and integrate with grammar-based pipelines. What remains is implementing the breadth of algorithmic operations, not validating the architectural approach.
Traditional multimedia systems treat different data types as separate:
Each requires different APIs, different processing pipelines, different transformation logic.
NDData provides unified dimensional access regardless of data modality:
namespace Kakshya {
// Unified data type
using DataVariant = std::variant<
std::vector<double>,
std::vector<uint32_t>,
std::vector<glm::vec3>
>;
// Dimensional descriptor
struct DataDimension {
std::string name; // "time", "frequency", "x", "y", "channel"
uint64_t size; // Number of elements
uint64_t stride; // Memory stride
DimensionType type; // TEMPORAL, SPATIAL, SPECTRAL, CHANNEL
std::optional<double> scale; // Physical units conversion
};
}
enum class DataModality {
AUDIO_1D, // Single channel audio
AUDIO_MULTICHANNEL, // Multi-channel audio
SPECTRAL_2D, // Frequency x Time
IMAGE_2D, // Height x Width
IMAGE_RGB, // Height x Width x 3
VIDEO_3D, // Height x Width x Time
VERTEX_BUFFER, // Vertex attributes (graphics)
TENSOR_ND // Arbitrary dimensions
};
// Audio: 48000 samples
auto audio_module = NDData::create_audio_1d<double>(48000);
// Multi-channel audio: 48000 samples x 2 channels
auto stereo_module = NDData::create_audio_multichannel<double>(48000, 2);
// Spectral data: 128 time windows x 1024 frequency bins
auto spectral_module = NDData::create_spectral_2d<double>(128, 1024);
// Image: 1920x1080 pixels
auto image_module = NDData::create_image_2d<double>(1080, 1920);
// Generic N-dimensional tensor
auto tensor_module = NDData::create_for_modality<double>(
DataModality::TENSOR_ND,
{256, 256, 64}, // shape
0.0, // default value
MemoryLayout::ROW_MAJOR
);
The same transformation operations work across modalities:
// Mathematical operations work on any modality
auto normalized = MathematicalTransformer(MathOperation::NORMALIZE)
.apply_operation(audio_data);
auto normalized_image = MathematicalTransformer(MathOperation::NORMALIZE)
.apply_operation(image_data);
// Temporal operations on audio
auto reversed_audio = TemporalTransformer(TemporalOperation::TIME_REVERSE)
.apply_operation(audio_data);
// Temporal operations on video (time dimension)
auto reversed_video = TemporalTransformer(TemporalOperation::TIME_REVERSE)
.apply_operation(video_data);
Regions enable precise data selection across dimensions:
// Temporal region (audio)
auto intro = Region::audio_span(0, 6000, 0); // First 0.125s, channel 0
// Spatial region (image)
auto top_left = Region::image_rect(0, 0, 100, 100); // 100x100 pixel region
// Spectral region
auto bass_range = Region::spectral_range(0, 100, 0, 512); // Low frequencies
// Multi-dimensional region (video)
auto clip = Region::video_clip(0, 100, 50, 150, 0, 30); // Spatial crop + temporal clip
VKBuffer integrates with NDData modality system:
// Create vertex buffer with modality
auto vertex_buffer = std::make_shared<VKBuffer>(
size_bytes,
VKBuffer::Usage::VERTEX,
Kakshya::DataModality::VERTEX_BUFFER // Semantic modality
);
// Vertex layout is a dimensional descriptor
Kakshya::VertexLayout layout;
layout.add_attribute("position", Kakshya::AttributeFormat::FLOAT3);
layout.add_attribute("normal", Kakshya::AttributeFormat::FLOAT3);
layout.add_attribute("texcoord", Kakshya::AttributeFormat::FLOAT2);
layout.vertex_count = num_vertices;
vertex_buffer->set_vertex_layout(layout);
// Grammar can now match on vertex buffer characteristics
grammar.create_rule("process_vertex_normals")
.matches_modality(Kakshya::DataModality::VERTEX_BUFFER)
.matches_attribute("normal")
.executes(recalculate_normals_operation)
.build();
This demonstrates true cross-modal abstraction—the same dimensional concepts (modality, attributes, regions) apply to audio samples, spectral bins, pixels, and vertices.
NDimensionalContainers use regions for creative data organization:
class NDimensionalContainer {
public:
// Store region with metadata
void add_region(const std::string& name,
const Region& region,
const RegionMetadata& metadata);
// Extract data from region
DataVariant get_region_data(const Region& region);
// Modify region in place
void set_region_data(const Region& region, const DataVariant& data);
// Query regions by attributes
std::vector<Region> find_regions_with_attribute(
const std::string& key,
const std::any& value);
};
MayaFlux reimagines windowing events as temporal coroutine patterns rather than traditional callback spaghetti.
// Traditional GLFW approach - fragmented callback hell
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods) {
if (action == GLFW_PRESS) {
handle_key_press(key);
}
}
void mouse_callback(GLFWwindow* window, double xpos, double ypos) {
handle_mouse_motion(xpos, ypos);
}
void resize_callback(GLFWwindow* window, int width, int height) {
handle_resize(width, height);
}
// Separate callback registration
glfwSetKeyCallback(window, key_callback);
glfwSetCursorPosCallback(window, mouse_callback);
glfwSetWindowSizeCallback(window, resize_callback);
Problems:
Each window has an EventSource that signals events to coroutines:
class GlfwWindow : public IWindow {
Vruta::EventSource m_event_source; // Coroutine event signaling
// GLFW callbacks signal EventSource
static void glfw_key_callback(GLFWwindow* window, int key, int scancode,
int action, int mods) {
auto* win = static_cast<GlfwWindow*>(glfwGetWindowUserPointer(window));
WindowEvent event;
event.type = (action == GLFW_PRESS) ? WindowEventType::KEY_PRESSED
: WindowEventType::KEY_RELEASED;
event.timestamp = glfwGetTime();
event.data = WindowEvent::KeyData{key, scancode, mods};
// Signal coroutines waiting on this event
win->m_event_source.signal(event);
}
};
// Wait for specific window event
Event handle_user_input(GlfwWindow& window) {
while (true) {
// Await next keyboard event
auto event = co_await window.event_source().next_event(
WindowEventType::KEY_PRESSED
);
auto key_data = std::get<WindowEvent::KeyData>(event.data);
if (key_data.key == GLFW_KEY_SPACE) {
trigger_audio_event();
} else if (key_data.key == GLFW_KEY_ESCAPE) {
break;
}
}
}
// Coordinate audio triggering with window focus
Event focus_aware_processing(GlfwWindow& window) {
while (true) {
auto event = co_await window.event_source().next_event();
if (event.type == WindowEventType::WINDOW_FOCUS_GAINED) {
// Resume audio processing
audio_graph_manager->set_processing_enabled(true);
visual_renderer->set_render_quality(RenderQuality::HIGH);
}
else if (event.type == WindowEventType::WINDOW_FOCUS_LOST) {
// Pause audio processing
audio_graph_manager->set_processing_enabled(false);
visual_renderer->set_render_quality(RenderQuality::LOW);
}
}
}
namespace Vruta {
class EventSource {
std::queue<WindowEvent> m_event_queue;
std::vector<std::coroutine_handle<>> m_waiting_coroutines;
public:
// Signal event from callback
void signal(const WindowEvent& event) {
m_event_queue.push(event);
// Resume all waiting coroutines
for (auto& handle : m_waiting_coroutines) {
handle.resume();
}
m_waiting_coroutines.clear();
}
// Awaitable for next event
auto next_event(std::optional<WindowEventType> filter = std::nullopt) {
struct EventAwaiter {
EventSource& source;
std::optional<WindowEventType> filter;
WindowEvent result;
bool await_ready() {
if (source.m_event_queue.empty()) {
return false;
}
// Check if event matches filter
if (filter && source.m_event_queue.front().type != *filter) {
return false;
}
result = source.m_event_queue.front();
source.m_event_queue.pop();
return true;
}
void await_suspend(std::coroutine_handle<> handle) {
source.m_waiting_coroutines.push_back(handle);
}
WindowEvent await_resume() { return result; }
};
return EventAwaiter{*this, filter};
}
};
}
This architecture transforms window management from callback spaghetti into expressive temporal coordination.
MayaFlux uses bitfield-composed tokens to specify processing characteristics:
// Node processing tokens
namespace Nodes {
enum class ProcessingToken {
AUDIO_RATE, // Process at audio sample rate
VISUAL_RATE, // Process at visual frame rate
CUSTOM_RATE // User-defined processing rate
};
}
// Buffer processing tokens (bitfield)
namespace Buffers {
enum ProcessingToken : uint32_t {
// Rate tokens
SAMPLE_RATE = 1 << 0,
FRAME_RATE = 1 << 1,
// Device tokens
CPU_PROCESS = 1 << 8,
GPU_PROCESS = 1 << 9,
// Concurrency tokens
SEQUENTIAL = 1 << 16,
PARALLEL = 1 << 17,
// Backend combinations
AUDIO_BACKEND = SAMPLE_RATE | CPU_PROCESS | SEQUENTIAL,
GRAPHICS_BACKEND = FRAME_RATE | GPU_PROCESS | PARALLEL
};
}
// Coroutine processing tokens
namespace Vruta {
enum class ProcessingToken {
SAMPLE_ACCURATE, // Sample-level temporal precision
FRAME_ACCURATE, // Frame-level temporal precision
EVENT_DRIVEN, // Sporadic event processing
CUSTOM
};
}
Domains combine tokens into unified computational contexts:
enum class Domain : uint64_t {
// Audio processing with sample-accurate coordination
AUDIO = (static_cast<uint64_t>(Nodes::ProcessingToken::AUDIO_RATE) << 32) |
(static_cast<uint64_t>(Buffers::ProcessingToken::AUDIO_BACKEND) << 16) |
static_cast<uint64_t>(Vruta::ProcessingToken::SAMPLE_ACCURATE),
// Graphics with frame-accurate synchronization
GRAPHICS = (static_cast<uint64_t>(Nodes::ProcessingToken::VISUAL_RATE) << 32) |
(static_cast<uint64_t>(Buffers::ProcessingToken::GRAPHICS_BACKEND) << 16) |
static_cast<uint64_t>(Vruta::ProcessingToken::FRAME_ACCURATE)
};
// Create audio node
auto sine = vega.Sine(440.0) | Domain::AUDIO;
// Create graphics buffer
auto vertex_buffer = vega.VKBuffer(vertices) | Domain::GRAPHICS;
// Custom domain composition for future extensions
auto custom_domain = compose_domain(
Nodes::ProcessingToken::CUSTOM_RATE,
Buffers::ProcessingToken::GPU_PROCESS | Buffers::ProcessingToken::PARALLEL,
Vruta::ProcessingToken::CUSTOM
);
auto quantum_node = vega.custom_processor() | custom_domain;
The token system is extensible—new domains can be composed from existing tokens or new tokens added for specialized processing requirements (neural accelerators, quantum processors, biomimetic hardware, etc.).
Graphics Infrastructure - VulkanBackend with full device/queue management (graphics, compute, transfer queues) - VKBuffer with 6 usage types (staging, device, compute, vertex, index, uniform) - Compute and graphics pipeline creation via VKComputePipeline and VKGraphicsPipeline - Shader compilation with hot-reload support via ShaderFoundry - Multi-stage shader support (vertex/fragment/geometry/tessellation/compute) - Window integration and swapchain management via DisplayService - RootGraphicsBuffer with processing chain integration (parallel to RootAudioBuffer) - Portal coordination layers: ShaderFoundry, ComputePress, RenderFlow - Full upload/download buffer support for CPU ↔︎ GPU data transfer - Descriptor management and push constants - Command buffer lifecycle management - 300+ graphics-specific tests validating infrastructure
Audio Infrastructure - RtAudio integration with multi-channel routing - Sample-accurate processing via RootNode and RootAudioBuffer - Lock-free node registration and channel routing - Generator nodes (sine, noise, impulse, polynomial) - Processing nodes (IIR filters, gain, chain) - Fluent operators (>>, *, +) - Backend abstraction (pluggable) - 180+ node tests, 100+ buffer tests
Lock-Free Coordination - RootNode atomic registration and processing - NodeGraphManager concurrent coordination - BufferManager atomic accumulation (audio and graphics) - Atomic state guards with wait-free patterns - 150+ tests validating lock-free patterns
Coroutine Infrastructure - TaskScheduler with clock coordination - SampleClock (passive, audio-driven) and FrameClock (active, graphics-driven) - Vruta scheduling primitives (SampleDelay, BufferDelay, FrameDelay) - Kriya creative patterns (metro, timer, event chains, buffer pipelines) - EventSource integration for window events - 150+ tests validating temporal coordination
Windowing System - GLFW integration with EventSource - Window event coroutines - Multi-window support - Input handling via coroutine patterns - Graphics backend registration
ComputationGrammar Foundation - Rule-based matching system operational - UniversalMatcher combinators functional - ExecutionContext infrastructure complete - Priority-based rule evaluation tested - 80+ tests for core functionality
NDData Abstractions - DataVariant type system with multiple primitive types - DataDimension descriptors for dimensional metadata - Modality definitions (Audio, Spectral, Image, Video, Vertex, Tensor) - Region-based access patterns operational - VKBuffer modality integration functional - 90+ tests
Graphics Nodes - Architecture designed but not yet implemented - Current graphics processing is buffer-centric only - Node-level visual processing planned for future expansion
Cross-Modal Data Flow - Infrastructure exists (unified buffer abstractions, domain tokens, coroutine coordination) - VKBuffer integrates with NDData modality system - Actual production workflows (audio → compute shader, spectral → visual modulation) remain conceptual demonstrations - Architectural validation complete; creative workflow validation pending
Yantra Transformers - Architecture solid and proven (UniversalTransformer hierarchy, declarative composition) - Grammar integration functional - Mathematical operations: normalize, scale, clamp operational - Temporal operations: reverse, basic time-stretch operational - Spectral operations: basic FFT functional via FFTW integration - Advanced algorithms (phase vocoder, granular synthesis, adaptive filtering) in active development
ComputationGrammar Stress Testing - Core functionality operational and tested - Complex rule interactions tested in isolation - Real-world pipeline validation with large grammars limited - Production-scale stress testing pending
Container-Graphics Integration - NDimensionalContainers designed but not yet feeding GPU pipelines - Compute shader transformers for NDData planned - Region-based GPU data access architecture designed
Audio-Visual Coordination Workflows - Spectral analysis → shader parameter patterns conceptually designed - Coroutine-based audio-visual synchronization infrastructure exists - Concrete production examples in development
Inter-Component Integration - Component-level tests: ~700 total - Cross-component stress testing: ~50% coverage - Real-world creative validation: limited to proof-of-concept demonstrations - Production stability: unknown, requires adversarial testing
Live Coding Integration (Lila) - LLVM 21-based JIT compilation functional - Sub-buffer latency code execution demonstrated - Full integration with all MayaFlux subsystems in progress - Live modification of node graphs, buffer processors, and shaders
Game Engine Plugins - UE5 C++ bindings architecture designed - Godot C++ bindings architecture designed - Real-time audio-visual coordination for interactive media
Advanced Processing Backends - Neural accelerator integration (TPU, NPU support) - Quantum-inspired algorithms - Custom hardware acceleration via pluggable backend system
MayaFlux enables computational patterns impossible in analog-inspired systems:
Recursive Processing
// Recursive feedback that modifies its own structure
auto recursive_node = vega.custom([](double input, NodeState& state) {
if (state.recursion_depth < 5) {
return recursive_node->process_sample() * 0.5 + input;
}
return input;
});
Grammar-Defined Adaptive Pipelines
// Pipeline adapts based on runtime data characteristics
auto adaptive = ComputationPipeline(grammar)
.add_input(unknown_data)
.auto_select_operations() // Grammar determines optimal path
.execute();
Ahead-of-Time Complex Transformations
// Pre-calculate expensive operations impossible in real-time
auto precomputed = container->transform_region(
large_audio_region,
[](auto data) {
return apply_expensive_convolution(data);
}
);
// Use precomputed results in real-time processing
realtime_node->set_lookup_table(precomputed);
True Cross-Modal Synthesis
// Visual parameters generate audio via compute shader
auto visual_to_audio = [&]() {
// Sample pixel buffer via compute shader
auto pixel_data = compute_shader_sample(visual_buffer, x, y);
// Map visual features to audio parameters
auto frequency = map_range(pixel_data.brightness, 0, 255, 20, 2000);
auto amplitude = pixel_data.saturation;
return complex_buffer_tick(frequency) * amplitude;
};
// Audio parameters drive visual compute shaders
auto audio_to_visual_compute = [&](const SpectralData& spectrum) {
ComputeShaderParams params;
params.bass_energy = spectrum.energy_in_range(20, 200);
params.mid_energy = spectrum.energy_in_range(200, 2000);
params.high_energy = spectrum.energy_in_range(2000, 20000);
// Compute shader processes texture based on audio features
compute_processor->update_push_constants(&params, sizeof(params));
compute_processor->dispatch_workgroups(texture_width / 16, texture_height / 16);
};
MayaFlux is presented as an architectural experiment seeking community validation:
This project needs:
MayaFlux is not presented as a finished tool, but as a paradigm proposal seeking validation or refutation through community scrutiny.
The core systems work. The tests pass. Graphics infrastructure is production-ready. Audio processing is stable. But 8 months of solo development cannot validate real-world usage patterns, edge cases, creative applicability, or whether the unification of audio and graphics actually serves creative workflows or introduces unnecessary complexity.
This is an honest call for collaboration—to either prove this paradigm has merit or expose its fundamental limitations through adversarial testing and creative exploration.
ADC25 Virtual Presentation
Project Status
Seeking
Technical Contact
MayaFlux represents not just a new framework, but a fundamental rethinking of creative computation—moving from analog simulation to true digital-first paradigms where data transformation becomes the primary creative medium.
Audio and graphics are no longer separate domains, but different modalities of the same unified computational substrate. Sample-accurate audio coordination and frame-accurate graphics coordination use the same coroutine infrastructure. AudioBuffer and VKBuffer integrate with the same processing chains. Mathematical transformations, temporal manipulations, and spectral operations apply equally to sound samples and pixel arrays.
The architecture is proven at the component level. Lock-free patterns work. Coroutines coordinate complex temporal relationships. Grammar-based pipelines enable declarative operation selection. Vulkan backend provides full GPU processing capabilities. NDData unifies cross-modal data access.
What remains is validation at scale—in production systems, under creative constraints, with real-world workloads. This document presents the architecture honestly: what’s complete, what’s functional but limited, what’s planned.
The paradigm shift is real. Whether it’s useful requires community engagement.
MayaFlux: Where audio samples and pixel arrays are just different views of the same computational material, where time itself becomes malleable through coroutines, and where mathematical relationships replace analog metaphors as the language of creative expression.
Licensed under GPL-3.0 | Copyright © 2025 Ranjith Hegde / MayaFlux Project