MayaFlux 0.2.0
Digital-First Multimedia Processing Framework
BufferUtils.hpp
#pragma once

#include <cstdint>
#include <memory>
#include <span>
#include <string>
#include <vector>

namespace MayaFlux::Nodes {
class Node;
}

namespace MayaFlux::Buffers {

/**
 * @enum TokenEnforcementStrategy
 * @brief Defines how strictly processing token requirements are enforced in buffer processing chains
 *
 * TokenEnforcementStrategy provides different levels of flexibility for handling processor-buffer
 * compatibility based on processing tokens. This allows the system to balance performance optimization
 * with operational flexibility depending on the application's requirements.
 *
 * The enforcement strategy affects how BufferProcessingChain handles processors with incompatible
 * tokens, ranging from strict validation to complete flexibility. This enables different operational
 * modes for development, production, and specialized processing scenarios.
 */
enum class TokenEnforcementStrategy : uint8_t {
    /**
     * @brief Strictly enforces token assignment with no cross-token sharing
     *
     * Processors must exactly match the buffer's processing token requirements.
     * Any incompatibility results in immediate rejection. This provides maximum
     * performance optimization by ensuring all processors in a chain can execute
     * with the same backend configuration, but offers the least flexibility.
     */
    STRICT,

    /**
     * @brief Filters processors through token enumeration, allowing compatible combinations
     *
     * Uses the are_tokens_compatible() function to determine whether processors can work
     * together despite different token assignments. This allows some flexibility while
     * maintaining performance optimization for compatible processor combinations.
     * Incompatible processors are filtered out rather than rejected outright.
     */
    FILTERED,

    /**
     * @brief Allows token overrides but skips processing for incompatible operations
     *
     * Permits processors with different tokens to be added to processing chains,
     * but skips their execution when the tokens are incompatible. This maintains
     * chain integrity while allowing dynamic processor management. Useful for
     * conditional processing scenarios where not all processors need to execute.
     */
    OVERRIDE_SKIP,

    /**
     * @brief Allows token overrides but rejects incompatible processors from chains
     *
     * Similar to OVERRIDE_SKIP, but removes incompatible processors from the chain
     * entirely rather than skipping them. This provides a middle ground between
     * flexibility and performance by cleaning up incompatible processors while
     * allowing initial token mismatches during chain construction.
     */
    OVERRIDE_REJECT,

    /**
     * @brief Ignores token assignments completely, allowing any processing combination
     *
     * Disables all token validation and compatibility checking. Any processor can
     * be added to any buffer's processing chain regardless of token compatibility.
     * This provides maximum flexibility but may result in suboptimal performance
     * or execution errors. Primarily useful for debugging or specialized scenarios.
     */
    IGNORE
};
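// Illustrative sketch (hypothetical chain API; BufferProcessingChain's actual interface is
// declared elsewhere in MayaFlux): the strategy chosen at chain setup controls how mismatched
// processors are treated, e.g. STRICT for performance-critical production chains, IGNORE while
// debugging:
//
//     chain.set_enforcement_strategy(TokenEnforcementStrategy::STRICT);   // hypothetical setter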

/**
 * @brief Validates that a processing token has a valid, non-conflicting configuration
 * @param token Processing token to validate
 * @throws std::invalid_argument if the token contains mutually exclusive flags
 *
 * This function ensures that processing tokens contain only compatible flag combinations.
 * It validates three key mutual exclusions that are fundamental to the processing model:
 *
 * **Rate Mutual Exclusion**: SAMPLE_RATE and FRAME_RATE cannot be combined, as they
 * represent fundamentally different temporal processing models that cannot be executed
 * simultaneously within the same processing context.
 *
 * **Device Mutual Exclusion**: CPU_PROCESS and GPU_PROCESS cannot be combined, as they
 * represent different execution environments that require different resource allocation
 * and execution strategies.
 *
 * **Concurrency Mutual Exclusion**: SEQUENTIAL and PARALLEL cannot be combined, as they
 * represent incompatible execution patterns that would create undefined behavior in
 * processing chains.
 *
 * This validation is essential for maintaining system stability and ensuring that
 * processing tokens represent achievable execution configurations.
 */
void validate_token(ProcessingToken token);
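// Illustrative usage sketch, assuming the ProcessingToken bitfield enum declared elsewhere
// supports bitwise composition; the flag names are those referenced in the documentation above:
//
//     ProcessingToken ok  = ProcessingToken::FRAME_RATE | ProcessingToken::GPU_PROCESS;
//     validate_token(ok);                            // passes: no mutually exclusive flags
//
//     ProcessingToken bad = ProcessingToken::CPU_PROCESS | ProcessingToken::GPU_PROCESS;
//     validate_token(bad);                           // throws std::invalid_argument (device conflict)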

/**
 * @brief Determines if two processing tokens are compatible for joint execution
 * @param preferred The preferred processing token configuration
 * @param current The current processing token configuration being evaluated
 * @return true if the tokens are compatible, false otherwise
 *
 * This function implements compatibility logic that goes beyond simple equality checking
 * to determine whether processors with different token requirements can work together in
 * the same processing pipeline. The compatibility rules are designed to maximize
 * processing flexibility while maintaining system stability and performance.
 *
 * **Rate Compatibility Rules:**
 * - FRAME_RATE processors require FRAME_RATE execution contexts (strict requirement)
 * - SAMPLE_RATE processors can adapt to FRAME_RATE contexts (flexible upward compatibility)
 * - Same-rate combinations are always compatible
 *
 * **Device Compatibility Rules:**
 * - SAMPLE_RATE processing cannot execute on GPU hardware (hardware limitation)
 * - GPU-preferred processors cannot fall back to CPU execution (performance requirement)
 * - CPU-preferred processors can use the GPU for FRAME_RATE processing only
 *
 * **Concurrency Compatibility Rules:**
 * - Sequential/parallel differences are acceptable if rate requirements align
 * - Mismatched concurrency with incompatible rates is rejected
 * - Same concurrency patterns are always compatible
 *
 * This flexibility enables the system to optimize processing chains by allowing compatible
 * processors to share execution contexts while preventing configurations that would result
 * in poor performance or execution failures.
 */
bool are_tokens_compatible(ProcessingToken preferred, ProcessingToken current);
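// Illustrative sketch (flag names as documented above; composition via operator| assumed):
//
//     ProcessingToken preferred = ProcessingToken::FRAME_RATE | ProcessingToken::GPU_PROCESS;
//     ProcessingToken current   = ProcessingToken::SAMPLE_RATE | ProcessingToken::CPU_PROCESS;
//     if (are_tokens_compatible(preferred, current)) {
//         // The processors can share an execution context within the same chain.
//     }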

/**
 * @brief Gets the optimal processing token for a given buffer type and system configuration
 * @param buffer_type Type identifier for the buffer (e.g., "audio", "video", "texture")
 * @param system_capabilities Available system capabilities (GPU, multi-core CPU, etc.)
 * @return Recommended processing token for optimal performance
 *
 * This function analyzes buffer characteristics and system capabilities to recommend
 * the most appropriate processing token configuration. It considers factors like:
 * - Buffer data type and size characteristics
 * - Available hardware acceleration
 * - System performance characteristics
 * - Current system load and resource availability
 *
 * The recommendations help achieve optimal performance by matching processing
 * requirements with available system capabilities.
 */
ProcessingToken get_optimal_token(const std::string& buffer_type, uint32_t system_capabilities);
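// Illustrative sketch: request a recommended token for an audio buffer. The capability
// bitmask helper is hypothetical; real capability flags are defined elsewhere in MayaFlux:
//
//     uint32_t caps = query_system_capabilities();          // hypothetical helper
//     ProcessingToken recommended = get_optimal_token("audio", caps);
//     validate_token(recommended);                          // a recommended token should pass validation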

/// Default maximum number of spin iterations for wait_for_snapshot_completion()
constexpr int MAX_SPINS = 1000;

/**
 * @brief Wait for an active snapshot context to complete using exponential backoff
 * @param node Node whose snapshot context is being waited on
 * @param active_context_id Identifier of the active snapshot context
 * @param max_spins Maximum number of spin iterations before reporting a timeout
 * @return true if completed, false if timeout
 */
bool wait_for_snapshot_completion(
    const std::shared_ptr<Nodes::Node>& node,
    uint64_t active_context_id,
    int max_spins = MAX_SPINS);
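// Illustrative sketch (the context id source is hypothetical; ids come from the node's
// snapshot system elsewhere in MayaFlux):
//
//     uint64_t ctx_id = /* id of the active snapshot context */;
//     if (!wait_for_snapshot_completion(node, ctx_id, MAX_SPINS)) {
//         // Timed out after MAX_SPINS backoff iterations; handle or retry.
//     }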

/**
 * @brief Extract a single sample from a node with proper snapshot management
 * @return Extracted sample value
 */
double extract_single_sample(const std::shared_ptr<Nodes::Node>& node);

/**
 * @brief Extract multiple samples from a node into a vector
 * @param num_samples Number of samples to extract
 * @return Vector of extracted sample values
 */
std::vector<double> extract_multiple_samples(
    const std::shared_ptr<Nodes::Node>& node,
    size_t num_samples);

/**
 * @brief Apply node output to an existing buffer with mixing
 * @param buffer Destination buffer receiving the node output
 * @param mix Mix factor applied to the node output when blending into the buffer
 */
void update_buffer_with_node_data(
    const std::shared_ptr<Nodes::Node>& node,
    std::span<double> buffer,
    double mix = 1.0);
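// Illustrative sketch, assuming `osc` is a std::shared_ptr<Nodes::Node> obtained from the
// node system elsewhere:
//
//     double one_sample = extract_single_sample(osc);
//     std::vector<double> block = extract_multiple_samples(osc, 512);
//
//     std::vector<double> out(512, 0.0);
//     update_buffer_with_node_data(osc, out, 0.5);   // blend node output into `out` at half mix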

} // namespace MayaFlux::Buffers

namespace std {
/**
 * @brief Hash specialization for pairs of processing tokens, allowing token pairs to be
 * used as keys in unordered containers
 */
template <>
struct hash<std::pair<MayaFlux::Buffers::ProcessingToken, MayaFlux::Buffers::ProcessingToken>> {
    size_t operator()(const std::pair<MayaFlux::Buffers::ProcessingToken, MayaFlux::Buffers::ProcessingToken>& pair) const
    {
        return hash<uint32_t>()(static_cast<uint32_t>(pair.first))
            ^ (hash<uint32_t>()(static_cast<uint32_t>(pair.second)) << 1);
    }
};
} // namespace std
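// Illustrative sketch: the specialization above lets token pairs key unordered containers,
// for example to memoize compatibility lookups (the cache below is hypothetical, not part
// of MayaFlux):
//
//     using TokenPair = std::pair<MayaFlux::Buffers::ProcessingToken,
//                                 MayaFlux::Buffers::ProcessingToken>;
//     std::unordered_map<TokenPair, bool> compatibility_cache;
//     compatibility_cache[{preferred, current}] =
//         MayaFlux::Buffers::are_tokens_compatible(preferred, current);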