After years of working with low-delay, real-time DSP (Digital Signal Processing) pipelines, it has become clear that their development requires a specialized scheduling framework, and that such a framework must do far more than scheduling alone. It needs facilities for building, easily reconfiguring, debugging, and measuring the pipeline. Changing and tinkering with a pipeline sometimes involves distributing its parts across multiple machines. Debugging frequently requires that visualization steps be easy to introduce into the system. The framework should also support automatic self-measurement (autoprofiling) of the system and its components to aid tuning of the pipeline. Measurement logging should use a plug-in architecture so that different logging backends can be fitted without a performance impact.
What should be done
- Survey existing, similar solutions and understand them.
- Analyze how our problem space differs and what challenges we face beyond those of existing solutions.
- Formalize the problem of data packets passing optimally through the pipeline under different conditions.
- Draft a framework with all technical issues taken into consideration.
- Implement the framework.
- Run experiments and setups with it, and evaluate its performance using the autoprofiling capabilities.
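As one possible starting point for the formalization step above (stated here only as an assumed model, not a settled choice), the pipeline can be treated as a directed graph whose edges carry bounded buffers:

```latex
% Model the pipeline as a directed graph G = (V, E): vertices are tasks,
% edges are sink-side buffers. Packet k enters with timestamp t_k and is
% fully processed at completion time c_k, so its end-to-end latency is
%   L_k = c_k - t_k.
% A candidate objective is to choose a schedule minimizing worst-case
% latency subject to the configured buffer capacities:
\min_{\text{schedule}} \; \max_k \, (c_k - t_k)
\quad \text{s.t.} \quad |B_e(t)| \le C_e \quad \forall e \in E,\; \forall t
```

Here \(B_e(t)\) is the occupancy of the buffer on edge \(e\) at time \(t\) and \(C_e\) its configured capacity; drop policies relax the constraint by discarding packets instead of blocking upstream tasks.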
Scheduler Framework Requirements
- High performance
- Written in C++
- Uses non-blocking, lock-free algorithms
- No user-supplied mutexes or other synchronization primitives are needed
- Runs in-place in memory: allocations happen only in the setup phase, never in the running phase
- Easy to use
- Nice, compact OOP API
- Templates to support the user's own data types
- RAII idiom used
- Uses usual DSP terminology (sources, sinks...)
- All packets are timestamp based
- Cross-platform code
- Configurable buffers per sink (not per source), supporting both low- and high-delay buffering
- Configurable drop policies (e.g. drop the oldest packet to keep data fresh)
- Supports stateless and stateful (accumulate) tasks
- Supports user controlled threads besides own thread pool for UI usage or thread affinity
- Supports cycles (feedback) in the pipeline chain
- Supports multiple entry and exit (even reentry) points for distributed workflows
- Has a measurement API which exposes autoprofiling capabilities of the whole pipeline
- Pluggable measurements: The user can fit any logging mechanism to the profiler
- Measurements should be of minimal impact
- Unused measurements should have zero impact