
Static Operation

What is Static Operation?

Static operation means everything in your system is decided and allocated before your program starts running.

In traditional embedded systems, programs often create and destroy things while running (dynamic allocation). CMRX takes a different approach: everything is planned and allocated at compile time.

Why Static Operation Matters

The Problem with Dynamic Allocation

When programs create things during runtime:

  • Memory can run out: Your program might crash when trying to allocate memory
  • Timing is unpredictable: You don’t know how long allocation will take
  • Memory fragmentation: Available memory gets broken into small, unusable pieces, so an allocation can fail even when plenty of free memory remains
  • Hard to analyze: You can’t predict worst-case memory usage

The CMRX Solution

With static operation:

  • Predictable memory usage: You know exactly how much memory you need
  • No runtime failures: Can’t run out of memory because it’s pre-allocated
  • Deterministic timing: All operations take known, predictable time
  • Easy analysis: You can verify your system will work before deploying

In practice this means that the kernel itself allocates a limited (but configurable) amount of each kind of resource it uses. This allows the kernel to run without any need for dynamic allocation. No dynamic allocation facility is provided for userspace either.

Memory is Not the Only Static Aspect

Dynamic memory allocation is the most obvious dynamic behavior of any system, yet it is far from the only one. In CMRX, almost all aspects are static. One of them is the enforcement of memory isolation. While the memory isolation scheme of CMRX is quite flexible, it is computed entirely at compile time. No region readjustment is done at runtime. Once the firmware is running, the kernel's only interaction with the memory protection unit is switching processes.

This means that the actual implementation of memory protection is as fast as if you had configured everything fully manually. There is no additional overhead and no dynamic MPU reconfiguration.

It also means there is no risk of unpredictable latency caused by “page faults” when code attempts to access a memory region that is legitimately accessible to it, but that the hardware is not currently configured to permit. The memory protection hardware is held in a configuration where all legitimately accessible memory is mapped and accessible at all times. The execution of userspace code is predictable, without any random interruptions from the protection hardware.

64kB of protected memory ought to be enough for everyone.