Introduction

cib is a C++ header-only library for building embedded firmware with reusable components. It implements the compile-time initialization and build pattern. Instead of initializing components and registering callbacks at runtime, this process is executed at compile-time using constexpr or consteval functions.

This documentation is a work in progress. See also the README for more information.

Compiler and C++ version support

The following compilers are supported:

  • clang 14 through 17

  • GCC 12 through 13

C++20 is required.

An older version of cib (not covered by this documentation) that uses C++17 is tagged at v1.7.0. It is tested with:

  • clang 9 through 15

  • GCC 9 through 12

Dependencies

This repository uses CPM and a common CI/CD infrastructure.

The library dependencies are:

Libraries

cib contains several libraries that work together:

The flow library

See code at https://github.com/intel/compile-time-init-build/tree/main/include/flow. Everything in the flow library is in the flow namespace.

A flow is a series of actions to be executed in order based on their dependencies. The flow library was originally designed to be used for power flows, moving from one power state to another. A flow can also be used as a generic extension point to register callbacks at compile-time. A flow implements the cib service pattern.

Example

Flow extension points are declared by deriving from flow::service.

struct MorningRoutine : public flow::service<"MorningRoutine"> {};

The builder is used during the constexpr init() phase to define and extend the flow. Actions to be added to the flow are declared and defined as constexpr constants.

namespace food {
  constexpr static auto MAKE_COFFEE = flow::action<"MAKE_COFFEE">([] {
    coffee_maker.start(ground_coffee.take(100_grams), water.take(16_ounces));
  });

  constexpr static auto DRINK_COFFEE = flow::action<"DRINK_COFFEE">([] {
    // gulp
  });
}

Actions are added to the flow inside a component’s cib::config.

struct morning {
  constexpr static auto config = cib::config(
    cib::extend<MorningRoutine>(
        WAKE_UP >>
        selfcare::SHOWER >>
        selfcare::GET_DRESSED >>
        food::MAKE_COFFEE >>
        food::DRINK_COFFEE));
};

The >> operator is used to create a dependency between two actions. For example, WAKE_UP >> SHOWER means you must wake up before you can take a shower. The flow library will order the actions in a flow so that all given dependencies are respected.

If we only use the morning component in our project, the MorningRoutine flow graph would look like the following:

Diagram

The power of flow services comes when more than one component adds actions to the flow. Flows can be extended by inserting additional actions with new dependencies.

struct childcare {
  constexpr static auto PACK_SCHOOL_LUNCHES = flow::action<"PACK_SCHOOL_LUNCHES">([] {
    // ...
  });

  constexpr static auto SEND_KIDS_TO_SCHOOL = flow::action<"SEND_KIDS_TO_SCHOOL">([] {
    // ...
  });

  constexpr static auto config = cib::config(
      cib::extend<MorningRoutine>(
          food::MAKE_COFFEE >>          // this step exists in the MorningRoutine flow
          PACK_SCHOOL_LUNCHES >>        // new
          food::DRINK_COFFEE >>         // existing
          food::MAKE_BREAKFAST >>       // new
          food::EAT_BREAKFAST >>        // new
          SEND_KIDS_TO_SCHOOL));        // new
};

The new steps are inserted into the existing flow's dependency graph:

Diagram

Multiple independent components can add actions to the same flow. This is the power of flow services: they can be extended by multiple independent components to create new functionality.

struct exercise {
  constexpr static auto RIDE_STATIONARY_BIKE = flow::action<"RIDE_STATIONARY_BIKE">([] {
    // ...
  });

  constexpr static auto config = cib::config(
      cib::extend<MorningRoutine>(
          morning::WAKE_UP >>
          RIDE_STATIONARY_BIKE >>
          selfcare::SHOWER));
};

The MorningRoutine flow now contains the functionality of three components, all without the MorningRoutine source code having known about the new functionality. We can mix and match new components without modifying the original source code.

Diagram

The cib library will take care of initializing and building all services, including flow services. For flows, this means the dependency graph will be serialized into a sequence of actions at compile-time to be executed in order at runtime.

MorningRoutine
 1. WAKE_UP
 2. exercise::RIDE_STATIONARY_BIKE
 3. selfcare::SHOWER
 4. selfcare::GET_DRESSED
 5. food::MAKE_COFFEE
 6. childcare::PACK_SCHOOL_LUNCHES
 7. food::DRINK_COFFEE
 8. food::MAKE_BREAKFAST
 9. food::EAT_BREAKFAST
10. childcare::SEND_KIDS_TO_SCHOOL

All of these components are composed in a project component and brought to life with an instance of cib::top. We need to make sure our flows get executed at the appropriate times, so our example has a day_cycle component that defines the various extension points and ensures they get executed over and over in cib::top's MainLoop.

// simple component for scheduling daily activities
struct day_cycle {
  constexpr static auto DAY_CYCLE = flow::action<"DAY_CYCLE">([] {
      flow::run<MorningRoutine>();
      flow::run<DaytimeRoutine>();
      flow::run<EveningRoutine>();
      wait_for_morning_time();
  });

  constexpr static auto config = cib::config(
      cib::exports<
          MorningRoutine,
          DaytimeRoutine,
          EveningRoutine>,
      cib::extend<MainLoop>(
          DAY_CYCLE));
};

// bring together all the components for the project
struct my_life {
  constexpr static auto config =
      cib::components<
          day_cycle,
          morning,
          childcare,
          exercise>;
};

// use cib::top to create our nexus and main function
cib::top<my_life> top{};

int main() {
  top.main();
}

API

service

Defines a new flow service. If the flow::service template is given a name then it will automatically log the beginning and end of the flow as well as all actions.

// declare a flow without logging
struct MyFlow : public flow::service<> {};

// declare a flow with automatic logging enabled
struct MyFlowWithLogging : public flow::service<"MyFlowWithLogging"> {};

action

Defines a new flow action. All flow actions are created with a name and a lambda expression. Flow action and milestone names must be unique within a flow. The same action can be used in multiple flows. Actions cannot be added to a flow more than once, but can be referenced by other actions when adding dependencies.

constexpr static auto MY_ACTION = flow::action<"MY_ACTION_NAME">([] {
  // do useful stuff
});

milestone

Defines a new flow milestone. Milestones are used only for their name: they perform no action. They are used as points within a flow which other actions may base their dependencies on.

constexpr static auto MY_MILESTONE = flow::milestone<"MY_MILESTONE_NAME">();
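
For example, independent components can anchor their dependencies on a milestone rather than directly on each other. A sketch, reusing the hypothetical MorningRoutine actions from earlier (BREAKFAST_READY is an illustrative name):

```cpp
// a milestone marking the point at which breakfast is ready
constexpr static auto BREAKFAST_READY = flow::milestone<"BREAKFAST_READY">();

constexpr static auto config = cib::config(
    cib::extend<MorningRoutine>(
        food::MAKE_BREAKFAST >> BREAKFAST_READY,    // producers finish before the milestone
        BREAKFAST_READY >> food::EAT_BREAKFAST));   // consumers start after it
```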

run

Runs a flow, executing all its actions in the prescribed order.

flow::run<MyFlow>();

operator>>

Creates a dependency between two or more actions and/or milestones. Must be passed into the cib::extend configuration method for it to have an effect. Can be chained together to create a sequence of dependent actions.

namespace example_component {
  constexpr auto config = cib::config(
      cib::extend<MyFlow>(
          // SOME_ACTION must execute before SOME_OTHER_ACTION
          SOME_ACTION >> SOME_OTHER_ACTION));
}

operator&&

Allows two or more actions and/or milestones to run in parallel without any ordering requirement between them. If there is no dependency between two or more actions, this is the preferred way of adding them to a flow. Other components will then be able to insert actions in between if needed.

namespace example_component {
  constexpr auto config = cib::config(
      cib::extend<MyFlow>(
          // no order requirement between these actions
          SOME_ACTION && SOME_OTHER_ACTION));
}

Alternative flow builders

The default flow service uses a graph builder that outputs the flow steps as an array of function pointers. Traversing the array and calling those functions ensures the correct relative ordering of flow steps in the graph, and this is what happens by default when we run the flow.

// the default flow builder and service
template <stdx::ct_string Name = "">
using builder = flow::graph<Name, flow::graph_builder<impl>>;
template <stdx::ct_string Name = "">
struct service : cib::builder_meta<builder<Name>, flow::FunctionPtr> {};

// declare a flow service
struct MorningRoutine : public service<"MorningRoutine"> {};

// add steps, etc, then at runtime, run the flow:
nexus.service<"MorningRoutine">();

Here graph_builder is the type that renders the flow description into the array of function pointers, and flow::FunctionPtr is the type-erased interface (here a function taking no arguments and returning void) that is called to run a flow.

But given a flow, other renderings are possible.

// a flow builder and service that produces a graphviz rendering
template <stdx::ct_string Name = "">
using viz_builder = flow::graph<Name, flow::graphviz_builder>;
template <stdx::ct_string Name = "">
struct viz_service : cib::builder_meta<viz_builder<Name>, flow::VizFunctionPtr> {};

Here, viz_service will produce a graphviz rendering of a flow using the graphviz_builder. flow::VizFunctionPtr is the type-erased interface once more, and it is defined to take no arguments and return a std::string. When we "run" the flow, we get the graphviz rendering.

// instead of the default flow::service, use the viz_service
struct MorningRoutine : public viz_service<"MorningRoutine"> {};

// add steps, etc, as before
// this time, when we "run" the flow, we get a string representing the graphviz rendering
auto graphviz_str = nexus.service<"MorningRoutine">();

graphviz_builder is available as a debugging aid. But in general, having the flow rendering separate from the flow definition enables any kind of rendering with corresponding runtime behaviour.

The logging library

See code at https://github.com/intel/compile-time-init-build/tree/main/include/log. Everything in the logging library is in the logging namespace, although most user code will use macros.

Logging in cib is in two parts:

  • the interface, in log.hpp

  • an implementation, which can be specified at the top level

Three possible logger implementations are provided:

Log levels

cib offers 6 well-known and 2 user-defined log levels, according to the MIPI Sys-T spec.

enum struct level {
    MAX = 0,
    FATAL = 1,
    ERROR = 2,
    WARN = 3,
    INFO = 4,
    USER1 = 5,
    USER2 = 6,
    TRACE = 7
};

Log macros

cib log macros follow the log levels:

CIB_TRACE(...);
CIB_INFO(...);
CIB_WARN(...);
CIB_ERROR(...);
CIB_FATAL(...);

Or CIB_LOG(level, ...) can be used equivalently to these where the level is explicit. CIB_FATAL causes a call to stdx::panic, and CIB_ASSERT(expression) is equivalent to CIB_FATAL in the case where the expression evaluates to false.
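
A usage sketch (the message text, arguments, and the speed/MAX_SPEED names are illustrative):

```cpp
CIB_INFO("Motor speed is {}", speed);            // logged at INFO level
CIB_LOG(logging::level::USER1, "Custom event");  // explicit, user-defined level
CIB_ASSERT(speed < MAX_SPEED);                   // calls CIB_FATAL if the expression is false
```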

Selecting a logger

In order to use logging in a header, it suffices to include log.hpp and use the macros. Header-only clients of logging do not need to know the implementation selected.

To use logging in a translation unit, specialize the logging::config variable template. Left unspecialized, the null logger will be used.

// use fmt logging to std::cout
template <>
inline auto logging::config<> = logging::fmt::config{std::ostream_iterator<char>{std::cout}};

The provided fmt implementation can output to multiple destinations by constructing logging::fmt::config with multiple ostream iterators.
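
For instance, a sketch that logs both to std::cout and to a file (the ofstream variable and file name are illustrative):

```cpp
std::ofstream log_file{"app.log"};

template <>
inline auto logging::config<> =
    logging::fmt::config{std::ostream_iterator<char>{std::cout},
                         std::ostream_iterator<char>{log_file}};
```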

Caution
Be sure that each translation unit sees the same specialization of logging::config! Otherwise you will have an ODR violation.

Implementing a logger

Each logging implementation (configuration) provides a customization point: a logger object, which must implement log. Therefore providing a custom implementation is a matter of defining this structure appropriately.

struct my_logger_config {
  struct {
    template <logging::level L, typename ModuleId,
              typename File, typename Line, typename Msg>
    auto log(File, Line, Msg const &msg) -> void {
      // log according to my mechanism
    }
  } logger;
};

Notice that the first two template parameters to log are the level and the module ID. The ModuleId type is a compile-time string.

The first two runtime parameters receive the preprocessor __FILE__ and __LINE__ values respectively. The msg argument is a structure containing a compile-time format string and runtime arguments to be interpolated into it. It supports an apply function, so one way to implement log is:

struct my_logger_config {
  struct {
    template <logging::level L, typename ModuleId,
              typename File, typename Line, typename Msg>
    auto log(File, Line, Msg const &msg) -> void {
      msg.apply([] <typename Str> (Str, auto const&... args) {
        std::print(Str::value, args...);
      });
    }
  } logger;
};
Note
Str::value here is a compile-time std::string_view.

To use the custom implementation, as with any built-in choice of logger, specialize logging::config:

template <>
inline auto logging::config<> = my_logger_config{};

Modules

It can be helpful to scope or filter log messages by associating them with module IDs. Several logging backends support this idea. Tagging every log call site gets verbose and error-prone, so instead cib overrides the log module with CIB_LOG_MODULE declarations at namespace, class, or function scope.

auto global_f() {
  CIB_INFO("This log uses the default module ID");
}

namespace my_ns {
CIB_LOG_MODULE("my_ns");

auto ns_f() {
  CIB_INFO("This log uses my_ns as its module ID");
}

struct my_struct {
  CIB_LOG_MODULE("my_struct");

  auto f() {
    CIB_INFO("This log uses my_struct as its module ID");
  }

  auto g() {
    CIB_LOG_MODULE("g");
    CIB_INFO("This log uses g as its module ID");
  }
};
}

String data

On a constrained system, space for text can be at a premium. The sc library and the MIPI Sys-T logger combine to solve this problem.

The interrupt library

interrupt::manager is intended to automate and hide the low level details of interrupt hardware initialization, registration, and execution while using the least amount of memory and execution time.

interrupt::manager

The interrupt manager is the one-stop shop for configuring interrupts and executing code when interrupts are triggered.

A manager is constructed by specifying an interrupt configuration and a nexus type:

struct my_flow : flow::service<"my flow"> {};

using interrupt::operator""_irq;

using config = interrupt::root<
    interrupt::irq<17_irq, 4, interrupt::policies<>, my_flow>>;

struct my_project {
  constexpr static auto config = cib::config(cib::exports<my_flow>);
};
using nexus_t = cib::nexus<my_project>;

auto mgr = interrupt::manager<config, nexus_t>{};

Given this setup, the idea is that when interrupt 17 is triggered, my_flow will run.

Interrupt configuration

Interrupt configuration comes in 4 flavors.

interrupt::irq

A basic interrupt configuration is specified with interrupt::irq.

using irq_t = interrupt::irq<
    17_irq,                    // the IRQ number
    4,                         // priority
    interrupt::policies<>,     // policies
    my_flow>;                  // flow(s) to run when triggered

interrupt::sub_irq

A sub_irq is designed to be a child of another interrupt configuration; as such, instead of specifying an IRQ number and priority, it has enable and status fields.

using irq_t = interrupt::sub_irq<
    enable_field_t,
    status_field_t,
    interrupt::policies<>,
    my_flow>;

The idea here is that when the parent interrupt fires, my_flow will run if the enable/status field states allow it.

interrupt::shared_irq

Of course a sub_irq has to have a parent, and that is often a shared_irq.

using irq_t = interrupt::shared_irq<
    17_irq, 4, interrupt::policies<>, // as for irq
    interrupt::sub_irq<
        enable_field_t,
        status_field_t,
        interrupt::policies<>,
        my_flow>>;

A shared_irq may have any number of sub_irq children.

interrupt::shared_sub_irq

Lastly, we may want multiple nested levels. This is where shared_sub_irq comes in. Like sub_irq, shared_sub_irq has enable and status fields. And like shared_irq, shared_sub_irq can have any number of children.

using irq_t = interrupt::shared_irq<
    17_irq, 4, interrupt::policies<>,
    interrupt::sub_irq<enable_field_t, status_field_t,
                       interrupt::policies<>, my_flow>,
    interrupt::shared_sub_irq<
        enable_field_1_t, status_field_1_t, interrupt::policies<>,
        interrupt::sub_irq<enable_field_1_0_t, status_field_1_0_t,
                           interrupt::policies<>, my_flow_0>,
        interrupt::sub_irq<enable_field_1_1_t, status_field_1_1_t,
                           interrupt::policies<>, my_flow_1>>>;

interrupt::root

The configuration passed to the interrupt::manager is an interrupt::root object which contains any number of IRQ configurations.

Note
The various interrupt configurations can run multiple flows: the last template parameter (the flow) in each case above is a variadic pack.
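
As a sketch, a single irq configuration that runs two hypothetical flows when triggered:

```cpp
using irq_t = interrupt::irq<
    17_irq, 4, interrupt::policies<>,
    my_flow,          // first flow to run
    my_other_flow>;   // further flows may follow in the pack
```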

policies

Policies allow customization of how interrupt status fields get handled: chiefly, when they are cleared. There are three policies provided:

  • interrupt::clear_status_first

  • interrupt::clear_status_last

  • interrupt::dont_clear_status

Any of these policies can be specified in an interrupt configuration with for example interrupt::policies<interrupt::clear_status_last>. If none is provided, interrupt::clear_status_first is the default behavior.
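
For example, a sketch of an interrupt configuration that clears its status after running its flow instead of before:

```cpp
using irq_t = interrupt::irq<
    17_irq, 4,
    interrupt::policies<interrupt::clear_status_last>,  // clear status after the flow runs
    my_flow>;
```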

Policies also allow interrupt configurations to depend upon resources. Resources are just types that are specified in the interrupt::required_resources template.

So an interrupt configuration that depends on a resource might look like this:

struct my_hw_resource; // just a tag type representing a resource

using config = interrupt::root<interrupt::irq<
    17_irq, 4,
    interrupt::policies<interrupt::required_resources<my_hw_resource>>,
    my_flow>>;
Note
interrupt::policies is a variadic template, but we didn’t specify the status policy here (only the resources policy), so the default status policy will be interrupt::clear_status_first. The default resource policy of course depends on no resources.

Resources can be turned on and off at runtime, allowing dynamic interrupt control. When a resource is turned off, any interrupt configurations that depend on that resource will not run their flows when the interrupt is triggered.

The interrupt HAL

The interrupt manager interacts with hardware through a HAL interface that must be provided. It has three functions.

struct my_hal {
  static auto init() -> void {
    // do any preliminary interrupt initialization work
  }

  template <bool Enable, interrupt::irq_num_t IrqNumber, std::size_t Priority>
  static auto irq_init() -> void {
    // enable/disable the given IRQ at the given priority
  }

  template <interrupt::status_policy P>
  static auto run(interrupt::irq_num_t irq, stdx::invocable auto const &isr) -> void {
    // run according to policy
    P::run([] {},            // clear status, if any
           [&] { isr(); });  // execute the interrupt service routine
  }
};

This global interface is injected by specializing a variable template.

template <> inline auto interrupt::injected_hal<> = my_hal{};

Initialization

The interrupt::manager's init() method enables and initializes all interrupts with associated flows.

Running

The interrupt::manager's run<irq_number>() method clears the pending interrupt status and runs any flows associated with that IRQ. If it’s a shared interrupt, each registered sub_irq will have its interrupt status field checked to determine if its flow should be executed, and the field will be cleared according to the specified policy for that sub_irq configuration.

This is the method that should be wired to an ISR vector table.
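
A sketch of that wiring (the handler name is illustrative; mgr is the interrupt::manager instance constructed earlier):

```cpp
// the entry placed in the vector table for IRQ 17
extern "C" void irq17_isr() {
    mgr.run<17_irq>();
}
```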

Dynamic interrupt control

At runtime, interrupts can be enabled and disabled dynamically according to resources, enable fields, or just by enabling/disabling flows. This is done with the dynamic_controller. It is initialized with the same configuration (a root object) as the interrupt::manager.

using dynamic_t = interrupt::dynamic_controller<config>;

// disable my_flow: the interrupt wired to trigger my_flow will be disabled if
// nothing else depends on it
dynamic_t::disable<my_flow>();

// turn off a resource: interrupts that depend on this resource will be disabled
dynamic_t::turn_off_resource<my_hw_resource>();

// turning flows and resources back on re-enables any dependent interrupts
dynamic_t::enable<my_flow>();
dynamic_t::turn_on_resource<my_hw_resource>();

The match library

See code at https://github.com/intel/compile-time-init-build/tree/main/include/match. Everything in the match library is in the match namespace.

The easiest way to get started using the match library is to #include <match/ops.hpp>.

What is a matcher?

A matcher is conceptually a predicate that takes some object to match on. Every matcher has at least the following interface:

// Given some type of event to match on
struct my_event;

struct my_matcher {
  // Expose an is_matcher type
  using is_matcher = void;

  // Implement a call operator taking an event and returning something
  // convertible to bool
  auto operator()(my_event const&) const -> bool;

  // Describe this matcher: return something that can be logged
  auto describe() const {
    return "my matcher"_sc;
  }

  // Describe a match: return something that can be logged
  auto describe_match(my_event const&) const {
    return "the match passed/failed because reasons"_sc;
  }
};

The above matcher models the matcher concept and also models matcher_for<my_event>.

static_assert(match::matcher<my_matcher>);
static_assert(match::matcher_for<my_matcher, my_event>);
Note
The type of is_matcher doesn’t matter: this works in the same way as is_transparent on operators in the C++ standard library.

Basic matchers

always and never are two foundational matchers. They do what you would expect: the call operator of always returns true regardless of its argument; likewise never's call operator returns false.

assert(match::always(/*any event*/));
assert(not match::never(/*any event*/));

A matcher can be made from a predicate function using match::predicate:

auto m = match::predicate<"my matcher">{
    [](auto const &event) { /* return true/false */ }};

Here, "my matcher"_sc will be returned from both the describe() and describe_match() functions.

Composing matchers

A matcher represents a Boolean value; therefore matchers can be composed with the ordinary Boolean operators &&, || and ! (or and, or and not).

// given matchers m1 and m2

// matches when both m1 and m2 match
match::matcher auto and_matcher = m1 and m2;

// matches when either m1 or m2 (or both) match
match::matcher auto or_matcher = m1 or m2;

// matches when m1 does not match
match::matcher auto not_matcher = not m1;

A Boolean composition of matchers is also a matcher. This means we can build up arbitrarily complex expressions representing Boolean compositions of matchers.

match::matcher auto complex = m1 and (m2 or m3) and (not m4 or m5);
Note
It can be seen that always is the identity for and, and never is the identity for or, which means these operations can also be used in binary fold expressions with the appropriate identity.

The match library also provides any and all functions for convenient expression of and and or folds.

// given a pack of matchers ms...

// equivalent to a fold over and
match::matcher auto all_ms = match::all(ms...);

// equivalent to a fold over or
match::matcher auto any_ms = match::any(ms...);

Boolean algebra with matchers

The ability to compose complex Boolean expressions also raises the possibility of simplifying those expressions with well-known laws of Boolean algebra. This helps efficient evaluation at runtime, as well as limiting the buildup of large complex types at compile time.

Some simplifications are obvious:

match::matcher auto s0 = not not m1;    // simplifies to m1

match::matcher auto s1 = m1 and always; // simplifies to m1
match::matcher auto s2 = m1 or never;   // simplifies to m1

match::matcher auto s3 = m1 and not m1; // simplifies to never
match::matcher auto s4 = m1 or not m1;  // simplifies to always

match::matcher auto s5 = m1 and m1;     // simplifies to m1
match::matcher auto s6 = m1 or m1;      // simplifies to m1

Other simplifications are straightforward, but not obvious unless you have studied Boolean algebra:

// absorption: this simplifies to m1
match::matcher auto s1 = m1 or (m1 and m2);

// de Morgan's law: this simplifies to not (m1 or m2)
match::matcher auto s2 = not m1 and not m2;
Note
Simplification is done automatically by the and, or and not operator overloads.

Customizing matchers

How does the match library know how to simplify an expression? This is a straightforward thing to decide when considering matchers known to the library. But the broader question is how to allow customization of matchers in a way that preserves the library’s ability to analyze and manipulate expressions.

The match library uses a tag_invoke technique to allow customization of two things in particular:

  • negation (i.e. applying not)

  • implication (i.e. warranting that A ⇒ B)

If you have a custom matcher, you can overload tag_invoke with negate_t for your type if there is a way to negate it that is outside the library’s know-how, or with implies_t if you can warrant implications that the library can use to simplify expressions.

Customizing negation

For instance, if we have a less-than matcher (with only the salient part shown here):

template <std::integral N>
struct less_than {
  auto operator()(auto value) const { return value < N; }
  // ...
};

We might overload tag_invoke on the negate_t tag to return a greater-than-or-equal-to matcher:

template <std::integral A>
struct less_than {
  // ...
private:
  friend auto tag_invoke(match::negate_t, less_than const &) {
    return greater_than_or_equal_to<A>{};
  }
};

This could be useful in eliminating a not which would otherwise be required.

Customizing implication

Implication rules are used by the match library to simplify expressions. The elementary Boolean implications are known:

false => X
X => true
X => X

Let’s take the example of the less-than matcher again. We can warrant the following implication:

(X < A) => (X < B) when A < B

Actually, this is true when A == B as well, but in the case where A == B the match library already knows this, because it knows the general case of X ⇒ X. In code, overloading implication to warrant this looks like this:

template <std::integral A>
struct less_than {
  // ...
private:
  template <std::integral B>
  friend auto tag_invoke(match::implies_t, less_than const &, less_than<B> const &) {
    return A < B;
  }
};

Implication enables simplification

By providing just a few implication rules, we can enable the library to simplify complex boolean expressions involving relational operators. For example the following simplifications are all possible:

X < 3 and X > 5 -> false
X < 3 and X < 5 -> X < 3
X < 5 or X > 3  -> true
X < 5 or X < 3  -> X < 5

To see how this works, let’s look at a truth table:

A  B  |  A ∧ B  |  A ∨ B  |  A ⇒ B
F  F  |    F    |    F    |    T
F  T  |    F    |    T    |    T
T  F  |    F    |    T    |    F
T  T  |    T    |    T    |    T

Note
The proposition A ⇒ B is logically equivalent to the proposition ¬A ∨ B.

If we know that A ⇒ B is true, then we can disregard the third row of this table (A = T, B = F). In that case, it can be seen that:

A ∧ B ≡ A
A ∨ B ≡ B

Similarly, if we know that A ⇒ ¬B is true, we can conclude:

A ∧ B ≡ false

And if we know that ¬A ⇒ B is true, we can conclude:

A ∨ B ≡ true

These conclusions can be seen by striking out the last row (A = T, B = T) and first row (A = F, B = F) respectively of the truth table:

A  B  |  A ∧ B  |  A ∨ B  |  A ⇒ ¬B  |  ¬A ⇒ B
F  F  |    F    |    F    |     T    |    F
F  T  |    F    |    T    |     T    |    T
T  F  |    F    |    T    |     T    |    T
T  T  |    T    |    T    |     F    |    T

By warranting implications and perhaps overloading negation appropriately, these inferences, together with de Morgan’s laws, suffice to apply all the laws of Boolean algebra that are required to simplify expressions.

But as a last resort, when working completely outside of Boolean algebra, the simplify_t tag can be used directly in a tag_invoke overload for expression simplification.

Disjunctive Normal Form

For some applications, disjunctive normal form (a.k.a. an or of ands, or a sum of products form) is a useful representation.

The match library can convert an arbitrary Boolean expression of matchers into this form with the sum_of_products transformation.

// s1 is not in DNF, because an or is inside an and
match::matcher auto s1 = m1 and (m2 or m3);

// s2 is in DNF
match::matcher auto s2 = match::sum_of_products(s1);
// s2 is equivalent to (m1 and m2) or (m1 and m3)

In disjunctive normal form, all not applications are pushed down to act on single terms and not on the compound and and or terms. The transformation recursively applies distribution of and over or and de Morgan’s laws.

Matcher ordering and equivalence

Given a definition of implication, we can define a partial ordering of matchers:

  • Two matchers A and B are equivalent if A ⇒ B and B ⇒ A.

  • Otherwise, if A ⇒ B then A < B, and if B ⇒ A then A > B.

  • If no implication relationship between A and B holds true, then A and B are unorderable.

Therefore, operator<=> is defined on matchers such that it returns a std::partial_ordering.

An intuition for what "less than" means when applied to matchers: one matcher is less than another if it matches fewer values. This is consistent with theory: matchers form a Boolean algebra with never and always as ⊥ and ⊤ respectively.
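
For instance, a sketch using the hypothetical less_than matcher from earlier, assuming its warranted implication (X < A) ⇒ (X < B) when A < B:

```cpp
constexpr auto lt3 = less_than<3>{};
constexpr auto lt5 = less_than<5>{};

// lt3 matches fewer values than lt5: the warranted implication
// (X < 3) => (X < 5) orders lt3 below lt5
constexpr auto ord = lt3 <=> lt5;  // std::partial_ordering::less
```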

The message library

See code at https://github.com/intel/compile-time-init-build/tree/main/include/msg. Everything in the msg library is in the msg namespace.

Fields

A field represents a value, specified in bits, inside a unit of addressable storage. Currently in the cib library, storage is assumed to be in units of std::uint32_t. A field is a view type: it defines how to access storage, but it is an empty type.

Two things specify a field:

  • a field_spec_t which specifies the name, the type and the size (in bits) of the field

  • one or more field_locator_t​s which determine how to read and write the field from storage

A field has two important member functions:

// where T is the type of the field

[[nodiscard]] constexpr static auto extract(auto const &data) -> T;
[[nodiscard]] constexpr static auto insert(auto &data, T value) -> void;

A field may also specify a matcher; this can be used to verify that a particular storage area contains the field. By default this is match::always.

For example, a field type looks like this:

using namespace msg;
using my_field =
    field<"my field",               // name
          std::uint32_t>            // type
            ::located<at{0_dw,      // offset in storage (32-bit words)
                         31_msb,    // most significant bit
                         24_lsb}>;  // least significant bit
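
Given this definition, a sketch of how extract and insert might be used against a raw storage word (the value is illustrative):

```cpp
std::array<std::uint32_t, 1> data{};

my_field::insert(data, 0x5a);                // writes 0x5a into bits 31..24 of word 0
auto const value = my_field::extract(data);  // reads the field value back out
```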

A field can specify multiple at arguments if it has several disjoint parts. In that case, earlier at arguments specify more significant bits in the field, with later at arguments being less significant. For example:

using namespace msg;
using my_field =
    field<"my field",               // name
          std::uint16_t>            // type
            ::located<at{0_dw, 31_msb, 24_lsb},  // high byte
                      at{0_dw,  7_msb,  0_lsb}>; // low byte

Fields also expose several matcher aliases which can typically be used to specify field values for a given message type; an example of this follows shortly. Further, fields expose aliases for expressing themselves as fields with the given matchers.

Field matchers can be found in https://github.com/intel/compile-time-init-build/tree/main/include/msg/field_matchers.hpp. The most commonly used field matchers are equal_to_t (for testing if a field has a certain value) and in_t (for testing if a field value lies within a set).

Messages

A message is a named collection of field types. A message can be associated with the storage that contains it in two ways: owning, i.e. containing an array, or as a view, i.e. containing a span.

For example, a message type looks like this:

using my_message_defn = msg::message<
    "my_message",                      // name
    my_field::WithRequired<0x80>>;     // field(s)

using my_message = msg::owning<my_message_defn>;

Here the message has only one field. WithRequired<0x80> is an alias specialization that expresses my_field with a matcher that is equal_to_t<0x80>.

Owning vs view types

An owning message uses underlying storage: by default, this is a std::array of std::uint32_t whose size is enough to hold all the fields in the message.

// by default, my_message will contain a std::array<std::uint32_t, 1>
using my_message = msg::owning<my_message_defn>;

// two corresponding default view types:

// holds a stdx::span<std::uint32_t, 1>
using mutable_view = msg::mutable_view<my_message_defn>;

// holds a stdx::span<std::uint32_t const, 1>
using const_view = msg::const_view<my_message_defn>;

The storage for a message can be customized with a tailored std::array:

// an owning message with the same fields and access, but different storage:
auto msg = my_message_defn::owner_t{std::array<std::uint8_t, 4>{}};

// another way to get the view types from that owning message
auto mut_view = msg.as_mutable_view();
auto const_view = msg.as_const_view();

View types are implicitly constructible from the corresponding owning types, or from an appropriate std::array or stdx::span, provided they are const. A mutable view type must be constructed explicitly.

Owning types can also be constructed from views, arrays and spans - but always explicitly: since they are owning, they always incur a copy.

Fields can be set and retrieved in a message, including on construction:

auto msg = my_message{"my field"_field = 42};
auto f = msg.get("my field"_field);    // 42
msg.set("my field"_field = 17);

Fields can also be set and retrieved on mutable view type messages. For obvious reasons, calling set on a const view type is a compile error. Likewise, setting a field during construction of a const view type is not possible.

The raw data underlying a message can be obtained with a call to data:

auto data = msg.data();

This always returns a (const-observing) stdx::span over the underlying data.

Handling messages with callbacks

cib contains an implementation of a basic message handler which can be used in the obvious way: given some storage, the handler will run matchers from various messages; when a matcher successfully matches, the callback(s) registered will be called.

// given the above field and message types, define a service
struct my_service : msg::service<my_message> {};

// define a callback with the matcher from the message definition
constexpr auto my_callback = msg::callback<"my_callback">(
    typename my_message_defn::matcher_t{},
    [](msg::const_view<my_message_defn>) { /* do something */ });

// define a project
struct my_project {
    constexpr static auto config = cib::config(
        cib::exports<my_service>,
        cib::extend<my_service>(my_callback));
};

In this case, the callback parameter is a const_view over the message definition as explained above. Given these definitions, we can create a nexus and ask the service to handle a message:

cib::nexus<my_project> my_nexus{};
my_nexus.init();

// handling this message calls my callback
using msg::operator""_field;
cib::service<my_service>->handle(my_message{"my field"_field = 0x80});

Notice in this case that our callback is defined with the matcher_t from the message definition; that matcher is the conjunction of all the field matchers, and the my_field matcher requires it to equal 0x80. Therefore, handling the following message will not call the callback:

// handling this message does not call my callback
// because my_message's field matcher does not match
cib::service<my_service>->handle(my_message{"my field"_field = 0x81});
Note
Because message view types are implicitly constructible from an owning message type or from an appropriate std::array, it is possible to set up a service and handler that works with "raw data" in the form of a std::array, but whose callbacks and matchers take the appropriate message view types.

A more interesting (and better-performing) way to handle message dispatching is with indexed callbacks.

Indexed callbacks

The code for defining indexed callbacks and their handling is almost the same as for the non-indexed case, with the addition that we need to say which fields to build indices on:

// index on my_field
using my_indices = msg::index_spec<my_field>;

// the service is now an indexed_service
struct my_indexed_service : msg::indexed_service<my_indices, my_message> {};

// this time, the callback is an indexed_callback
constexpr auto my_callback = msg::indexed_callback<"my_indexed_callback">(
    typename my_message_defn::matcher_t{},
    [](msg::const_view<my_message_defn>) { /* do something */ });

// everything else is the same

How does indexing work?

Note
This section documents the details of indexed callbacks. It’s not required to understand this to use indexed callbacks.

Indexing callbacks properly, interacting with arbitrary matchers, and calling the appropriate callbacks on reception of a message involves several pieces that work together. We leverage information known at compile time so as to expend minimal effort at runtime.

Building the indices

For each field in the msg::index_spec, we build a map from field values to bitsets, where the values in the bitsets represent callback indices.

Note
The bitsets may be run-length encoded by using the rle_indexed_service in place of the indexed_service. This may be useful if you have limited space and/or a large set of possible callbacks. See Run Length Encoded Message Indices under Implementation Details.

Each indexed_callback has a matcher that may be an arbitrary Boolean matcher expression. The indexed_callback construction process ensures that this matcher is in sum of products form. The process of handling messages works by set intersection on the bitsets, so each separate or-ed term at the top level within each matcher (as well as each matcher itself) must conceptually map to a separate callback.

The initialization process when indexed_callbacks are added to the builder takes care of this top-level concern, so that at build time each callback matcher is a suitable Boolean term (a single term, a negation, or a conjunction, but not a disjunction).

The process of populating the field maps is then as follows:

  • Walk the matcher expression, outputting all the positive (non-negated) terms. Each such term is a field matcher specifying a field and a value. Add an entry to the appropriate field map, where the key is the matched value and the current callback index is added into the bitset value.

  • Any callback index not represented in the value bitsets of the map is collected into the default bitset. This is saying that if we don’t have a key in the map for a given message field value, we’ll call the callbacks that didn’t specify that key.

  • Walk the matcher expression again, this time outputting any negated terms. For each such term, add an entry in the map where the key is the field value and the value is the default bitset, excepting the current callback index. The current callback index is also added into all other values in the map.

  • Take all the callback indices in the default bitset that were not used for negated terms, and propagate them to all the values in the map.

This process happens conceptually for each indexed field. Each such field then has a map from field values to bitsets (representing indices of callbacks to call when the field has that value), and a default bitset (indices of callbacks to call when the field value was not found in the map).

That was perhaps hard to understand, so here are a couple of examples.

Simple example

Given two simple callback matchers:

m[0] == my_field::equal_to_t<42>
m[1] == my_field::equal_to_t<17>

First we walk the matcher expressions outputting the non-negated values. After this stage, the data for my_field is:

default_value = {}
map = {
  17 -> {1},
  42 -> {0}
}

i.e. each expected value is a key in the map, and the corresponding value in the map is a bitset of the callbacks to be called when that value is seen.

Next we check the map for any unrepresented callbacks. In this case every callback (0 and 1) is represented in the map, so the default value is unchanged.

Next we walk the matcher expressions again, outputting negated values. In this case there are none, so nothing happens.

Finally we propagate the "positive" value from the default value. Again in this case it’s empty, so no change. The final data for my_field is:

default_value = {}
map = {
  17 -> {1},
  42 -> {0}
}
// recall:
m[0] == my_field::equal_to_t<42>
m[1] == my_field::equal_to_t<17>

Now consider this in action.

  • If we get a message where my_field is 42, callback 0 will be eligible.

  • If we get a message where my_field is 17, callback 1 will be eligible.

  • If we get a message where my_field is another value, no callback will be eligible.

All correct.

Slightly more complex example

Given three callback matchers:

m[0] == my_field::equal_to_t<42>
m[1] == not my_field::equal_to_t<17>
m[2] == another_field::equal_to_t<3>

First we walk the matcher expressions outputting the non-negated values. After this stage, the data for my_field is:

default_value = {}
map = {
  42 -> {0}
}

(m[1] is a negated value, so it is not yet considered, and m[2] contained no data for my_field.)

Next we check the map for any unrepresented callbacks. In this case callbacks 1 and 2 do not occur, so they are added to the defaults. The current data for my_field is:

default_value = {1,2}
map = {
  42 -> {0}
}

Next we walk the matcher expressions again, outputting negated values (m[1]). Now the my_field data becomes:

default_value = {1,2}
map = {
  17 -> {2}
  42 -> {0,1}
}

i.e. the entry with value 17 was populated with the defaults, minus its own index (1), and its own index (1) was entered into all the other mapped values.

Finally we propagate the "positive" defaults, i.e. {2} (because index 1 was associated with a negative term). The final data for my_field:

default_value = {1,2}
map = {
  17 -> {2}
  42 -> {0,1,2}
}
// recall:
m[0] == my_field::equal_to_t<42>
m[1] == not my_field::equal_to_t<17>
m[2] == another_field::equal_to_t<3>

Now consider this in action.

  • If we get a message where my_field is 42, callbacks 0, 1 and 2 will be eligible.

  • If we get a message where my_field is 17, callback 2 will be eligible.

  • If we get a message where my_field is another value, callbacks 1 and 2 will be eligible.

Again, all correct.

Remember that this is only considering the indexing on my_field to assess eligibility: those bitsets would then be intersected with bitsets obtained by a similar process on another_field.

Working through more complex examples is left as an exercise to the reader.

Lookup strategies

Given an index map on a field, at compile time we can decide which runtime lookup strategy to use. All the code for this is found in https://github.com/intel/compile-time-init-build/tree/main/include/lookup.

There are three main lookup strategies:

  • linear search - this is suitable for a small number of possible field values.

  • direct array indexing - this is suitable when the min and max values are not too far apart, and the data is populated not too sparsely (a hash map is likely sparse, so this could be thought of as a very fast hash map that uses the identity function).

  • hash lookup - using a "bad" hash function.

For any given data, the lookup strategy is selected at compile time from a long list of potential strategies ordered by speed and found in https://github.com/intel/compile-time-init-build/tree/main/include/lookup/strategy/arc_cpu.hpp.

With compile-time selection, hash functions don’t need to be judged according to the usual criteria! We know the data; we just need something that is fast to compute and collision-free. So it is fairly easy to generate "bad" hash functions that are fast, and pick the first one that works according to the data we have.

Handling messages

Having selected the indexing strategy, when a message arrives, we can handle it as follows:

  • for each indexed field, extract the field from the message and lookup (using an appropriate selected strategy) the bitset of callbacks.

  • and together all the resulting bitsets (i.e. perform their set intersection).

This gives us the callbacks to be called. Each callback still has an associated matcher that may include field constraints that were already handled by the indexing, but may also include constraints on fields that were not indexed. With a little Boolean matcher manipulation, we can remove the fields that were indexed by setting them to match::always and simplifying the resulting expression. This is decidable at compile time.

For each callback, we now run the remaining matcher expression to deal with any unindexed but constrained fields, and call the callback if it passes. Bob’s your uncle.

The sc (string constant) library

See code at https://github.com/intel/compile-time-init-build/tree/main/include/sc. Everything in the sc library is in the sc namespace.

String constants

An sc::string_constant is a compile-time string in the same way that std::integral_constant is a compile-time integral value: it carries the string in the type as a pack of chars.

template <typename Char, Char... chars> struct string_constant;

The easiest way to create a string_constant is using a user-defined literal:

constexpr auto hello_world = "Hello, world!"_sc;

// the string data itself is available as a compile-time std::string_view
using namespace std::string_view_literals;
static_assert(decltype(hello_world)::value == "Hello, world!"sv);

string_constant specializations can be compared (even though they are different types), and operator+ is defined for concatenation.

// comparisons use lexicographical ordering
static_assert("Alice"_sc < "Bob"_sc);

static_assert("Hello,"_sc + " World!"_sc == "Hello, World!"_sc);

Efficient logging with MIPI Sys-T

The reason string_constant exists is for efficient logging. On a constrained system, space for text can be limited-to-nonexistent. string_constant interacts with the MIPI Sys-T logging config to solve this problem.

  • First, each string_constant contains string character data in its type.

  • The MIPI logger calls the function template specialization catalog to get the catalog ID corresponding to each string_constant.

But the catalog function template starts out as only that: a template. It is specialized as follows:

  • The application is built as a library.

  • Running nm on that library reveals missing symbols: precisely the function specializations that are required for all the instances of string_constant.

  • Those symbols are used to generate the template specializations in another file, which itself is compiled into a library.

  • String data is recovered from the symbol types and used to generate the catalog collateral in XML and/or JSON format.

  • Link-time optimization inlines the catalog function template specializations, each of which is a one-line function that returns a catalog ID.

Thus no string data exists in the executable, but the correct catalog IDs are used in logging, and the remote log handler can reconstitute the actual strings. The XML and JSON collateral also contains information about any runtime arguments that need to be interpolated into the string and whose values are sent by the MIPI Sys-T logger after the catalog ID.

Tooling support

The process of generating log strings from the type information revealed by missing symbols is automated by a provided Python script and a CMake wrapper function (gen_str_catalog) that drives the process. See the test that exercises that functionality for an example.

Note
This process assigns IDs to both strings and log modules. catalog is specialized for catalog IDs; module is specialized for module IDs.

Formatting strings

string_constants are formatted at compile time using fmt. Helpers are available for compile-time formatting of enum values, type names or integral constants:

// string constants themselves are interpolated at compile time
sc::format("My name is {}"_sc, "Alice"_sc); // "My name is Alice"

// enum values are interpolated at compile time
sc::format("The max log level is {}"_sc, sc::enum_<logging::MAX>); // "The max log level is MAX"

// integral constants likewise
sc::format("{} is a taxicab number"_sc, std::integral_constant<int, 1729>{}); // "1729 is a taxicab number"

// and names of types
sc::format("When in doubt, do as the {}s do"_sc, sc::type_<int>); // "When in doubt, do as the ints do"

When using cib log macros, any arguments that can be formatted at compile-time are automatically interpolated into the string_constant, leaving the rest as dynamic runtime arguments to log.

// here, 42 is interpolated at compile time
// (sc::int_<42> is an integral_constant)
sc::format("The answer is {}"_sc, sc::int_<42>);

// here, 42 is a runtime argument
// (because C++ doesn't have constexpr function parameters)
constexpr auto answer = 42;
sc::format("The answer is {}"_sc, answer);

Implementation Details

This section details some of the internal implementation details to assist contributors. The details here are not required to use the cib library.

Run Length Encoded Message Indices

Switching to RLE indices is as simple as converting your msg::indexed_service to a msg::rle_indexed_service.

The initial building of the mapping indices proceeds as in the normal case: a series of entries is generated for each index, and the callbacks that match are encoded into a stdx::bitset.

However, once this initial representation is built, additional work is performed (at compile time) to encode the bitsets as RLE data; the index then stores just an offset into the blob of RLE data rather than the bitset itself.

This helps message maps that contain a large number of handlers, trading storage space for some decoding overhead.

Once encoded, the lookup process at run time proceeds as normal: a set of candidate matches is collected, then intersected using the RLE data, and the final set of callbacks is invoked without needing to materialise any of the underlying bitsets.

RLE Data Encoding

There are several options for encoding the bitset into an RLE pattern, many of which will result in smaller size, but a lot of bit-shifting to extract data. We have chosen to trade off encoded size for faster decoding, as it is likely the handling of the RLE data and index lookup will be in the critical path for system state changes.

The encoding chosen is simply the number of consecutive 0s or 1s.

Specifics:

  • The encoding runs from the least significant bit to most significant bit

  • The number of consecutive bits is stored as a std::byte and ranges from 0 to 255

  • The first byte of the encoding counts the number of 0 bits

  • If there are more than 255 consecutive identical bits, they are encoded in blocks of 255, with a 0 count in between to indicate that zero bits of the opposite value occur before the run continues.


The msg::rle_indexed_builder will go through a process to take the indices and their bitset data and build a single blob of RLE encoded data for all indices, stored in an instance of a msg::detail::rle_storage. It also generates a set of msg::detail::rle_index entries for each of the index entries that maps the original bitmap to a location in the shared storage blob.

The rle_storage object contains a simple array of all RLE data bytes. The rle_index contains a simple offset into that array. We compute the smallest size that can contain the offset to avoid wasted storage and use that.

Note
The specific rle_storage and rle_index objects are locked together using a unique type so that an rle_index cannot be used with the wrong rle_storage object.

When building the shared blob, the encoder will attempt to reduce the storage size by finding and reusing repeated patterns in the RLE data.

The final msg::indexed_handler contains an instance of the msg::rle_indices which contains both the storage and the maps referring to all the rle_index objects.

This means that the final compile time data generated consists of:

  • The Message Map lookups as per the normal implementation, however they store a simple offset rather than a bitset.

  • The blob of all RLE bitset data for all indices in the message handling map

Runtime Handling

The msg::indexed_handler implementation will delegate the mapping call for an incoming message down to the msg::rle_indices implementation. It will further call into its storage indices and match the message to the set of rle_index values for each mapping index.

This set of rle_index values (which are just offsets) is then converted to instances of msg::detail::rle_decoder by the rle_storage, which converts each offset into a pointer to the sequence of std::byte elements holding the RLE encoding.

All the collected rle_decoders from the various maps in the set of indices are then passed to an instance of msg::detail::rle_intersect and returned from the rle_indices call operator.

The rle_decoder provides a single-use enumerator that will step over the groups of 0s or 1s, providing a way to advance through them by arbitrary increments.

The rle_intersect implementation wraps the variadic set of rle_decoders so that the caller can iterate through all 1s, calling the appropriate callback as it goes.

Efficient Iteration of Bits

The msg::detail::rle_decoder::chunk_enumerator provides a way to step through the RLE data for the encoded bitset an arbitrary number of bits at a time. It does this by exposing the current number of bits of consecutive value.

This is presented so that it is possible to efficiently find:

  • the longest run of 0s

  • or, if none, the shortest run of 1s.

Remember that we are trying to compute the intersection of all the encoded bitsets: where all bitsets have a 1, we call the associated callback; where any bitset has a 0, we skip that callback.

So the chunk_enumerator will return a signed value (at least 16 bits) indicating:

  • negative value - the number of 0s

  • positive value - the number of 1s

  • zero when past the end (special case)

The rle_intersect will initialise an array of rle_decoder::chunk_enumerators when it is asked to run a lambda for each 1 bit using the for_each() method.

This list is then searched for the minimum chunk value. This will either be the largest-magnitude negative value (and so the longest run of 0s), or the smallest positive value (the shortest run of 1s), representing the next group of bits that are set in all bitsets.

The for_each() method will then advance past all the 0s, or execute the lambda for that many set bits, until it has consumed all bits in the encoded bitsets.

This means that the cost of intersection of N indices is a number of pointers and a small amount of state for tracking the current run of bits and their type for each index.

There is no need to materialise a full bitset at all. This can be quite a memory saving if there are a large number of callbacks. The trade-off, of course, is more complex iteration of bits to discover the callbacks to run.