Asynchronous computation
Much of what happens in firmware is asynchronous and event-driven. Tasks get scheduled for future execution. After sending a message to a remote endpoint, a reply returns at some point. Flows wait for hardware to change signals before proceeding.
The asynchronous building blocks provided by this library follow the ideas in P2300, the C++ proposal for an execution model based on senders and receivers. Among the aims:
- a declarative style of specifying asynchronous behaviour
- easy composition of asynchronous tasks
- control of when and where tasks run in an asynchronous context
- structured error handling that is easy to use
- clear and safe management of asynchronous object lifetimes
The design is a mature one, proven in practice at Facebook and other companies, and rooted in compositional patterns inspired by functional programming. But there is no need to be a functional programming expert to use this library.
Note: Names in this library purposefully follow naming in the standard proposal. If you have a beef with naming, take it up with the C++ standards committee. For our purposes it is important not to introduce an extra source of confusion by changing names; this library is a partial implementation of P2300, so external documentation is also useful.
Concepts
If you are new to these ideas, it is best to forget anything you think you know about what "sender" and "receiver" might mean, and for now, try to treat them as abstract concepts.
Senders
A sender is an object that describes a unit of asynchronous work. Most user code deals with defining and composing senders. You can think of a sender as a function that will run asynchronously, and the return type of that function characterises the sender.
Receivers
A receiver is an object that handles what a sender returns. Receivers are the glue between senders, but are absent from user code; they exist under the hood. A receiver has three callbacks for handling three states:
- set_value (success: handling whatever value a sender returns)
- set_error (failure: handling an error)
- set_stopped (cancellation)
Operation states
An operation state is the state that represents an asynchronous task. Operation states are concrete objects whose lifetime covers the asynchronous work. Just as synchronous function calls form an execution stack, operation states nest like an onion, the inner layers representing asynchronous operations that complete before the outer layers.
The details of operation states are also mostly absent from user code. An operation state is the result of a call to connect a sender with a receiver.
Schedulers
A scheduler is a handle to some compute resource. In the context of the proposal that is a broad definition: for this library, a scheduler comes primarily from the timer and scheduler components, and is something that can actually run an asynchronous task.
External references
- P2300 - the paper that sets out a complete motivation, examples and taxonomy of asynchronous building blocks
- Working with Asynchrony Generically (Part 1) - Eric Niebler’s talk from CppCon 2021
- Working with Asynchrony Generically (Part 2) - Eric Niebler’s talk from CppCon 2021
- Using Sender/Receiver to Implement Control Flow for Async Processing - Steve Downey’s talk from C++Now 2023
Composing senders
Sender composition makes up most of what user code does. This is the process of defining how tasks run with respect to each other, e.g. "run task A, then forward what it returns to task B". To that end, there are three kinds of functions in the library:
- sender factories: functions that return senders and represent entry points of asynchronous computation.
- sender adaptors: functions that take one or more senders and return a sender that is somehow the composition of them.
- sender consumers: functions that take a sender and actually kick off the asynchronous work.
Important: Creating and composing senders does not do any work. Senders only describe a computation. After composing multiple senders into a sender, a sender consumer will actually start the work.
Sender factories
just
Found in the header: async/just.hpp
just returns a sender that produces the provided values.
auto sndr = async::just(42, 17);
// sndr produces (sends) the values 42 and 17
just can be given a name that is used for debug events. By default, its name is "just".
auto sndr = async::just<"the answer">(42);
Note: For functional programmers
just_error
Found in the header: async/just.hpp
just_error is like just, but instead of completing by calling set_value on a receiver, it will call set_error.
auto sndr = async::just_error(42);
// sndr produces (sends) the value 42 on the error channel
just_error can be given a name that is used for debug events. By default, its name is "just_error".
auto sndr = async::just_error<"oops">(42);
just_error_result_of
Found in the header: async/just_result_of.hpp
just_error_result_of is like just_error, but instead of taking raw values, it takes functions that produce values.
auto sndr1 = async::just_error_result_of([] { return 42; });
// sndr1 sends 42 on the error channel
auto sndr2 = async::just_error_result_of([] { return 42; },
[] { do_something(); });
// sndr2 also sends 42 on the error channel,
// as well as executing do_something() - which returns void
just_error_result_of can be given a name that is used for debug events. By default, its name is "just_error_result_of".
auto sndr = async::just_error_result_of<"oops">([] { return 42; });
Note: Do not rely on the order of evaluation of the functions given to just_error_result_of!
Note: Only one value can be sent on the error channel. Hence just_error is a unary function. However, just_error_result_of is an n-ary function: exactly one of the functions passed must return something other than void.
just_result_of
Found in the header: async/just_result_of.hpp
just_result_of is like just, but instead of taking raw values, it takes functions that produce values.
auto sndr1 = async::just_result_of([] { return 42; },
[] { return 17; });
// sndr1 produces (sends) the values 42 and 17
auto sndr2 = async::just_result_of([] { return 42; },
[] { do_something(); },
[] { return 17; });
// sndr2 also produces (sends) the values 42 and 17,
// as well as executing do_something() - which returns void
just_result_of can be given a name that is used for debug events. By default, its name is "just_result_of".
auto sndr = async::just_result_of<"the answer">([] { return 42; });
Note: Do not rely on the order of evaluation of the functions given to just_result_of!
just_result_of behaves the same way as just followed by then:
auto s = async::just_result_of([] { return 42; });
// as if:
auto s = async::just() | async::then([] { return 42; });
just_stopped
Found in the header: async/just.hpp
just_stopped is like just, but instead of completing by calling set_value on a receiver, it will call set_stopped.
auto sndr = async::just_stopped();
// sndr completes on the stopped channel
just_stopped can be given a name that is used for debug events. By default, its name is "just_stopped".
auto sndr = async::just_stopped<"cancel">();
read_env
Found in the header: async/read_env.hpp
read_env takes a tag and returns a sender that sends the value denoted by that tag, pulled from the environment of the receiver the sender is connected to.
auto s = async::read_env(async::get_stop_token_t{});
// when connected to a receiver, s will send the stop token from that receiver's environment
read_env has some built-in aliases for well-known tags:
auto s1 = async::get_stop_token(); // same as async::read_env(async::get_stop_token_t{});
auto s2 = async::get_scheduler(); // same as async::read_env(async::get_scheduler_t{});
read_env (and its tag aliases) can be given a name that is used for debug events. By default, the name is the name exposed by the tag, or if there is no such name, "read_env".
auto s1 = async::get_stop_token<"gst">(); // name = "gst"
auto s2 = async::get_stop_token(); // name = "get_stop_token"
auto s3 = async::read_env(unnamed_tag{}); // name = "read_env"
schedule
See Schedulers.
A scheduler’s schedule function returns a sender that produces nothing, but represents running on that scheduler’s compute resource. We can chain more work (i.e. other senders) on to that starting point.
// s is a scheduler
auto sndr = s.schedule();
// sndr does no work (yet) but when consumed, will run according to that
// scheduler. We can use sender adaptors to compose more work.
Sender adaptors
continue_on
Found in the header: async/continue_on.hpp
continue_on allows an asynchronous computation to switch where it is running.
// s1 and s2 are different schedulers representing different computation contexts
auto sndr = async::start_on(s1, async::just(42));
auto t = async::continue_on(sndr, s2);
auto transferred = async::then(t, [] (int i) { return std::to_string(i); });
// when transferred runs:
// first on s1 it will produce 42
// then on s2 it will convert 42 to a string, producing "42"
Warning: If the upstream sender causes the downstream scheduler to run, using continue_on is incorrect: incite_on should be used instead.
incite_on
Found in the header: async/incite_on.hpp
incite_on is like continue_on, but is for the use case where running a sender causes a scheduler to be triggered. It is intended for use with trigger_scheduler.
auto sndr = async::just([] { send_request(); })
          | async::incite_on(trigger_scheduler<"msg">{})
          | async::then([] (auto msg) { /* handle response */ });

auto on_recv_message(auto msg) {
    async::run_triggers<"msg">(msg);
}
// when sndr runs:
// send_request will send a request
// time passes...
// when the response is received, its handler calls run_triggers
// sndr continues on the trigger scheduler
The sender upstream of incite_on
must complete by sending a function. When
that function is called it will in some way cause the downstream scheduled
sender to run. This may happen indirectly (e.g. as above, via another
asynchronous mechanism like message reception), or directly. The upstream sender
must complete successfully in one way — with this function — although it may
still send an error, or be cancelled.
continue_on
would be incorrect in this circumstance, because once the just
completes and sends a message, continue_on(trigger_scheduler{})
is racing with
message reception. If the message is received before the trigger_scheduler
is
ready to fire, the trigger would be missed.
Note: The incited scheduler must produce a sender which completes asynchronously. A synchronous scheduler would require no incitement, and continue_on would be correct.
let_error
Found in the header: async/let_error.hpp
let_error is like let_value, but instead of applying the function to values, it applies to errors.
auto sndr = async::just_error(42);
auto let_sndr = async::let_error(sndr, [] (int i) { return async::just(std::to_string(i)); });
// when run, let_sndr will produce the string "42"
let_stopped
Found in the header: async/let_stopped.hpp
let_stopped is like let_value, but instead of applying the function to values, it applies to the stopped channel.
auto sndr = async::just_stopped();
auto let_sndr = async::let_stopped(sndr, [] { return async::just(42); });
// when run, let_sndr will produce 42
let_value
Found in the header: async/let_value.hpp
let_value is like then, but the function given to it will itself return a sender.
auto sndr = async::just(42);
auto let_sndr = async::let_value(sndr, [] (int i) { return async::just(std::to_string(i)); });
// when run, let_sndr will produce the string "42"
Note: For functional programmers
A primary use case of let_value is to allow dynamic selection of senders at runtime based on what a previous sender produced. In this case, the function passed to let_value must return a single type. A naive approach doesn’t work:
auto sndr = async::just(42);
auto let_sndr = async::let_value(
sndr,
[] (int i) {
if (i == 42) {
return async::just(std::to_string(i));
} else {
return async::just_error(i);
}
});
// this fails to compile because the lambda tries to return either a just sender
// or a just_error: these are different types
For this use case, the function provided to let_value must return a variant sender: a sender that can encapsulate several others and select which one is used at runtime.
auto sndr = async::just(42);
auto let_sndr = async::let_value(
sndr,
[] (int i) {
return async::make_variant_sender(
i == 42,
[=] { return async::just(std::to_string(i)); },
[=] { return async::just_error(i); }
);
});
This works: using the helper function make_variant_sender, let_value can successfully make a runtime choice about which sender to proceed with.
repeat
Found in the header: async/repeat.hpp
repeat takes a sender and repeats it indefinitely. When the sender completes with a value, it is reconnected and restarted. This is useful for periodic tasks. A repeat sender can still be stopped, or complete with an error.
auto s = some_sender | async::repeat();
// when s runs, some_sender runs. If some_sender sends an error or is stopped, s
// reflects that. If some_sender completes successfully, the result is discarded
// and some_sender runs again.
Caution: repeat can cause stack overflows if used with a scheduler that doesn’t break the callstack, like inline_scheduler.
repeat_n
Found in the header: async/repeat.hpp
repeat_n works the same way as repeat, but repeats a given number of times.
Note: repeat_n must always run at least once to be able to complete. So repeat_n(1) repeats once, i.e. runs twice. repeat_n(0) runs once (thus is redundant).
repeat_until
Found in the header: async/repeat.hpp
repeat_until works the same way as repeat, but repeats the sender until a given predicate returns true.
// this is the same as repeat_n(0), i.e. just run once
auto s = some_sender | async::repeat_until([] (auto&&...) { return true; });
Note: The arguments passed to the predicate are those in the value completion(s) of the sender.
retry
Found in the header: async/retry.hpp
retry takes a multishot sender and wraps it: if that sender sends an error, the error gets discarded and the sender is reconnected and restarted.
auto s = some_sender | async::retry();
// if some_sender sends an error, it will be reconnected and restarted
// s completes when some_sender completes with set_value or set_stopped
Caution: retry can cause stack overflows if used with a scheduler that doesn’t break the callstack, like inline_scheduler.
retry_until
Found in the header: async/retry.hpp
retry_until works like retry, but takes a predicate. If the predicate returns true, retry_until can complete on the error channel.
// this is the same as just running the sender
auto s = some_sender | async::retry_until([] (auto&&) { return true; });
Note: The arguments passed to the predicate are those in the error completion(s) of the sender.
sequence
Found in the header: async/sequence.hpp
sequence is like let_value, but it must take a nullary function that returns a sender. In other words, the first sender’s values (if any) are discarded before the second sender is run.
auto sndr = async::just(17);
auto seq_sndr = async::sequence(sndr, [] { return async::just(42); });
// when run, seq_sndr will produce 42
Note: For functional programmers
let_value should be used when dynamic sender selection at runtime is required based on a sent value. When it suffices to run one sender after another with no dependency between them, use sequence. Because sequence is more constrained, in some cases it allows more compile-time manipulation like sender attribute interrogation.
Sequencing unrelated senders is common enough that there is a shortcut for sequence that saves typing a lambda expression: seq.
auto seq_sndr = async::just(17) | async::seq(async::just(42));
// when run, seq_sndr will produce 42
seq is useful, but notice the difference between these two:
auto seq1 = async::seq(async::just(move_only_obj{}));
auto seq2 = async::sequence([] { return async::just(move_only_obj{}); });
They are compositionally the same. However seq1 constructs the sender (just) early; seq2 constructs the sender only when called. In this case with a move-only object, that means that seq1 is single shot, but seq2 is multishot.
split
Found in the header: async/split.hpp
Some senders are single shot: they can only run once. Doing so may consume
resources that the sender owns. The call to connect
such a sender has an
overload for rvalue references only.
Other senders are multishot and can connect to multiple receivers and run multiple times.
split turns a single shot sender into a multishot sender. It has no effect when called on a multishot sender.
start_on
Found in the header: async/start_on.hpp
start_on takes a scheduler and a sender, and composes them so that the work will run on that scheduler. It chains the sender work onto the result of calling schedule.
// s is a scheduler
auto sndr = async::start_on(s, async::just(42));
// when run, sndr will execute on the compute resource specified by s, producing 42
start_on is equivalent to piping the result of schedule into seq; start_on(s, async::just(42)) is the same as:
auto sndr = s.schedule() | async::seq(async::just(42));
then
Found in the header: async/then.hpp
then takes a sender and a function, and returns a sender that will call the function with the values that the sender sends.
auto sndr = async::just(42);
auto then_sndr = async::then(sndr, [] (int i) { return std::to_string(i); });
// when run, then_sndr will produce the string "42"
Note: For functional programmers
then can also take a variadic pack of functions, for a use case when the sender sends multiple values. This provides an easy way to apply a different function to each value, and avoids having to return a tuple of values which would then require extra handling downstream.
auto sndr = async::just(42, 17);
auto then_sndr = async::then(sndr,
[] (int i) { return std::to_string(i); },
[] (int j) { return j + 1; });
// when run, then_sndr will send "42" and 18
In both the "normal" and variadic cases, functions passed to then may return void. In the "normal" case, the resulting then sender completes by calling set_value with no arguments. In the variadic case, set_value will be called with the void returns filtered out.
auto s1 = async::just(42);
auto normal_then = async::then(s1, [] (int) {});
// when run, this will call set_value() on the downstream receiver
auto s2 = async::just(42, 17);
auto variadic_then = async::then(s2,
[] (int i) { return std::to_string(i); },
[] (int) {});
// when run, this will call set_value("42") on the downstream receiver
In the variadic case, then can distribute the values sent from upstream to the functions by arity:
auto s = async::just(42, 17, false, "Hello"sv);
auto t = async::then(s,
[] (int i, int j) { return i + j; },
[] (auto b, std::string_view s) -> std::string_view { if (b) return s; else return "no"; },
[] { return 1.0f; });
// when run, this will call set_value(59, "no", 1.0f) on the downstream receiver
timeout_after
Found in the header: async/timeout_after.hpp
timeout_after takes a sender, a duration and an error value, and returns a sender that will complete with an error after the given timeout. Otherwise it will complete as the given sender completes.
auto sndr = async::start_on(my_scheduler{}, async::just(42))
| async::timeout_after(1s, error{17});
// when run, sndr will produce 42 on the value channel if my_scheduler runs within 1s
// otherwise it will produce error{17} on the error channel
Note: timeout_after is implemented using stop_when.
upon_error
Found in the header: async/then.hpp
upon_error works like then, but instead of applying the function to values, it applies to errors.
auto sndr = async::just_error(42);
auto then_sndr = async::upon_error(sndr, [] (int i) { return std::to_string(i); });
// when run, then_sndr will produce the string "42" as an error
upon_stopped works like then, but instead of applying the function to values, it applies to the stopped signal. Therefore the function takes no arguments.
auto sndr = async::just_stopped();
auto then_sndr = async::upon_stopped(sndr, [] { return 42; });
// when run, then_sndr will produce 42
when_all
Found in the header: async/when_all.hpp
when_all takes a number of senders and, after they all complete, forwards all the values. If any of them produces an error or is cancelled, when_all cancels the remaining senders.
Each sender passed to when_all must complete with set_value in exactly one way.
auto s1 = async::just(42);
auto s2 = async::just(17);
auto w = async::when_all(s1, s2);
// when w runs, s1 and s2 both run, and downstream receives both 42 and 17
Note: The order in which the sender arguments to when_all run is unspecified.
Important: If no arguments are given to when_all, it will complete immediately. If only one argument is given to when_all, when_all has no effect, i.e. it behaves like the identity function.
when_any
Found in the header: async/when_any.hpp
when_any takes a number of senders and races them. It is available in different flavors:
when_any determines completion as soon as any of its senders completes with either set_value or set_error. It completes with the first such completion it sees. If all its senders complete with set_stopped, when_any completes with set_stopped.
first_successful determines completion as soon as any of its senders completes with set_value. It completes with the first such completion it sees. If no senders complete with set_value, first_successful completes with the first set_error completion it sees. If all its senders complete with set_stopped, first_successful completes with set_stopped.
stop_when is a binary sender adaptor. It determines completion as soon as either of its senders completes on any channel. Because it’s a binary function, stop_when can also be piped.
Note: As soon as a completion is determined, any remaining senders whose completion becomes irrelevant are cancelled.
auto s1 = async::just(42);
auto s2 = async::just(17);
auto w = async::when_any(s1, s2);
// when w runs, s1 and s2 race; downstream receives either 42 or 17
auto s = some_sender | async::stop_when(some_other_sender);
// when s runs, some_sender and some_other_sender race
// the first to complete determines the completion of s
// the other is requested to stop
Note: For all flavors, the order in which the sender arguments run is unspecified.
Important: Each of these functions completes after all of its senders complete. The completion reflects, according to flavor, which sender completed first, but it cannot occur before all senders complete (regardless of the channel each may complete on).
Important: If no arguments are given to when_any, it will never complete unless it is cancelled. If only one argument is given to when_any, when_any has no effect, i.e. it behaves like the identity function.
Sender consumers
start_detached
Found in the header: async/start_detached.hpp
start_detached takes a sender, connects and starts it, and returns, leaving the work running detached. The return value is a stdx::optional. If the optional is empty, the sender was not started. Otherwise, it contains a pointer to an inplace_stop_source that can be used to cancel the operation.
auto sndr = async::just(42);
auto started = async::start_detached(sndr);
// in this case, starting the sender also completes it
If a sender starts detached, its operation state has to be allocated somewhere. That is achieved through an allocator determined from the sender’s attributes. Without further customization, if a sender completes synchronously, it will use the stack_allocator by default. Otherwise it will use the static_allocator.
To hook into the static allocation strategy, a template argument (representing the name of the allocation domain) can be given to start_detached. This is used to select a static allocator.
auto result = async::start_detached<struct Name>(s);
The default template argument results in a different static_allocator for each call site, with a default allocation limit of 1. If a name is given, that name is used to specialize the static_allocator, and can be used with stop_detached to request cancellation.
If the allocator’s construct method returns false (presumably because the allocation limit has been reached), the result of start_detached is an empty optional.
An extra environment may be given to start_detached in order to control sender behaviour, or to specify a custom allocator:
auto result = async::start_detached(
    s, async::prop{async::get_allocator_t{}, custom_allocator{}});
start_detached_unstoppable
Found in the header: async/start_detached.hpp
start_detached_unstoppable behaves identically to start_detached, except that the returned optional value contains a pointer to a never_stop_source, which has the same interface as an inplace_stop_source but never actually cancels the operation. So start_detached_unstoppable is slightly more efficient than start_detached for the cases where cancellation is not required.
auto result = async::start_detached_unstoppable<struct Name>(s);
stop_detached
Found in the header: async/start_detached.hpp
A sender started with start_detached may be cancelled with stop_detached, using the same template argument:
struct Name;
auto result = async::start_detached<Name>(s);
// later, in another context...
auto stop_requested = async::stop_detached<Name>(); // true if a stop was requested
stop_detached will return false if it cannot request a stop:
- because no sender with that name was given to start_detached
- because the sender has already completed
- because a stop was already requested
- because the sender was started using start_detached_unstoppable
- because the associated allocator supports multiple operation states, so a single template argument is not sufficient to determine which one to stop (in this case, the return value of start_detached may be used to request cancellation)
sync_wait
Found in the header: async/sync_wait.hpp
sync_wait takes a sender and:
- connects and starts it
- blocks waiting for it to complete
- returns any values it sends in a std::optional<stdx::tuple<…>>
auto sndr = async::just(42);
auto [i] = async::sync_wait(sndr).value();
// i is now 42
As with start_detached, an extra environment may be given to sync_wait in order to control sender behaviour:
auto result = async::sync_wait(
    s, async::prop{async::get_custom_property_t{}, custom_property{}});
Pipe syntax
We can compose senders with pipe syntax, which can make things easier to read. To take the continue_on example:
// this is equivalent to the previous non-piped continue_on example
auto async_computation =
s1.schedule()
| async::then([] { return 42; })
| async::continue_on(s2)
| async::then([] (int i) { return std::to_string(i); });
It is also possible to compose sender adaptors with pipe syntax, allowing us to defer both where the operation runs and how the result is obtained:
auto async_computation =
      async::then([] { return 42; })
    | async::continue_on(s2)
    | async::then([] (int i) { return std::to_string(i); });
s1.schedule() | async_computation | async::sync_wait();
Variant senders
Variant senders work primarily with let_value to provide a runtime choice in a computation. There are helper functions to create variant senders.
In the simplest formulation, make_variant_sender makes a choice between two senders based on a boolean value (just like an if statement). The consequent and alternative are lambda expressions:
auto s = async::make_variant_sender(/* boolean-expression */,
[] { return /* a sender */; },
[] { return /* a different sender */; });
This often suffices for a binary choice, but if we want a choice of more possibilities, the same function supports that:
auto s = async::make_variant_sender(
async::match([] (auto...) { /* test A */ }) >> [] (auto...) { return /* sender A */; },
async::match([] (auto...) { /* test B */ }) >> [] (auto...) { return /* sender B */; },
async::match([] (auto...) { /* test C */ }) >> [] (auto...) { return /* sender C */; },
async::otherwise >> [] (auto...) { return /* fallback */; },
args...);
Each predicate in turn receives the values of args…; the first predicate that succeeds indicates the corresponding sender. The pattern matching must be exhaustive; otherwise is a helpful catch-all to achieve that.
In the simple binary choice overload of make_variant_sender, the functions take no arguments (but lambda expressions can capture what they need); in the second, more general overload, the functions take the same arguments as the predicates do, so arguments can be used directly without capturing.
Caution: Capturing by reference is generally a bad idea in asynchronous code; it is easy to get dangling references that way. Init capture by move is preferable if needed.
Error handling
Sender adaptors won’t touch values and errors that they aren’t interested in, but will just pass them through, so we can do error handling in a compositional style:
auto s1 = async::just(42)
| async::then([] (int i) { return i + 17; })
| async::upon_error([] (int i) { return std::to_string(i); });
// when run, s1 will produce 59: upon_error had nothing to do
auto s2 = async::just_error(42)
| async::then([] (int i) { return i + 17; })
| async::upon_error([] (int i) { return std::to_string(i); });
// when run, s2 will produce the string "42" as an error: then had nothing to do
Cancellation
Cancellation is cooperative. That is the most important thing to remember: senders and receivers form a general framework for dealing with asynchronous computation; as such they do not prescribe any particular mechanism of concurrency. Senders and receivers know nothing about threads, fibres, interrupts, etc. So there is no way they could support any kind of pre-emptive cancellation.
Instead, we use stop_source, stop_token and stop_callback for cooperative cancellation.
A stop_source is a non-movable object that contains state (typically one or more atomic variables) relating to cancellation. It is the object that controls cancellation; to cancel an operation, call the associated stop_source’s request_stop() method.
A stop_token is a lightweight handle obtained from a stop_source. A stop_token can check whether an operation has been cancelled with the stop_requested() method. A source may hand out more than one token; all tokens will observe the source in a thread-safe manner.
A stop_callback registers a callback that will be called on cancellation of a given stop_source. stop_callback’s constructor takes a stop_token associated with the stop_source.
Find more documentation for these constructs on cppreference.com. In the senders and receivers framework, the following implementations are in the stop_token.hpp header:
- async::inplace_stop_source - a non-movable type that keeps the state
- async::inplace_stop_token - lightweight and copyable
- async::inplace_stop_callback - also non-movable
None of these types causes allocation. inplace_stop_callback is similar to the scheduler and timer task types, implemented as an intrusive list (hence it is non-movable).
Note: A stop_callback is called when request_stop is called, not when an operation finally completes. It executes in the thread that calls request_stop. If a stop_callback is constructed when a stop has already been requested, the callback will run immediately in the constructing thread.
Important: Once more, cancellation is cooperative. Any parts of operations that don’t support cancellation will run to completion (and then may complete with set_stopped). Sender adaptors support cancellation at transition points.
Customizing senders and receivers
The framework provides a general set of adaptors for composing senders. Users will write their own senders, receivers, operation states, etc. to provide for individual use cases.
The basics to remember are:
- senders and receivers are movable (and probably copyable)
- operation states may be non-movable
- senders advertise what they send
Senders and receivers must always be movable. In particular, don’t put non-movable objects (e.g. atomic values, mutexes) into a receiver. The operation state, which may be non-movable, is the place to put them.
Completion signatures
Senders advertise the types that they may send.
One way to do this, for simple cases, is to expose a
completion_signatures
typedef. This is a type list of function signatures
representing all the ways a sender may complete. The return type of each
signature signals whether it is a success, error or cancellation. Some trivial
examples:
// this just sends 42 (as if async::just(42))
struct just_42_sender {
// ...
// the only way it completes is by successfully sending an int
using completion_signatures =
async::completion_signatures<async::set_value_t(int)>;
};
// this just sends 42 (as if async::just(42)),
// but it may also error with an error_t
struct just_42_or_error_sender {
// ...
using completion_signatures =
async::completion_signatures<async::set_value_t(int),
async::set_error_t(error_t)>;
};
// this just sends 42 (as if async::just(42)),
// but it may also be stopped
struct just_42_or_stopped_sender {
// ...
using completion_signatures =
async::completion_signatures<async::set_value_t(int),
async::set_stopped_t()>;
};
Another way that senders can advertise their completions is through a query. It looks like this:
// this just sends 42 (as if async::just(42))
struct just_42_sender {
// ...
// the only way it completes is by successfully sending an int
template <typename Env>
[[nodiscard]] constexpr auto get_completion_signatures(Env const &)
-> async::completion_signatures<async::set_value_t(int)> {
return {};
}
};
For a simple case like this both methods are equivalent. However, by using a query a sender can send types dependent on an environment - and when a sender connects to a receiver, the receiver provides that environment.
Environments
An important mechanism for customization is the idea of an environment.
Environments are usually associated with receivers and the framework looks up
values in the environment by using an overloaded query
function. In particular, the
framework calls R::query(get_env_t)
to find the environment of a receiver type
R
, and E::query(get_stop_token_t)
to find the stop token of an environment
type E. async::get_env_t
and async::get_stop_token_t
are the tag types used for this
purpose.
In practice, here’s what supporting that might look like for a custom receiver that supports cancellation:
struct custom_receiver {
auto set_value(auto&&...) { ... }
auto set_error(auto&&) { ... }
auto set_stopped() { ... }
[[nodiscard]] constexpr auto query(async::get_env_t) const {
return async::prop{async::get_stop_token_t{}, stop_source->get_token()};
}
async::inplace_stop_source* stop_source;
};
Given this, we can construct an arbitrary composition of senders with this as
the final receiver. If we want to cancel the operation, we call request_stop()
on the (external) inplace_stop_source
. The internal senders, receivers and
operation states in the composition can observe this request by querying the
stop token in the environment for the final receiver, and this knowledge can
propagate through the sender-receiver chain.
Note
|
Remember that a receiver should not own a stop_source: receivers must be movable, and in general a stop_source is not. |
Constructing and composing environments
An environment is conceptually a key-value store through which a receiver (or a sender) provides properties to algorithms (sender adaptors) to customize the way they work.
Some of the facilities for environment handling are covered in P3325.
A prop
is a single key-value pair. Construct one with a query and the value
that will be returned for that query.
auto e = async::prop{async::get_stop_token_t{}, stop_source->get_token()};
Note
|
A prop is an environment. A small one, but it models the
concepts required.
|
env
is how we compose multiple properties, or multiple environments:
auto e = async::env{
async::prop{async::get_stop_token_t{}, stop_source->get_token()},
async::prop{async::get_allocator_t{}, async::stack_allocator{}}};
This allows us to use an existing environment and extend it with new properties, or override existing ones.
auto old_e = /* existing environment */;
auto e = async::env{
async::prop{async::get_stop_token_t{}, stop_source->get_token()},
old_e};
In this case, whether or not get_stop_token
is a valid query on old_e
,
calling it on e
will return the newly-provided stop token.
Sender attributes
Senders have attributes that can be retrieved with get_env
in the same way as
querying a receiver’s environment.
Note
|
Don’t blame me for the name: it’s in P2300.
Receivers have environments. Senders have attributes. Both are obtained by
calling get_env .
|
completion_scheduler
A sender’s attributes often include its completion scheduler. In particular, a
sender obtained from calling schedule
on a scheduler will always have that
scheduler as its completion scheduler. Perhaps that’s clearer in code:
auto orig_sched = /* some scheduler */;
auto sndr = orig_sched.schedule();
auto sched = async::get_completion_scheduler(async::get_env(sndr));
assert(sched == orig_sched);
get_completion_scheduler
uses A::query(get_completion_scheduler_t)
to
find the completion scheduler for a sender’s attributes type A
.
allocator
A sender's attributes also include an allocator, which is used when
start_detached
is called.
auto sched = /* some scheduler */;
auto sndr = sched.schedule();
auto alloc = async::get_allocator(async::get_env(sndr));
Similarly, get_allocator
uses A::query(get_allocator_t)
to
find the allocator for a sender’s attributes type A
.
Given a class T
, an allocator
supports two operations:
template <typename T>
struct allocator {
// allocate space for a T and construct it with Args...
// then, call F with the (rvalue) T
// return false if unable to allocate T, otherwise true
template <typename F, typename... Args> auto construct(F&&, Args &&...) -> bool;
// destroy and deallocate a T
auto destruct(T const *) -> void;
};
Note
|
construct is a little different from what you might expect: it doesn’t
return a pointer-to-T, it calls a given function with the constructed T. This
allows easier support for stack allocators with non-movable objects (like
operation states).
|
The default allocator, if a sender doesn’t otherwise specify one, is a
static_allocator
. A static_allocator
is parameterized with a tag
representing the allocation domain for a particular call site. This tag can be
passed to start_detached
and used to specialize the variable template
async::allocation_limit
in order to control static allocation.
// a tag type to indicate the allocation domain
struct my_alloc_domain;
// specialize the limit for the domain (if left unspecialized, the default is 1)
template <>
constexpr inline auto async::allocation_limit<my_alloc_domain> = std::size_t{8};
// when I call start_detached, the static allocator for the domain will be used
auto result = async::start_detached<my_alloc_domain>(sndr);
Note
|
The default allocation strategy for a sender is static allocation, but
some senders are synchronous by nature: for example just or the sender
produced by an inline_scheduler . These senders use stack allocators.
|
Schedulers
fixed_priority_scheduler
Found in the header: async/schedulers/priority_scheduler.hpp
A fixed_priority_scheduler
represents work that will be run at a certain priority.
using S = async::fixed_priority_scheduler<0>; // highest priority
Note
|
The intended use case for fixed_priority_scheduler is to schedule tasks
to be executed on prioritized interrupts.
|
A fixed_priority_scheduler
can be given a name which is used to output debug events.
The default name is "fixed_priority_scheduler"
.
auto s = async::fixed_priority_scheduler<0, "my scheduler">{};
The fixed_priority_scheduler
works hand in glove with a task_manager
that
manages tasks in priority order. A priority_task_manager
is provided and may
be used by providing a HAL, and by specializing the injected_task_manager
variable template.
namespace {
// A HAL provides one function to enable the priority interrupt
struct hal {
static auto schedule(async::priority_t) {}
};
// a priority task manager with 8 priority levels
using task_manager_t = async::priority_task_manager<hal, 8>;
} // namespace
// fixed_priority_scheduler will use this task_manager
template <> inline auto async::injected_task_manager<> = task_manager_t{};
// when a priority interrupt fires, the ISR executes the tasks
template <async::priority_t P>
auto interrupt_service_routine() {
async::task_mgr::service_tasks<P>();
}
The result of using a fixed_priority_scheduler
is that work is scheduled to be
run when priority interrupts fire.
int x{};
async::start_on(async::fixed_priority_scheduler<0>{},
async::just_result_of([&] { x = 42; }))
| async::start_detached();
// when the interrupt fires...
async::task_mgr::service_tasks<0>();
// x is now 42
inline_scheduler
Found in the header: async/schedulers/inline_scheduler.hpp
The most basic scheduler is the inline_scheduler
. It runs work with a regular
function call in the current execution context. It’s the degenerate case as far
as concurrency goes; starting the work also completes it.
int x{};
auto s = async::start_on(async::inline_scheduler{},
                         async::just(42)
                             | async::then([&] (auto i) { x = i; }));
async::start_detached(s);
// x is now 42
An inline_scheduler
can be given a name which is used to output debug events.
The default name is "inline_scheduler"
.
auto s = async::inline_scheduler<"my scheduler">{};
runloop_scheduler
Found in the header: async/schedulers/runloop_scheduler.hpp
The runloop_scheduler
adds any work to a queue that is executed in order. It
is used as a completion scheduler inside
sync_wait
.
auto value = async::get_scheduler()
| async::let_value([&](auto sched) {
return async::start_on(sched, async::just(42));
})
| async::sync_wait();
This code uses get_scheduler
to read the
scheduler provided by sync_wait
. That runloop_scheduler
is then used to
schedule work.
thread_scheduler
Found in the header: async/schedulers/thread_scheduler.hpp
The thread_scheduler
is a basic scheduler that runs work on a newly-created
thread that is detached.
int x{};
auto s = async::start_on(async::thread_scheduler{},
                         async::just(42) | async::then([&] (auto i) { x = i; }));
async::start_detached(s);
// without some other sync mechanism, this is risky:
// there is now a detached thread running that will update x at some point
A thread_scheduler
can be given a name which is used to output debug events.
The default name is "thread_scheduler"
.
auto s = async::thread_scheduler<"my scheduler">{};
time_scheduler
Found in the header: async/schedulers/time_scheduler.hpp
A time_scheduler
represents work that will be run after a certain duration has
elapsed.
auto s = async::time_scheduler{10ms}; // after a duration of 10ms
Note
|
The intended use case for time_scheduler is to schedule tasks
to be executed on timer interrupts.
|
The time_scheduler
works hand in glove with a timer_manager
that
manages timer tasks. A generic_timer_manager
is provided and may
be used by providing a HAL, and by specializing the injected_timer_manager
variable template.
namespace {
// A HAL defines a time_point type and a task type,
// and provides functions to control a timer interrupt
struct hal {
using time_point_t = std::chrono::steady_clock::time_point;
using task_t = async::timer_task<time_point_t>;
static auto enable() -> void;
static auto enable(auto duration) -> time_point_t; // optional
static auto disable() -> void;
static auto set_event_time(time_point_t tp) -> void;
static auto now() -> time_point_t;
};
// use the generic timer manager
using timer_manager_t = async::generic_timer_manager<hal>;
} // namespace
// tell the library how to infer a time point type from a duration type by
// specializing time_point_for
template <typename Rep, typename Period>
struct async::timer_mgr::time_point_for<std::chrono::duration<Rep, Period>> {
using type = hal::time_point_t;
};
// time_scheduler will use this timer_manager
template <> inline auto async::injected_timer_manager<> = timer_manager_t{};
// when a timer interrupt fires, the ISR executes the next task
auto timer_interrupt_service_routine() {
async::timer_mgr::service_task();
}
Note
|
async::timer_task is a generic provided task type that is parameterized
with the time point type.
|
Note
|
If async::timer_mgr::time_point_for is left unspecialized, the library
will assume that a duration type and time_point type are the same.
|
The result of using a time_scheduler
is that work is scheduled to be
run when a timer interrupt fires.
int x{};
async::start_on(async::time_scheduler{10ms},
async::just_result_of([&] { x = 42; }))
| async::start_detached();
// when the interrupt fires...
async::timer_mgr::service_task();
// x is now 42
HAL interaction
The various HAL functions are called as follows:
On queueing the first task (consuming a time_scheduler
sender), either:
- enable()
- now()
- set_event_time(time_point)
or (if this function is optionally available):
- enable(duration)
Note
|
The second case allows the HAL to fuse enabling and setting the expiry
time, if it’s possible to do that more efficiently. The return value should be
equivalent to now() + duration . The type of duration is equivalent to the
type obtained by subtracting two time_point_t values.
|
On queueing a new task which is not the next to expire:
- now()
On queueing a new task which is the next to expire:
- now()
- set_event_time(time_point)
On processing a task (not the last) with service_task():
- set_event_time(time_point)
On processing the last currently queued task with service_task():
- disable()
Note that interaction with the HAL starts with a single enable()
call, and
ends with a single disable()
call. This means that when enable()
is called,
the HAL is free to reset its timer. And when disable()
is called, the HAL is
free to disable the timer or even remove power. In between enable()
and
disable()
calls, the timer should be free-running. It should not be reset
while running: this will invalidate timing data in the task queue.
Note
|
enable() is called to start the timer, and disable() is called when no
more timers are active. When one timer expires and another in the queue is set,
enable() is not called again for the second timer.
|
Note
|
enable() is called from the context which consumes the time_scheduler
sender to kick off the work. disable() is called from the timer interrupt
context that processes the last task in the queue. And in general, each call to
set_event_time() to schedule the next timer task executes in the previous
task’s timer interrupt context.
|
time domains
A given system may have several independent timers. For that reason, a
time_scheduler
and an injected_timer_manager
may be associated with a
domain. A domain is typically just a tag type.
namespace {
// a tag type identifying an alternative timer domain
struct alt_domain;
// A HAL that interacts with different registers
// for that alternative timer domain
struct alt_domain_hal { ... };
// the generic timer manager is still fine for the alt_domain
using alt_timer_manager_t = async::generic_timer_manager<alt_domain_hal>;
} // namespace
// a time_scheduler for the alt domain will use the alt timer_manager
template <> inline auto async::injected_timer_manager<alt_domain> = alt_timer_manager_t{};
// to make it easy to create schedulers for that domain, use a factory
auto sched_factory = async::time_scheduler_factory<alt_domain>;
auto sched = sched_factory(10ms);
int x{};
auto s = async::start_on(sched,
                         async::just(42) | async::then([&] (auto i) { x = i; }));
async::start_detached(s);
// after 10ms, the alt domain interrupt will
// call service_task for the alt_domain...
auto alt_timer_interrupt_service_routine() {
async::timer_mgr::service_task<alt_domain>();
}
// and now x is 42
A time_scheduler_factory
can be given a name that it passes on to the
schedulers it creates, and which is used to output debug events. The default
name is "time_scheduler"
.
auto sched_factory = async::time_scheduler_factory<alt_domain, "my scheduler">;
trigger_scheduler
Found in the header: async/schedulers/trigger_scheduler.hpp
A trigger_scheduler
represents work that will be run on a named user-defined
trigger, like a specific interrupt service routine.
using S = async::trigger_scheduler<"name">;
The trigger_scheduler
works hand in glove with a trigger_manager
that
manages tasks in queued order. The action is very similar to that of the
priority_task_manager
, but instead of dealing with multiple priorities, tasks
for a given trigger are identified with the trigger name.
// when an interrupt fires, the ISR executes the tasks for the trigger
auto interrupt_service_routine() {
async::triggers<"name">.run();
}
The result of using a trigger_scheduler
is that work is scheduled to be
run when such an interrupt fires and runs the ISR.
int x{};
async::start_on(async::trigger_scheduler<"name">{},
async::just_result_of([&] { x = 42; }))
| async::start_detached();
// when the interrupt fires...
async::triggers<"name">.run();
// x is now 42
A trigger_scheduler
can also be triggered with arguments, which must be
specified as template arguments, and supplied using run_triggers
:
int x{};
async::trigger_scheduler<"name", int>{}
| async::then([&] (auto i) { x = i; })
| async::start_detached();
// when the interrupt fires...
async::run_triggers<"name">(42);
// x is now 42
Note
|
It is possible to use a trigger_scheduler that takes arguments as a
"normal" scheduler, i.e. functions like start_on will work; however the
arguments passed to run_triggers will be discarded when used with constructs
like start_on(trigger_scheduler<"name", int>{}, just(42)) .
|
Debugging
One way to debug a sender chain is to use a debugger and insert a breakpoint
inside a suitable place where "real work" is being done: inside a function
passed to then
for example. This is certainly doable, but perhaps challenging
for all the same reasons that debugging asynchronous code is usually
challenging.
Another approach to debugging is to construct sender chains without deciding
which scheduler they run on. Switching a sender chain to run on an
inline_scheduler
provides a way to debug — it is basically the same as
debugging synchronous code.
Handling a debug signal
To debug code running asynchronously, this library provides a mechanism to
inject a debug handler. This is done by defining a handler struct and
specializing the injected_debug_handler
variable template. The debug handler
has one member function (template): signal
.
#include <async/debug.hpp>
struct debug_handler {
template <stdx::ct_string C, stdx::ct_string S, typename Ctx>
constexpr auto signal(auto &&...) {
fmt::print("{} {} {}", C, async::debug::name_of<Ctx>, S);
}
};
template <> inline auto async::injected_debug_handler<> = debug_handler{};
Note
|
The injection mechanism uses the same pattern as for other global concerns, like the timer manager or the priority task manager. |
signal
has three template arguments:
- C : a compile-time string representing the name of the whole sender chain
- S : a compile-time string which is the debug signal raised
- Ctx : a debug context
A debug context type Ctx
adheres to the following API:
- async::debug::name_of<Ctx> is a compile-time string
- async::debug::tag_of<Ctx> is a tag type that represents the sender or adaptor type
- async::debug::type_of<Ctx> is the opaque "internal" type that the context is for (e.g. the operation state that is executing)
- async::debug::children_of<Ctx> is a type list containing the child context(s)
These arguments can be used to do compile-time filtering of signal types if
desired. signal
may also have arbitrary runtime arguments providing runtime
context for the signal.
Important
|
signal may be called at any time from any execution context. It’s
up to signal to take care of its own thread safety and avoid data races.
|
Raising a debug signal
During operation, a sender (or actually, the operation state that is doing
work) may raise a debug signal by calling debug_signal
:
// the signature of debug_signal
namespace async {
template <stdx::ct_string Signal, typename Ctx, queryable Q, typename... Args>
auto debug_signal(Q &&q, Args &&...args) -> void;
}
// the operation state will raise a debug signal when it is started
template <typename Rcvr>
struct my_op_state {
Rcvr r;
auto start() & -> void {
async::debug_signal<"start", async::debug::erased_context_for<my_op_state>>(get_env(r));
// ...
}
};
// and we provide a specialization of debug_context_for that fulfils the API
struct my_sender_t;
template <typename Rcvr>
struct async::debug::debug_context_for<my_op_state<Rcvr>> {
using tag = my_sender_t;
constexpr static auto name = stdx::ct_string{"my sender"};
using type = my_op_state<Rcvr>;
using children = stdx::type_list<>;
};
debug_signal
takes template arguments:
- Signal : the (name of the) debug signal
- Ctx : the debug context for this signal
and runtime arguments:
- q : a queryable object that responds to the get_debug_interface query - usually the environment of a connected receiver
- args... : any runtime context arguments to be forwarded to signal
The context for a signal is obtained through
async::debug::erased_context_for<T>
which in turn picks up a specialization of
async::debug::debug_context_for
.
Generic senders and adaptors will typically send well-known signals at transition points:
- start
- set_value
- set_error
- set_stopped
Naming senders and operations
When raising a debug signal, we know the name of the signal and the name of the sender or adaptor that is raising it. But we don’t know the name of the overall operation or sender chain. This is supplied from outside, using the environment of the external receiver.
The provision of that receiver typically happens inside a sender consumer like
start_detached
. By providing a
string name to start_detached
, the internal receiver that is connected will
have an environment that responds to the get_debug_interface
query. The value
of that query will be an interface whose signal
function calls the injected
debug handler’s signal
function, providing the given name as the C
argument.
auto s0 = async::fixed_priority_scheduler<0, "fp_sched[0]">{}.schedule()
| async::then<"answer0">([] { return 24; });
auto s1 = async::fixed_priority_scheduler<1, "fp_sched[1]">{}.schedule()
| async::then<"answer1">([] { return 18; });
auto result = async::start_detached<"my op">(async::when_all(s1, s0));
The debug signals produced by this code could be:
- "my op" "start" context["when_all"]
- "my op" "start" context["fp_sched[1]"]
- "my op" "set_value" context["fp_sched[1]"]
- "my op" "set_value" context["answer1"]
- "my op" "start" context["fp_sched[0]"]
- "my op" "set_value" context["fp_sched[0]"]
- "my op" "set_value" context["answer0"]
- "my op" "set_value" context["when_all"]
Things to note here:
- when_all started first, because ultimately what was passed to start_detached (the outer layer of the onion) was when_all.
- likewise, when_all is the last thing to complete.
- the call to when_all is not named, so we get the default name "when_all".
- then does not produce a "start" debug signal of its own.
- fp_sched[1] ran before fp_sched[0] even though presumably 0 is a higher priority than 1. What happened was that when_all started fp_sched[1] first - and this caused an immediate interrupt. That interrupt did not return until the answer1 sender had completed.
Other orderings are possible, of course, according to exactly how a sender chain is executed. But the usual invariants apply.
Index of identifiers
By header
allocator.hpp
- allocator - a concept for an allocator
- allocator_of_t - the type returned by get_allocator
- get_allocator - a tag used to retrieve an allocator from a sender's attributes
completes_synchronously.hpp
- completes_synchronously - a query used to determine whether a sender completes synchronously
completion_tags.hpp
- set_error - a tag used to complete on the error channel
- set_stopped - a tag used to complete on the stopped channel
- set_value - a tag used to complete on the value channel
compose.hpp
An internal header that contains no public-facing identifiers. compose.hpp
is used
in pipe-composition syntax.
concepts.hpp
- multishot_sender<S> - a concept modelled by senders where connect may operate on lvalues
- operation_state<O> - a concept modelled by operation states
- receiver<R> - a concept modelled by receivers
- receiver_base - an empty type; deriving from this opts in to modelling the receiver concept
- receiver_from<R, S> - a concept modelled by a receiver R that handles what a sender S sends
- scheduler<S> - a concept modelled by schedulers
- sender<S> - a concept modelled by senders
- sender_base - an empty type; deriving from this opts in to modelling the sender concept
- sender_in<S, E> - a concept modelled by a sender S whose completion signatures depend on an environment E
- sender_of<S, Sig, E> - a concept modelled by a sender S that may complete with Sig given environment E
- sender_to<S, R> - the inverse of receiver_from<R, S>
- singleshot_sender<S> - a concept modelled by senders where connect operates on rvalues only
connect.hpp
- connect - a tag used to connect a sender with a receiver
continue_on.hpp
- continue_on - a sender adaptor that continues execution on another scheduler
debug.hpp
- get_debug_interface - a query used to get a debug interface from an environment
- injected_debug_handler<> - a variable template used to inject a specific implementation of a debug handler
- debug::make_named_interface<"name"> - a function that makes a debug interface with arguments passed as context
- debug_signal<"signal", "name", Ctx> - a function to raise a debug signal
env.hpp
- get_env - a query used to retrieve the environment of a receiver or the attributes of a sender
forwarding_query.hpp
- forwarding_query - a tag indicating whether or not a query may be forwarded
get_completion_scheduler.hpp
- get_completion_scheduler - a query used to retrieve a completion_scheduler from a sender's attributes
get_scheduler.hpp
- get_scheduler - a query used to retrieve a scheduler from a sender's attributes
incite_on.hpp
- incite_on - a sender adaptor that incites execution on another scheduler
just.hpp
- just - a sender factory that sends on the value channel
- just_error - a sender factory that sends on the error channel
- just_stopped - a sender factory that sends on the stopped channel
just_result_of.hpp
- just_error_result_of - a sender factory that sends lazily computed values on the error channel
- just_result_of - a sender factory that sends lazily computed values on the value channel
let.hpp
An internal header that contains no public-facing identifiers. let.hpp is used
by let_error.hpp, let_stopped.hpp, and let_value.hpp.
let_error.hpp
- let_error - a sender adaptor that can make runtime decisions on the error channel
let_stopped.hpp
- let_stopped - a sender adaptor that can make runtime decisions on the stopped channel
let_value.hpp
- let_value - a sender adaptor that can make runtime decisions on the value channel
read_env.hpp
- get_scheduler - a sender factory equivalent to read_env(get_scheduler_t{})
- get_stop_token - a sender factory equivalent to read_env(get_stop_token_t{})
- read_env - a sender factory that sends values obtained from a receiver's environment
repeat.hpp
- repeat - a sender adaptor that repeats a sender indefinitely
- repeat_n - a sender adaptor that repeats a sender a set number of times
- repeat_until - a sender adaptor that repeats a sender until a condition becomes true
retry.hpp
- retry - a sender adaptor that retries a sender that completes with an error
- retry_until - a sender adaptor that retries an error-completing sender until a condition becomes true
schedulers/inline_scheduler.hpp
- inline_scheduler - a scheduler that completes inline as if by a normal function call
schedulers/priority_scheduler.hpp
- fixed_priority_scheduler<P> - a scheduler that completes on a priority interrupt
schedulers/requeue_policy.hpp
- requeue_policy::immediate - a policy used with priority_task_manager::service_tasks() and triggers<"name">.run
- requeue_policy::deferred - the default policy used with priority_task_manager::service_tasks() and triggers<"name">.run
schedulers/task.hpp
An internal header that contains no public-facing identifiers. task.hpp defines
base classes that are used by fixed_priority_scheduler and time_scheduler.
schedulers/task_manager.hpp
- priority_task_manager<HAL, NumPriorities> - an implementation of a task manager that can be used with fixed_priority_scheduler
schedulers/task_manager_interface.hpp
- injected_task_manager<> - a variable template used to inject a specific implementation of a priority task manager
- priority_t - a type used for priority values
- task_mgr::is_idle() - a function that returns true when no priority tasks are queued
- task_mgr::service_tasks<P>() - an ISR function used to execute tasks at a given priority
schedulers/thread_scheduler.hpp
- thread_scheduler - a scheduler that completes on a newly created thread
schedulers/time_scheduler.hpp
- time_scheduler - a scheduler that completes on a timer interrupt
schedulers/timer_manager.hpp
- generic_timer_manager<HAL> - an implementation of a timer manager that can be used with time_scheduler
schedulers/timer_manager_interface.hpp
- injected_timer_manager<> - a variable template used to inject a specific implementation of a timer manager
- timer_mgr::is_idle() - a function that returns true when no timer tasks are queued
- timer_mgr::service_task() - an ISR function used to execute the next timer task
- timer_mgr::time_point_for - a class template that can be specialized to specify a time_point type corresponding to a duration type
schedulers/trigger_manager.hpp
- triggers<"name"> - a named trigger manager that is used with trigger_scheduler
schedulers/trigger_scheduler.hpp
- trigger_scheduler<"name"> - a trigger_scheduler that completes on a user-defined stimulus by calling triggers<"name">.run
sequence.hpp
- seq - a sender adaptor used to sequence two senders without typing a lambda expression
- sequence - a sender adaptor that sequences two senders
split.hpp
- split - a sender adaptor that turns a single-shot sender into a multi-shot sender
stack_allocator.hpp
- stack_allocator - an allocator that allocates on the stack
start.hpp
- start - a tag used to start an operation state
start_detached.hpp
- start_detached - a sender consumer that starts a sender without waiting for it to complete
- start_detached_unstoppable - a sender consumer that starts a sender without waiting for it to complete, without a provision for cancellation
- stop_detached - a function that may request cancellation of a sender started with start_detached
start_on.hpp
- start_on - a sender adaptor that starts execution on a given scheduler
static_allocator.hpp
-
static_allocation_limit<Domain>
- a variable template that can be specialized to customize the allocation limit for a domain -
static_allocator
- anallocator
that allocates using static storage
stop_token.hpp
-
inplace_stop_source
- a stop source that can be used to control cancellation -
inplace_stop_token
- a stop token corresponding toinplace_stop_source
-
stop_token_of_t
- the type returned byget_stop_token
sync_wait.hpp
-
sync_wait
- a sender consumer that starts a sender and waits for it to complete
then.hpp
-
then
- a sender adaptor that transforms what a sender sends on the value channel -
upon error
- a sender adaptor that transforms what a sender sends on the error channel -
upon stopped
- a sender adaptor that transforms what a sender sends on the stopped channel
timeout_after.hpp
-
timeout_after
- a sender adaptor that races a sender against a time limit
type_traits.hpp
An internal header with no public-facing identifiers: it contains traits and metaprogramming constructs used by many senders.
variant_sender.hpp
-
make_variant_sender
- a function used to create a sender returned from let_value
when_all.hpp
-
when_all
- an n-ary sender adaptor that completes when all of its child senders complete
when_any.hpp
-
first_successful
- a sender adaptor that completes when any of its child senders complete on the value channel -
stop_when
- a binary sender adaptor equivalent to when_any
-
when_any
- an n-ary sender adaptor that completes when any of its child senders complete on the value or error channels
By identifier
-
allocator_of_t
- #include <async/allocator.hpp>
-
connect
- #include <async/connect.hpp>
-
fixed_priority_scheduler<P>
- #include <async/schedulers/priority_scheduler.hpp>
-
forwarding_query
- #include <async/forwarding_query.hpp>
-
generic_timer_manager<HAL>
- #include <async/schedulers/timer_manager.hpp>
-
get_allocator
- #include <async/allocator.hpp>
-
get_completion_scheduler
- #include <async/get_completion_scheduler.hpp>
-
get_scheduler
- #include <async/read_env.hpp>
-
get_stop_token
- #include <async/read_env.hpp>
-
injected_task_manager<>
- #include <async/schedulers/task_manager_interface.hpp>
-
injected_timer_manager<>
- #include <async/schedulers/timer_manager_interface.hpp>
-
inline_scheduler
- #include <async/schedulers/inline_scheduler.hpp>
-
inplace_stop_source
- #include <async/stop_token.hpp>
-
inplace_stop_token
- #include <async/stop_token.hpp>
-
multishot_sender<S>
- #include <async/concepts.hpp>
-
operation_state<O>
- #include <async/concepts.hpp>
-
priority_t
- #include <async/schedulers/task_manager_interface.hpp>
-
priority_task_manager<HAL, NumPriorities>
- #include <async/schedulers/task_manager.hpp>
-
receiver<R>
- #include <async/concepts.hpp>
-
receiver_base
- #include <async/concepts.hpp>
-
receiver_from<R, S>
- #include <async/concepts.hpp>
-
requeue_policy::immediate
- #include <async/schedulers/requeue_policy.hpp>
-
requeue_policy::deferred
- #include <async/schedulers/requeue_policy.hpp>
-
runloop_scheduler
- #include <async/schedulers/runloop_scheduler.hpp>
-
scheduler<S>
- #include <async/concepts.hpp>
-
sender<S>
- #include <async/concepts.hpp>
-
sender_base
- #include <async/concepts.hpp>
-
sender_in<S, E>
- #include <async/concepts.hpp>
-
sender_of<S, Sig, E>
- #include <async/concepts.hpp>
-
sender_to<S, R>
- #include <async/concepts.hpp>
-
set_error
- #include <async/completion_tags.hpp>
-
set_stopped
- #include <async/completion_tags.hpp>
-
set_value
- #include <async/completion_tags.hpp>
-
singleshot_sender<S>
- #include <async/concepts.hpp>
-
start
- #include <async/start.hpp>
-
start_detached_unstoppable
- #include <async/start_detached.hpp>
-
static_allocation_limit<Domain>
- #include <async/static_allocator.hpp>
-
stop_token_of_t
- #include <async/stop_token.hpp>
-
task_mgr::is_idle()
- #include <async/schedulers/task_manager_interface.hpp>
-
task_mgr::service_tasks<P>()
- #include <async/schedulers/task_manager_interface.hpp>
-
thread_scheduler
- #include <async/schedulers/thread_scheduler.hpp>
-
time_scheduler
- #include <async/schedulers/time_scheduler.hpp>
-
timer_mgr::is_idle()
- #include <async/schedulers/timer_manager_interface.hpp>
-
timer_mgr::service_task()
- #include <async/schedulers/timer_manager_interface.hpp>
-
timer_mgr::time_point_for
- #include <async/schedulers/timer_manager_interface.hpp>
-
trigger_scheduler<"name">
- #include <async/schedulers/trigger_scheduler.hpp>
-
triggers<"name">
- #include <async/schedulers/trigger_manager.hpp>