[WIP] Tail sender #678

Draft: wants to merge 14 commits into main
Conversation

@kirkshoop (Contributor) commented Nov 5, 2022

This change adds library support for tail calls, allowing Sender/Receiver algorithms to iterate without exhausting the stack or bouncing through a scheduler.

Needs examples, documentation and a paper.

The solution here is translated from an original design and implementation by @lewissbaker.

A tail_sender is a constrained sender concept: always_completes_inline_v (a new CPO) must be true, all destructors must be trivial, all methods (copy/move/connect, etc.) must be noexcept, and the resulting tail_operation_state must support the new unwind() CPO.

A nullable_tail_sender additionally has a tail_operation_state that is convertible to bool.
A terminal_tail_sender has a tail_operation_state whose start() returns void.
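
A rough sketch of those constraints, assuming a C++20-concepts encoding (sender, always_completes_inline_v, unwind, and start come from the library and this PR; everything else here is illustrative, not the PR's actual definitions):

#include <concepts>
#include <type_traits>

template <class S>
concept tail_sender =
    sender<S> &&                                 // existing sender concept
    always_completes_inline_v<S> &&              // new query from this PR
    std::is_nothrow_move_constructible_v<S> &&
    std::is_trivially_destructible_v<S>;

template <class O>
concept tail_operation_state =
    std::is_trivially_destructible_v<O> &&
    requires(O& o) {
      { start(o) } noexcept;                     // may return a tail_sender
      { unwind(o) } noexcept;                    // new unwind() CPO
    };

// nullable: the operation state tests as bool (false => nothing to run)
template <class O>
concept nullable_tail_operation_state =
    tail_operation_state<O> && std::convertible_to<O&, bool>;

// terminal: start() returns void, ending the chain of tail_senders
template <class O>
concept terminal_tail_operation_state =
    tail_operation_state<O> &&
    requires(O& o) { { start(o) } -> std::same_as<void>; };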

The existing start, set_value, set_error, and set_stopped CPOs now support returning a tail_sender.
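
For illustration, a receiver completion under this scheme can return the next step instead of recursing into it. This is only a sketch using current stdexec naming; then_receiver, next_, and fn_ are made-up names, and set_value here is this PR's modified CPO, which may return a tail_sender:

#include <stdexec/execution.hpp>
#include <utility>

// Sketch: a receiver that forwards the tail_sender produced by the
// downstream completion to its own caller instead of dropping it.
template <class Receiver, class Fn>
struct then_receiver {
  Receiver next_;
  Fn fn_;

  template <class... Args>
  friend auto tag_invoke(stdexec::set_value_t, then_receiver&& self,
                         Args&&... args) noexcept {
    // under this PR, set_value may return a tail_sender; return it so
    // the driver can resume it after this frame unwinds
    return stdexec::set_value(std::move(self.next_),
                              self.fn_(std::forward<Args>(args)...));
  }
};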

A tail_sender would be used extensively in sender/receiver sequences.

A run() that returns a tail_sender is also the way to compose multiple 'drivable' schedulers (like run_loop) on the same thread. This will be important for the system_executor.

The repeat() algorithm would use a tail_sender per start() instead of a schedule() per start().

To drive a set of recursive tail_senders without exhausting the stack, use:

TailSender resume_tail_senders_until_one_remaining(TailReceiver, TailSenders...)

An example main() that safely starts and stops async execution:

int main() {
  async_scope main_scope;
  system_executor system_context;
  run_loop main_loop;

  // drive both contexts on this thread until application(...) completes
  sync_wait(
    when_all(
      application(
        get_scope(main_scope),
        get_scheduler(system_context),
        get_scheduler(main_loop)),
      resume_tail_senders_until_one_remaining(
        null_tail_receiver{},
        run(system_context),
        run(main_loop))));

  // drain and join all contexts before they are destroyed
  sync_wait(
    when_all(
      main_scope.join(),
      main_loop.join(),
      system_context.join()));
}

Other framework loops can be integrated into main() as additional 'drivable' contexts that provide run(), or by inserting:

      resume_tail_senders_until_one_remaining(
        null_tail_receiver{},
        run_pending(system_context), 
        run_pending(main_loop));

Running this on each iteration of the framework's loop ensures that the schedulers keep making forward progress.
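
For instance, embedding the drivers in a hypothetical host framework's loop could look like this (framework_poll_one() is a stand-in for the framework's own single-iteration entry point, not a real API):

// Sketch: give both execution contexts a chance to drain ready work,
// without blocking, on every pass of the host framework's loop.
while (framework_poll_one()) {
  resume_tail_senders_until_one_remaining(
      null_tail_receiver{},
      run_pending(system_context),
      run_pending(main_loop));
}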

@ericniebler (Collaborator)
Cool. What impact does this have on algorithm implementations? I assume it's not safe to just drop a tail sender on the floor, or is it?

@kirkshoop (Contributor, Author)

Correct. The results of start(), set_value(), set_error(), and set_stopped() must either be propagated or resumed.
sync_wait() adds a resume for the result of start(). Typically, a start() or set_..() that itself calls set_..() or start() returns the result of that call.

This affects lifetimes as well. When a tail_sender is returned, the lifetime of the object that returned it is extended until all tail_senders have been resumed. This is why sync_wait resumes the tail_senders before the operation_state is destroyed.
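
A sketch of that propagation rule for start() (all names here are hypothetical, and start is this PR's modified CPO, which may return a tail_sender):

#include <stdexec/execution.hpp>

// Sketch: an operation state whose start() returns the tail_sender
// produced by starting its child, rather than dropping it.
template <class ChildOp>
struct forwarding_opstate {
  ChildOp child_;

  friend auto tag_invoke(stdexec::start_t, forwarding_opstate& self) noexcept {
    // the returned tail_sender must be resumed or propagated by our
    // caller (sync_wait resumes it before the operation_state is
    // destroyed)
    return stdexec::start(self.child_);
  }
};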

@ericniebler linked an issue Mar 19, 2023 that may be closed by this pull request
@ericniebler (Collaborator)

@kirkshoop there's a variant_sender now. Maybe you could use it in variant_tail_sender?

@trxcllnt (Member)

/ok to test

@ericniebler marked this pull request as draft March 29, 2023 18:06
Development

Successfully merging this pull request may close these issues.

Propose symmetric transfer for senders (tail senders?)
3 participants