
In ready_chunks, reserve capacity based on size_hint #2661

Open · wants to merge 1 commit into base: master
Conversation

@stepancheg (Contributor)

Reserving the full `ready_chunks` limit on every poll may be too expensive when the limit is high but only a few messages are available.

Instead, rely on `Stream::size_hint` to reserve capacity. If the underlying stream implements `size_hint`, this works well; otherwise `ready_chunks` will be somewhat less efficient.

This is a better alternative to #2657.
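As a sketch of the idea (a hypothetical helper, not the actual futures-util code; it uses `saturating_add` to sidestep the overflow question raised in review), the reservation can be driven by the stream's `size_hint` lower bound, clamped to the chunk limit:

```rust
// Hypothetical helper illustrating the proposed reservation strategy:
// reserve the stream's size_hint lower bound plus one (for the item
// about to be pushed), but never more than the chunk limit.
fn chunk_capacity(size_hint_lower: usize, limit: usize) -> usize {
    size_hint_lower.saturating_add(1).min(limit)
}

fn main() {
    // A stream reporting 0 remaining reserves 1 slot, not 1_000_000.
    assert_eq!(chunk_capacity(0, 1_000_000), 1);
    // A stream that knows 10 items remain reserves 11.
    assert_eq!(chunk_capacity(10, 1_000_000), 11);
    // The chunk limit still caps the reservation.
    assert_eq!(chunk_capacity(usize::MAX, 1_000_000), 1_000_000);
    println!("ok");
}
```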

@notgull (Contributor) commented Oct 30, 2022

Do you have any benchmarks or use cases that demonstrate an improvement in performance?

@stepancheg (Contributor, Author)

> Do you have any benchmarks or use cases that demonstrate an improvement in performance?

I can create a benchmark which demonstrates the behavior:

  • call ready_chunks with argument 1000000
  • create a channel which provides one message at a time
  • observe excessive memory usage: 1000000 * size_of::<T>()

The current behavior of excessive memory allocation prevents using `ready_chunks` with a large capacity.
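For concreteness (assuming `T = u64` as the message type), the per-poll allocation under the current behavior can be quantified like this:

```rust
use std::mem::size_of;

fn main() {
    // Current behavior: every poll allocates Vec::with_capacity(limit),
    // even when only a single message is ready.
    let limit = 1_000_000usize;
    let per_poll_bytes = limit * size_of::<u64>(); // T = u64 here
    assert_eq!(per_poll_bytes, 8_000_000);
    println!("per-poll allocation: {per_poll_bytes} bytes");
}
```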

// Note we reserve capacity here, not when `items` is created,
// because stream may know the remaining size,
// but items might not be readily available.
let size_hint = this.stream.as_mut().size_hint().0.wrapping_add(1);
Member (review comment)

Why is this wrapping_add instead of saturating_add?
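The distinction the reviewer is pointing at can be checked directly at the overflow boundary:

```rust
fn main() {
    // At the boundary, wrapping_add silently wraps around to 0,
    // which would make the code reserve nothing at all…
    assert_eq!(usize::MAX.wrapping_add(1), 0usize);
    // …whereas saturating_add clamps at usize::MAX.
    assert_eq!(usize::MAX.saturating_add(1), usize::MAX);
}
```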

@taiki-e (Member) commented Nov 27, 2022

Thanks for the PR! `size_hint` reports the number of elements remaining in the entire stream, while the information we really want here is the number of elements until the next `Pending`, so I guess the effect of this is not very large.
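The semantics taiki-e describes mirror `Iterator::size_hint` in the standard library, which bounds the total remaining length and says nothing about how many items are immediately available:

```rust
fn main() {
    // Iterator::size_hint (which Stream::size_hint mirrors) returns
    // (lower_bound, Option<upper_bound>) for the *total* remaining
    // length, not the number of ready elements.
    let v = vec![1, 2, 3];
    assert_eq!(v.iter().size_hint(), (3, Some(3)));
}
```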

That said, this seems somewhat more reasonable than #2657 or the current implementation. I am inclined to merge this once the review comments are addressed.

@taiki-e added the S-waiting-on-author label (Status: awaiting some action, such as code changes or more information, from the author) on Jan 20, 2023
@tzachar commented Jan 8, 2024

Is there a plan to merge this?
Also, on a side note: why is a new buffer allocated on every entry to poll_next? Wouldn't it be more efficient to allocate the buffer once, in ReadyChunks::new, and reuse it on every entry to poll_next?

@taiki-e (Member) commented Jan 8, 2024

> reuse it on every entry to poll_next?

How would we reuse it? poll_next returns an owned Vec.
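A small illustration of why an owned return value defeats buffer reuse (a sketch, not the `ReadyChunks` code): moving the `Vec` out takes its capacity with it, leaving nothing behind to reuse.

```rust
fn main() {
    let mut buf: Vec<u32> = Vec::with_capacity(100);
    buf.push(1);
    // poll_next must hand out an owned Vec, so the buffer is moved out;
    // the replacement left behind is a fresh Vec with zero capacity.
    let chunk = std::mem::take(&mut buf);
    assert!(chunk.capacity() >= 100);
    assert_eq!(buf.capacity(), 0);
}
```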

@tzachar commented Jan 8, 2024

Can't we return a reference to the internal vector, i.e. make Self::Item be `&Vec<St::Item>`?
Or is there a different way of achieving this, where we are not allocating a new vector on each call to poll_next?

@taiki-e (Member) commented Jan 8, 2024

> Can't we return a reference to the internal vector, i.e. make Self::Item be `&Vec<St::Item>`?

I don't think it is possible without using GATs (generic associated types) in the Future trait.
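To illustrate what a GAT-based "lending" design might look like, here is a hypothetical trait. This is not `futures_core::Stream` (whose `poll_next` must return an owned `Item`), just a sketch of why GATs are the missing piece:

```rust
// Hypothetical lending-stream trait using GATs (stable since Rust 1.65).
// The real futures_core::Stream is not defined this way, which is why
// poll_next cannot return &Vec<St::Item> today.
trait LendingStream {
    type Item<'a>
    where
        Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Toy impl that lends a reference into an internal buffer,
// reusing its allocation across calls.
struct Chunks {
    buf: Vec<u32>,
    polls_left: u32,
}

impl LendingStream for Chunks {
    type Item<'a>
        = &'a Vec<u32>
    where
        Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        if self.polls_left == 0 {
            return None;
        }
        self.polls_left -= 1;
        self.buf.clear(); // capacity is retained and reused
        self.buf.push(self.polls_left);
        Some(&self.buf)
    }
}

fn main() {
    let mut s = Chunks { buf: Vec::with_capacity(8), polls_left: 2 };
    let mut seen = Vec::new();
    while let Some(chunk) = s.next() {
        seen.push(chunk[0]);
    }
    assert_eq!(seen, vec![1, 0]);
}
```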

@tzachar commented Jan 9, 2024

It's worth trying to support this.
Any idea how to go about it?

Labels
S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author

4 participants