
how to implement sliding windows quantile? #204

Open
lyupy opened this issue Nov 22, 2022 · 1 comment
lyupy commented Nov 22, 2022

how to implement sliding windows quantile, such as window size is 1000 on data stream?
how to remove element outside of window?
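For reference, here is what an exact count-based sliding window looks like without a t-digest: a minimal sketch (not part of the t-digest library) that keeps the last `size` values in arrival order for eviction plus a sorted copy for rank queries. It is exact but costs O(n) memory per window, which is precisely what a digest is meant to avoid.

```python
import bisect
from collections import deque

class SlidingWindowQuantile:
    """Exact quantile over the most recent `size` stream values.

    Keeps the window twice: in arrival order (for evicting the oldest
    element) and in sorted order (for nearest-rank quantile lookups).
    """

    def __init__(self, size):
        self.size = size
        self.window = deque()   # arrival order, drives eviction
        self.sorted = []        # same values, kept sorted

    def add(self, x):
        self.window.append(x)
        bisect.insort(self.sorted, x)
        if len(self.window) > self.size:
            # remove the element that just fell outside the window
            old = self.window.popleft()
            self.sorted.pop(bisect.bisect_left(self.sorted, old))

    def quantile(self, q):
        # nearest-rank quantile on the sorted window
        idx = min(int(q * len(self.sorted)), len(self.sorted) - 1)
        return self.sorted[idx]
```

With a window of 1000 as in the question, feeding the stream `0, 1, …, 1999` leaves the window holding `1000…1999`, so `quantile(0.5)` returns 1500.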

@tdunning
Copy link
Owner

Pure sliding windows are probably not possible with a t-digest, and basing windows on counts rather than time is a bit unusual as well. You can implement a form of exponential windowing pretty easily, but it becomes very difficult to reason about the digest invariant if you do that.
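To make the exponential-windowing idea concrete, here is a toy sketch (assumed for illustration, not the t-digest implementation): every sample carries a weight that decays by a factor `alpha` per arrival, and points whose weight has faded below a threshold are pruned. A real version would decay t-digest centroid weights instead, which is exactly what makes the size invariant hard to reason about.

```python
class DecayedQuantileSketch:
    """Toy exponentially-windowed quantile estimator.

    Each sample enters with weight 1.0; every subsequent arrival
    multiplies all existing weights by `alpha`, and points whose
    weight drops below `min_weight` are discarded. Recent samples
    therefore dominate the weighted quantile.
    """

    def __init__(self, alpha=0.99, min_weight=1e-3):
        self.alpha = alpha
        self.min_weight = min_weight
        self.points = []  # list of (value, weight)

    def add(self, x):
        # age every existing point; drop the ones that have faded out
        self.points = [(v, w * self.alpha) for v, w in self.points
                       if w * self.alpha >= self.min_weight]
        self.points.append((x, 1.0))

    def quantile(self, q):
        pts = sorted(self.points)
        total = sum(w for _, w in pts)
        acc = 0.0
        for v, w in pts:
            acc += w
            if acc >= q * total:
                return v
        return pts[-1][0]
```

With `alpha = 0.99` the effective window is a few hundred samples, and the weighted median sits near the recent end of the stream rather than at the arithmetic middle.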

What is typically done instead is to store a compressed digest per time period, usually one or five minutes long. At query time, you simply combine as many digests as necessary to cover the window you want. In many cases you store many digests for each time period, so the aggregation involves multiple digests at each time point.
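The bucketed scheme can be sketched as follows. This is an illustrative stand-in, not the t-digest API: a plain sorted list plays the role of the per-bucket digest, and a range query merges every bucket covering the window. A real deployment would store a serialized t-digest per bucket and merge digests rather than raw values.

```python
from collections import defaultdict

class BucketedQuantiles:
    """One sketch per fixed time bucket; queries merge covering buckets.

    A raw value list stands in for a compressed t-digest here, so the
    structure of the scheme (bucket on write, merge on read) is visible
    without depending on any particular digest implementation.
    """

    def __init__(self, bucket_seconds=60):
        self.bucket_seconds = bucket_seconds
        self.buckets = defaultdict(list)   # bucket index -> values

    def add(self, timestamp, x):
        self.buckets[int(timestamp // self.bucket_seconds)].append(x)

    def quantile(self, q, start, end):
        # merge every bucket whose span intersects [start, end]
        merged = []
        for b in range(int(start // self.bucket_seconds),
                       int(end // self.bucket_seconds) + 1):
            merged.extend(self.buckets.get(b, []))
        merged.sort()
        return merged[min(int(q * len(merged)), len(merged) - 1)]
```

Note that the query window is quantized to bucket boundaries, which is the usual trade-off of this design: a "last 10 minutes" query is really "the last ten 1-minute buckets".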

The key point here is that the total bandwidth of metrics is heavily compressed but accuracy is not lost. Suppose that you are storing 10,000 digests every minute, a few of which get a million values per second, most of which get thousands of samples per second, and some of which get only a few values per minute. The hot digests will have nearly the maximum number of centroids, but will be bounded in size to a few kB (for compression = 100 or 200). The cold digests will have only a few centroids and thus will be considerably smaller. Overall, however, the number of bytes per second required to store your metrics will be less than 250kB/s, which is very modest for such a large number of metrics. Moreover, a year of data at full resolution is less than 10TB, which is (amazingly) now a relatively small amount of data.
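The arithmetic behind those totals works out as follows. The average per-digest size (about 1.5 kB across hot and cold digests) is an assumption chosen to be consistent with the quoted figures, not a measured value:

```python
# Back-of-envelope check of the storage figures above.
digests_per_minute = 10_000
avg_bytes_per_digest = 1_500   # assumed blend of hot (~few kB) and cold digests

bytes_per_second = digests_per_minute * avg_bytes_per_digest / 60
print(f"{bytes_per_second / 1e3:.0f} kB/s")     # -> 250 kB/s

seconds_per_year = 365 * 24 * 3600
tb_per_year = bytes_per_second * seconds_per_year / 1e12
print(f"{tb_per_year:.1f} TB/year")             # -> 7.9 TB/year, under 10TB
```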

If you add aggregated digests for each day then querying any time period will be very fast.

In the end, given how well these windowed aggregates work, the real question is why you need a truly windowed digest at all.
