Replies: 7 comments 7 replies
-
@tannerlinsley - Was wondering if you could give me your guru insights into this query :) Thank you so much for the tool.
-
Use virtualization, @ifen!
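For readers landing here: virtualization means rendering only the rows currently in (or near) the viewport, instead of all 10–20k. Libraries like @tanstack/react-virtual handle this for you; the core idea can be sketched as a pure windowing computation. This is a simplified sketch assuming fixed row heights, not the library's actual internals:

```typescript
// A minimal fixed-row-height windowing sketch: given the scroll position,
// compute which slice of rows to actually render. Everything outside the
// window is replaced by empty space (the spacer of height `totalHeight`).
interface RowWindow {
  startIndex: number;  // first row to render
  endIndex: number;    // last row to render (inclusive)
  offsetTop: number;   // translateY applied to the first rendered row
  totalHeight: number; // height of the inner spacer element
}

function computeWindow(
  rowCount: number,
  rowHeight: number,
  scrollTop: number,
  viewportHeight: number,
  overscan = 5, // extra rows above/below to avoid flicker while scrolling
): RowWindow {
  const startIndex = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const endIndex = Math.min(
    rowCount - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan,
  );
  return {
    startIndex,
    endIndex,
    offsetTop: startIndex * rowHeight,
    totalHeight: rowCount * rowHeight,
  };
}
```

With 20k rows at 35px each in a 700px viewport, only ~30 rows are ever mounted, so a re-render touches ~30 DOM nodes instead of 20,000.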
-
@ifen did you ever solve this issue?
-
I'm seeing a similar story. It happens on every single CRUD operation, as well as on sort / group / filter. The result is that the library runs very, very slowly on any user interaction. I'm going to try to come up with something to speed it up...
-
This still seems to be an issue with @tanstack/[email protected]. I have a table with 10K rows, and every time I set a new data array, the ENTIRE rowModel gets recreated. This takes 2–3 seconds, and the UI is unresponsive in the meantime. I am surprised that the rows aren't cached, as most things are cached by the library.

Almost all entries in my data array have the same index and are referentially equal to those in the previous data array. It would be possible to identify which rows changed by index and reference, and skip accessing (and recreating) unmodified rows. I think that would be a general optimization, and it would also empower users who care about this to optimize performance by making only immutable changes to their data array objects. (Mutable changes aren't detected by React anyway, so nothing should break because of this.)

What I don't know is whether it can be implemented without breaking something inside the library? I don't know my way around the code...
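The referential-equality premise above depends on updating the data array immutably. A minimal sketch of what that looks like (the `Row` shape and `updateRow` helper are hypothetical, not part of the library):

```typescript
// Hypothetical row shape for illustration.
interface Row {
  id: number;
  value: number;
}

// Apply a single-row change immutably: the array is new, the changed row is
// new, but every untouched row keeps its old object identity. A cache keyed
// by row reference (the optimization proposed above) could then skip
// re-deriving table rows whose underlying data object is unchanged.
function updateRow(data: Row[], id: number, value: number): Row[] {
  return data.map(row => (row.id === id ? { ...row, value } : row));
}
```

After `const next = updateRow(data, 2, 9)`, a `data[i] === next[i]` check identifies every unmodified row in O(n) without inspecting contents.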
-
@retnag and others: Table uses a memoized computation pipeline for its row model. This means that if any part of the pipeline is invalidated, all of its dependents are also invalidated. So, for example, when you change the data, the row model, filter model, sorting model, grouping model, pagination model, etc. will also need to be recalculated. The primary reason for this is that we don't really know what changed when the core data model changes, since that's in your control, not the library's. So we assume the worst: that the entire underlying data model has changed. I haven't really found a good way around this without bloating the library into some kind of Excel-like graph system (see AG Grid), which would allow for more robust fine-grained dependency tracking and computationally efficient updates. The best tips I have for you:
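The invalidation cascade described above can be illustrated with a tiny standalone memoization sketch (this is an illustration of the principle, not the library's actual code):

```typescript
// Memoize a one-argument function on the *reference* of its argument,
// which is how reference-based memoization pipelines decide to recompute.
function memo1<A, R>(fn: (a: A) => R): (a: A) => R {
  let last: { arg: A; result: R } | undefined;
  return (a: A): R => {
    if (last === undefined || last.arg !== a) {
      last = { arg: a, result: fn(a) };
    }
    return last.result;
  };
}

type Row = { value: number };

let buildCalls = 0;
let sortCalls = 0;

// Stage 1: turn raw data into row objects.
const buildRows = memo1((data: number[]): Row[] => {
  buildCalls++;
  return data.map(value => ({ value }));
});

// Stage 2: sort the rows. Its input is stage 1's *output*, so a new data
// reference produces new rows, which in turn forces a re-sort: the whole
// downstream chain recomputes even if the contents are identical.
const sortRows = memo1((rows: Row[]): Row[] => {
  sortCalls++;
  return [...rows].sort((a, b) => a.value - b.value);
});

const getRowModel = (data: number[]): Row[] => sortRows(buildRows(data));
```

Calling `getRowModel` twice with the same array computes each stage once; passing a new array with identical contents recomputes both stages, which is exactly the "we assume the worst" behavior described above.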
-
Thanks for the quick and detailed reply! <3
-
Using react-table to display a large set of data (approx. 20k rows). This data is updated via WebSocket periodically throughout the day (creates, updates, or deletes of entries). The data passed into the table is generated using a computed value linked to an array of all items. The problem is that this causes a full re-render of the entire table every time even a single row is added or deleted, which freezes the UI for us. Are there any recommendations on how to handle this better?
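One approach that pairs well with virtualization and the referential-equality points above: fold each socket event into the previous array immutably, so unchanged rows keep their identity and memoized row rendering can skip them. The message shapes below are hypothetical; adjust them to your actual feed:

```typescript
// Hypothetical item and event shapes for illustration.
type Item = { id: number; price: number };

type TableEvent =
  | { type: 'create'; item: Item }
  | { type: 'update'; item: Item }
  | { type: 'delete'; id: number };

// Fold one socket event into the array immutably. Every row the event does
// not touch keeps its object identity, so downstream reference checks
// (React.memo on row components, reference-based caches) see it as unchanged.
function applyEvent(items: Item[], ev: TableEvent): Item[] {
  switch (ev.type) {
    case 'create':
      return [...items, ev.item];
    case 'update':
      return items.map(it => (it.id === ev.item.id ? ev.item : it));
    case 'delete':
      return items.filter(it => it.id !== ev.id);
  }
}
```

Note this alone won't stop the table's row model from being rebuilt (see the memoized-pipeline explanation above); combined with row virtualization, though, the rebuild cost stops being a rendering cost, which is usually the larger share of the freeze.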