c/driver/postgresql: Strange error when ingesting a very large table #1921
Interesting. Kou in #1881 mentions there are various timeout parameters; I wonder if we hit one of those.
That's good to know! I think we might be writing every array to a single buffer and then sending it (see arrow-adbc/c/driver/postgresql/statement.cc, lines 608 to 612 at 00f1526), and perhaps we need to find a way to … In this case in particular I think we might be overflowing the …
Oh, that sounds like a better guess. Splitting up the batch probably makes sense anyway...
At least we should not overflow (if that is indeed what happens).
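One concrete place an overflow could bite: libpq's `PQputCopyData(conn, buffer, nbytes)` takes the payload length as a plain C `int`, so any single send approaching 2 GiB would wrap. A minimal sketch of the kind of guard being discussed (the helper name `FitsInCopyMessage` is hypothetical, not the driver's actual code):

```cpp
#include <cstddef>
#include <cstdint>
#include <limits>

// Hypothetical guard: PQputCopyData's length parameter is a C `int`,
// so a single buffer larger than INT32_MAX (~2 GiB) cannot be sent
// in one call without overflowing. Check before sending.
bool FitsInCopyMessage(std::size_t nbytes) {
  return nbytes <=
         static_cast<std::size_t>(std::numeric_limits<int32_t>::max());
}
```

A caller would either reject such a batch with a clear error or fall back to splitting it, rather than silently passing an overflowed length to libpq.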
We're hitting sfackler/rust-postgres#986.
PostgreSQL apparently has an internal limit; split up batches to stay under that limit. It doesn't care about message boundaries in this mode, so we can chunk naively. Fixes apache#1921.
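Since COPY ignores message boundaries, the fix can split the serialized payload at arbitrary byte offsets, without lining chunks up with row boundaries. A sketch of that naive chunking (assumed helper `CopyChunkSizes`, not the driver's actual code); each resulting chunk would be handed to `PQputCopyData` in turn:

```cpp
#include <cstddef>
#include <vector>

// Sketch: split a COPY payload of `total` bytes into chunks of at most
// `max_chunk` bytes each. Chunks may cut rows mid-way; PostgreSQL
// reassembles the COPY stream regardless of where messages are split.
std::vector<std::size_t> CopyChunkSizes(std::size_t total,
                                        std::size_t max_chunk) {
  std::vector<std::size_t> sizes;
  while (total > 0) {
    std::size_t n = total < max_chunk ? total : max_chunk;
    sizes.push_back(n);
    total -= n;
  }
  return sizes;
}
```

Keeping each chunk well under the 2 GiB message-length limit (e.g. tens of MiB) sidesteps the overflow entirely while leaving the serialization path untouched.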
What happened?
When bulk inserting a very large array into PostgreSQL, we get an error that does not occur when bulk inserting many small arrays.
How can we reproduce the bug?
From R:
I don't think this is a result of nanoarrow's array creation.
Environment/Setup