sstable/blob: new package #4207
Conversation
Refactor the pooling of value blocks in valblk.Writer to use the new block.Buffer and block.BufHandle types. This resolves a TODO in the layout writer around some previous idiosyncrasies in writing valblk index blocks, makes the code a little more understandable, and prepares for the introduction of a blob file writer.
Add a blob FileWriter type for writing out blob files, consisting of a series of value blocks, a value block index, and a footer. Add the initial, largely unimplemented skeleton of a FileReader type. Informs cockroachdb#112.
Flushing some comments on the first commit.
Reviewed 3 of 3 files at r1.
Reviewable status: 2 of 7 files reviewed, 5 unresolved discussions (waiting on @jbowens)
sstable/block/compression.go
line 357 at r1 (raw file):
// CompressAndChecksum compresses and checksums the block data, returning a
// PhysicalBuffer that is owned by the caller. If non-nil, the returned
Should this say `PhysicalBlock` instead of `PhysicalBuffer`? I am guessing that the slice in `PhysicalBlock` is backed by `BufHandle`, but that isn't explicit in this code comment.
sstable/block/compression.go
line 377 at r1 (raw file):
// Use the compressedBuf as the new b.h.
pbHandle := b.h
b.h = compressedBuf
This is a bit different from what we used to do in `valblk.Writer.compressAndFlush`, in that, given the difference in sizes, it kept the compressed and uncompressed `*blockBuffer`s separate and did not mix the pools. Perhaps that wasn't necessary, but I am curious about the motivation.
sstable/block/compression.go
line 404 at r1 (raw file):
// Release releases the buffer back to the pool for reuse.
func (h *BufHandle) Release() {
	if invariants.Enabled && (h.pool == nil) != (h.b == nil) {
This suggests we have an invariant that `h != nil`, so the next `if` predicate shouldn't need to check `h != nil`.
sstable/block/compression.go
line 410 at r1 (raw file):
// maximum to the pool. This avoids holding on to occasional large buffers
// necessary for, for example, single large values.
if h != nil && h.b != nil && (h.pool.Max == 0 || len(h.b) < h.pool.Max) {
Is there a reason to allow `Max == 0` to mean unlimited, given we don't use it?
sstable/block/compression.go
line 450 at r1 (raw file):
	return v.(*BufHandle)
}
return &BufHandle{b: make([]byte, 0, max(p.Default, 1024)), pool: p}
Do we really need this `max`, given that we are only creating two of these `bufferSyncPool`s, with reasonable defaults? And if someone uses a smaller default, they potentially have a good reason for it.