Hi there!

Asking for advice on a block-based scenario.

We are successfully using `qcow2` on a file-based implementation for SMAPIv3 (by the way, is that still the "common" name of this API?). Basically, you have your device, a filesystem on top, and your `qcow2` files inside. That's fine, and I suppose it's similar to the way you do it for GFS2.

But what would be the best approach for dealing with block devices? In short, where should SMAPIv3 logic stop and a fully independent storage logic start?
I see 2 categories:

1. One big block storage that you split into smaller block "zones" (the LVM approach done in SMAPIv1), e.g. one big LUN for all your VMs (sketched below).
2. One block storage per VDI, e.g. one LUN per VM disk.
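To make category 1 concrete, here is a rough sketch of what I have in mind (the `run` helper, the volume group naming, and the sizes are all made up for illustration, not taken from any existing SMAPIv1 code): one volume group covering the whole device, and one logical volume per VDI carved out of it.

```python
import subprocess
import uuid

def run(*args):
    # Thin wrapper around the LVM CLI; a real plugin would need proper
    # error handling and locking around these calls.
    subprocess.run(args, check=True)

def create_sr(device, sr_uuid):
    # Category 1: the whole block device (e.g. one big LUN) becomes a
    # single volume group holding every VDI of the SR.
    run("pvcreate", device)
    run("vgcreate", "sr-%s" % sr_uuid, device)

def create_vdi(sr_uuid, size_mib):
    # Each VDI is just a logical volume inside that volume group.
    vdi_uuid = str(uuid.uuid4())
    run("lvcreate", "-n", "vdi-%s" % vdi_uuid,
        "-L", "%dM" % size_mib, "sr-%s" % sr_uuid)
    return vdi_uuid
```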
Given your experience building a storage stack, what would be the best solution?

I'm not talking about a shared SR here, only about a local scenario.
The problem with block storage is that you need to do locking somewhere. With SMAPIv3 it is (correctly) the storage's responsibility to do this (i.e. we no longer (ab)use xapi for holding locks); however, a pure block device does not offer any API to do so.
If your SR is never meant to be shared, then you can do such locking using some other filesystem from the host (or even just in /var/run/nonpersistent, since rebooting the host would mean all your VMs have rebooted and released whatever locks they had anyway).
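For example, a minimal sketch of such host-local locking, assuming one lock file per VDI under /var/run/nonpersistent (the directory name and the `vdi_lock` helper are made up for illustration):

```python
import fcntl
import os
from contextlib import contextmanager

LOCK_DIR = "/var/run/nonpersistent/block-sr-locks"  # hypothetical location

@contextmanager
def vdi_lock(vdi_uuid):
    """Hold an exclusive, host-local lock for one VDI.

    Being under /var/run/nonpersistent, the lock vanishes on reboot, which
    is fine for a non-shared SR: a rebooted host has also rebooted all the
    VMs that might have been using the VDI.
    """
    os.makedirs(LOCK_DIR, exist_ok=True)
    fd = os.open(os.path.join(LOCK_DIR, vdi_uuid + ".lock"),
                 os.O_RDWR | os.O_CREAT, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)  # blocks until no other process holds it
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)

# with vdi_lock("some-vdi-uuid"):
#     ...  # attach/activate the block device for the VM
```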
For actually allocating blocks, you can also consider dm-thin as a 3rd option, if you can figure out how to avoid losing data when it runs out of space.
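For reference, a hand-rolled dm-thin setup with `dmsetup` looks roughly like the sketch below (device names, sizes and the 64 KiB block size are placeholders; in practice you may prefer to drive it through LVM thin provisioning instead). The hard part is not shown here: deciding what happens to in-flight writes once the pool has no free blocks left.

```python
import subprocess

def dmsetup(*args):
    subprocess.run(("dmsetup",) + args, check=True)

DATA_DEV = "/dev/sdb2"       # placeholder: device holding the actual data blocks
METADATA_DEV = "/dev/sdb1"   # placeholder: small device for thin-pool metadata
POOL_SECTORS = 20 * 1024 * 2048   # 20 GiB data device, in 512-byte sectors
VDI_SECTORS = 10 * 1024 * 2048    # 10 GiB virtual size for the thin volume

# Pool table: <start> <length> thin-pool <metadata dev> <data dev>
#             <data block size, sectors> <low water mark, blocks>
dmsetup("create", "pool", "--table",
        "0 %d thin-pool %s %s 128 32768" % (POOL_SECTORS, METADATA_DEV, DATA_DEV))

# Allocate thin device id 0 inside the pool, then expose it as a block device.
dmsetup("message", "/dev/mapper/pool", "0", "create_thin 0")
dmsetup("create", "vdi-0", "--table",
        "0 %d thin /dev/mapper/pool 0" % VDI_SECTORS)
```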
So, if I understand correctly:

1. You can use SMAPIv3 to "tell" the storage where to lock/unlock a resource. Is that correct?
2. If that's correct (because SMAPIv3 is aware when a VM boots, stops, migrates, or does whatever other action), you indeed need a "layer" between a dumb block storage and SMAPIv3 to take the proper locks and avoid corruption (see the sketch after this list). Right?
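Something like the hypothetical sketch below is what I have in mind for that layer (the class and method names are invented, not the actual SMAPIv3 datapath interface): the plugin is the only component that knows when a VDI starts or stops being used, so that is where the host-local lock gets taken and released.

```python
import fcntl
import os

LOCK_DIR = "/var/run/nonpersistent/block-sr-locks"  # hypothetical, host-local

class BlockDatapath:
    """Hypothetical glue between a dumb block device and SMAPIv3."""

    def __init__(self):
        os.makedirs(LOCK_DIR, exist_ok=True)
        self._fds = {}

    def activate(self, vdi_uuid, device):
        # The VM is about to use this VDI: take the lock before handing
        # the device over to whatever consumes it (not shown here).
        fd = os.open(os.path.join(LOCK_DIR, vdi_uuid),
                     os.O_RDWR | os.O_CREAT, 0o600)
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # fail fast if already in use
        self._fds[vdi_uuid] = fd
        return device

    def deactivate(self, vdi_uuid):
        # The VM stopped (or migrated away): release and forget the lock.
        fd = self._fds.pop(vdi_uuid)
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```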
About the remote/shared scenario: I think there are multiple possibilities to do that, though obviously not on a dumb iSCSI drive bay.
Regarding the local SR (not shared): LVM enable/disable is a kind of "lock", right? If 1. is correct, it might be enough, or am I wrong?
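To be explicit about what I mean by enable/disable, it's LV activation, e.g. something like this (volume group and LV names are placeholders):

```python
import subprocess

def set_vdi_active(vg, lv, active):
    # Activate ("enable") or deactivate ("disable") the logical volume
    # backing a VDI: the /dev/<vg>/<lv> node appears or disappears with it.
    # Plain LV activation is per-host state rather than a cluster lock,
    # which should be fine here since the SR is local-only.
    flag = "-ay" if active else "-an"
    subprocess.run(["lvchange", flag, "%s/%s" % (vg, lv)], check=True)

# set_vdi_active("sr-1234", "vdi-abcd", True)   # before the VM uses it
# set_vdi_active("sr-1234", "vdi-abcd", False)  # after the VM is done with it
```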
I'll take a look at dm-thin, but IIRC all file-based SRs also have this problem when they get full (at least they end up blocked in RO).