
Deactivating a logical pool causes a VM to fail to start when the VM's storage is in that logical pool #569

Closed
gouzongmei opened this issue Jan 15, 2015 · 4 comments

Comments

@gouzongmei
Contributor

How to reproduce:

  1. Create a logical-type storage pool named "logical_test" and activate it.
  2. Edit Templates > Storage to choose "logical_test".
  3. Create vm1 with the template.
  4. Start vm1; it succeeds.
  5. Stop vm1 and edit Templates > Storage to choose another storage pool (not "logical_test").
  6. Deactivate "logical_test"; it succeeds.
  7. Start vm1; it fails with a message that the image file was not found.

Expected result:
step 6 should fail, or step 7 should succeed.

When the pool type is 'dir', the process above does not cause an error: both step 6 and step 7 succeed.
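
For reference, here is a minimal sketch of the same scenario driven directly through the libvirt Python bindings (it assumes the pool "logical_test" and the guest "vm1" from the steps above already exist; this is an illustration, not Kimchi code):

import libvirt

conn = libvirt.open('qemu:///system')

pool = conn.storagePoolLookupByName('logical_test')
if pool.isActive():
    pool.destroy()   # the call Kimchi's "deactivate" action ends up issuing (step 6)

dom = conn.lookupByName('vm1')
try:
    dom.create()     # step 7: fails, since the LV symlinks under the pool's
                     # target path are gone after pool.destroy()
except libvirt.libvirtError as e:
    print('start failed:', e)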

@gouzongmei
Contributor Author

The storage pool's "deactivate" action calls libvirt's "StoragePoolDestroy" function, which produces different results for different pool types.
Once the pool is destroyed, a 'dir' type pool retains its target path and images, while a 'logical' type pool deletes its target path and images.

IMO, once a 'logical' type pool is used by a guest, the UI should warn that the pool cannot be deactivated. Does anybody have a different opinion?
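
A minimal sketch of the kind of guard suggested here, using the libvirt Python bindings (the helper name and the XML string matching are assumptions for illustration, not existing Kimchi code):

import libvirt

def pool_in_use_by_guest(conn, pool_name):
    """Return True if any defined domain references a volume of the (still active) pool."""
    pool = conn.storagePoolLookupByName(pool_name)
    vol_paths = [pool.storageVolLookupByName(name).path()
                 for name in pool.listVolumes()]
    for dom in conn.listAllDomains():
        xml = dom.XMLDesc(0)
        if any(path in xml for path in vol_paths):
            return True
    return False

conn = libvirt.open('qemu:///system')
if pool_in_use_by_guest(conn, 'logical_test'):
    raise RuntimeError("Cannot deactivate 'logical_test': a guest still uses it")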

@clnperez
Contributor

According to the documentation, the data should still be there, so that's not good. Did you find any documentation that cites this behavior? I know I've seen different behaviors documented for storage pools, but I'm just not finding this particular case documented anywhere.

@cd1
Member

cd1 commented Feb 19, 2015

I did some experiments with this issue and learned that this is exactly libvirt's expected behavior. There are a few points to look at:

  1. Different storage pools behave differently.
  2. When a [non-transient] logical storage pool is destroyed, its data is not deleted; it only becomes unavailable at the location where it was available before. The data of a logical pool lives in a separate partition, so it is still there even when the partition symlinks are gone - which is what happens when the pool is deactivated; you just can't access it right away.
  3. When a directory storage pool is destroyed, nothing regarding its data is deleted; libvirt cannot delete those files and magically recreate them later if the user activates the pool again.

Here is the output of some virsh commands I performed, which shows that the volumes in a logical pool are not deleted when the pool is destroyed and later started:

[vianac@fedora kimchi]$ file /dev/vdb
/dev/vdb: block special (252/16)
[vianac@fedora kimchi]$ sudo virsh pool-define-as vgfoo logical --source-dev /dev/vdb
Pool vgfoo defined
[vianac@fedora kimchi]$ sudo vgs
  VG            #PV #LV #SN Attr   VSize  VFree
  fedora_fedora   1   2   0 wz--n- 19,51g 40,00m
[vianac@fedora kimchi]$ ls /dev/vgfoo
ls: cannot access /dev/vgfoo: No such file or directory
[vianac@fedora kimchi]$ sudo virsh pool-build vgfoo
Pool vgfoo built

[vianac@fedora kimchi]$ sudo vgs
  VG            #PV #LV #SN Attr   VSize  VFree
  fedora_fedora   1   2   0 wz--n- 19,51g 40,00m
  vgfoo           1   0   0 wz--n-  5,00g  5,00g
[vianac@fedora kimchi]$ ls /dev/vgfoo
ls: cannot access /dev/vgfoo: No such file or directory
[vianac@fedora kimchi]$ sudo virsh pool-start vgfoo
Pool vgfoo started

[vianac@fedora kimchi]$ sudo virsh vol-create-as vgfoo pvbar 1G
Vol pvbar created

[vianac@fedora kimchi]$ ls /dev/vgfoo
pvbar
[vianac@fedora kimchi]$ sudo mkfs.ext4 /dev/vgfoo/pvbar
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: d2b1ab30-7abf-4805-821a-5bcb63031525
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

[vianac@fedora kimchi]$ sudo mount /dev/vgfoo/pvbar /mnt/
[vianac@fedora kimchi]$ ls /mnt
lost+found
[vianac@fedora kimchi]$ sudo touch /mnt/kimchi
[vianac@fedora kimchi]$ sudo umount /mnt
[vianac@fedora kimchi]$ sudo virsh pool-destroy vgfoo
Pool vgfoo destroyed

[vianac@fedora kimchi]$ ls /dev/vgfoo
ls: cannot access /dev/vgfoo: No such file or directory
[vianac@fedora kimchi]$ sudo virsh pool-start vgfoo
Pool vgfoo started

[vianac@fedora kimchi]$ sudo mount /dev/vgfoo/pvbar /mnt/
[vianac@fedora kimchi]$ ls /mnt/
kimchi  lost+found

Therefore, what's happening in the description of this issue is expected, according to libvirt.

I have two proposals for this issue:

  1. Do nothing and rely on libvirt's behavior here. As shown above, this is exactly what happens if the user uses virsh or any other libvirt-based application. In all cases, a VM cannot be started if it contains at least one disk in an inactive logical storage pool, because libvirt removes the corresponding symlinks when the pool is destroyed.
  2. Create consistent behavior and never allow a VM to be started if it contains at least one disk in an inactive storage pool, even if that means removing the "feature" which currently allows a VM with a disk in an inactive directory storage pool to be started.

Personally, I prefer proposal (2) as it is more consistent and makes more sense; after all, the user shouldn't be trying to start a VM whose disk lives in an inactive storage pool.
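
A rough sketch of what proposal (2) could look like on the backend, using the libvirt Python bindings (the function name and the way disk sources are matched to pool target paths are assumptions for illustration, not existing Kimchi code):

import libvirt
from xml.etree import ElementTree

def assert_all_disk_pools_active(conn, dom):
    """Raise if any disk of the domain lives under an inactive storage pool's target path."""
    root = ElementTree.fromstring(dom.XMLDesc(0))
    sources = [src.get('file') or src.get('dev')
               for src in root.findall('./devices/disk/source')]

    for pool in conn.listAllStoragePools():
        if pool.isActive():
            continue
        # An inactive pool cannot list its volumes, so match the pool's
        # target path against the domain's disk source paths instead.
        target = ElementTree.fromstring(pool.XMLDesc(0)).findtext('./target/path')
        if target and any(src and src.startswith(target) for src in sources):
            raise RuntimeError("Disk in inactive pool '%s'; refusing to start" % pool.name())

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('vm1')
assert_all_disk_pools_active(conn, dom)
dom.create()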

@alinefm
Member

alinefm commented Mar 9, 2015

I will close it as a duplicate of #355.

@alinefm alinefm closed this as completed Mar 9, 2015
@cd1 cd1 added the duplicated label Mar 11, 2015