The current setup for checking mounts doesn't actually check for mounts and will always return "OK." For example, if you run the role on a system that doesn't have /var as a separate mount, you'll see this:

```
TASK [SCORED | 1.1.5 | PATCH | Ensure separate partition exists for /var] ***
[WARNING]: Consider using the mount module rather than running 'mount'. If you
need to use command because mount is insufficient you can add 'warn: false' to
this command task or set 'command_warnings=False' in ansible.cfg to get rid of
this message.
ok: [127.0.0.1]
```
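Judging from the warning in that output, the underlying task runs `mount` through the command module. A check structured that way (a sketch, not the role's actual task) reports "ok" no matter what is mounted, because listing mounts always succeeds:

```yaml
- hosts: 127.0.0.1
  connection: local
  tasks:
    # Running 'mount' only lists the current mounts; the task result is
    # 'ok' whether or not /var is a separate partition.
    - name: Ensure separate partition exists for /var (ineffective check)
      command: mount
      register: mount_output
      changed_when: false
```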
But if you run this:
```yaml
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: See if /var is mounted
      fail:
        msg: "/var is not mounted"
      when: ansible_mounts | selectattr('mount', 'equalto', '/var') | list | length == 0
      vars:
        mounts: "{{ ansible_mounts }}"
```
You will get the following output:
```
fatal: [127.0.0.1]: FAILED! => {"changed": false, "msg": "/var is not mounted"}
```
If you change /var to just / in the "when" statement of my playbook, you'll get "skipping: [127.0.0.1]" because / is mounted. So this method can correctly detect mounts.
There are ways to create mounts in Ansible using the mount module, but I'm not sure if you actually want to do that or just want to warn. Either way, I figured I would bring it up for consideration.
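For the first option, a minimal sketch using the mount module (the device path and filesystem type are assumptions; they depend on how the block device was prepared):

```yaml
- hosts: 127.0.0.1
  connection: local
  become: true
  tasks:
    - name: Ensure /var is mounted from its own partition
      mount:
        path: /var
        src: /dev/sda2        # hypothetical device; use your actual partition
        fstype: ext4
        opts: defaults
        state: mounted        # mounts it now and adds an /etc/fstab entry
```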
Note: I know a lot of people would do these mounts at OS install time, so this is moot, but I am using a VM from DigitalOcean, which doesn't give you that option, so you have to do it all from the CLI.
Hi andrefecto. We had a lot of discussion regarding this topic. Long story short: partitions should be defined during setup, not by a hardening role. IMHO, these rules for partitions should not apply to cloud images anyway. There is usually just one volatile partition for the OS and a second block device for persistence if necessary.
@florianutz Thanks for the response. I agree that they shouldn't be done on cloud images; after some consideration, I realized that if I tried to resize my VM, it probably wouldn't work because the partition table wouldn't be what it expects. However, for documenting which CIS rules a system isn't compliant with, it would still be helpful to at least warn that those directories aren't separate mounts.
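For that kind of compliance reporting, the same selectattr condition from the playbook above could drive a warning instead of a failure, e.g. with the debug module (a sketch; the rule ID in the message is just illustrative):

```yaml
- hosts: 127.0.0.1
  connection: local
  tasks:
    # Prints a warning in the play output but lets the run continue.
    - name: Warn if /var is not a separate partition
      debug:
        msg: "CIS 1.1.5: /var is not mounted as a separate partition"
      when: ansible_mounts | selectattr('mount', 'equalto', '/var') | list | length == 0
```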