NAS-130126 / 24.10 / Wipe unused boot-pool disks #89
Conversation
I have been working on this for a few days now. This won't fix the issue. The only way to reliably remove the zpool labels is to run `zpool labelclear -f /dev/disk<p1/2/3>`.
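Below is a rough sketch of what a per-partition labelclear could look like in the installer's async style. It is illustrative only: the `run()` stand-in, the `partitions_of()` helper, and the callback messages are assumptions, not code from this PR or from the eventual fix.

```python
# Hypothetical sketch, not this PR's code: clear ZFS labels from a disk and
# all of its partitions before reformatting. run() is a minimal stand-in for
# the installer's own subprocess helper; partitions_of() is an assumption.
import asyncio
import os
import subprocess


async def run(args, check=True):
    # Run a command in a worker thread and return the CompletedProcess.
    return await asyncio.to_thread(
        subprocess.run, args, capture_output=True, text=True, check=check
    )


def partitions_of(device):
    # Partition names (sda1, nvme0n1p1, ...) appear as subdirectories of
    # /sys/block/<disk>, so map /dev/sda -> ["/dev/sda1", "/dev/sda2", ...].
    name = os.path.basename(device)
    return sorted(
        f"/dev/{entry}"
        for entry in os.listdir(f"/sys/block/{name}")
        if entry.startswith(name)
    )


async def clear_zpool_labels(device, callback):
    # ZFS keeps redundant label copies at the start and end of every vdev,
    # so `zpool labelclear -f` each partition (and the whole disk) explicitly;
    # wiping the partition table alone leaves those labels intact.
    for target in [device, *partitions_of(device)]:
        result = await run(["zpool", "labelclear", "-f", target], check=False)
        if result.returncode != 0:
            callback(0, f"No zpool label cleared on {target}: {result.stderr.rstrip()}")
```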
truenas_installer/install.py
Outdated
@@ -42,14 +46,18 @@ async def install(disks, set_pmbr, authentication, post_install, sql, callback):
         raise InstallError(f"Command {' '.join(e.cmd)} failed:\n{e.stderr.rstrip()}")


-async def format_disk(device, set_pmbr, callback):
+async def wipe_disk(device, callback):
+    if (result := await run(["wipefs", "-a", device], check=False)).returncode != 0:
+        callback(0, f"Warning: unable to wipe partition table for {device}: {result.stderr.rstrip()}")
+
+    # Erase both typical metadata area.
+    await run(["sgdisk", "-Z", device], check=False)
Furthermore, the double call to `sgdisk -Z` is completely unnecessary since `wipefs -a` does everything we need it to.
@yocalebo IIRC we agreed on this general idea some time ago. The only thing that needs to be fixed is the way we wipe the disks.
Sorry, so what are we trying to do here? You want to wipe any disk that has a `boot-pool` on it?
@yocalebo I am offering a UI to do it. I worked under the assumption that the way we wipe the disk is correct. If it is not, I can either make changes here (wipe each partition individually?) or rebase on top of your code that fixes the issue.
I see, so it just so happens that you're working on the same area where there is a bug. The function we have right now wipes partition tables, but it doesn't touch any filesystem labels. The issue that was found by QE recently is kind of annoying: they fresh install 24.04.2 and choose to install with no swap, then fresh install again with swap. Because of where ZFS puts the zpool label information, the swap partition still has a zpool label on it.

My branch doesn't have many changes. It simply removes the
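For anyone reproducing this, a hypothetical diagnostic (not part of either branch) is to ask `zdb` to dump whatever label survives on the old swap partition; ZFS writes redundant label copies near the start and end of the device, which is why a plain partition-table wipe leaves them behind. The partition path below is an example only.

```python
# Hypothetical diagnostic, not part of either branch: dump any surviving ZFS
# label from a partition. A stale "boot-pool" name showing up here on the old
# swap partition is exactly the situation described above.
import subprocess


def dump_zfs_label(partition):
    # zdb -l reads the on-disk labels directly and exits non-zero when it
    # cannot find a valid one, so any stdout means a label is still present.
    result = subprocess.run(["zdb", "-l", partition], capture_output=True, text=True)
    return result.stdout if result.returncode == 0 else None


if __name__ == "__main__":
    label = dump_zfs_label("/dev/sda3")  # example partition path, adjust as needed
    print(label or "no ZFS label found")
```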
Btw, I did some historical investigation and found a commit you made here: https://ixsystems.atlassian.net/browse/NAS-108809, but this has never worked.
Force-pushed from 7e359ad to 1a1f02b.
@yocalebo I tested it and it works.
 with installation_lock:
     try:
         if not os.path.exists("/etc/hostid"):
             await run(["zgenhostid"])

-        for disk in disks:
+        for disk in destination_disks:
You've changed it so that you're iterating `destination_disks`, but you're passing the `disks` list object into `format_disks`. This is sloppy. The same goes for `for disk in wipe_disks`. If you kept those there for logging purposes, then we should move the logging into `format_disk` and `wipe_disk`. Also, because we're now dealing with multiple disks, we should rename the functions to `wipe_disks` and `format_disks` accordingly.
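For reference, a minimal sketch of what those plural helpers could look like, assuming the existing per-device `wipe_disk()` and `format_disk()` coroutines from the diff above and disk names like `sda` that need a `/dev/` prefix (both assumptions, not the merged code):

```python
# Illustrative sketch only -- not the merged implementation.
# Assumes the per-device wipe_disk()/format_disk() coroutines shown in the
# diff above, and that `disks` holds bare names such as "sda".


async def wipe_disks(disks, callback):
    # Per-disk progress logging lives here, so callers just pass the list
    # they actually iterate (e.g. the unused boot-pool disks).
    for disk in disks:
        callback(0, f"Wiping disk {disk}")
        await wipe_disk(f"/dev/{disk}", callback)


async def format_disks(disks, set_pmbr, callback):
    for disk in disks:
        callback(0, f"Formatting disk {disk}")
        await format_disk(f"/dev/{disk}", set_pmbr, callback)
```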
Force-pushed from cac2ccb to 27753cf.
Force-pushed from 27753cf to 587bc0b.
This PR has been merged and conversations have been locked.
No description provided.