# Fix(zpool): Ensure consistent device path normalization for idempotency #11020
base: main
Changes from commits b476b2f, 5ad795d, 72afd38, ed52c1f
Changelog fragment:

```diff
@@ -0,0 +1,2 @@
+bugfixes:
+  - zpool - idempotency failed when canonical device IDs were used; the fix now ensures consistent device path normalization (https://github.com/ansible-collections/community.general/pull/11020).
```
```diff
@@ -310,11 +310,16 @@ def base_device(self, device):
         if match:
             return match.group(1)

+        # disk/by-id drives
+        match = re.match(r'^(/dev/disk/by-id/(.*))-part\d+$', device)
+        if match:
+            return match.group(1)
+
         return device

     def get_current_layout(self):
-        with self.zpool_runner('subcommand full_paths real_paths name', check_rc=True) as ctx:
-            rc, stdout, stderr = ctx.run(subcommand='status', full_paths=True, real_paths=True)
```
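As a standalone illustration of the normalization this hunk adds, here is a minimal sketch of stripping a partition suffix back to the parent device. The `/dev/disk/by-id/...-partN` regex follows the diff above; the plain `/dev/sdXN` pattern, the `_PART_PATTERNS` name, and the sample paths are assumptions for illustration, not the module's actual code.

```python
import re

# Patterns that map a partition path back to its parent device.
# The by-id form mirrors the regex in the diff; the /dev/sdXN
# form is an assumed addition for plain block-device nodes.
_PART_PATTERNS = [
    re.compile(r'^(/dev/disk/by-id/.*)-part\d+$'),  # e.g. .../ata-FOO-part1
    re.compile(r'^(/dev/[a-z]+)\d+$'),              # e.g. /dev/sda1 (assumed)
]


def base_device(device):
    """Return the parent device for a partition path, else the path unchanged."""
    for pattern in _PART_PATTERNS:
        match = pattern.match(device)
        if match:
            return match.group(1)
    return device


print(base_device('/dev/disk/by-id/ata-Samsung_SSD-part1'))  # /dev/disk/by-id/ata-Samsung_SSD
print(base_device('/dev/sdb2'))                              # /dev/sdb
print(base_device('/dev/disk/by-id/ata-Samsung_SSD'))        # unchanged
```

Normalizing both the user-supplied paths and the paths reported by `zpool status` to the same base form is what makes the idempotency comparison stable.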
|
Collaborator: I'm wondering whether this change (and the one further below) could break some uses of the module unrelated to […]

---

I've been hoping to take some time to look at this a bit more, but I just haven't been able to find any lately, so my comments are based solely on browsing the changes and not testing, but my instincts are that: […]

---

Second that. A fix for the underlying problem should aim to support all kinds of […]

---

@felixfontein, I thought about that, but I do not think it will cause problems. Not here, where we only read the current layout, and not further down. As an admin, when creating a ZFS pool on a server with lots of disks, I expect disks to fail over time, to disappear or be replaced, and so on. So when I create the pool, I make sure to use device links that will survive those changes. Enforcing […] I checked the history of the module to see if the translation to "real_paths" was added in a commit that would explain the rationale, but it seems […]

---

@dthomson-triumf, @n3ph, I briefly considered extending the regex match to include all […] I know more-or-less what to expect in […] I don't have much experience with ZFS, but the module does not resolve symlinks when creating a pool, and zpools seem to work nicely with those symlinked devices.

---

@gumbo2k IMO the risk is fairly low, considering that these device paths are values that would have been entered by a person (or a lookup module or something, but I think the point remains). However, I think the best solution is what you mentioned in your previous reply to @felixfontein. I was kind of fuzzy on why the device names in the zpool differed from the values that were entered. I didn't realize that it was the zpool module that was trying to change the device name via the […]

---

I think most of these devices are managed by udev and device mapper anyway. In general, I would leave it up to the user to make sense out of what is going to be configured. Underlying LUN specifics are nothing for an Ansible module to assume, but for the user to consider. Edit: Though, thank you for looking into this issue.. 🙇🏼
```diff
+        with self.zpool_runner('subcommand full_paths name', check_rc=True) as ctx:
+            rc, stdout, stderr = ctx.run(subcommand='status', full_paths=True)

         vdevs = []
         current = None
```
|
|
@@ -433,8 +438,8 @@ def add_vdevs(self): | |
| return {'prepared': stdout} | ||
|
|
||
| def list_vdevs_with_names(self): | ||
| with self.zpool_runner('subcommand full_paths real_paths name', check_rc=True) as ctx: | ||
| rc, stdout, stderr = ctx.run(subcommand='status', full_paths=True, real_paths=True) | ||
| with self.zpool_runner('subcommand full_paths name', check_rc=True) as ctx: | ||
| rc, stdout, stderr = ctx.run(subcommand='status', full_paths=True) | ||
| in_cfg = False | ||
| saw_pool = False | ||
| vdevs = [] | ||
|
|
||
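The `in_cfg` / `saw_pool` flags above suggest a line-by-line scan of the `config:` section of `zpool status` output. A hypothetical sketch of that parsing style follows; the `list_devices` helper, the sample output, and the stop-at-`errors:` rule are illustrative assumptions, not the module's actual implementation.

```python
# Abbreviated, hand-written sample of `zpool status` output (assumed layout).
SAMPLE_STATUS = """\
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            /dev/disk/by-id/ata-A  ONLINE  0 0 0
            /dev/disk/by-id/ata-B  ONLINE  0 0 0

errors: No known data errors
"""


def list_devices(status_text, pool_name):
    """Collect device paths listed under the named pool in the config section."""
    in_cfg = False      # inside the `config:` block
    saw_pool = False    # past the pool's own row
    devices = []
    for line in status_text.splitlines():
        stripped = line.strip()
        if stripped.startswith('config:'):
            in_cfg = True
            continue
        if not in_cfg or not stripped:
            continue
        if stripped.startswith('errors:'):
            break
        name = stripped.split()[0]
        if name == 'NAME':        # column header row
            continue
        if name == pool_name:
            saw_pool = True
            continue
        if saw_pool and name.startswith('/dev/'):
            devices.append(name)
    return devices


print(list_devices(SAMPLE_STATUS, 'tank'))
# ['/dev/disk/by-id/ata-A', '/dev/disk/by-id/ata-B']
```

With `real_paths` no longer requested, the paths collected here stay in whatever form the pool was created with (e.g. `/dev/disk/by-id/...` symlinks), so they can be compared directly against the user-supplied layout.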