Discussion:
[Libstoragemgmt-devel] Behavior when deleting a pool
Tony Asleson
2014-03-24 04:02:21 UTC
Hi Gris,

When you have an array that supports pool creation/deletion, what have
you observed when volumes exist on the pool and you try to delete the
pool? Does it fail because volumes exist, or does it go ahead and delete
the pool along with all associated volumes?

Thanks,
Tony
Gris Ge
2014-03-24 07:20:08 UTC
Post by Tony Asleson
Hi Gris,
When you have an array that supports pool creation/deletion, what have
you observed when volumes exist on the pool and you try to delete the
pool? Does it fail because volumes exist, or does it go ahead and delete
the pool along with all associated volumes?
Thanks,
Tony
That is exactly what I want to raise to discuss next IRC meeting.

For the SMI-S plugin, it will fail because volumes exist.

The same question also applies to volume deletion: when there is LUN
masking against the volume to be deleted, it will fail.

If we require the user to clean up the dependencies, we need to document
the details of volume and pool dependencies when deleting a pool.

I would suggest deleting whatever is preventing the pool/volume/fs deletion.
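To make the two options concrete, here is a minimal sketch of "fail on
dependencies" vs. "force/cascade delete" for a pool. The names used here
(POOL_HAS_VOLUME, LsmError, pool_delete) are placeholders modeled on this
thread, not the actual libstoragemgmt API:

```python
# Hypothetical sketch: fail when volumes exist vs. cascade-delete them.
# All names here are illustrative placeholders, not the real lsm API.

POOL_HAS_VOLUME = 200  # placeholder error code


class LsmError(Exception):
    def __init__(self, code, msg):
        super().__init__(msg)
        self.code = code


def pool_delete(pool_id, list_volumes, volume_delete, force=False):
    """Delete a pool; refuse if volumes remain unless force is set."""
    children = list_volumes(pool_id)
    if children and not force:
        raise LsmError(POOL_HAS_VOLUME,
                       "pool %s still has %d volume(s)"
                       % (pool_id, len(children)))
    for vol in children:
        volume_delete(vol)  # cascade: remove dependents first
    # ... the actual pool removal would follow here ...
    return True
```

With force=False the caller gets a clear error to act on; with force=True
the plugin does the cleanup itself, which is the behavior suggested above.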

Any feedback?

Thanks.
Best regards.
--
Gris Ge
Tony Asleson
2014-03-24 15:15:04 UTC
Post by Gris Ge
Post by Tony Asleson
Hi Gris,
When you have an array that supports pool creation/deletion, what have
you observed when volumes exist on the pool and you try to delete the
pool? Does it fail because volumes exist, or does it go ahead and delete
the pool along with all associated volumes?
Thanks,
Tony
That is exactly what I want to raise to discuss next IRC meeting.
For the SMI-S plugin, it will fail because volumes exist.
OK, that is what I thought; this would be the 'safest' option.
Post by Gris Ge
The same question also applies to volume deletion: when there is LUN
masking against the volume to be deleted, it will fail.
If we require the user to clean up the dependencies, we need to document
the details of volume and pool dependencies when deleting a pool.
I would suggest deleting whatever is preventing the pool/volume/fs deletion.
Any feedback?
Users have the ability to see:

a. What initiators/groups have been granted access to a given volume
b. What volumes exist for a given pool

Thus the user can remove all of these dependencies so that they can
delete it; these are the easy ones. However, for the actual volume and
some types of thin provisioning, you can't simply delete a volume if it
has one or more dependent children. That is why I added the
child_dependency & child_dependency_rm methods. The dependency chain
for a volume and the actual blocks it has allocated could be quite
involved, and thus require someone who is knowledgeable about the array
to delete. If we choose to remove these methods, then we need the
ability to express the dependency chain and communicate it back to the
caller so that they can manually walk it, replicating blocks for volumes
as needed.
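A rough sketch of how a caller might use the two methods named above to
drain child dependencies before deleting a volume. child_dependency and
child_dependency_rm are the method names from this thread; the Array class,
signatures, and semantics here are assumed for illustration only:

```python
# Illustrative model of an array where thin clones depend on a base
# volume's blocks. Not the real libstoragemgmt API.

class Array:
    def __init__(self, deps):
        # deps: volume name -> list of dependent child volumes
        self._deps = deps

    def volume_child_dependency(self, vol):
        """Return True if other volumes depend on vol's blocks."""
        return bool(self._deps.get(vol))

    def volume_child_dependency_rm(self, vol):
        """Break the dependency, e.g. by fully allocating the children."""
        self._deps[vol] = []

    def volume_delete(self, vol):
        if self.volume_child_dependency(vol):
            raise RuntimeError("volume %s has dependent children" % vol)
        self._deps.pop(vol, None)


def safe_volume_delete(array, vol):
    # Break child dependencies first, then delete the volume itself.
    if array.volume_child_dependency(vol):
        array.volume_child_dependency_rm(vol)
    array.volume_delete(vol)
```

The point of keeping the two methods is exactly this: the caller never has
to understand the dependency chain, only ask the array to sever it.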

My initial thought is that we let the end user of the library remove
individual access for each volume they want to delete, and remove all the
volumes and file systems for a pool that they want to delete. It seems
like the safest default option.

In addition, if a user did this while the block device or fs was in use,
it would be difficult to complete without throwing all kinds of errors
up the storage stack or potentially causing systems to hang/fail.


Regards,
Tony
Gris Ge
2014-03-25 02:47:57 UTC
Post by Tony Asleson
Thus the user can remove all of these dependencies so that they can
delete it; these are the easy ones. However, for the actual volume and
some types of thin provisioning, you can't simply delete a volume if it
has one or more dependent children. That is why I added the
child_dependency & child_dependency_rm methods. The dependency chain
for a volume and the actual blocks it has allocated could be quite
involved, and thus require someone who is knowledgeable about the array
to delete. If we choose to remove these methods, then we need the
ability to express the dependency chain and communicate it back to the
caller so that they can manually walk it, replicating blocks for volumes
as needed.
We can raise these errors to guide the user through dependency cleanup.
Pool deletion:
ErrorNumber.POOL_HAS_VOLUME

Volume deletion:
ErrorNumber.VOLUME_MASKED
ErrorNumber.SNAPSHOT_FOUND
ErrorNumber.REPLICA_FOUND

FS deletion:
#TODO
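A sketch of the error-driven cleanup loop these codes would enable on the
caller's side. The ErrorNumber values and the client's cleanup calls
(unmask_all, delete_snapshots, break_replicas) are placeholders for
illustration, not the real libstoragemgmt API:

```python
# Hypothetical caller loop: retry deletion, resolving each reported
# dependency in turn. All names are placeholders, not the real lsm API.

POOL_HAS_VOLUME, VOLUME_MASKED, SNAPSHOT_FOUND, REPLICA_FOUND = range(4)


class LsmError(Exception):
    def __init__(self, code):
        super().__init__(str(code))
        self.code = code


def delete_volume_with_cleanup(client, vol):
    """Keep retrying the delete, fixing whichever dependency is reported."""
    while True:
        try:
            return client.volume_delete(vol)
        except LsmError as e:
            if e.code == VOLUME_MASKED:
                client.unmask_all(vol)       # revoke access-group grants
            elif e.code == SNAPSHOT_FOUND:
                client.delete_snapshots(vol)
            elif e.code == REPLICA_FOUND:
                client.break_replicas(vol)   # e.g. split/replicate blocks
            else:
                raise                        # unknown error, give up
```

This keeps the plugin's delete strict and safe while still giving callers
a mechanical path through the cleanup.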

Thanks.
--
Gris Ge
Tony Asleson
2014-03-25 16:05:42 UTC
Post by Gris Ge
Post by Tony Asleson
Thus the user can remove all of these dependencies so that they can
delete it; these are the easy ones. However, for the actual volume and
some types of thin provisioning, you can't simply delete a volume if it
has one or more dependent children. That is why I added the
child_dependency & child_dependency_rm methods. The dependency chain
for a volume and the actual blocks it has allocated could be quite
involved, and thus require someone who is knowledgeable about the array
to delete. If we choose to remove these methods, then we need the
ability to express the dependency chain and communicate it back to the
caller so that they can manually walk it, replicating blocks for volumes
as needed.
We can raise these errors to guide the user through dependency cleanup.
ErrorNumber.POOL_HAS_VOLUME
We have this information, so yes, the user can handle it.
Post by Gris Ge
ErrorNumber.VOLUME_MASKED
We have this information, so yes, the user can handle it.
Post by Gris Ge
ErrorNumber.SNAPSHOT_FOUND
We have this for FS: the ability to list snapshots for an FS. We don't
have it for volumes.
Post by Gris Ge
ErrorNumber.REPLICA_FOUND
We don't have a way to give the user the dependency chain; that is
something I was trying to avoid, as elaborated in my comments above.
Post by Gris Ge
#TODO
Same as deleting a volume if the array has dependencies between
replicated file systems.

For NetApp and file systems, we call a method to split the clone.

I don't remember what SMI-S has for this for volumes; do you recall how
they handle it?

Regards,
Tony
