I'm leaning towards keeping at least
UNSUPPORTED/SUPPORTED_OFFLINE/SUPPORTED, and adding the ability to
offline/online a volume and FS from the command line. We are missing
the FS call for this.
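For illustration, a minimal sketch of what this could look like from the
client side; fs_offline()/fs_online() are assumed names (these are the
missing FS calls), mirroring the volume pair:

    from lsm import Client

    c = Client('sim://')       # simulator URI, for illustration
    vol = c.volumes()[0]
    c.volume_offline(vol)      # status -> SUPPORTED_OFFLINE
    c.volume_online(vol)       # status -> SUPPORTED

    fs = c.fs()[0]             # assuming fs() enumerates file systems
    c.fs_offline(fs)           # hypothetical: does not exist yet
    c.fs_online(fs)            # hypothetical: does not exist yet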

Thoughts/Suggestions?
* Pool
- Thinking we should make element_type mandatory; would this be an issue?
- ELEMENT_TYPE_SYS_RESERVED: I believe we should remove this and not
return pools of this type to the user, so it wouldn't be needed.
NetApp ONTAP allows the user to use the system pool (aggr0); I don't
think we should override that.
IBM XIV only has a system pool, and every other pool is a sub-pool of
it. If we hide the system pool, there is no way for us to know the
total usable size of an IBM XIV. (Actually, the current code does not
show the system pool, but I intend to show it in the future; it helps
the user determine the available space when creating a sub-pool.)
I am OK with promoting 'element_type' to mandatory and removing
'ELEMENT_TYPE_SYS_RESERVED', but I will not agree to hiding the system
pool.
OK, let's go with making element_type mandatory. I didn't want to
expose things people could use, but if this is needed to create new
pools on XIV, so be it.
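To make the XIV point concrete, a small sketch of why the system pool
needs to stay visible (total_space/free_space are assumed Pool
attributes):

    from lsm import Client

    c = Client('sim://')
    for p in c.pools():
        # On XIV every user pool is a sub-pool of the single system
        # pool, so only the system pool's numbers show the array's
        # total usable capacity and how much is left for a new
        # sub-pool.
        print(p.name, p.total_space, p.free_space)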
* pool_create_from_disks: We need a way to tell which disks can be used
for this method.
lsm.Disk.DISK_ROLE_POOL_MEMBER
    # Acting as a pool member.
lsm.Disk.DISK_ROLE_SPARE
    # Acting as a spare disk.
lsm.Disk.DISK_ROLE_FREE
    # Can be used for pool creation (if supported) or as a spare disk
    # (if supported).
lsm.Disk.DISK_ROLE_SYSTEM_RESERVE
    # Not usable.
lsm.Disk.DISK_ROLE_UNKNOWN
    # Role could not be determined.
These would help. How would a user be able to add the same disk to
multiple pools, as long as no single pool consumes the entire disk?

For example, you have (4) 2TB disks which are empty (ROLE_FREE). You
should be able to use the same disks to make a ~3.2TB RAID5 and a ~2TB
RAID10. What would the enumerated type be for the disks after the first
pool create? Can we accommodate this use case?
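As a sketch of how a caller might use the proposed roles (disk_role is
an assumed attribute carrying the constants above, not a shipped API):

    from lsm import Client, Disk

    c = Client('sim://')
    # Disks eligible for pool_create_from_disks under the proposal.
    # Note a single-valued role cannot express "pool member with space
    # left over", which is exactly the partial-disk question above.
    candidates = [d for d in c.disks()
                  if d.disk_role == Disk.DISK_ROLE_FREE]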
* lsm.Client.volumes()
- Same question on status_info as above.
- Are we going to support all search fields for all plug-ins?
Yes. To save search time and transferred data, the plug-in should do
it. To reduce the load on plug-in developers, we could allow a plug-in
to set a flag (or some other mechanism) to inform _client.py whether it
can handle searching (during plugin_startup(), maybe). If not,
_client.py will fall back to the routine way of searching/filtering the
returned data.
If we are always going to support filtering and the plug-in cannot
offload the search to the array, we should provide generic filtering
functions for plug-in writers. This will reduce traffic over the JSON
RPC interface: if you're only asking for one item and the plug-in has
to transfer everything from the array across the socket anyway, it
would be better not to transfer 100K items across the RPC for the
client-side code to search and throw away.
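A sketch of the kind of generic helper we could ship for plug-in
writers who cannot push the search down (search_key/search_value are
the assumed query parameters):

    def search_filter(search_key, search_value, items):
        # Keep items whose attribute `search_key` equals
        # `search_value`; a search_key of None means return everything.
        if search_key is None:
            return items
        return [i for i in items
                if getattr(i, search_key, None) == search_value]

    # A plug-in that cannot offload the search would end its volumes()
    # implementation with:
    #     return search_filter(search_key, search_value, all_volumes)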

I'm also thinking we may want to add a capability flag denoting that
array-side searching is supported. That way a user could determine that
it would probably be faster to retrieve all items once if they want to
run a number of different searches on the data set themselves, instead
of issuing queries that result in the data being retrieved multiple
times.
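Client-side usage might then look like this; VOLUMES_SEARCH is a
hypothetical capability name, and the search_key/search_value
parameters are the proposal above:

    from lsm import Client, Capabilities

    c = Client('sim://')
    system = c.systems()[0]
    pool = c.pools()[0]
    cap = c.capabilities(system)

    if cap.supported(Capabilities.VOLUMES_SEARCH):  # hypothetical flag
        vols = c.volumes(search_key='pool_id', search_value=pool.id)
    else:
        # Array cannot search: fetch once, then run all the different
        # searches locally instead of re-fetching per query.
        vols = [v for v in c.volumes() if v.pool_id == pool.id]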

Actually, this might be a good time to revisit adding a generic caching
support layer, but we should probably defer that until later, as it
will be hidden from the user's view.
* VolumeReplication: wondering if we can come up with a better name?
Some documents say 'Replica'.
I am not a native English speaker and have no creative mind here. Do
you have any name in mind?
Let's go with this for now, as I can't think of anything better at the
moment. If we come up with something better before the API freeze we
will do a rename.

Anyone else have some suggestions?
BTW, we still have FsSnapshot and FsClone; should we merge them into
'FsReplica' (which also needs a better name)?
Yes; perhaps we could use FsDeltaRO/FsDeltaRW, or are you suggesting we
change to a single call with a copy type?
* volume_replication_giveback: What arrays support this? What are the
differences between volume_replication_restore and
volume_replication_giveback, other than the supported replication
types?
Restore copies the target volume's data back to the source volume.
Failover and giveback are only for SYNC_REMOTE and ASYNC_REMOTE
(technically SYNC_LOCAL and ASYNC_LOCAL could support this as well).
Normal operation:
    Source volume is RW.
    Target volume is RO or NO_ACCESS.
    Syncing with SYNC or ASYNC.
After failover:
    Source volume is NO_ACCESS or RO, or the whole site is completely
    down.
    Target volume is RW. The DR site is acting as the production
    environment, handling all user data.
    No sync.
During giveback:
    Source volume is NO_ACCESS, RO, or RW, depending on vendor design.
    Target volume is NO_ACCESS, RO, or RW, depending on vendor design.
    Data is copied from the DR site target volume to the PR site
    source volume.
EMC VMAX/Symmetrix SRDF and SRDF/A support failover and giveback.
Since this use case is pretty common, I believe NetApp supports it
also.
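Put as a call sequence (all method names here are the proposals under
discussion; volume_replication_failover and volume_replication_create
are assumed counterparts, and `c` is an lsm.Client as in the earlier
sketches):

    # Normal operation: src RW, tgt RO/NO_ACCESS, syncing SYNC/ASYNC.
    rep = c.volume_replication_create(src, tgt, 'SYNC_REMOTE')

    # PR site lost: promote the DR copy; tgt becomes RW, serves I/O.
    c.volume_replication_failover(rep)

    # PR site repaired: copy the changes from the DR target back to
    # the PR source and return access to the normal state.
    c.volume_replication_giveback(rep)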
So giveback is really more than just making the data the same; it also
changes the access to the source and target volumes. Are these directly
modeled after SMI-S? I haven't explored the docs on these in depth.

Thanks,
Tony
