Discussion: [Libstoragemgmt-devel] Some missing functionality
Tony Asleson
2014-04-07 18:40:15 UTC
While working on improving plug-in tests it became apparent that we
could use the following:

- A way to retrieve the supported target interfaces and their associated
IDs (FC WWPN, iSCSI IQN, IPs, etc.). I'm thinking this should be added
to the system class, thoughts? Perhaps we rename the initiator class to
interface and use it as a list in the system class?

- A way to drive IO (block & file), create files, and compare block devices
& file systems to ensure that array API calls are doing what we expect
of them. For example, with the file system API we can copy/clone individual
files, so having the ability to create files, call into the array to copy,
and then verify the result would be ideal. What suggestions do people
have for existing tools to do this? I guess we could just write a
simple Python library to do a lot of this without much effort.
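
Something like the sketch below is roughly what I'm picturing for that
simple library; the helper names and paths are made up, just to show
the idea:

import hashlib

def write_pattern(path, size, byte=0xAA):
    """Fill a file or block device with a repeating byte pattern."""
    block = bytearray([byte]) * 4096
    with open(path, 'wb') as f:
        written = 0
        while written < size:
            chunk = block[:min(len(block), size - written)]
            f.write(chunk)
            written += len(chunk)

def checksum(path, size):
    """SHA-256 of the first 'size' bytes of a file or block device."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        remaining = size
        while remaining > 0:
            data = f.read(min(4096, remaining))
            if not data:
                break
            h.update(data)
            remaining -= len(data)
    return h.hexdigest()

# After asking the array to copy/clone, compare source and copy:
# assert checksum('/mnt/src/file', size) == checksum('/mnt/dst/file', size)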

Thanks,
Tony
Gris Ge
2014-04-08 01:51:37 UTC
Post by Tony Asleson
While working on improving plug-in tests it became apparent that we
- A way to retrieve the supported target interfaces and their associated
IDs (FC WWPN, iSCSI IQN, IPs, etc.). I'm thinking this should be added
to the system class, thoughts? Perhaps we rename the initiator class to
interface and use it as a list in the system class?
I was planning to purge the initiator class. We can discuss it on a demo
patch.

How about using 'FrontEnd'? It would represent the storage system ports that
host initiators connect to. (BackEnd would be the ports the storage system
uses to connect to disks or disk enclosures.)

Meanwhile, a class 'Controller' could also be introduced to state the
ownership of FrontEnd, Disk[1], Volume[2], etc. Controller state is
very useful when failover is involved.

[1] NetApp has its disks owned by different filers/controllers.
[2] EMC/NetApp have their LUNs/volumes owned by different controllers.
It's very important information for ALUA or active/passive failover.
For active/active arrays (like EMC VMAX/DMAX), the volume and disk owner
could be all available controllers, which indicates I/O can be handled by
any controller.
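
A rough sketch of what I mean (class and attribute names below are only
placeholders for discussion, not existing libstoragemgmt API):

class Controller(object):
    """A controller/filer within a storage system."""
    STATUS_OK = 1
    STATUS_DEGRADED = 2

    def __init__(self, _id, name, status, system_id):
        self.id = _id
        self.name = name
        self.status = status
        self.system_id = system_id

class FrontEnd(object):
    """A storage-system port that host initiators connect to."""
    TYPE_FC = 1
    TYPE_ISCSI = 2

    def __init__(self, _id, port_type, address, controller_id, system_id):
        self.id = _id
        self.port_type = port_type          # TYPE_FC, TYPE_ISCSI, ...
        self.address = address              # WWPN, IQN or IP address
        self.controller_id = controller_id  # owning controller
        self.system_id = system_id

# Disk and Volume would likewise carry a controller_id (or a list of
# controller ids for active/active arrays) to express ownership.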
Post by Tony Asleson
- A way to drive IO (block & file), create files, and compare block devices
& file systems to ensure that array API calls are doing what we expect
of them. For example, with the file system API we can copy/clone individual
files, so having the ability to create files, call into the array to copy,
and then verify the result would be ideal. What suggestions do people
have for existing tools to do this? I guess we could just write a
simple Python library to do a lot of this without much effort.
Are you referring to offloaded data transfer (ODX)?
https://lwn.net/Articles/548347/
http://ashwinpawar.blogspot.com/2013/07/offloaded-data-transfers-odx-support.html
https://lkml.org/lkml/2013/8/23/259

I was expecting this to be handled by the filesystem or the OS SCSI/block layer.
Post by Tony Asleson
Thanks,
Tony
--
Gris Ge
Tony Asleson
2014-04-08 14:03:52 UTC
Post by Gris Ge
Post by Tony Asleson
While working on improving plug-in tests it became apparent that we
- A way to retrieve the supported target interfaces and their associated
IDs (FC WWPN, iSCSI IQN, IPs, etc.). I'm thinking this should be added
to the system class, thoughts? Perhaps we rename the initiator class to
interface and use it as a list in the system class?
I was planning to purge the initiator class. We can discuss it on a demo
patch.
That's why I said rename the initiator class. It would effectively cease
to exist then.
Post by Gris Ge
How about using 'FrontEnd'? It would represent the storage system ports that
host initiators connect to. (BackEnd would be the ports the storage system
uses to connect to disks or disk enclosures.)
I don't have specific names in mind; we just need something that
represents which transports an array supports and how to address
them. When working on the plug-in tester I want to be able to query
that information so that I can mask & map an appropriate initiator to
the array without having to know what the array supports through other means.
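
For example, the tester could do something roughly like this;
target_ports() and the port type values are placeholders for whatever we
end up naming them, and the initiator IDs are made up:

TEST_WWPN = '10:00:00:00:c9:12:34:56'          # made-up test initiator
TEST_IQN = 'iqn.1994-05.com.example:lsm-test'  # made-up test initiator

def pick_test_initiator(client, system):
    """Choose a test initiator ID/type based on the array's target ports."""
    ports = client.target_ports(system)        # hypothetical call
    types = set(p.port_type for p in ports)
    if 'FC' in types:
        return TEST_WWPN, 'WWPN'
    if 'ISCSI' in types:
        return TEST_IQN, 'ISCSI_IQN'
    raise RuntimeError('Array reports no target ports the tester can use')

# ...then mask & map a test volume to the returned initiator without any
# out-of-band knowledge of what the array supports.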
Post by Gris Ge
Meanwhile, a class 'Controller' could also be introduced to state the
ownership of FrontEnd, Disk[1], Volume[2], etc. Controller state is
very useful when failover is involved.
[1] NetApp has its disks owned by different filers/controllers.
[2] EMC/NetApp have their LUNs/volumes owned by different controllers.
It's very important information for ALUA or active/passive failover.
For active/active arrays (like EMC VMAX/DMAX), the volume and disk owner
could be all available controllers, which indicates I/O can be handled by
any controller.
Let's work on this after we get the other stuff ironed out. We also need
to work on library functionality for the initiator side and multipath.
Post by Gris Ge
Post by Tony Asleson
- A way to drive IO (block & file), create files, and compare block devices
& file systems to ensure that array API calls are doing what we expect
of them. For example, with the file system API we can copy/clone individual
files, so having the ability to create files, call into the array to copy,
and then verify the result would be ideal. What suggestions do people
have for existing tools to do this? I guess we could just write a
simple Python library to do a lot of this without much effort.
Are you referring to offloaded data transfer (ODX)?
No
Post by Gris Ge
https://lwn.net/Articles/548347/
http://ashwinpawar.blogspot.com/2013/07/offloaded-data-transfers-odx-support.html
https://lkml.org/lkml/2013/8/23/259
I was expecting this to be handled by the filesystem or the OS SCSI/block layer.
Sorry, I wasn't clear in my request. This is purely from a testing
perspective, not features needed by the library. We don't have to do
exhaustive tests, just quick checks to ensure we are providing
consistent behavior. For example, when we report to a user that a copy
is complete, it would be good to verify during testing that the first,
middle, and last LBAs actually contain the correct data.
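
Something along these lines is all I have in mind (device paths below
are just placeholders):

def read_lba(dev, lba, block_size=512):
    """Read one logical block from a block device."""
    with open(dev, 'rb') as f:
        f.seek(lba * block_size)
        return f.read(block_size)

def spot_check_copy(src_dev, dst_dev, total_lbas, block_size=512):
    """Compare the first, middle and last LBAs of a source and its copy."""
    for lba in (0, total_lbas // 2, total_lbas - 1):
        if read_lba(src_dev, lba, block_size) != read_lba(dst_dev, lba, block_size):
            return False
    return True

# e.g. spot_check_copy('/dev/disk/by-id/src', '/dev/disk/by-id/dst', 2097152)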

Regards,
Tony
