Discussion:
[PATCH 00/17] Pool RAID info query and hardware RAID volume creation
Gris Ge
2015-04-16 15:05:23 UTC
* Putting two patch sets into one: the `pool_raid_info()` patch set is just
preparation for the `volume_create_raid()` patch set, so it is easier to
explain the design choices with both together.

* New method `pool_raid_info()` to query pool member information:
* RAID type
* Member type: disk RAID pool or sub-pool.
* Members: disks or parent pools.

* New method `volume_create_raid()` to create a RAID group volume.

* New method `volume_create_raid_cap_get()` to query the RAID types and
strip sizes supported by RAID group volume creation.

* Design notes:
* Why do we need pool_raid_info() when we already have volume_raid_info()?
The `pool_raid_info()` method focuses on the RAID disk or sub-pool layout.

The `volume_raid_info()` method focuses on providing information about host
I/O performance and availability (RAID).

* Why not create a single shareable method for SAN/NAS/DAS RAID
creation?
Please refer to design note of patch 'New feature: Create RAID volume
on hardware RAID.'

* Why is there no RAID creation method for SAN and NAS?
As we found in earlier attempts, the RAID settings of SAN and NAS
storage are too complex to support with any single simple design.

* Please also refer to the design note of the patch 'New feature: Create
RAID volume'.

* Please comment on the GitHub page, thank you.
https://github.com/libstorage/libstoragemgmt/pull/11
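
For reference, a minimal client-side sketch of the pool query (the 'sim://'
URI is a placeholder for your plugin's URI; error handling is trimmed to the
NO_SUPPORT case):

    import lsm

    c = lsm.Client('sim://')
    for pool in c.pools():
        try:
            raid_type, member_type, member_ids = c.pool_raid_info(pool)
        except lsm.LsmError as e:
            if e.code == lsm.ErrorNumber.NO_SUPPORT:
                break
            raise
        if member_type == lsm.Pool.MEMBER_TYPE_DISK:
            print 'Pool %s is built from disks: %s' % (pool.id, member_ids)
        elif member_type == lsm.Pool.MEMBER_TYPE_POOL:
            print 'Pool %s is a sub-pool of: %s' % (pool.id, member_ids)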


Gris Ge (17):
New method: lsm.Client.pool_raid_info()/lsm_pool_raid_info()
lsmcli: Add new command pool-raid-info
Tests: Add test for lsm.Client.pool_raid_info() and
lsm_pool_raid_info().
Simulator Plugin: Add lsm.Client.pool_raid_info() support.
Simulator C plugin: Add lsm_pool_raid_info() support.
ONTAP Plugin: Add lsm.Client.pool_raid_info() support.
MegaRAID Plugin: Add lsm.Client.pool_raid_info() support.
HP Smart Array Plugin: Add lsm.Client.pool_raid_info() support.
Simulator Plugin: Fix pool free space calculation.
MegaRAID Plugin: Change Disk.plugin_data to parse friendly format.
New feature: Create RAID volume on hardware RAID.
lsmcli: New commands for hardware RAID volume creation.
Simulator Plugin: Add hardware RAID volume creation support.
Simulator C plugin: Sync changes of lsm_ops_v1_2 about
lsm_plug_volume_create_raid_cap_get
HP SmartArray Plugin: Add volume creation support.
MegaRAID Plugin: Add volume creation support.
Test: Hardware RAID volume creation.

c_binding/include/libstoragemgmt/libstoragemgmt.h | 89 ++++++
.../libstoragemgmt/libstoragemgmt_capabilities.h | 5 +-
.../include/libstoragemgmt/libstoragemgmt_error.h | 2 +
.../libstoragemgmt/libstoragemgmt_plug_interface.h | 84 ++++++
.../include/libstoragemgmt/libstoragemgmt_types.h | 20 ++
c_binding/lsm_convert.cpp | 41 +++
c_binding/lsm_convert.hpp | 12 +
c_binding/lsm_mgmt.cpp | 167 +++++++++++
c_binding/lsm_plugin_ipc.cpp | 135 ++++++++-
doc/man/lsmcli.1.in | 41 +++
plugin/hpsa/hpsa.py | 212 +++++++++++++-
plugin/megaraid/megaraid.py | 257 ++++++++++++++++-
plugin/ontap/ontap.py | 21 ++
plugin/sim/simarray.py | 309 +++++++++++++--------
plugin/sim/simulator.py | 11 +
plugin/simc/simc_lsmplugin.c | 44 ++-
python_binding/lsm/_client.py | 288 +++++++++++++++++++
python_binding/lsm/_common.py | 3 +
python_binding/lsm/_data.py | 9 +
test/cmdtest.py | 75 +++++
test/plugin_test.py | 60 ++++
test/tester.c | 131 +++++++++
tools/lsmcli/cmdline.py | 84 +++++-
tools/lsmcli/data_display.py | 77 ++++-
24 files changed, 2054 insertions(+), 123 deletions(-)
--
1.8.3.1
Gris Ge
2015-04-16 15:05:24 UTC
* Python part:
* New Python method: lsm.Client.pool_raid_info(pool, flags)
Query the RAID information of a certain pool.
Returns [raid_type, member_type, member_ids]
* For a sub-pool which allocates space from another pool (like a NetApp
ONTAP volume, which allocates space from an ONTAP aggregate), the
member_type will be lsm.Pool.MEMBER_TYPE_POOL.
* For a disk RAID pool which uses whole disks as RAID members (like a
NetApp ONTAP aggregate or an EMC VNX pool), the member_type will be
lsm.Pool.MEMBER_TYPE_DISK.

* New constants:
* lsm.Pool.MEMBER_TYPE_POOL
* lsm.Pool.MEMBER_TYPE_DISK
* lsm.Pool.MEMBER_TYPE_OTHER
* lsm.Pool.MEMBER_TYPE_UNKNOWN

* New capability for this new method:
* lsm.Capabilities.POOL_RAID_INFO

* C part:
* New C functions:
* lsm_pool_raid_info()
For API users.
* lsm_plug_pool_raid_info()
For plugin developers.

* New constants:
* LSM_POOL_MEMBER_TYPE_POOL
* LSM_POOL_MEMBER_TYPE_DISK
* LSM_POOL_MEMBER_TYPE_OTHER
* LSM_POOL_MEMBER_TYPE_UNKNOWN
* LSM_CAP_POOL_RAID_INFO

* For detailed information, please refer to the docstring of the
lsm.Client.pool_raid_info() method in python_binding/lsm/_client.py
and the lsm_pool_raid_info() function in
c_binding/include/libstoragemgmt/libstoragemgmt.h.
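
For plugin developers, the wire contract is simply [int, int, list of
strings]; a hypothetical plugin-side sketch (the RAID type, member type and
disk IDs below are illustrative only):

    from lsm import Pool, Volume

    # Method of an IPlugin subclass; see the simulator patch in this
    # series for a complete implementation.
    def pool_raid_info(self, pool, flags=0):
        # Example: a RAID5 pool built directly from three disks.
        return [Volume.RAID_TYPE_RAID5, Pool.MEMBER_TYPE_DISK,
                ['DISK_ID_0', 'DISK_ID_1', 'DISK_ID_2']]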

Changes in V2:

* Use lsm_string_list instead of 'char **' for 'member_ids' argument in C
library.
* Removed 'member_count' argument as lsm_string_list has
lsm_string_list_size() function.
* Fix the incorrect comment of plugin interface 'lsm_plug_volume_raid_info'.

Signed-off-by: Gris Ge <***@redhat.com>
---
c_binding/include/libstoragemgmt/libstoragemgmt.h | 27 +++++++
.../libstoragemgmt/libstoragemgmt_capabilities.h | 4 +-
.../libstoragemgmt/libstoragemgmt_plug_interface.h | 27 +++++++
.../include/libstoragemgmt/libstoragemgmt_types.h | 17 +++++
c_binding/lsm_mgmt.cpp | 49 +++++++++++++
c_binding/lsm_plugin_ipc.cpp | 43 ++++++++++-
python_binding/lsm/_client.py | 85 ++++++++++++++++++++++
python_binding/lsm/_data.py | 6 ++
8 files changed, 256 insertions(+), 2 deletions(-)

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt.h b/c_binding/include/libstoragemgmt/libstoragemgmt.h
index 3d34f75..5d73919 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt.h
@@ -864,6 +864,33 @@ int LSM_DLL_EXPORT lsm_volume_raid_info(
uint32_t *strip_size, uint32_t *disk_count,
uint32_t *min_io_size, uint32_t *opt_io_size, lsm_flag flags);

+/**
+ * Retrieves the RAID info of given pool. New in version 1.2.
+ * @param[in] c Valid connection
+ * @param[in] pool The lsm_pool ptr.
+ * @param[out] raid_type
+ * Enum of lsm_volume_raid_type.
+ * @param[out] member_type
+ * Enum of lsm_pool_member_type.
+ * @param[out] member_ids
+ * The pointer to lsm_string_list pointer.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_POOL,
+ * the 'member_ids' will contain a list of parent Pool
+ * IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_DISK,
+ * the 'member_ids' will contain a list of disk IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_OTHER or
+ * LSM_POOL_MEMBER_TYPE_UNKNOWN, the member_ids should
+ * be NULL.
+ * Need to use lsm_string_list_free() to free this memory.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+int LSM_DLL_EXPORT lsm_pool_raid_info(
+ lsm_connect *c, lsm_pool *pool, lsm_volume_raid_type *raid_type,
+ lsm_pool_member_type *member_type, lsm_string_list **member_ids,
+ lsm_flag flags);
+
#ifdef __cplusplus
}
#endif
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
index 18490f3..9e96035 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
@@ -113,7 +113,9 @@ typedef enum {
LSM_CAP_TARGET_PORTS = 216, /**< List target ports */
LSM_CAP_TARGET_PORTS_QUICK_SEARCH = 217, /**< Filtering occurs on array */

- LSM_CAP_DISKS = 220 /**< List disk drives */
+ LSM_CAP_DISKS = 220, /**< List disk drives */
+ LSM_CAP_POOL_RAID_INFO = 221,
+ /**^ Query pool RAID information */

} lsm_capability_type;

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
index b36586c..a3dd865 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
@@ -827,6 +827,32 @@ typedef int (*lsm_plug_volume_raid_info)(lsm_plugin_ptr c, lsm_volume *volume,
uint32_t *disk_count, uint32_t *min_io_size, uint32_t *opt_io_size,
lsm_flag flags);

+/**
+ * Retrieves the RAID info of given pool. New in version 1.2.
+ * @param[in] c Valid lsm plug-in pointer
+ * @param[in] pool The lsm_pool ptr.
+ * @param[out] raid_type
+ * Enum of lsm_volume_raid_type.
+ * @param[out] member_type
+ * Enum of lsm_pool_member_type.
+ * @param[out] member_ids
+ * The pointer of lsm_string_list pointer.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_POOL,
+ * the 'member_ids' will contain a list of parent Pool
+ * IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_DISK,
+ * the 'member_ids' will contain a list of disk IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_OTHER or
+ * LSM_POOL_MEMBER_TYPE_UNKNOWN, the member_ids should
+ * be NULL.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+typedef int (*lsm_plug_pool_raid_info)(
+ lsm_plugin_ptr c, lsm_pool *pool, lsm_volume_raid_type *raid_type,
+ lsm_pool_member_type *member_type, lsm_string_list **member_ids,
+ lsm_flag flags);
+
/** \struct lsm_ops_v1_2
* \brief Functions added in version 1.2
* NOTE: This structure will change during the developement util version 1.2
@@ -835,6 +861,7 @@ typedef int (*lsm_plug_volume_raid_info)(lsm_plugin_ptr c, lsm_volume *volume,
struct lsm_ops_v1_2 {
lsm_plug_volume_raid_info vol_raid_info;
/**^ Query volume RAID information*/
+ lsm_plug_pool_raid_info pool_raid_info;
};

/**
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
index c5607c1..115ec5a 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
@@ -169,6 +169,23 @@ typedef enum {
/**^ Vendor specific RAID type */
} lsm_volume_raid_type;

+/**< \enum lsm_pool_member_type Different types of Pool member*/
+typedef enum {
+ LSM_POOL_MEMBER_TYPE_UNKNOWN = 0,
+ /**^ Plugin failed to detect the RAID member type. */
+ LSM_POOL_MEMBER_TYPE_OTHER = 1,
+ /**^ Vendor specific RAID member type. */
+ LSM_POOL_MEMBER_TYPE_DISK = 2,
+ /**^ Pool is created from RAID group using whole disks. */
+ LSM_POOL_MEMBER_TYPE_POOL = 3,
+ /**^
+ * The current pool (also known as a sub-pool) is allocated from
+ * another pool (the parent pool).
+ * The 'raid_type' will be set to RAID_TYPE_OTHER unless a certain
+ * RAID system supports RAID using the space of parent pools.
+ */
+} lsm_pool_member_type;
+
#define LSM_VOLUME_STRIP_SIZE_UNKNOWN 0
#define LSM_VOLUME_DISK_COUNT_UNKNOWN 0
#define LSM_VOLUME_MIN_IO_SIZE_UNKNOWN 0
diff --git a/c_binding/lsm_mgmt.cpp b/c_binding/lsm_mgmt.cpp
index e6a254f..ce948bf 100644
--- a/c_binding/lsm_mgmt.cpp
+++ b/c_binding/lsm_mgmt.cpp
@@ -802,6 +802,55 @@ int lsm_pool_list(lsm_connect *c, char *search_key, char *search_value,
return rc;
}

+int lsm_pool_raid_info(lsm_connect *c, lsm_pool *pool,
+ lsm_volume_raid_type * raid_type,
+ lsm_pool_member_type *member_type,
+ lsm_string_list **member_ids, lsm_flag flags)
+{
+ if( LSM_FLAG_UNUSED_CHECK(flags) ) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ int rc = LSM_ERR_OK;
+ CONN_SETUP(c);
+
+ if( !LSM_IS_POOL(pool) ) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ if( !raid_type || !member_type || !member_ids) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ std::map<std::string, Value> p;
+ p["pool"] = pool_to_value(pool);
+ p["flags"] = Value(flags);
+ Value parameters(p);
+ try {
+
+ Value response;
+
+ rc = rpc(c, "pool_raid_info", parameters, response);
+ if( LSM_ERR_OK == rc ) {
+ std::vector<Value> j = response.asArray();
+ *raid_type = (lsm_volume_raid_type) j[0].asInt32_t();
+ *member_type = (lsm_pool_member_type) j[1].asInt32_t();
+ if (Value::array_t == j[2].valueType()){
+ if (j[2].asArray().size()){
+ *member_ids = value_to_string_list(j[2]);
+ }else{
+ *member_ids = NULL;
+ }
+ }
+ }
+ } catch( const ValueException &ve ) {
+ rc = logException(c, LSM_ERR_LIB_BUG, "Unexpected type",
+ ve.what());
+ }
+ return rc;
+
+}
+
int lsm_target_port_list(lsm_connect *c, const char *search_key,
const char *search_value,
lsm_target_port **target_ports[],
diff --git a/c_binding/lsm_plugin_ipc.cpp b/c_binding/lsm_plugin_ipc.cpp
index d2a43d4..c2cd294 100644
--- a/c_binding/lsm_plugin_ipc.cpp
+++ b/c_binding/lsm_plugin_ipc.cpp
@@ -1015,6 +1015,46 @@ static int handle_volume_raid_info(lsm_plugin_ptr p, Value &params,
return rc;
}

+static int handle_pool_raid_info(lsm_plugin_ptr p, Value &params,
+ Value &response)
+{
+ int rc = LSM_ERR_NO_SUPPORT;
+ if( p && p->ops_v1_2 && p->ops_v1_2->pool_raid_info) {
+ Value v_pool = params["pool"];
+
+ if(IS_CLASS_POOL(v_pool) &&
+ LSM_FLAG_EXPECTED_TYPE(params) ) {
+ lsm_pool *pool = value_to_pool(v_pool);
+ std::vector<Value> result;
+
+ if( pool ) {
+ lsm_volume_raid_type raid_type;
+ lsm_pool_member_type member_type;
+ lsm_string_list *member_ids;
+
+ rc = p->ops_v1_2->pool_raid_info(
+ p, pool, &raid_type, &member_type,
+ &member_ids, LSM_FLAG_GET_VALUE(params));
+
+ if( LSM_ERR_OK == rc ) {
+ result.push_back(Value((int32_t)raid_type));
+ result.push_back(Value((int32_t)member_type));
+ result.push_back(string_list_to_value(member_ids));
+ response = Value(result);
+ }
+
+ lsm_pool_record_free(pool);
+ } else {
+ rc = LSM_ERR_NO_MEMORY;
+ }
+
+ } else {
+ rc = LSM_ERR_TRANSPORT_INVALID_ARG;
+ }
+ }
+ return rc;
+}
+
static int ag_list(lsm_plugin_ptr p, Value &params, Value &response)
{
int rc = LSM_ERR_NO_SUPPORT;
@@ -2213,7 +2253,8 @@ static std::map<std::string,handler> dispatch = static_map<std::string,handler>
("volume_resize", handle_volume_resize)
("volumes_accessible_by_access_group", vol_accessible_by_ag)
("volumes", handle_volumes)
- ("volume_raid_info", handle_volume_raid_info);
+ ("volume_raid_info", handle_volume_raid_info)
+ ("pool_raid_info", handle_pool_raid_info);

static int process_request(lsm_plugin_ptr p, const std::string &method, Value &request,
Value &response)
diff --git a/python_binding/lsm/_client.py b/python_binding/lsm/_client.py
index a641b1d..1ed6e1e 100644
--- a/python_binding/lsm/_client.py
+++ b/python_binding/lsm/_client.py
@@ -1074,3 +1074,88 @@ def volume_raid_info(self, volume, flags=FLAG_RSVD):
No support.
"""
return self._tp.rpc('volume_raid_info', _del_self(locals()))
+
+ @_return_requires([int, int, [unicode]])
+ def pool_raid_info(self, pool, flags=FLAG_RSVD):
+ """Query the RAID information of certain pool.
+
+ New in version 1.2.
+
+ Query the RAID type, RAID member type, RAID member ids
+
+ This method requires this capability:
+ lsm.Capabilities.POOL_RAID_INFO
+
+ Args:
+ pool (lsm.Pool object): Pool to query
+ flags (int): Reserved for future use. Should be set as
+ lsm.Client.FLAG_RSVD
+ Returns:
+ [raid_type, member_type, member_ids]
+
+ raid_type (int): RAID Type of requested pool.
+ Could be one of these values:
+ Volume.RAID_TYPE_RAID0
+ Stripe
+ Volume.RAID_TYPE_RAID1
+ Two disks Mirror
+ Volume.RAID_TYPE_RAID3
+ Byte-level striping with dedicated parity
+ Volume.RAID_TYPE_RAID4
+ Block-level striping with dedicated parity
+ Volume.RAID_TYPE_RAID5
+ Block-level striping with distributed parity
+ Volume.RAID_TYPE_RAID6
+ Block-level striping with two distributed parities,
+ aka, RAID-DP
+ Volume.RAID_TYPE_RAID10
+ Stripe of mirrors
+ Volume.RAID_TYPE_RAID15
+ Parity of mirrors
+ Volume.RAID_TYPE_RAID16
+ Dual parity of mirrors
+ Volume.RAID_TYPE_RAID50
+ Stripe of parities
+ Volume.RAID_TYPE_RAID60
+ Stripe of dual parities
+ Volume.RAID_TYPE_RAID51
+ Mirror of parities
+ Volume.RAID_TYPE_RAID61
+ Mirror of dual parities
+ Volume.RAID_TYPE_JBOD
+ Just a bunch of disks, no parity, no striping.
+ Volume.RAID_TYPE_UNKNOWN
+ The plugin failed to detect the volume's RAID type.
+ Volume.RAID_TYPE_MIXED
+ This volume contains multiple RAID settings.
+ Volume.RAID_TYPE_OTHER
+ Vendor specific RAID type
+ member_type(int):
+ Could be one of these values:
+ Pool.MEMBER_TYPE_POOL
+ # The current pool (also known as a sub-pool) is
+ # allocated from another pool (the parent pool).
+ # The 'raid_type' will be set to RAID_TYPE_OTHER
+ # unless a certain RAID system supports RAID using
+ # the space of parent pools.
+ Pool.MEMBER_TYPE_DISK
+ # Pool is created from RAID group using whole disks.
+ Pool.MEMBER_TYPE_OTHER
+ # Vendor specific RAID member type.
+ Pool.MEMBER_TYPE_UNKNOWN
+ # Plugin failed to detect the RAID member type.
+ member_ids(list(string)):
+ When 'member_type' is Pool.MEMBER_TYPE_POOL,
+ the 'member_ids' will contain a list of parent Pool IDs.
+ When 'member_type' is Pool.MEMBER_TYPE_DISK,
+ the 'member_ids' will contain a list of disk IDs.
+ When 'member_type' is Pool.MEMBER_TYPE_OTHER or
+ Pool.MEMBER_TYPE_UNKNOWN, the member_ids should be an
+ empty list.
+ Raises:
+ LsmError:
+ ErrorNumber.NO_SUPPORT
+ No support.
+ ErrorNumber.NOT_FOUND_POOL
+ Pool not found.
+ """
+ return self._tp.rpc('pool_raid_info', _del_self(locals()))
diff --git a/python_binding/lsm/_data.py b/python_binding/lsm/_data.py
index 23681dd..2d60562 100644
--- a/python_binding/lsm/_data.py
+++ b/python_binding/lsm/_data.py
@@ -412,6 +412,11 @@ class Pool(IData):
STATUS_INITIALIZING = 1 << 14
STATUS_GROWING = 1 << 15

+ MEMBER_TYPE_UNKNOWN = 0
+ MEMBER_TYPE_OTHER = 1
+ MEMBER_TYPE_DISK = 2
+ MEMBER_TYPE_POOL = 3
+
def __init__(self, _id, _name, _element_type, _unsupported_actions,
_total_space, _free_space,
_status, _status_info, _system_id, _plugin_data=None):
@@ -754,6 +759,7 @@ class Capabilities(IData):
TARGET_PORTS_QUICK_SEARCH = 217

DISKS = 220
+ POOL_RAID_INFO = 221

def _to_dict(self):
return {'class': self.__class__.__name__,
--
1.8.3.1
Gris Ge
2015-04-16 15:05:25 UTC
* New command 'pool-raid-info' to utilize lsm.Client.pool_raid_info() method.

* Manpage updated.

* New alias 'pri' for 'pool-raid-info' command.
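
* Example: 'lsmcli pool-raid-info --pool <POOL_ID>' (or 'lsmcli pri ...')
should print the Pool ID, RAID Type, Member Type and Member IDs columns
defined in data_display.py; <POOL_ID> is as listed by
'lsmcli list --type POOLS'.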

Signed-off-by: Gris Ge <***@redhat.com>
---
doc/man/lsmcli.1.in | 9 +++++++++
tools/lsmcli/cmdline.py | 19 ++++++++++++++++++-
tools/lsmcli/data_display.py | 41 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 68 insertions(+), 1 deletion(-)

diff --git a/doc/man/lsmcli.1.in b/doc/man/lsmcli.1.in
index 381d676..368f452 100644
--- a/doc/man/lsmcli.1.in
+++ b/doc/man/lsmcli.1.in
@@ -351,6 +351,13 @@ Query RAID information for given volume.
\fB--vol\fR \fI<VOL_ID>\fR
Required. The ID of volume to query.

+.SS pool-raid-info
+.TP 15
+Query RAID information for given pool.
+.TP
+\fB--pool\fR \fI<POOL_ID>\fR
+Required. The ID of pool to query.
+
.SS access-group-create
.TP 15
Create an access group.
@@ -589,6 +596,8 @@ Alias of 'volume-mask'
Alias of 'volume-unmask'
.SS vri
Alias of 'volume-raid-info'
+.SS pri
+Alias of 'pool-raid-info'
.SS ac
Alias of 'access-group-create'
.SS aa
diff --git a/tools/lsmcli/cmdline.py b/tools/lsmcli/cmdline.py
index 9b2b682..978d797 100644
--- a/tools/lsmcli/cmdline.py
+++ b/tools/lsmcli/cmdline.py
@@ -39,7 +39,8 @@

from lsm.lsmcli.data_display import (
DisplayData, PlugData, out,
- vol_provision_str_to_type, vol_rep_type_str_to_type, VolumeRAIDInfo)
+ vol_provision_str_to_type, vol_rep_type_str_to_type, VolumeRAIDInfo,
+ PoolRAIDInfo)


## Wraps the invocation to the command line
@@ -376,6 +377,14 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
),

dict(
+ name='pool-raid-info',
help='Query Pool RAID information',
+ args=[
+ dict(pool_id_opt),
+ ],
+ ),
+
+ dict(
name='access-group-create',
help='Create an access group',
args=[
@@ -637,6 +646,7 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
['ar', 'access-group-remove'],
['ad', 'access-group-delete'],
['vri', 'volume-raid-info'],
+ ['pri', 'pool-raid-info'],
)


@@ -1334,6 +1344,13 @@ def volume_raid_info(self, args):
VolumeRAIDInfo(
lsm_vol.id, *self.c.volume_raid_info(lsm_vol))])

+ def pool_raid_info(self, args):
+ lsm_pool = _get_item(self.c.pools(), args.pool, "Pool")
+ self.display_data(
+ [
+ PoolRAIDInfo(
+ lsm_pool.id, *self.c.pool_raid_info(lsm_pool))])
+
## Displays file system dependants
def fs_dependants(self, args):
fs = _get_item(self.c.fs(), args.fs, "File System")
diff --git a/tools/lsmcli/data_display.py b/tools/lsmcli/data_display.py
index 19f7114..115f15a 100644
--- a/tools/lsmcli/data_display.py
+++ b/tools/lsmcli/data_display.py
@@ -279,6 +279,26 @@ def raid_type_to_str(raid_type):
return _enum_type_to_str(raid_type, VolumeRAIDInfo._RAID_TYPE_MAP)


+class PoolRAIDInfo(object):
+ _MEMBER_TYPE_MAP = {
+ Pool.MEMBER_TYPE_UNKNOWN: 'Unknown',
Pool.MEMBER_TYPE_OTHER: 'Other',
+ Pool.MEMBER_TYPE_POOL: 'Pool',
+ Pool.MEMBER_TYPE_DISK: 'Disk',
+ }
+
+ def __init__(self, pool_id, raid_type, member_type, member_ids):
+ self.pool_id = pool_id
+ self.raid_type = raid_type
+ self.member_type = member_type
+ self.member_ids = member_ids
+
+ @staticmethod
+ def member_type_to_str(member_type):
+ return _enum_type_to_str(
+ member_type, PoolRAIDInfo._MEMBER_TYPE_MAP)
+
+
class DisplayData(object):

def __init__(self):
@@ -557,6 +577,27 @@ def __init__(self):
'value_conv_human': VOL_RAID_INFO_VALUE_CONV_HUMAN,
}

+ POOL_RAID_INFO_HEADER = OrderedDict()
+ POOL_RAID_INFO_HEADER['pool_id'] = 'Pool ID'
+ POOL_RAID_INFO_HEADER['raid_type'] = 'RAID Type'
+ POOL_RAID_INFO_HEADER['member_type'] = 'Member Type'
+ POOL_RAID_INFO_HEADER['member_ids'] = 'Member IDs'
+
+ POOL_RAID_INFO_COLUMN_SKIP_KEYS = []
+
+ POOL_RAID_INFO_VALUE_CONV_ENUM = {
+ 'raid_type': VolumeRAIDInfo.raid_type_to_str,
+ 'member_type': PoolRAIDInfo.member_type_to_str,
+ }
+ POOL_RAID_INFO_VALUE_CONV_HUMAN = []
+
+ VALUE_CONVERT[PoolRAIDInfo] = {
+ 'headers': POOL_RAID_INFO_HEADER,
+ 'column_skip_keys': POOL_RAID_INFO_COLUMN_SKIP_KEYS,
+ 'value_conv_enum': POOL_RAID_INFO_VALUE_CONV_ENUM,
+ 'value_conv_human': POOL_RAID_INFO_VALUE_CONV_HUMAN,
+ }
+
@staticmethod
def _get_man_pro_value(obj, key, value_conv_enum, value_conv_human,
flag_human, flag_enum):
--
1.8.3.1
Gris Ge
2015-04-16 15:05:26 UTC
* cmdtest.py: Test lsmcli 'pool-raid-info' command.
* plugin_test.py: Capability based python API test.
* tester.c: C API test for both simulator plugin and simulator C plugin.

Changes in V2:

* Updated tester.c to use lsm_string_list for 'member_ids' argument.

Signed-off-by: Gris Ge <***@redhat.com>
---
test/cmdtest.py | 20 ++++++++++++++++++++
test/plugin_test.py | 10 ++++++++++
test/tester.c | 30 ++++++++++++++++++++++++++++++
3 files changed, 60 insertions(+)

diff --git a/test/cmdtest.py b/test/cmdtest.py
index e80e027..97c5b88 100755
--- a/test/cmdtest.py
+++ b/test/cmdtest.py
@@ -696,6 +696,24 @@ def volume_raid_info_test(cap, system_id):
exit(10)
return

+def pool_raid_info_test(cap, system_id):
+ if cap['POOL_RAID_INFO']:
+ out = call([cmd, '-t' + sep, 'list', '--type', 'POOLS'])[1]
+ pool_list = parse(out)
+ for pool in pool_list:
+ out = call(
+ [cmd, '-t' + sep, 'pool-raid-info', '--pool', pool[0]])[1]
+ r = parse(out)
+ if len(r[0]) != 4:
+ print "pool-raid-info got unexpected output: %s" % out
+ exit(10)
+ if r[0][0] != pool[0]:
+ print "pool-raid-info output pool ID does not match the " \
+ "requested pool ID: %s" % out
+ exit(10)
+ return
+
+
def run_all_tests(cap, system_id):
test_display(cap, system_id)
test_plugin_list(cap, system_id)
@@ -709,6 +727,8 @@ def run_all_tests(cap, system_id):

volume_raid_info_test(cap, system_id)

+ pool_raid_info_test(cap, system_id)
+
if __name__ == "__main__":
parser = OptionParser()
parser.add_option("-c", "--command", action="store", type="string",
diff --git a/test/plugin_test.py b/test/plugin_test.py
index 69a45b7..895bec3 100755
--- a/test/plugin_test.py
+++ b/test/plugin_test.py
@@ -1283,6 +1283,16 @@ def test_create_delete_exports(self):
self.c.export_remove(exp)
self._fs_delete(fs)

+ def test_pool_raid_info(self):
+ for s in self.systems:
+ cap = self.c.capabilities(s)
+ if supported(cap, [Cap.POOL_RAID_INFO]):
+ for pool in self.c.pools():
+ (raid_type, member_type, member_ids) = \
+ self.c.pool_raid_info(pool)
+ self.assertTrue(type(raid_type) is int)
+ self.assertTrue(type(member_type) is int)
+ self.assertTrue(type(member_ids) is list)

def dump_results():
"""
diff --git a/test/tester.c b/test/tester.c
index 1622a75..63c2b5d 100644
--- a/test/tester.c
+++ b/test/tester.c
@@ -2887,6 +2887,35 @@ START_TEST(test_volume_raid_info)
}
END_TEST

+START_TEST(test_pool_raid_info)
+{
+ int rc;
+ lsm_pool **pools = NULL;
+ uint32_t poolCount = 0;
+ G(rc, lsm_pool_list, c, NULL, NULL, &pools, &poolCount,
+ LSM_CLIENT_FLAG_RSVD);
+
+ lsm_volume_raid_type raid_type;
+ lsm_pool_member_type member_type;
+ lsm_string_list *member_ids;
+
+ int i;
+ uint32_t y;
+ for (i = 0; i < poolCount; i++) {
+ G(
+ rc, lsm_pool_raid_info, c, pools[i], &raid_type, &member_type,
+ &member_ids, LSM_CLIENT_FLAG_RSVD);
+ for(y = 0; y < lsm_string_list_size(member_ids); y++){
+ // Simulate a user reading the member id.
+ const char *cur_member_id = lsm_string_list_elem_get(
+ member_ids, y);
+ }
+ lsm_string_list_free(member_ids);
+ G(rc, lsm_pool_record_free, pools[i]);
+ }
+}
+END_TEST
+
Suite * lsm_suite(void)
{
Suite *s = suite_create("libStorageMgmt");
@@ -2923,6 +2952,7 @@ Suite * lsm_suite(void)
tcase_add_test(basic, test_nfs_exports);
tcase_add_test(basic, test_invalid_input);
tcase_add_test(basic, test_volume_raid_info);
+ tcase_add_test(basic, test_pool_raid_info);

suite_add_tcase(s, basic);
return s;
--
1.8.3.1
Gris Ge
2015-04-16 15:05:27 UTC
* Add lsm.Client.pool_raid_info() support.

* Use lsm.Pool.MEMBER_TYPE_XXX to replace PoolRAID.MEMBER_TYPE_XXX.

* Removed unused disk_type_to_member_type() and member_type_is_disk()
methods.

* Data version bumped to 3.2.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/sim/simarray.py | 98 ++++++++++++++++---------------------------------
plugin/sim/simulator.py | 3 ++
2 files changed, 35 insertions(+), 66 deletions(-)

diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index ae14848..b03a146 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -67,54 +67,6 @@ def _random_vpd():


class PoolRAID(object):
- MEMBER_TYPE_UNKNOWN = 0
- MEMBER_TYPE_DISK = 1
- MEMBER_TYPE_DISK_MIX = 10
- MEMBER_TYPE_DISK_ATA = 11
- MEMBER_TYPE_DISK_SATA = 12
- MEMBER_TYPE_DISK_SAS = 13
- MEMBER_TYPE_DISK_FC = 14
- MEMBER_TYPE_DISK_SOP = 15
- MEMBER_TYPE_DISK_SCSI = 16
- MEMBER_TYPE_DISK_NL_SAS = 17
- MEMBER_TYPE_DISK_HDD = 18
- MEMBER_TYPE_DISK_SSD = 19
- MEMBER_TYPE_DISK_HYBRID = 110
- MEMBER_TYPE_DISK_LUN = 111
-
- MEMBER_TYPE_POOL = 2
-
- _MEMBER_TYPE_2_DISK_TYPE = {
- MEMBER_TYPE_DISK: Disk.TYPE_UNKNOWN,
- MEMBER_TYPE_DISK_MIX: Disk.TYPE_UNKNOWN,
- MEMBER_TYPE_DISK_ATA: Disk.TYPE_ATA,
- MEMBER_TYPE_DISK_SATA: Disk.TYPE_SATA,
- MEMBER_TYPE_DISK_SAS: Disk.TYPE_SAS,
- MEMBER_TYPE_DISK_FC: Disk.TYPE_FC,
- MEMBER_TYPE_DISK_SOP: Disk.TYPE_SOP,
- MEMBER_TYPE_DISK_SCSI: Disk.TYPE_SCSI,
- MEMBER_TYPE_DISK_NL_SAS: Disk.TYPE_NL_SAS,
- MEMBER_TYPE_DISK_HDD: Disk.TYPE_HDD,
- MEMBER_TYPE_DISK_SSD: Disk.TYPE_SSD,
- MEMBER_TYPE_DISK_HYBRID: Disk.TYPE_HYBRID,
- MEMBER_TYPE_DISK_LUN: Disk.TYPE_LUN,
- }
-
- @staticmethod
- def member_type_is_disk(member_type):
- """
- Returns True if defined 'member_type' is disk.
- False when else.
- """
- return member_type in PoolRAID._MEMBER_TYPE_2_DISK_TYPE
-
- @staticmethod
- def disk_type_to_member_type(disk_type):
- for m_type, d_type in PoolRAID._MEMBER_TYPE_2_DISK_TYPE.items():
- if disk_type == d_type:
- return m_type
- return PoolRAID.MEMBER_TYPE_UNKNOWN
-
_RAID_DISK_CHK = {
Volume.RAID_TYPE_JBOD: lambda x: x > 0,
Volume.RAID_TYPE_RAID0: lambda x: x > 0,
@@ -171,7 +123,7 @@ def data_disk_count(raid_type, disk_count):


class BackStore(object):
- VERSION = "3.1"
+ VERSION = "3.2"
VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
JOB_DEFAULT_DURATION = 1
JOB_DATA_TYPE_VOL = 1
@@ -916,6 +868,12 @@ def sim_syss(self):
"""
return self._get_table('systems', BackStore.SYS_KEY_LIST)

+ def sim_disk_ids_of_pool(self, sim_pool_id):
+ return list(
+ d['id']
+ for d in self._data_find(
+ 'disks', 'owner_pool_id="%s"' % sim_pool_id, ['id']))
+
def sim_disks(self):
"""
Return a list of sim_disk dict.
@@ -935,20 +893,6 @@ def sim_pool_of_id(self, sim_pool_id):

def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
element_type, unsupported_actions=0):
- # Detect disk type
- disk_type = None
- for sim_disk in self.sim_disks():
- if sim_disk['id'] not in sim_disk_ids:
- continue
- if disk_type is None:
- disk_type = sim_disk['disk_type']
- elif disk_type != sim_disk['disk_type']:
- disk_type = None
- break
- member_type = PoolRAID.MEMBER_TYPE_DISK
- if disk_type is not None:
- member_type = PoolRAID.disk_type_to_member_type(disk_type)
-
self._data_add(
'pools',
{
@@ -958,7 +902,7 @@ def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
'element_type': element_type,
'unsupported_actions': unsupported_actions,
'raid_type': raid_type,
- 'member_type': member_type,
+ 'member_type': Pool.MEMBER_TYPE_DISK,
})

data_disk_count = PoolRAID.data_disk_count(
@@ -991,7 +935,7 @@ def sim_pool_create_sub_pool(self, name, parent_pool_id, size,
'element_type': element_type,
'unsupported_actions': unsupported_actions,
'raid_type': Volume.RAID_TYPE_OTHER,
- 'member_type': PoolRAID.MEMBER_TYPE_POOL,
+ 'member_type': Pool.MEMBER_TYPE_POOL,
'parent_pool_id': parent_pool_id,
'total_space': size,
})
@@ -2240,7 +2184,7 @@ def volume_raid_info(self, lsm_vol):
opt_io_size = Volume.OPT_IO_SIZE_UNKNOWN
disk_count = Volume.DISK_COUNT_UNKNOWN

- if sim_pool['member_type'] == PoolRAID.MEMBER_TYPE_POOL:
+ if sim_pool['member_type'] == Pool.MEMBER_TYPE_POOL:
parent_sim_pool = self.bs_obj.sim_pool_of_id(
sim_pool['parent_pool_id'])
raid_type = parent_sim_pool['raid_type']
@@ -2278,3 +2222,25 @@ def volume_raid_info(self, lsm_vol):
opt_io_size = int(data_disk_count * BackStore.STRIP_SIZE)

return [raid_type, strip_size, disk_count, min_io_size, opt_io_size]
+
+ @_handle_errors
+ def pool_raid_info(self, lsm_pool):
+ sim_pool = self.bs_obj.sim_pool_of_id(
+ SimArray._lsm_id_to_sim_id(
+ lsm_pool.id,
+ LsmError(ErrorNumber.NOT_FOUND_POOL, "Pool not found")))
+ member_type = sim_pool['member_type']
+ member_ids = []
+ if member_type == Pool.MEMBER_TYPE_POOL:
+ member_ids = [
+ SimArray._sim_id_to_lsm_id(
+ sim_pool['parent_pool_id'], 'POOL')]
+ elif member_type == Pool.MEMBER_TYPE_DISK:
+ member_ids = list(
+ SimArray._sim_id_to_lsm_id(sim_disk_id, 'DISK')
+ for sim_disk_id in self.bs_obj.sim_disk_ids_of_pool(
+ sim_pool['id']))
+ else:
+ member_type = Pool.MEMBER_TYPE_UNKNOWN
+
+ return sim_pool['raid_type'], member_type, member_ids
diff --git a/plugin/sim/simulator.py b/plugin/sim/simulator.py
index d562cd6..4638778 100644
--- a/plugin/sim/simulator.py
+++ b/plugin/sim/simulator.py
@@ -292,3 +292,6 @@ def target_ports(self, search_key=None, search_value=None, flags=0):

def volume_raid_info(self, volume, flags=0):
return self.sim_array.volume_raid_info(volume)
+
+ def pool_raid_info(self, pool, flags=0):
+ return self.sim_array.pool_raid_info(pool)
--
1.8.3.1
Gris Ge
2015-04-16 15:05:28 UTC
* Add a simple lsm_pool_raid_info() that returns unknown data.

* Set LSM_CAP_POOL_RAID_INFO.

Changes in V2:

* Use lsm_string_list for member_ids of lsm_pool_raid_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/simc/simc_lsmplugin.c | 24 +++++++++++++++++++++++-
1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/plugin/simc/simc_lsmplugin.c b/plugin/simc/simc_lsmplugin.c
index 422a064..24d3b8c 100644
--- a/plugin/simc/simc_lsmplugin.c
+++ b/plugin/simc/simc_lsmplugin.c
@@ -392,6 +392,7 @@ static int cap(lsm_plugin_ptr c, lsm_system *system,
LSM_CAP_EXPORT_FS,
LSM_CAP_EXPORT_REMOVE,
LSM_CAP_VOLUME_RAID_INFO,
+ LSM_CAP_POOL_RAID_INFO,
-1
);

@@ -980,8 +981,29 @@ static int volume_raid_info(lsm_plugin_ptr c, lsm_volume *volume,
return rc;
}

+static int pool_raid_info(
+ lsm_plugin_ptr c, lsm_pool *pool,
+ lsm_volume_raid_type *raid_type, lsm_pool_member_type *member_type,
+ lsm_string_list **member_ids, lsm_flag flags)
+{
+ int rc = LSM_ERR_OK;
+ struct plugin_data *pd = (struct plugin_data*)lsm_private_data_get(c);
+ lsm_pool *p = find_pool(pd, lsm_pool_id_get(pool));
+
+ if( !p) {
+ rc = lsm_log_error_basic(c, LSM_ERR_NOT_FOUND_POOL,
+ "Pool not found!");
+ }
+
+ *raid_type = LSM_VOLUME_RAID_TYPE_UNKNOWN;
+ *member_type = LSM_POOL_MEMBER_TYPE_UNKNOWN;
+ *member_ids = NULL;
+ return rc;
+}
+
static struct lsm_ops_v1_2 ops_v1_2 = {
- volume_raid_info
+ volume_raid_info,
+ pool_raid_info,
};

static int volume_enable_disable(lsm_plugin_ptr c, lsm_volume *v,
--
1.8.3.1
Gris Ge
2015-04-16 15:05:29 UTC
* Add lsm.Client.pool_raid_info() support:
* Treat a NetApp ONTAP aggregate as a disk RAID group.
Use the na_disk['aggregate'] property to determine disk ownership.
* Treat a NetApp ONTAP volume as a sub-pool.
Use the na_vol['containing-aggregate'] property to determine sub-pool
ownership.

* Set lsm.Capabilities.POOL_RAID_INFO.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/ontap/ontap.py | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

diff --git a/plugin/ontap/ontap.py b/plugin/ontap/ontap.py
index f9f1f56..c3fe2e2 100644
--- a/plugin/ontap/ontap.py
+++ b/plugin/ontap/ontap.py
@@ -545,6 +545,7 @@ def capabilities(self, system, flags=0):
cap.set(Capabilities.TARGET_PORTS)
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUME_RAID_INFO)
+ cap.set(Capabilities.POOL_RAID_INFO)
return cap

@handle_ontap_errors
@@ -1330,3 +1331,23 @@ def volume_raid_info(self, volume, flags=0):
return [
raid_type, Ontap._STRIP_SIZE, disk_count, Ontap._STRIP_SIZE,
Ontap._OPT_IO_SIZE]
+
+ @handle_ontap_errors
+ def pool_raid_info(self, pool, flags=0):
+ if pool.element_type & Pool.ELEMENT_TYPE_VOLUME:
+ # We got a NetApp volume
+ raid_type = Volume.RAID_TYPE_OTHER
+ member_type = Pool.MEMBER_TYPE_POOL
+ na_vol = self.f.volumes(volume_name=pool.name)[0]
+ disk_ids = [na_vol['containing-aggregate']]
+ else:
+ # We got a NetApp aggregate
+ member_type = Pool.MEMBER_TYPE_DISK
+ na_aggr = self.f.aggregates(aggr_name=pool.name)[0]
+ raid_type = Ontap._raid_type_of_na_aggr(na_aggr)
+ disk_ids = list(
+ Ontap._disk_id(d)
+ for d in self.f.disks()
+ if 'aggregate' in d and d['aggregate'] == pool.name)
+
+ return raid_type, member_type, disk_ids
--
1.8.3.1
Gris Ge
2015-04-16 15:05:31 UTC
* Add lsm.Client.pool_raid_info() support:
* Based on the 'hpssacli ctrl slot=0 show config detail' command.

* Set lsm.Capabilities.POOL_RAID_INFO.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/hpsa/hpsa.py | 39 +++++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)

diff --git a/plugin/hpsa/hpsa.py b/plugin/hpsa/hpsa.py
index 76eb4a7..743cbcc 100644
--- a/plugin/hpsa/hpsa.py
+++ b/plugin/hpsa/hpsa.py
@@ -285,6 +285,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.VOLUMES)
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUME_RAID_INFO)
+ cap.set(Capabilities.POOL_RAID_INFO)
return cap

def _sacli_exec(self, sacli_cmds, flag_convert=True):
@@ -531,3 +532,41 @@ def volume_raid_info(self, volume, flags=Client.FLAG_RSVD):
"Volume not found")

return [raid_type, strip_size, disk_count, strip_size, stripe_size]
+
+ @_handle_errors
+ def pool_raid_info(self, pool, flags=Client.FLAG_RSVD):
+ """
+ Depends on command:
+ hpssacli ctrl slot=0 show config detail
+ """
+ if not pool.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input pool argument: missing plugin_data property")
+
+ (ctrl_num, array_num) = pool.plugin_data.split(":")
+ ctrl_data = self._sacli_exec(
+ ["ctrl", "slot=%s" % ctrl_num, "show", "config", "detail"]
+ ).values()[0]
+
+ disk_ids = []
+ raid_type = Volume.RAID_TYPE_UNKNOWN
+ for key_name in ctrl_data.keys():
+ if key_name == "Array: %s" % array_num:
+ for array_key_name in ctrl_data[key_name].keys():
+ if array_key_name.startswith("Logical Drive: ") and \
+ raid_type == Volume.RAID_TYPE_UNKNOWN:
+ raid_type = _hp_raid_type_to_lsm(
+ ctrl_data[key_name][array_key_name])
+ elif array_key_name.startswith("physicaldrive"):
+ hp_disk = ctrl_data[key_name][array_key_name]
+ if hp_disk['Drive Type'] == 'Data Drive':
+ disk_ids.append(hp_disk['Serial Number'])
+ break
+
+ if len(disk_ids) == 0:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_POOL,
+ "Pool not found")
+
+ return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
--
1.8.3.1
Gris Ge
2015-04-16 15:05:30 UTC
* Add lsm.Client.pool_raid_info() support:
* Use command 'storcli /c0/d0 show all J'.
* Use 'DG Drive LIST' section of output for disk_ids.
* Use 'TOPOLOGY' section of output for raid_type.
Since the command used here does not share the same data layout as the
command "storcli /c0/v0 show all J" used by the 'volume_raid_info()'
method, we now have two sets of code for detecting the RAID type.

* Set lsm.Capabilities.POOL_RAID_INFO.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/megaraid/megaraid.py | 54 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)

diff --git a/plugin/megaraid/megaraid.py b/plugin/megaraid/megaraid.py
index 78603fc..ff861b3 100644
--- a/plugin/megaraid/megaraid.py
+++ b/plugin/megaraid/megaraid.py
@@ -259,6 +259,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUMES)
cap.set(Capabilities.VOLUME_RAID_INFO)
+ cap.set(Capabilities.POOL_RAID_INFO)
return cap

def _storcli_exec(self, storcli_cmds, flag_json=True):
@@ -546,3 +547,56 @@ def volume_raid_info(self, volume, flags=Client.FLAG_RSVD):
return [
raid_type, strip_size, disk_count, strip_size,
strip_size * strip_count]
+
+ @_handle_errors
+ def pool_raid_info(self, pool, flags=Client.FLAG_RSVD):
+ lsi_dg_path = pool.plugin_data
+ # Check whether pool exists.
+ try:
+ dg_show_all_output = self._storcli_exec(
[lsi_dg_path, "show", "all"])
+ except ExecError as exec_error:
+ try:
+ json_output = json.loads(exec_error.stdout)
+ detail_error = json_output[
+ 'Controllers'][0]['Command Status']['Detailed Status']
+ except Exception:
+ raise exec_error
+
+ if detail_error and detail_error[0]['Status'] == 'Not found':
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_POOL,
+ "Pool not found")
+ raise
+
+ ctrl_num = lsi_dg_path.split('/')[1][1:]
+ lsm_disk_map = {}
+ disk_ids = []
+ for lsm_disk in self.disks():
+ lsm_disk_map[lsm_disk.plugin_data] = lsm_disk.id
+
+ for dg_disk_info in dg_show_all_output['DG Drive LIST']:
+ cur_lsi_disk_path = "/c%s" % ctrl_num + "/e%s/s%s" % tuple(
+ dg_disk_info['EID:Slt'].split(':'))
+ if cur_lsi_disk_path in lsm_disk_map.keys():
+ disk_ids.append(lsm_disk_map[cur_lsi_disk_path])
+ else:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "pool_raid_info(): Failed to find disk id of %s" %
+ cur_lsi_disk_path)
+
+ raid_type = Volume.RAID_TYPE_UNKNOWN
+ dg_num = lsi_dg_path.split('/')[2][1:]
+ for dg_top in dg_show_all_output['TOPOLOGY']:
+ if dg_top['Arr'] == '-' and \
+ dg_top['Row'] == '-' and \
+ int(dg_top['DG']) == int(dg_num):
+ raid_type = _RAID_TYPE_MAP.get(
+ dg_top['Type'], Volume.RAID_TYPE_UNKNOWN)
+ break
+
+ if raid_type == Volume.RAID_TYPE_RAID1 and len(disk_ids) >= 4:
+ raid_type = Volume.RAID_TYPE_RAID10
+
+ return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
--
1.8.3.1
Gris Ge
2015-04-16 15:05:33 UTC
* Change Disk.plugin_data from '/c0/e252/s1' to '0:252:1'.

* Updated pool_raid_info() method for this change.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/megaraid/megaraid.py | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/plugin/megaraid/megaraid.py b/plugin/megaraid/megaraid.py
index ff861b3..6946068 100644
--- a/plugin/megaraid/megaraid.py
+++ b/plugin/megaraid/megaraid.py
@@ -29,6 +29,7 @@
# Naming scheme
# mega_sys_path /c0
# mega_disk_path /c0/e64/s0
+# lsi_disk_id 0:64:0


def _handle_errors(method):
@@ -393,7 +394,8 @@ def disks(self, search_key=None, search_value=None,
status = _disk_status_of(
disk_show_basic_dict, disk_show_stat_dict)

- plugin_data = mega_disk_path
+ plugin_data = "%s:%s" % (
+ ctrl_num, disk_show_basic_dict['EID:Slt'])

rc_lsm_disks.append(
Disk(
@@ -576,15 +578,14 @@ def pool_raid_info(self, pool, flags=Client.FLAG_RSVD):
lsm_disk_map[lsm_disk.plugin_data] = lsm_disk.id

for dg_disk_info in dg_show_all_output['DG Drive LIST']:
- cur_lsi_disk_path = "/c%s" % ctrl_num + "/e%s/s%s" % tuple(
- dg_disk_info['EID:Slt'].split(':'))
- if cur_lsi_disk_path in lsm_disk_map.keys():
- disk_ids.append(lsm_disk_map[cur_lsi_disk_path])
+ cur_lsi_disk_id = "%s:%s" % (ctrl_num, dg_disk_info['EID:Slt'])
+ if cur_lsi_disk_id in lsm_disk_map.keys():
+ disk_ids.append(lsm_disk_map[cur_lsi_disk_id])
else:
raise LsmError(
ErrorNumber.PLUGIN_BUG,
"pool_raid_info(): Failed to find disk id of %s" %
- cur_lsi_disk_path)
+ cur_lsi_disk_id)

raid_type = Volume.RAID_TYPE_UNKNOWN
dg_num = lsi_dg_path.split('/')[2][1:]
--
1.8.3.1
Gris Ge
2015-04-16 15:05:35 UTC
* New commands:
* volume-create-raid
Create a RAID group and allocate all of its space to a single new volume.

* volume-create-raid-cap
Query supported RAID types and strip sizes of the volume-create-raid
command.

* Manpage updated.
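
* Example: 'lsmcli volume-create-raid --name vol0 --raid-type RAID1
--disk <DISK_ID_1> --disk <DISK_ID_2>' should create a two-disk RAID1
volume; disk IDs are as listed by 'lsmcli list --type DISKS', and the
chosen RAID type and strip size can be checked first against the output
of 'lsmcli volume-create-raid-cap --sys <SYS_ID>'.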

Signed-off-by: Gris Ge <***@redhat.com>
---
doc/man/lsmcli.1.in | 32 +++++++++++++++++++++
tools/lsmcli/cmdline.py | 67 +++++++++++++++++++++++++++++++++++++++++++-
tools/lsmcli/data_display.py | 36 +++++++++++++++++++++++-
3 files changed, 133 insertions(+), 2 deletions(-)

diff --git a/doc/man/lsmcli.1.in b/doc/man/lsmcli.1.in
index 368f452..5967192 100644
--- a/doc/man/lsmcli.1.in
+++ b/doc/man/lsmcli.1.in
@@ -232,6 +232,34 @@ Optional. Provisioning type. Valid values are: DEFAULT, THIN, FULL.
Provisioning enabled volume. \fBFULL\fR means requiring a fully allocated
volume.

+.SS volume-create-raid
+Creates a volume on hardware RAID on given disks.
+.TP 15
+\fB--name\fR \fI<NAME>\fR
+Required. Volume name. Might be altered or ignored due to hardware RAID card
+vendor limitation.
+.TP
+\fB--raid-type\fR \fI<RAID_TYPE>\fR
+Required. Could be one of these values: \fBRAID0\fR, \fBRAID1\fR, \fBRAID5\fR,
+\fBRAID6\fR, \fBRAID10\fR, \fBRAID50\fR, \fBRAID60\fR. The RAID types
+supported by the current RAID card can be queried via the command
+"\fBvolume-create-raid-cap\fR".
+.TP
+\fB--disk\fR \fI<DISK_ID>\fR
+Required. Repeatable. The disk ID for new RAID group.
+.TP
+\fB--strip-size\fR \fI<STRIP_SIZE>\fR
+Optional. The size in bytes of the strip on each disk. If not defined, the
+hardware card will use its vendor default value. The strip sizes supported
+by the current RAID card can be queried via the command "\fBvolume-create-raid-cap\fR".
+
+.SS volume-create-raid-cap
+Query the support status of the volume-create-raid command for the current
+hardware RAID card.
+.TP 15
+\fB--sys\fR \fI<SYS_ID>\fR
+Required. ID of the system to query for capabilities.
+
.SS volume-delete
.TP 15
Delete a volume given its ID
@@ -586,6 +614,10 @@ Alias of 'list --type target_ports'
Alias of 'plugin-info'
.SS vc
Alias of 'volume-create'
+.SS vcr
+Alias of 'volume-create-raid'
+.SS vcrc
+Alias of 'volume-create-raid-cap'
.SS vd
Alias of 'volume-delete'
.SS vr
diff --git a/tools/lsmcli/cmdline.py b/tools/lsmcli/cmdline.py
index 978d797..2ec25e6 100644
--- a/tools/lsmcli/cmdline.py
+++ b/tools/lsmcli/cmdline.py
@@ -40,7 +40,7 @@
from lsm.lsmcli.data_display import (
DisplayData, PlugData, out,
vol_provision_str_to_type, vol_rep_type_str_to_type, VolumeRAIDInfo,
- PoolRAIDInfo)
+ PoolRAIDInfo, VcrCap)


## Wraps the invocation to the command line
@@ -242,6 +242,37 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
),

dict(
+ name='volume-create-raid',
+ help='Creates a RAIDed volume on hardware RAID',
+ args=[
+ dict(name="--name", help='volume name', metavar='<NAME>'),
+ dict(name="--disk", metavar='<DISK>',
+ help='Free disks for new RAIDed volume.\n'
'This is a repeatable argument.',
+ action='append'),
+ dict(name="--raid-type",
+ help="RAID type for the new RAID group. "
+ "Should be one of these:\n %s" %
+ "\n ".join(
+ VolumeRAIDInfo.VOL_CREATE_RAID_TYPES_STR),
+ choices=VolumeRAIDInfo.VOL_CREATE_RAID_TYPES_STR,
+ type=str.upper),
+ ],
+ optional=[
+ dict(name="--strip-size",
+ help="Strip size. " + size_help),
+ ],
+ ),
+
+ dict(
+ name='volume-create-raid-cap',
help='Query capability of creating a RAIDed volume on hardware RAID',
+ args=[
+ dict(sys_id_opt),
+ ],
+ ),
+
+ dict(
name='volume-delete',
help='Deletes a volume given its id',
args=[
@@ -635,6 +666,8 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
['c', 'capabilities'],
['p', 'plugin-info'],
['vc', 'volume-create'],
+ ['vcr', 'volume-create-raid'],
+ ['vcrc', 'volume-create-raid-cap'],
['vd', 'volume-delete'],
['vr', 'volume-resize'],
['vm', 'volume-mask'],
@@ -1351,6 +1384,38 @@ def pool_raid_info(self, args):
PoolRAIDInfo(
lsm_pool.id, *self.c.pool_raid_info(lsm_pool))])

+ def volume_create_raid(self, args):
+ raid_type = VolumeRAIDInfo.raid_type_str_to_lsm(args.raid_type)
+
+ all_lsm_disks = self.c.disks()
+ lsm_disks = [d for d in all_lsm_disks if d.id in args.disk]
+ if len(lsm_disks) != len(args.disk):
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_DISK,
+ "Disk ID %s not found" %
+ ', '.join(set(args.disk) - set(d.id for d in all_lsm_disks)))
+
+ busy_disks = [d.id for d in lsm_disks if not d.status & Disk.STATUS_FREE]
+
+ if len(busy_disks) >= 1:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+ "Disk %s is not free" % ", ".join(busy_disks))
+
+ if args.strip_size:
+ strip_size = size_human_2_size_bytes(args.strip_size)
+ else:
+ strip_size = Volume.VCR_STRIP_SIZE_DEFAULT
+
+ self.display_data([
+ self.c.volume_create_raid(
+ args.name, raid_type, lsm_disks, strip_size)])
+
+ def volume_create_raid_cap(self, args):
+ lsm_sys = _get_item(self.c.systems(), args.sys, "System")
+ self.display_data([
+ VcrCap(lsm_sys.id, *self.c.volume_create_raid_cap_get(lsm_sys))])
+
## Displays file system dependants
def fs_dependants(self, args):
fs = _get_item(self.c.fs(), args.fs, "File System")
diff --git a/tools/lsmcli/data_display.py b/tools/lsmcli/data_display.py
index 115f15a..b9edf68 100644
--- a/tools/lsmcli/data_display.py
+++ b/tools/lsmcli/data_display.py
@@ -265,6 +265,9 @@ class VolumeRAIDInfo(object):
Volume.RAID_TYPE_UNKNOWN: 'UNKNOWN',
}

+ VOL_CREATE_RAID_TYPES_STR = [
+ 'RAID0', 'RAID1', 'RAID5', 'RAID6', 'RAID10', 'RAID50', 'RAID60']
+
def __init__(self, vol_id, raid_type, strip_size, disk_count,
min_io_size, opt_io_size):
self.vol_id = vol_id
@@ -278,6 +281,10 @@ def __init__(self, vol_id, raid_type, strip_size, disk_count,
def raid_type_to_str(raid_type):
return _enum_type_to_str(raid_type, VolumeRAIDInfo._RAID_TYPE_MAP)

+ @staticmethod
+ def raid_type_str_to_lsm(raid_type_str):
+ return _str_to_enum(raid_type_str, VolumeRAIDInfo._RAID_TYPE_MAP)
+

class PoolRAIDInfo(object):
_MEMBER_TYPE_MAP = {
@@ -298,6 +305,11 @@ def member_type_to_str(member_type):
return _enum_type_to_str(
member_type, PoolRAIDInfo._MEMBER_TYPE_MAP)

+class VcrCap(object):
+ def __init__(self, system_id, raid_types, strip_sizes):
+ self.system_id = system_id
+ self.raid_types = raid_types
+ self.strip_sizes = strip_sizes

class DisplayData(object):

@@ -598,6 +610,25 @@ def __init__(self):
'value_conv_human': POOL_RAID_INFO_VALUE_CONV_HUMAN,
}

+ VCR_CAP_HEADER = OrderedDict()
+ VCR_CAP_HEADER['system_id'] = 'System ID'
+ VCR_CAP_HEADER['raid_types'] = 'Supported RAID Types'
+ VCR_CAP_HEADER['strip_sizes'] = 'Supported Strip Sizes'
+
+ VCR_CAP_COLUMN_SKIP_KEYS = []
+
+ VCR_CAP_VALUE_CONV_ENUM = {
+ 'raid_types': lambda i: [VolumeRAIDInfo.raid_type_to_str(x) for x in i]
+ }
+ VCR_CAP_VALUE_CONV_HUMAN = ['strip_sizes']
+
+ VALUE_CONVERT[VcrCap] = {
+ 'headers': VCR_CAP_HEADER,
+ 'column_skip_keys': VCR_CAP_COLUMN_SKIP_KEYS,
+ 'value_conv_enum': VCR_CAP_VALUE_CONV_ENUM,
+ 'value_conv_human': VCR_CAP_VALUE_CONV_HUMAN,
+ }
+
@staticmethod
def _get_man_pro_value(obj, key, value_conv_enum, value_conv_human,
flag_human, flag_enum):
@@ -607,7 +638,10 @@ def _get_man_pro_value(obj, key, value_conv_enum, value_conv_human,
value = value_conv_enum[key](value)
if flag_human:
if key in value_conv_human:
- value = size_bytes_2_size_human(value)
+ if type(value) is list:
+ value = list(size_bytes_2_size_human(s) for s in value)
+ else:
+ value = size_bytes_2_size_human(value)
return value

@staticmethod
--
1.8.3.1
Gris Ge
2015-04-16 15:05:34 UTC
* New methods:
For detailed information, please refer to the docstrings of the Python methods.

* Python:
* lsm.Client.volume_create_raid_cap_get(self, system,
flags=lsm.Client.FLAG_RSVD)

Query supported RAID types and strip sizes of volume_create_raid().

* lsm.Client.volume_create_raid(self, name, raid_type, disks,
strip_size,
flags=lsm.Client.FLAG_RSVD)
Create a disk RAID pool and allocate its full space to a single
volume.

* C user API:
* int lsm_volume_create_raid_cap_get(
lsm_connect *c, lsm_system *system,
uint32_t **supported_raid_types,
uint32_t *supported_raid_type_count,
uint32_t **supported_strip_sizes,
uint32_t *supported_strip_size_count,
lsm_flag flags);
* int lsm_volume_create_raid(
lsm_connect *c, const char *name, lsm_volume_raid_type raid_type,
lsm_disk *disks[], uint32_t disk_count, uint32_t strip_size,
lsm_volume **new_volume, lsm_flag flags);

* C plugin interfaces:
* typedef int (*lsm_plug_volume_create_raid_cap_get)(
lsm_plugin_ptr c, lsm_system *system,
uint32_t **supported_raid_types,
uint32_t *supported_raid_type_count,
uint32_t **supported_strip_sizes, uint32_t
*supported_strip_size_count, lsm_flag flags)

* typedef int (*lsm_plug_volume_create_raid)(
lsm_plugin_ptr c, const char *name,
lsm_volume_raid_type raid_type, lsm_disk *disks[], uint32_t
disk_count, uint32_t strip_size, lsm_volume **new_volume,
lsm_flag flags);

* Added the two typedefs above to the 'lsm_ops_v1_2' struct.

* C internal functions for converting between C uint32_t arrays and C++ Values:
* values_to_uint32_array(
Value &value, uint32_t **uint32_array, uint32_t *count)
* uint32_array_to_value(
uint32_t *uint32_array, uint32_t count)

* New capability:
* Python: lsm.Capabilities.VOLUME_CREATE_RAID
* C: LSM_CAP_VOLUME_CREATE_RAID

* New errors:
* Python: lsm.ErrorNumber.NOT_FOUND_DISK, lsm.ErrorNumber.DISK_NOT_FREE
* C: LSM_ERR_NOT_FOUND_DISK, LSM_ERR_DISK_NOT_FREE

* Design notes:
* Why not create the pool only and allocate volumes from it later?
Neither HP Smart Array nor LSI MegaRAID supports this.
Both require at least one volume on any pool (RAID group), and a
pool (RAID group) is only created when creating a volume.
This is also the root reason that this method is dedicated to
hardware RAID cards, not to SAN or NAS.

* Why not allow creating multiple volumes instead of a single volume
consuming all space?
Even though both vendors above support creating multiple volumes with
a simple command, it would complicate the API design and introduce
many new errors. In my humble opinion, the most common use case is to
create a single volume and let the OS create partitions on it.

* Why not store the supported RAID types and strip sizes in the existing
lsm.Client.capabilities() call?
RAID types could be stored in capabilities(), but both the plugin
implementation and the user code would then have to convert capabilities
to RAID types. The strip size is a number; even though the two vendors
mentioned above support the same list of strip sizes, there is no reason
to build in that limitation, and again, a conversion from capability to
strip size integer would be required if it were saved in capabilities().

* Why not store the supported RAID types returned by
volume_create_raid_cap_get() in a bitmap?
A conversion would be required to do so. We just want to make life
easier for plugin developers and API users.

* Why is strip size a mandatory argument, and where is the strip size
change method?
Changing the strip size after volume creation normally takes a long
time due to data migration. We will not introduce a strip size change
method without a convincing use case.
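
* A client-side sketch tying the capability query and the creation call
together (the 'sim://' URI, volume name and disk choices are placeholders):

    import lsm
    from lsm import Disk, Volume

    c = lsm.Client('sim://')
    lsm_sys = c.systems()[0]
    raid_types, strip_sizes = c.volume_create_raid_cap_get(lsm_sys)

    free_disks = [d for d in c.disks() if d.status & Disk.STATUS_FREE]
    if Volume.RAID_TYPE_RAID1 in raid_types and len(free_disks) >= 2:
        # VCR_STRIP_SIZE_DEFAULT lets the card pick its default strip size.
        vol = c.volume_create_raid(
            'vol0', Volume.RAID_TYPE_RAID1, free_disks[:2],
            Volume.VCR_STRIP_SIZE_DEFAULT)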

Signed-off-by: Gris Ge <***@redhat.com>
---
c_binding/include/libstoragemgmt/libstoragemgmt.h | 62 +++++++
.../libstoragemgmt/libstoragemgmt_capabilities.h | 1 +
.../include/libstoragemgmt/libstoragemgmt_error.h | 2 +
.../libstoragemgmt/libstoragemgmt_plug_interface.h | 57 ++++++
.../include/libstoragemgmt/libstoragemgmt_types.h | 3 +
c_binding/lsm_convert.cpp | 41 +++++
c_binding/lsm_convert.hpp | 12 ++
c_binding/lsm_mgmt.cpp | 118 ++++++++++++
c_binding/lsm_plugin_ipc.cpp | 94 +++++++++-
python_binding/lsm/_client.py | 203 +++++++++++++++++++++
python_binding/lsm/_common.py | 3 +
python_binding/lsm/_data.py | 3 +
12 files changed, 598 insertions(+), 1 deletion(-)

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt.h b/c_binding/include/libstoragemgmt/libstoragemgmt.h
index 5d73919..4b4834f 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt.h
@@ -891,6 +891,68 @@ int LSM_DLL_EXPORT lsm_pool_raid_info(
lsm_pool_member_type *member_type, lsm_string_list **member_ids,
lsm_flag flags);

+/**
+ * Query all supported RAID types and strip sizes which could be used
+ * in the lsm_volume_create_raid() function.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid connection
+ * @param[in] system
+ * The lsm_sys type.
+ * @param[out] supported_raid_types
+ * The pointer of uint32_t array. Containing
+ * lsm_volume_raid_type values.
+ * You need to free this memory by yourself.
+ * @param[out] supported_raid_type_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_raid_types array.
+ * @param[out] supported_strip_sizes
+ * The pointer of uint32_t array. Containing
+ * all supported strip sizes.
+ * You need to free this memory by yourself.
+ * @param[out] supported_strip_size_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_strip_sizes array.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+int LSM_DLL_EXPORT lsm_volume_create_raid_cap_get(
+ lsm_connect *c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags);
+
+/**
+ * Create a disk RAID pool and allocate its full space to a single new volume.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid connection
+ * @param[in] name String. Name for the new volume. It might be ignored or
+ * altered on some hardware RAID cards in order to fit
+ * their limitations.
+ * @param[in] raid_type
+ * Enum of lsm_volume_raid_type. Please refer to the returns
+ * of the lsm_volume_create_raid_cap_get() function for
+ * supported RAID types.
+ * @param[in] disks
+ * An array of lsm_disk types
+ * @param[in] disk_count
+ * The count of lsm_disk in 'disks' argument.
+ * Count starts with 1.
+ * @param[in] strip_size
+ * uint32_t. The strip size in bytes. Please refer to
+ * the returns of lsm_volume_create_raid_cap_get() function
+ * for supported strip sizes.
+ * @param[out] new_volume
+ * Newly created volume, Pointer to the lsm_volume type
+ * pointer.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+int LSM_DLL_EXPORT lsm_volume_create_raid(
+ lsm_connect *c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags);
+
#ifdef __cplusplus
}
#endif
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
index 9e96035..1c3f47b 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
@@ -117,6 +117,7 @@ typedef enum {
LSM_CAP_POOL_RAID_INFO = 221,
/**^ Query pool RAID information */

+ LSM_CAP_VOLUME_CREATE_RAID = 222,
} lsm_capability_type;

/**
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_error.h b/c_binding/include/libstoragemgmt/libstoragemgmt_error.h
index 0b28ddc..29c460a 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_error.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_error.h
@@ -63,6 +63,7 @@ typedef enum {
LSM_ERR_NOT_FOUND_VOLUME = 205, /**< Specified volume not found */
LSM_ERR_NOT_FOUND_NFS_EXPORT = 206, /**< NFS export not found */
LSM_ERR_NOT_FOUND_SYSTEM = 208, /**< System not found */
+ LSM_ERR_NOT_FOUND_DISK = 209,

LSM_ERR_NOT_LICENSED = 226, /**< Need license for feature */

@@ -90,6 +91,7 @@ typedef enum {

LSM_ERR_EMPTY_ACCESS_GROUP = 511,
LSM_ERR_POOL_NOT_READY = 512,
+ LSM_ERR_DISK_NOT_FREE = 513,

} lsm_error_number;

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
index a3dd865..bdb02e8 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
@@ -853,6 +853,61 @@ typedef int (*lsm_plug_pool_raid_info)(
lsm_pool_member_type *member_type, lsm_string_list **member_ids,
lsm_flag flags);

+/**
+ * Query all supported RAID types and strip sizes which could be used
+ * in the lsm_volume_create_raid() function.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid lsm plug-in pointer
+ * @param[in] system
+ * The lsm_sys type.
+ * @param[out] supported_raid_types
+ *                      Pointer to a uint32_t array containing
+ *                      lsm_volume_raid_type values.
+ * @param[out] supported_raid_type_count
+ *                      Pointer to a uint32_t holding the item count of
+ *                      the supported_raid_types array.
+ * @param[out] supported_strip_sizes
+ *                      Pointer to a uint32_t array containing
+ *                      all supported strip sizes.
+ * @param[out] supported_strip_size_count
+ *                      Pointer to a uint32_t holding the item count of
+ *                      the supported_strip_sizes array.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+typedef int (*lsm_plug_volume_create_raid_cap_get)(
+ lsm_plugin_ptr c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags);
+
+/**
+ * Create a disk RAID pool and allocate its full space to a single new volume.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid lsm plug-in pointer
+ * @param[in] name String. Name for the new volume. It might be ignored or
+ *                 altered on some hardware RAID cards in order to fit
+ *                 their limitations.
+ * @param[in] raid_type
+ * Enum of lsm_volume_raid_type.
+ * @param[in] disks
+ * An array of lsm_disk types
+ * @param[in] disk_count
+ * The count of lsm_disk in 'disks' argument.
+ * @param[in] strip_size
+ * uint32_t. The strip size in bytes.
+ * @param[out] new_volume
+ * Newly created volume, Pointer to the lsm_volume type
+ * pointer.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+typedef int (*lsm_plug_volume_create_raid)(
+ lsm_plugin_ptr c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags);
+
/** \struct lsm_ops_v1_2
* \brief Functions added in version 1.2
 * NOTE: This structure will change during the development until version 1.2
@@ -862,6 +917,8 @@ struct lsm_ops_v1_2 {
lsm_plug_volume_raid_info vol_raid_info;
/**^ Query volume RAID information*/
lsm_plug_pool_raid_info pool_raid_info;
+ lsm_plug_volume_create_raid_cap_get vol_create_raid_cap_get;
+ lsm_plug_volume_create_raid vol_create_raid;
};

/**
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
index 115ec5a..667c320 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
@@ -297,6 +297,9 @@ typedef enum {
LSM_TARGET_PORT_TYPE_ISCSI = 4
} lsm_target_port_type;

+#define LSM_VOLUME_VCR_STRIP_SIZE_DEFAULT 0
+/** ^ Plugin and hardware RAID will use their default strip size */
+
#ifdef __cplusplus
}
#endif
diff --git a/c_binding/lsm_convert.cpp b/c_binding/lsm_convert.cpp
index 7baf2e6..9b799ab 100644
--- a/c_binding/lsm_convert.cpp
+++ b/c_binding/lsm_convert.cpp
@@ -617,3 +617,44 @@ Value target_port_to_value(lsm_target_port *tp)
}
return Value();
}
+
+int values_to_uint32_array(
+ Value &value, uint32_t **uint32_array, uint32_t *count)
+{
+ int rc = LSM_ERR_OK;
+ try {
+ std::vector<Value> data = value.asArray();
+ *count = data.size();
+ if (*count){
+ *uint32_array = (uint32_t *)malloc(sizeof(uint32_t) * *count);
+ if (*uint32_array){
+ uint32_t i;
+ for( i = 0; i < *count; i++ ){
+ (*uint32_array)[i] = data[i].asUint32_t();
+ }
+            }else{
+                rc = LSM_ERR_NO_MEMORY;
+                /* Reset count so callers never see a non-zero count
+                 * with a NULL array on allocation failure. */
+                *count = 0;
+            }
+ }
+ } catch( const ValueException &ve) {
+ if( *count ) {
+ free(*uint32_array);
+ *uint32_array = NULL;
+ *count = 0;
+ }
+ rc = LSM_ERR_LIB_BUG;
+ }
+ return rc;
+}
+
+Value uint32_array_to_value(uint32_t *uint32_array, uint32_t count)
+{
+ std::vector<Value> rc;
+ if( uint32_array && count ) {
+ uint32_t i;
+ for( i = 0; i < count; i++ ) {
+ rc.push_back(uint32_array[i]);
+ }
+ }
+ return rc;
+}
diff --git a/c_binding/lsm_convert.hpp b/c_binding/lsm_convert.hpp
index fe1e91d..907a229 100644
--- a/c_binding/lsm_convert.hpp
+++ b/c_binding/lsm_convert.hpp
@@ -284,4 +284,16 @@ lsm_target_port LSM_DLL_LOCAL *value_to_target_port(Value &tp);
*/
Value LSM_DLL_LOCAL target_port_to_value(lsm_target_port *tp);

+/**
+ * Converts a value to array of uint32.
+ */
+int LSM_DLL_LOCAL values_to_uint32_array(
+ Value &value, uint32_t **uint32_array, uint32_t *count);
+
+/**
+ * Converts an array of uint32 to a value.
+ */
+Value LSM_DLL_LOCAL uint32_array_to_value(
+ uint32_t *uint32_array, uint32_t count);
+
#endif
diff --git a/c_binding/lsm_mgmt.cpp b/c_binding/lsm_mgmt.cpp
index ce948bf..7b974f1 100644
--- a/c_binding/lsm_mgmt.cpp
+++ b/c_binding/lsm_mgmt.cpp
@@ -2257,3 +2257,121 @@ int lsm_nfs_export_delete( lsm_connect *c, lsm_nfs_export *e, lsm_flag flags)
int rc = rpc(c, "export_remove", parameters, response);
return rc;
}
+
+int lsm_volume_create_raid_cap_get(
+ lsm_connect *c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags)
+{
+ CONN_SETUP(c);
+
+ if( !supported_raid_types || !supported_raid_type_count ||
+ !supported_strip_sizes || !supported_strip_size_count) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ std::map<std::string, Value> p;
+ p["system"] = system_to_value(system);
+ p["flags"] = Value(flags);
+
+ Value parameters(p);
+ Value response;
+
+ int rc = rpc(c, "volume_create_raid_cap_get", parameters, response);
+ try {
+ std::vector<Value> j = response.asArray();
+
+ rc = values_to_uint32_array(
+ j[0], supported_raid_types, supported_raid_type_count);
+
+ if (rc != LSM_ERR_OK){
+ return rc;
+ }
+
+ rc = values_to_uint32_array(
+ j[1], supported_strip_sizes, supported_strip_size_count);
+ } catch( const ValueException &ve ) {
+ rc = logException(c, LSM_ERR_LIB_BUG, "Unexpected type",
+ ve.what());
+ }
+ return rc;
+}
+
+int lsm_volume_create_raid(
+ lsm_connect *c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags)
+{
+ CONN_SETUP(c);
+
+    if (disk_count == 0){
+        return logException(
+            c, LSM_ERR_INVALID_ARGUMENT, "Requires at least one disk",
+            NULL);
+    }
+
+ if (raid_type == LSM_VOLUME_RAID_TYPE_RAID1 && disk_count != 2){
+ return logException(
+ c, LSM_ERR_INVALID_ARGUMENT, "RAID 1 only allows two disks", NULL);
+ }
+
+    if (raid_type == LSM_VOLUME_RAID_TYPE_RAID5 && disk_count < 3){
+        return logException(
+            c, LSM_ERR_INVALID_ARGUMENT, "RAID 5 requires 3 or more disks",
+            NULL);
+    }
+
+    if (raid_type == LSM_VOLUME_RAID_TYPE_RAID6 && disk_count < 4){
+        return logException(
+            c, LSM_ERR_INVALID_ARGUMENT, "RAID 6 requires 4 or more disks",
+            NULL);
+    }
+
+    /* RAID 10/50/60 require an even disk count plus a per-type minimum. */
+    if (raid_type == LSM_VOLUME_RAID_TYPE_RAID10 &&
+        (disk_count % 2 || disk_count < 4)){
+        return logException(
+            c, LSM_ERR_INVALID_ARGUMENT,
+            "RAID 10 requires an even disk count and 4 or more disks",
+            NULL);
+    }
+    if (raid_type == LSM_VOLUME_RAID_TYPE_RAID50 &&
+        (disk_count % 2 || disk_count < 6)){
+        return logException(
+            c, LSM_ERR_INVALID_ARGUMENT,
+            "RAID 50 requires an even disk count and 6 or more disks",
+            NULL);
+    }
+    if (raid_type == LSM_VOLUME_RAID_TYPE_RAID60 &&
+        (disk_count % 2 || disk_count < 8)){
+        return logException(
+            c, LSM_ERR_INVALID_ARGUMENT,
+            "RAID 60 requires an even disk count and 8 or more disks",
+            NULL);
+    }
+
+ if (CHECK_RP(new_volume)){
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ std::map<std::string, Value> p;
+ p["name"] = Value(name);
+ p["raid_type"] = Value((int32_t)raid_type);
+ p["strip_size"] = Value((int32_t)strip_size);
+ p["flags"] = Value(flags);
+ std::vector<Value> disks_value;
+    for (uint32_t i = 0; i < disk_count; i++){
+ disks_value.push_back(disk_to_value(disks[i]));
+ }
+ p["disks"] = disks_value;
+
+ Value parameters(p);
+ Value response;
+
+ int rc = rpc(c, "volume_create_raid", parameters, response);
+ if( LSM_ERR_OK == rc ) {
+ *new_volume = value_to_volume(response);
+ }
+ return rc;
+}
diff --git a/c_binding/lsm_plugin_ipc.cpp b/c_binding/lsm_plugin_ipc.cpp
index c2cd294..0f86371 100644
--- a/c_binding/lsm_plugin_ipc.cpp
+++ b/c_binding/lsm_plugin_ipc.cpp
@@ -2199,6 +2199,96 @@ static int iscsi_chap(lsm_plugin_ptr p, Value &params, Value &response)
}
return rc;
}
+static int handle_volume_create_raid_cap_get(
+ lsm_plugin_ptr p, Value &params, Value &response)
+{
+ int rc = LSM_ERR_NO_SUPPORT;
+ if( p && p->ops_v1_2 && p->ops_v1_2->vol_create_raid_cap_get ) {
+ Value v_system = params["system"];
+
+ if( IS_CLASS_SYSTEM(v_system) &&
+ LSM_FLAG_EXPECTED_TYPE(params) ) {
+ lsm_system *sys = value_to_system(v_system);
+
+            /* NULL-init so the frees below are safe even when the
+             * plug-in callback fails before allocating these arrays. */
+            uint32_t *supported_raid_types = NULL;
+            uint32_t supported_raid_type_count = 0;
+
+            uint32_t *supported_strip_sizes = NULL;
+            uint32_t supported_strip_size_count = 0;
+
+ rc = p->ops_v1_2->vol_create_raid_cap_get(
+ p, sys, &supported_raid_types, &supported_raid_type_count,
+ &supported_strip_sizes, &supported_strip_size_count,
+ LSM_FLAG_GET_VALUE(params));
+
+ if( LSM_ERR_OK == rc ) {
+ std::vector<Value> result;
+ result.push_back(
+ uint32_array_to_value(
+ supported_raid_types, supported_raid_type_count));
+ result.push_back(
+ uint32_array_to_value(
+ supported_strip_sizes, supported_strip_size_count));
+ response = Value(result);
+ }
+
+ free(supported_raid_types);
+ free(supported_strip_sizes);
+
+ } else {
+ rc = LSM_ERR_TRANSPORT_INVALID_ARG;
+ }
+ }
+ return rc;
+}
+
+static int handle_volume_create_raid(
+ lsm_plugin_ptr p, Value &params, Value &response)
+{
+ int rc = LSM_ERR_NO_SUPPORT;
+ if( p && p->ops_v1_2 && p->ops_v1_2->vol_create_raid ) {
+ Value v_name = params["name"];
+ Value v_raid_type = params["raid_type"];
+ Value v_strip_size = params["strip_size"];
+ Value v_disks = params["disks"];
+
+ if( Value::string_t == v_name.valueType() &&
+ Value::numeric_t == v_raid_type.valueType() &&
+ Value::numeric_t == v_strip_size.valueType() &&
+ Value::array_t == v_disks.valueType() &&
+ LSM_FLAG_EXPECTED_TYPE(params) ) {
+
+ lsm_disk **disks = NULL;
+ uint32_t disk_count = 0;
+ rc = value_array_to_disks(v_disks, &disks, &disk_count);
+ if( LSM_ERR_OK != rc ) {
+ return rc;
+ }
+
+ const char *name = v_name.asC_str();
+ lsm_volume_raid_type raid_type =
+ (lsm_volume_raid_type) v_raid_type.asInt32_t();
+ uint32_t strip_size = (uint32_t) v_strip_size.asInt32_t();
+
+ lsm_volume *new_vol = NULL;
+
+ rc = p->ops_v1_2->vol_create_raid(
+ p, name, raid_type, disks, disk_count, strip_size,
+ &new_vol, LSM_FLAG_GET_VALUE(params));
+
+ if( LSM_ERR_OK == rc ) {
+ response = volume_to_value(new_vol);
+ }
+
+ lsm_disk_record_array_free(disks, disk_count);
+
+ } else {
+ rc = LSM_ERR_TRANSPORT_INVALID_ARG;
+ }
+ }
+ return rc;
+}

/**
* map of function pointers
@@ -2254,7 +2344,9 @@ static std::map<std::string,handler> dispatch = static_map<std::string,handler>
("volumes_accessible_by_access_group", vol_accessible_by_ag)
("volumes", handle_volumes)
("volume_raid_info", handle_volume_raid_info)
- ("pool_raid_info", handle_pool_raid_info);
+ ("pool_raid_info", handle_pool_raid_info)
+ ("volume_create_raid", handle_volume_create_raid)
+ ("volume_create_raid_cap_get", handle_volume_create_raid_cap_get);

static int process_request(lsm_plugin_ptr p, const std::string &method, Value &request,
Value &response)
diff --git a/python_binding/lsm/_client.py b/python_binding/lsm/_client.py
index 1ed6e1e..c77a1d4 100644
--- a/python_binding/lsm/_client.py
+++ b/python_binding/lsm/_client.py
@@ -1159,3 +1159,206 @@ def pool_raid_info(self, pool, flags=FLAG_RSVD):
Pool not found.
"""
return self._tp.rpc('pool_raid_info', _del_self(locals()))
+
+ @_return_requires([[int], [int]])
+ def volume_create_raid_cap_get(self, system, flags=FLAG_RSVD):
+ """
+ lsm.Client.volume_create_raid_cap_get(
+ self, system, flags=lsm.Client.FLAG_RSVD)
+
+ Version:
+ 1.2
+ Usage:
+ This method is dedicated to local hardware RAID cards.
+            Query all supported RAID types and strip sizes which could
+            be used by the lsm.Client.volume_create_raid() method.
+ Parameters:
+ system (lsm.System)
+ Instance of lsm.System
+ flags (int)
+ Optional. Reserved for future use.
+ Should be set as lsm.Client.FLAG_RSVD.
+ Returns:
+ [raid_types, strip_sizes]
+ raid_types ([int])
+ List of integer, possible values are:
+ Volume.RAID_TYPE_RAID0
+ Volume.RAID_TYPE_RAID1
+ Volume.RAID_TYPE_RAID5
+ Volume.RAID_TYPE_RAID6
+ Volume.RAID_TYPE_RAID10
+ Volume.RAID_TYPE_RAID15
+ Volume.RAID_TYPE_RAID16
+ Volume.RAID_TYPE_RAID50
+ strip_sizes ([int])
+                List of integers. Strip sizes in bytes.
+ SpecialExceptions:
+ LsmError
+ lsm.ErrorNumber.NO_SUPPORT
+ Method not supported.
+ Sample:
+ lsm_client = lsm.Client('sim://')
+ lsm_sys = lsm_client.systems()[0]
+ disks = lsm_client.disks(
+ search_key='system_id', search_value=lsm_sys.id)
+
+ free_disks = [d for d in disks if d.status == Disk.STATUS_FREE]
+ supported_raid_types, supported_strip_sizes = \
+ lsm_client.volume_create_raid_cap_get(lsm_sys)
+ new_vol = lsm_client.volume_create_raid(
+ 'test_volume_create_raid', supported_raid_types[0],
+ free_disks, supported_strip_sizes[0])
+
+ Capability:
+ lsm.Capabilities.VOLUME_CREATE_RAID
+ This method is mandatory when volume_create_raid() is
+ supported.
+ """
+ return self._tp.rpc('volume_create_raid_cap_get', _del_self(locals()))
+
+ @_return_requires(Volume)
+ def volume_create_raid(self, name, raid_type, disks, strip_size,
+ flags=FLAG_RSVD):
+ """
+ lsm.Client.volume_create_raid(self, name, raid_type, disks,
+ strip_size, flags=lsm.Client.FLAG_RSVD)
+
+ Version:
+ 1.2
+ Usage:
+ This method is dedicated to local hardware RAID cards.
+            Create a RAID group pool from disks and allocate all of its
+            storage space to a single volume using the requested volume
+            name.
+            When dealing with RAID10, 50 or 60, the first half of 'disks'
+            will be placed into one bottom-layer RAID group.
+            The new volume and new pool will be created within the same
+            system as the provided disks.
+            This method does not allow duplicate calls; if a duplicate
+            call is issued, an LsmError with ErrorNumber.DISK_NOT_FREE
+            will be raised. Users should check disk.status for
+            Disk.STATUS_FREE before invoking this method, and check for
+            ErrorNumber.NAME_CONFLICT on duplicate volume names.
+ Parameters:
+ name (string)
+ The name for new volume.
+ The requested volume name might be ignored due to restriction
+ of hardware RAID vendors.
+ raid_type (int)
+ The RAID type for the RAID group, possible values are:
+ Volume.RAID_TYPE_RAID0
+ Volume.RAID_TYPE_RAID1
+ Volume.RAID_TYPE_RAID5
+ Volume.RAID_TYPE_RAID6
+ Volume.RAID_TYPE_RAID10
+ Volume.RAID_TYPE_RAID15
+ Volume.RAID_TYPE_RAID16
+ Volume.RAID_TYPE_RAID50
+                Please check the returns of volume_create_raid_cap_get()
+                for all supported RAID types of the current hardware
+                RAID card.
+            disks ([lsm.Disk])
+ A list of lsm.Disk objects. Free disks used for new RAID group.
+ strip_size (int)
+                The strip size in bytes.
+                When setting strip_size to Volume.VCR_STRIP_SIZE_DEFAULT,
+                it allows hardware RAID cards to choose their default value.
+                Please use the volume_create_raid_cap_get() method to get
+                all supported strip sizes of the current hardware RAID card.
+                The Volume.VCR_STRIP_SIZE_DEFAULT is always supported when
+                lsm.Capabilities.VOLUME_CREATE_RAID is supported.
+ flags (int)
+ Optional. Reserved for future use.
+ Should be set as lsm.Client.FLAG_RSVD.
+ Returns:
+ lsm.Volume
+ The lsm.Volume object for newly created volume.
+ SpecialExceptions:
+ LsmError
+ lsm.ErrorNumber.NO_SUPPORT
+ Method not supported or RAID type not supported.
+ lsm.ErrorNumber.DISK_NOT_FREE
+ Disk is not in Disk.STATUS_FREE status.
+ lsm.ErrorNumber.NOT_FOUND_DISK
+ Disk not found
+ lsm.ErrorNumber.INVALID_ARGUMENT
+ 1. Invalid input argument data.
+ 2. Disks are not from the same system.
+ 3. Disks are not from the same enclosure.
+ 4. Invalid strip_size.
+                    5. Disk count does not meet the minimum requirement:
+ RAID1: len(disks) == 2
+ RAID5: len(disks) >= 3
+ RAID6: len(disks) >= 4
+ RAID10: len(disks) % 2 == 0 and len(disks) >= 4
+ RAID50: len(disks) % 2 == 0 and len(disks) >= 6
+ RAID60: len(disks) % 2 == 0 and len(disks) >= 8
+ lsm.ErrorNumber.NAME_CONFLICT
+                    Requested name is already used by another volume.
+ Sample:
+ lsm_client = lsm.Client('sim://')
+ disks = lsm_client.disks()
+ free_disks = [d for d in disks if d.status == Disk.STATUS_FREE]
+            new_vol = lsm_client.volume_create_raid(
+                'raid0_vol1', Volume.RAID_TYPE_RAID0, free_disks,
+                Volume.VCR_STRIP_SIZE_DEFAULT)
+ Capability:
+ lsm.Capabilities.VOLUME_CREATE_RAID
+                Indicates the current system supports the
+                volume_create_raid() method.
+ At least one RAID type should be supported.
+ The strip_size == Volume.VCR_STRIP_SIZE_DEFAULT is supported.
+ lsm.Capabilities.VOLUME_CREATE_RAID_0
+ lsm.Capabilities.VOLUME_CREATE_RAID_1
+ lsm.Capabilities.VOLUME_CREATE_RAID_5
+ lsm.Capabilities.VOLUME_CREATE_RAID_6
+ lsm.Capabilities.VOLUME_CREATE_RAID_10
+ lsm.Capabilities.VOLUME_CREATE_RAID_50
+ lsm.Capabilities.VOLUME_CREATE_RAID_60
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_8_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_16_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_32_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_64_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_128_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_256_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_512_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_1024_KIB
+ """
+ if len(disks) == 0:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: no disk included")
+
+        if raid_type == Volume.RAID_TYPE_RAID1 and len(disks) != 2:
+            raise LsmError(
+                ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input disks argument: RAID 1 only allows 2 disks")
+
+        if raid_type == Volume.RAID_TYPE_RAID5 and len(disks) < 3:
+            raise LsmError(
+                ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input disks argument: RAID 5 requires 3 or more "
+                "disks")
+
+        if raid_type == Volume.RAID_TYPE_RAID6 and len(disks) < 4:
+            raise LsmError(
+                ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input disks argument: RAID 6 requires 4 or more "
+                "disks")
+
+        if raid_type == Volume.RAID_TYPE_RAID10:
+            if len(disks) % 2 or len(disks) < 4:
+                raise LsmError(
+                    ErrorNumber.INVALID_ARGUMENT,
+                    "Illegal input disks argument: RAID 10 requires an "
+                    "even disk count and 4 or more disks")
+
+        if raid_type == Volume.RAID_TYPE_RAID50:
+            if len(disks) % 2 or len(disks) < 6:
+                raise LsmError(
+                    ErrorNumber.INVALID_ARGUMENT,
+                    "Illegal input disks argument: RAID 50 requires an "
+                    "even disk count and 6 or more disks")
+
+        if raid_type == Volume.RAID_TYPE_RAID60:
+            if len(disks) % 2 or len(disks) < 8:
+                raise LsmError(
+                    ErrorNumber.INVALID_ARGUMENT,
+                    "Illegal input disks argument: RAID 60 requires an "
+                    "even disk count and 8 or more disks")
+
+ return self._tp.rpc('volume_create_raid', _del_self(locals()))
diff --git a/python_binding/lsm/_common.py b/python_binding/lsm/_common.py
index 4c87661..be0c734 100644
--- a/python_binding/lsm/_common.py
+++ b/python_binding/lsm/_common.py
@@ -447,6 +447,7 @@ class ErrorNumber(object):
NOT_FOUND_VOLUME = 205
NOT_FOUND_NFS_EXPORT = 206
NOT_FOUND_SYSTEM = 208
+ NOT_FOUND_DISK = 209

NOT_LICENSED = 226

@@ -478,6 +479,8 @@ class ErrorNumber(object):

POOL_NOT_READY = 512 # Pool is not ready for create/resize/etc

+    DISK_NOT_FREE = 513  # Disk is not in Disk.STATUS_FREE status.
+
_LOCALS = locals()

@staticmethod
diff --git a/python_binding/lsm/_data.py b/python_binding/lsm/_data.py
index 2d60562..4c89011 100644
--- a/python_binding/lsm/_data.py
+++ b/python_binding/lsm/_data.py
@@ -306,6 +306,8 @@ class Volume(IData):
MIN_IO_SIZE_UNKNOWN = 0
OPT_IO_SIZE_UNKNOWN = 0

+ VCR_STRIP_SIZE_DEFAULT = 0
+
def __init__(self, _id, _name, _vpd83, _block_size, _num_of_blocks,
_admin_state, _system_id, _pool_id, _plugin_data=None):
self._id = _id # Identifier
@@ -760,6 +762,7 @@ class Capabilities(IData):

DISKS = 220
POOL_RAID_INFO = 221
+ VOLUME_CREATE_RAID = 222

def _to_dict(self):
return {'class': self.__class__.__name__,
--
1.8.3.1
Gris Ge
2015-04-16 15:05:36 UTC
Permalink
* Add support of these methods:
* volume_create_raid()
* volume_create_raid_cap_get()

* Add 'strip_size' property into pool.
* Add 'is_hw_raid_vol' property into volume.
If true, volume_delete() will also delete parent pool.

* Bumped data version to 3.4.
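
A rough sketch of the behavior this enables against the simulator (not
part of this patch; assumes the simulator has free disks available):

    import lsm

    c = lsm.Client('sim://')
    free_disks = [d for d in c.disks() if d.status & lsm.Disk.STATUS_FREE]
    vol = c.volume_create_raid(
        'hw_raid_vol', lsm.Volume.RAID_TYPE_RAID1, free_disks[:2],
        lsm.Volume.VCR_STRIP_SIZE_DEFAULT)
    pool_id = vol.pool_id
    c.volume_delete(vol)
    # The auto-created parent pool is gone along with the volume:
    assert not c.pools(search_key='id', search_value=pool_id)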

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/sim/simarray.py | 119 ++++++++++++++++++++++++++++++++++++++++++------
plugin/sim/simulator.py | 8 ++++
2 files changed, 113 insertions(+), 14 deletions(-)

diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index 5f1c1f8..533ddd8 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -123,7 +123,7 @@ def data_disk_count(raid_type, disk_count):


class BackStore(object):
- VERSION = "3.3"
+ VERSION = "3.4"
VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
JOB_DEFAULT_DURATION = 1
JOB_DATA_TYPE_VOL = 1
@@ -133,7 +133,7 @@ class BackStore(object):
SYS_ID = "sim-01"
SYS_NAME = "LSM simulated storage plug-in"
BLK_SIZE = 512
- STRIP_SIZE = 131072 # 128 KiB
+ DEFAULT_STRIP_SIZE = 128 * 1024 # 128 KiB

_LIST_SPLITTER = '#'

@@ -142,7 +142,8 @@ class BackStore(object):
POOL_KEY_LIST = [
'id', 'name', 'status', 'status_info',
'element_type', 'unsupported_actions', 'raid_type',
- 'member_type', 'parent_pool_id', 'total_space', 'free_space']
+ 'member_type', 'parent_pool_id', 'total_space', 'free_space',
+ 'strip_size']

DISK_KEY_LIST = [
'id', 'name', 'total_space', 'disk_type', 'status',
@@ -150,7 +151,7 @@ class BackStore(object):

VOL_KEY_LIST = [
'id', 'vpd83', 'name', 'total_space', 'consumed_size',
- 'pool_id', 'admin_state', 'thinp']
+ 'pool_id', 'admin_state', 'thinp', 'is_hw_raid_vol']

TGT_KEY_LIST = [
'id', 'port_type', 'service_address', 'network_address',
@@ -172,6 +173,16 @@ class BackStore(object):
'options', 'exp_root_hosts_str', 'exp_rw_hosts_str',
'exp_ro_hosts_str']

+ SUPPORTED_VCR_RAID_TYPES = [
+ Volume.RAID_TYPE_RAID0, Volume.RAID_TYPE_RAID1,
+ Volume.RAID_TYPE_RAID5, Volume.RAID_TYPE_RAID6,
+ Volume.RAID_TYPE_RAID10, Volume.RAID_TYPE_RAID50,
+ Volume.RAID_TYPE_RAID60]
+
+ SUPPORTED_VCR_STRIP_SIZES = [
+ 8 * 1024, 16 * 1024, 32 * 1024, 64 * 1024, 128 * 1024, 256 * 1024,
+ 512 * 1024, 1024 * 1024]
+
def __init__(self, statefile, timeout):
if not os.path.exists(statefile):
os.close(os.open(statefile, os.O_WRONLY | os.O_CREAT))
@@ -218,6 +229,7 @@ def __init__(self, statefile, timeout):
"parent_pool_id INTEGER, " # Indicate this pool is allocated from
# other pool
"member_type INTEGER, "
+ "strip_size INTEGER, "
"total_space LONG);\n") # total_space here is only for
# sub-pool (pool from pool)

@@ -244,6 +256,10 @@ def __init__(self, statefile, timeout):
# support.
"admin_state INTEGER, "
"thinp INTEGER NOT NULL, "
+            "is_hw_raid_vol INTEGER, "  # Once its volume is deleted, the
+                                        # pool will be deleted too. For HW
+                                        # RAID simulation only.
+
"pool_id INTEGER NOT NULL, "
"FOREIGN KEY(pool_id) "
"REFERENCES pools(id) ON DELETE CASCADE);\n")
@@ -362,6 +378,7 @@ def __init__(self, statefile, timeout):
pool0.raid_type,
pool0.member_type,
pool0.parent_pool_id,
+ pool0.strip_size,
pool1.total_space total_space,
pool1.total_space -
pool2.vol_consumed_size -
@@ -450,6 +467,7 @@ def __init__(self, statefile, timeout):
vol.pool_id,
vol.admin_state,
vol.thinp,
+ vol.is_hw_raid_vol,
vol_mask.ag_id ag_id
FROM
volumes vol
@@ -926,7 +944,11 @@ def sim_pool_of_id(self, sim_pool_id):
ErrorNumber.NOT_FOUND_POOL, "Pool")

def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
- element_type, unsupported_actions=0):
+ element_type, unsupported_actions=0,
+ strip_size=0):
+ if strip_size == 0:
+ strip_size = BackStore.DEFAULT_STRIP_SIZE
+
self._data_add(
'pools',
{
@@ -937,6 +959,7 @@ def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
'unsupported_actions': unsupported_actions,
'raid_type': raid_type,
'member_type': Pool.MEMBER_TYPE_DISK,
+ 'strip_size': strip_size,
})

data_disk_count = PoolRAID.data_disk_count(
@@ -1029,7 +1052,9 @@ def _block_rounding(size_bytes):
return (size_bytes + BackStore.BLK_SIZE - 1) / \
BackStore.BLK_SIZE * BackStore.BLK_SIZE

- def sim_vol_create(self, name, size_bytes, sim_pool_id, thinp):
+ def sim_vol_create(self, name, size_bytes, sim_pool_id, thinp,
+ is_hw_raid_vol=0):
+
size_bytes = BackStore._block_rounding(size_bytes)
self._check_pool_free_space(sim_pool_id, size_bytes)
sim_vol = dict()
@@ -1040,6 +1065,7 @@ def sim_vol_create(self, name, size_bytes, sim_pool_id, thinp):
sim_vol['total_space'] = size_bytes
sim_vol['consumed_size'] = size_bytes
sim_vol['admin_state'] = Volume.ADMIN_STATE_ENABLED
+ sim_vol['is_hw_raid_vol'] = is_hw_raid_vol

try:
self._data_add("volumes", sim_vol)
@@ -1055,7 +1081,7 @@ def sim_vol_delete(self, sim_vol_id):
This does not check whether the volume exists or not.
"""
# Check existence.
- self.sim_vol_of_id(sim_vol_id)
+ sim_vol = self.sim_vol_of_id(sim_vol_id)

if self._sim_ag_ids_of_masked_vol(sim_vol_id):
raise LsmError(
@@ -1070,8 +1096,11 @@ def sim_vol_delete(self, sim_vol_id):
raise LsmError(
ErrorNumber.PLUGIN_BUG,
"Requested volume is a replication source")
-
- self._data_delete("volumes", 'id="%s"' % sim_vol_id)
+ if sim_vol['is_hw_raid_vol']:
+            # Delete the parent pool instead for a HW RAID volume.
+ self._data_delete("pools", 'id="%s"' % sim_vol['pool_id'])
+ else:
+ self._data_delete("volumes", 'id="%s"' % sim_vol_id)

def sim_vol_mask(self, sim_vol_id, sim_ag_id):
self.sim_vol_of_id(sim_vol_id)
@@ -1759,7 +1788,7 @@ def disks(self):

@_handle_errors
def volume_create(self, pool_id, vol_name, size_bytes, thinp, flags=0,
- _internal_use=False):
+ _internal_use=False, _is_hw_raid_vol=0):
"""
The '_internal_use' parameter is only for SimArray internal use.
This method will return the new sim_vol id instead of job_id when
@@ -1769,7 +1798,8 @@ def volume_create(self, pool_id, vol_name, size_bytes, thinp, flags=0,
self.bs_obj.trans_begin()

new_sim_vol_id = self.bs_obj.sim_vol_create(
- vol_name, size_bytes, SimArray._sim_pool_id_of(pool_id), thinp)
+ vol_name, size_bytes, SimArray._sim_pool_id_of(pool_id),
+ thinp, is_hw_raid_vol=_is_hw_raid_vol)

if _internal_use:
return new_sim_vol_id
@@ -2251,9 +2281,9 @@ def volume_raid_info(self, lsm_vol):
min_io_size = BackStore.BLK_SIZE
opt_io_size = BackStore.BLK_SIZE
else:
- strip_size = BackStore.STRIP_SIZE
- min_io_size = BackStore.STRIP_SIZE
- opt_io_size = int(data_disk_count * BackStore.STRIP_SIZE)
+ strip_size = sim_pool['strip_size']
+ min_io_size = strip_size
+ opt_io_size = int(data_disk_count * strip_size)

return [raid_type, strip_size, disk_count, min_io_size, opt_io_size]

@@ -2278,3 +2308,64 @@ def pool_raid_info(self, lsm_pool):
member_type = Pool.MEMBER_TYPE_UNKNOWN

return sim_pool['raid_type'], member_type, member_ids
+
+ @_handle_errors
+ def volume_create_raid_cap_get(self, system):
+ if system.id != BackStore.SYS_ID:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_SYSTEM,
+ "System not found")
+ return (
+ BackStore.SUPPORTED_VCR_RAID_TYPES,
+ BackStore.SUPPORTED_VCR_STRIP_SIZES)
+
+ @_handle_errors
+ def volume_create_raid(self, name, raid_type, disks, strip_size):
+ if raid_type not in BackStore.SUPPORTED_VCR_RAID_TYPES:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'raid_type' is not supported")
+
+ if strip_size == Volume.VCR_STRIP_SIZE_DEFAULT:
+ strip_size = BackStore.DEFAULT_STRIP_SIZE
+ elif strip_size not in BackStore.SUPPORTED_VCR_STRIP_SIZES:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'strip_size' is not supported")
+
+ self.bs_obj.trans_begin()
+ pool_name = "Pool for volume %s" % name
+ sim_disk_ids = [
+ SimArray._lsm_id_to_sim_id(
+ d.id,
+ LsmError(ErrorNumber.NOT_FOUND_DISK, "Disk not found"))
+ for d in disks]
+
+ for disk in disks:
+ if not disk.status & Disk.STATUS_FREE:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+                    "Disk %s is not in Disk.STATUS_FREE status" % disk.id)
+ try:
+ sim_pool_id = self.bs_obj.sim_pool_create_from_disk(
+ name=pool_name,
+ raid_type=raid_type,
+ sim_disk_ids=sim_disk_ids,
+ element_type=Pool.ELEMENT_TYPE_VOLUME,
+ unsupported_actions=Pool.UNSUPPORTED_VOLUME_GROW |
+ Pool.UNSUPPORTED_VOLUME_SHRINK,
+ strip_size=strip_size)
+ except sqlite3.IntegrityError as sql_error:
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+                "Name '%s' is already in use by another volume" % name)
+
+ sim_pool = self.bs_obj.sim_pool_of_id(sim_pool_id)
+ sim_vol_id = self.volume_create(
+ SimArray._sim_id_to_lsm_id(sim_pool_id, 'POOL'), name,
+ sim_pool['free_space'], Volume.PROVISION_FULL,
+ _internal_use=True, _is_hw_raid_vol=1)
+ sim_vol = self.bs_obj.sim_vol_of_id(sim_vol_id)
+ self.bs_obj.trans_commit()
+ return SimArray._sim_vol_2_lsm(sim_vol)
diff --git a/plugin/sim/simulator.py b/plugin/sim/simulator.py
index 4638778..4010eb3 100644
--- a/plugin/sim/simulator.py
+++ b/plugin/sim/simulator.py
@@ -295,3 +295,11 @@ def volume_raid_info(self, volume, flags=0):

def pool_raid_info(self, pool, flags=0):
return self.sim_array.pool_raid_info(pool)
+
+ def volume_create_raid_cap_get(self, system, flags=0):
+ return self.sim_array.volume_create_raid_cap_get(system)
+
+ def volume_create_raid(self, name, raid_type, disks, strip_size,
+ flags=0):
+ return self.sim_array.volume_create_raid(
+ name, raid_type, disks, strip_size)
--
1.8.3.1
Gris Ge
2015-04-16 15:05:37 UTC
Permalink
* Updated lsm_ops_v1_2 structure by adding two methods:
* lsm_plug_volume_create_raid_cap_get()
* lsm_plug_volume_create_raid()

* These two methods simply return no support.

* Tested the C plugin interface by returning fake data, then changed it
back to no support. We will implement the real methods in the future.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/simc/simc_lsmplugin.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)

diff --git a/plugin/simc/simc_lsmplugin.c b/plugin/simc/simc_lsmplugin.c
index 24d3b8c..2af2dfc 100644
--- a/plugin/simc/simc_lsmplugin.c
+++ b/plugin/simc/simc_lsmplugin.c
@@ -1001,9 +1001,29 @@ static int pool_raid_info(
return rc;
}

+static int volume_create_raid_cap_get(
+ lsm_plugin_ptr c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags)
+{
+ return LSM_ERR_NO_SUPPORT;
+}
+
+static int volume_create_raid(
+ lsm_plugin_ptr c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags)
+{
+ return LSM_ERR_NO_SUPPORT;
+}
+
static struct lsm_ops_v1_2 ops_v1_2 = {
volume_raid_info,
pool_raid_info,
+ volume_create_raid_cap_get,
+ volume_create_raid,
};

static int volume_enable_disable(lsm_plugin_ptr c, lsm_volume *v,
--
1.8.3.1
Gris Ge
2015-04-16 15:05:39 UTC
Permalink
* Add support of these methods:
* volume_create_raid()
* volume_create_raid_cap_get()

* Using storcli command to create RAID group volume:

storcli /c0 add vd RAID10 drives=252:1-4 pdperarray=2 J
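
For readers unfamiliar with storcli, the pieces of that command are
(per storcli's documented syntax):

    /c0              controller 0
    add vd RAID10    create a virtual drive with the given RAID level
    drives=252:1-4   enclosure 252, slots 1 through 4
    pdperarray=2     physical drives per sub-array (RAID 10/50/60 only)
    J                JSON output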

* Fixed incorrect lsm.System.plugin_data.

* Add new capability lsm.Capabilities.VOLUME_CREATE_RAID.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/megaraid/megaraid.py | 200 ++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 194 insertions(+), 6 deletions(-)

diff --git a/plugin/megaraid/megaraid.py b/plugin/megaraid/megaraid.py
index 6946068..4fd2a03 100644
--- a/plugin/megaraid/megaraid.py
+++ b/plugin/megaraid/megaraid.py
@@ -19,6 +19,7 @@
import json
import re
import errno
+import math

from lsm import (uri_parse, search_property, size_human_2_size_bytes,
Capabilities, LsmError, ErrorNumber, System, Client,
@@ -188,6 +189,27 @@ def _mega_raid_type_to_lsm(vd_basic_info, vd_prop_info):
return raid_type


+def _lsm_raid_type_to_mega(lsm_raid_type):
+ if lsm_raid_type == Volume.RAID_TYPE_RAID0:
+ return 'RAID0'
+ elif lsm_raid_type == Volume.RAID_TYPE_RAID1:
+ return 'RAID1'
+ elif lsm_raid_type == Volume.RAID_TYPE_RAID5:
+ return 'RAID5'
+ elif lsm_raid_type == Volume.RAID_TYPE_RAID6:
+ return 'RAID6'
+ elif lsm_raid_type == Volume.RAID_TYPE_RAID10:
+ return 'RAID10'
+ elif lsm_raid_type == Volume.RAID_TYPE_RAID50:
+ return 'RAID50'
+ elif lsm_raid_type == Volume.RAID_TYPE_RAID60:
+ return 'RAID60'
+ else:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "RAID type %d not supported" % lsm_raid_type)
+
+
class MegaRAID(IPlugin):
_DEFAULT_BIN_PATHS = [
"/opt/MegaRAID/storcli/storcli64", "/opt/MegaRAID/storcli/storcli"]
@@ -251,16 +273,13 @@ def time_out_get(self, flags=Client.FLAG_RSVD):

@_handle_errors
def capabilities(self, system, flags=Client.FLAG_RSVD):
- cur_lsm_syss = self.systems()
- if system.id not in list(s.id for s in cur_lsm_syss):
- raise LsmError(
- ErrorNumber.NOT_FOUND_SYSTEM,
- "System not found")
cap = Capabilities()
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUMES)
cap.set(Capabilities.VOLUME_RAID_INFO)
cap.set(Capabilities.POOL_RAID_INFO)
+ cap.set(Capabilities.VOLUME_CREATE_RAID)
+
return cap

def _storcli_exec(self, storcli_cmds, flag_json=True):
@@ -347,7 +366,7 @@ def systems(self, flags=Client.FLAG_RSVD):
)
(status, status_info) = self._lsm_status_of_ctrl(
ctrl_show_all_output)
- plugin_data = "/c%d"
+ plugin_data = "/c%d" % ctrl_num
# Since the PCI slot sequence might change, this string is just
# stored for quick system verification.

@@ -601,3 +620,172 @@ def pool_raid_info(self, pool, flags=Client.FLAG_RSVD):
raid_type = Volume.RAID_TYPE_RAID10

return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
+
+ def _vcr_cap_get(self, mega_sys_path):
+ cap_output = self._storcli_exec(
+ [mega_sys_path, "show", "all"])['Capabilities']
+
+ mega_raid_types = \
+ cap_output['RAID Level Supported'].replace(', \n', '').split(', ')
+
+ supported_raid_types = []
+ if 'RAID0' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID0)
+
+ if 'RAID1' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID1)
+
+ if 'RAID5' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID5)
+
+ if 'RAID6' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID6)
+
+ if 'RAID10' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID10)
+
+ if 'RAID50' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID50)
+
+ if 'RAID60' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID60)
+
+ min_strip_size = _mega_size_to_lsm(cap_output['Min Strip Size'])
+ max_strip_size = _mega_size_to_lsm(cap_output['Max Strip Size'])
+
+ supported_strip_sizes = list(
+ min_strip_size * (2 ** i)
+ for i in range(
+ 0,
+ int(
+ math.log(int(max_strip_size / min_strip_size)) /
+ math.log(2)) + 1))
+
+        # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+        # The math above generates the power-of-two series between the
+        # two bounds, e.g. min 8 KiB and max 1 MiB yield:
+        #     8 KiB, 16 KiB, 32 KiB, ..., 512 KiB, 1 MiB
+
+ return supported_raid_types, supported_strip_sizes
+
+ @_handle_errors
+ def volume_create_raid_cap_get(self, system, flags=Client.FLAG_RSVD):
+ """
+        Depends on the 'Capabilities' section of "storcli /c0 show all" output.
+ """
+ cur_lsm_syss = list(s for s in self.systems() if s.id == system.id)
+ if len(cur_lsm_syss) != 1:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_SYSTEM,
+ "System not found")
+
+ lsm_sys = cur_lsm_syss[0]
+ return self._vcr_cap_get(lsm_sys.plugin_data)
+
+ @_handle_errors
+ def volume_create_raid(self, name, raid_type, disks, strip_size,
+ flags=Client.FLAG_RSVD):
+ """
+ Work flow:
+ 1. Create RAID volume
+ storcli /c0 add vd RAID10 drives=252:1-4 pdperarray=2 J
+ 2. Find out pool/DG base on one disk.
+ storcli /c0/e252/s1 show J
+ 3. Find out the volume/VD base on pool/DG using self.volumes()
+ """
+ mega_raid_type = _lsm_raid_type_to_mega(raid_type)
+ ctrl_num = None
+ slot_nums = []
+ enclosure_num = None
+
+ for disk in disks:
+ if not disk.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: missing plugin_data "
+ "property")
+            # Disks should come from the same controller.
+ (cur_ctrl_num, cur_enclosure_num, slot_num) = \
+ disk.plugin_data.split(':')
+
+ if ctrl_num and cur_ctrl_num != ctrl_num:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: disks are not from the "
+ "same controller/system.")
+
+ if enclosure_num and cur_enclosure_num != enclosure_num:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: disks are not from the "
+ "same disk enclosure.")
+
+ ctrl_num = int(cur_ctrl_num)
+ enclosure_num = cur_enclosure_num
+ slot_nums.append(slot_num)
+
+        # Sanitize the requested volume name; LSI only allows 15 characters.
+ name = re.sub('[^0-9a-zA-Z_\-]+', '', name)[:15]
+
+ cmds = [
+ "/c%s" % ctrl_num, "add", "vd", mega_raid_type,
+ 'size=all', "name=%s" % name,
+ "drives=%s:%s" % (enclosure_num, ','.join(slot_nums))]
+
+ if raid_type == Volume.RAID_TYPE_RAID10 or \
+ raid_type == Volume.RAID_TYPE_RAID50 or \
+ raid_type == Volume.RAID_TYPE_RAID60:
+ cmds.append("pdperarray=%d" % int(len(disks) / 2))
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT:
+ cmds.append("strip=%d" % int(strip_size / 1024))
+
+ try:
+ self._storcli_exec(cmds)
+ except ExecError:
+ req_disk_ids = [d.id for d in disks]
+ for cur_disk in self.disks():
+ if cur_disk.id in req_disk_ids and \
+ not cur_disk.status & Disk.STATUS_FREE:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+ "Disk %s is not in STATUS_FREE state" % cur_disk.id)
+            # Check whether an unsupported RAID type or strip size was
+            # requested.
+ supported_raid_types, supported_strip_sizes = \
+ self._vcr_cap_get("/c%s" % ctrl_num)
+
+ if raid_type not in supported_raid_types:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'raid_type' is not supported")
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT and \
+ strip_size not in supported_strip_sizes:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'strip_size' is not supported")
+
+ raise
+
+ # Find out the DG ID from one disk.
+ dg_show_output = self._storcli_exec(
+ ["/c%s/e%s/s%s" % tuple(disks[0].plugin_data.split(":")), "show"])
+
+ dg_id = dg_show_output['Drive Information'][0]['DG']
+ if dg_id == '-':
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_create_raid(): No error found in output, "
+ "but RAID is not created: %s" % dg_show_output.items())
+ else:
+ dg_id = int(dg_id)
+
+ pool_id = _pool_id_of(dg_id, self._sys_id_of_ctrl_num(ctrl_num))
+
+ lsm_vols = self.volumes(search_key='pool_id', search_value=pool_id)
+ if len(lsm_vols) != 1:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+                "volume_create_raid(): Got unexpected volume count (not 1) "
+ "when creating RAID volume")
+
+ return lsm_vols[0]
--
1.8.3.1
Gris Ge
2015-04-16 15:05:40 UTC
Permalink
* lsmcli test: new test function volume_create_raid_test().

* Plugin test: new test function test_volume_create_raid().

* C unit test: test_volume_create_raid_cap_get() and
test_volume_create_raid()

Signed-off-by: Gris Ge <***@redhat.com>
---
test/cmdtest.py | 55 ++++++++++++++++++++++++++++
test/plugin_test.py | 50 ++++++++++++++++++++++++++
test/tester.c | 101 ++++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 206 insertions(+)

diff --git a/test/cmdtest.py b/test/cmdtest.py
index 97c5b88..4e1f731 100755
--- a/test/cmdtest.py
+++ b/test/cmdtest.py
@@ -713,6 +713,59 @@ def pool_raid_info_test(cap, system_id):
exit(10)
return

+def volume_create_raid_test(cap, system_id):
+ if cap['VOLUME_CREATE_RAID']:
+ out = call(
+ [cmd, '-t' + sep, 'volume-create-raid-cap', '--sys', system_id])[1]
+
+ if 'RAID1' not in [r[1] for r in parse(out)]:
+ return
+
+ out = call([cmd, '-t' + sep, 'list', '--type', 'disks'])[1]
+ free_disk_ids = []
+ disk_list = parse(out)
+ for disk in disk_list:
+ if 'Free' in disk:
+ if len(free_disk_ids) == 2:
+ break
+ free_disk_ids.append(disk[0])
+
+ if len(free_disk_ids) != 2:
+            print "Need two free disks to test volume-create-raid"
+ exit(10)
+
+ out = call([
+ cmd, '-t' + sep, 'volume-create-raid', '--disk', free_disk_ids[0],
+ '--disk', free_disk_ids[1], '--name', 'test_volume_create_raid',
+ '--raid-type', 'raid1'])[1]
+
+ volume = parse(out)
+ vol_id = volume[0][0]
+ pool_id = volume[0][-2]
+
+ if cap['VOLUME_RAID_INFO']:
+ out = call(
+ [cmd, '-t' + sep, 'volume-raid-info', '--vol', vol_id])[1]
+ if parse(out)[0][1] != 'RAID1':
+ print "New volume is not RAID 1"
+ exit(10)
+
+ if cap['POOL_RAID_INFO']:
+ out = call(
+ [cmd, '-t' + sep, 'pool-raid-info', '--pool', pool_id])[1]
+ if parse(out)[0][1] != 'RAID1':
+ print "New pool is not RAID 1"
+ exit(10)
+ for disk_id in free_disk_ids:
+ if disk_id not in [p[3] for p in parse(out)]:
+ print "New pool does not contain requested disks"
+ exit(10)
+
+ if cap['VOLUME_DELETE']:
+ volume_delete(vol_id)
+
+ return
+

def run_all_tests(cap, system_id):
test_display(cap, system_id)
@@ -729,6 +782,8 @@ def run_all_tests(cap, system_id):

pool_raid_info_test(cap,system_id)

+ volume_create_raid_test(cap, system_id)
+
if __name__ == "__main__":
parser = OptionParser()
parser.add_option("-c", "--command", action="store", type="string",
diff --git a/test/plugin_test.py b/test/plugin_test.py
index 895bec3..ef0bbfc 100755
--- a/test/plugin_test.py
+++ b/test/plugin_test.py
@@ -1294,6 +1294,56 @@ def test_pool_raid_info(self):
self.assertTrue(type(member_type) is int)
self.assertTrue(type(member_ids) is list)

+    def _skip_current_test(self, message):
+        """
+        If skipTest is supported, skip this test with the provided message.
+        Silently return if not supported.
+        """
+        if hasattr(unittest.TestCase, 'skipTest') is True:
+            self.skipTest(message)
+        return
+
+ def test_volume_create_raid(self):
+ for s in self.systems:
+ cap = self.c.capabilities(s)
+ # TODO(Gris Ge): Add test code for other RAID type and strip size
+ if supported(cap, [Cap.VOLUME_CREATE_RAID]):
+ supported_raid_types, supported_strip_sizes = \
+ self.c.volume_create_raid_cap_get(s)
+ if lsm.Volume.RAID_TYPE_RAID1 not in supported_raid_types:
+                    self._skip_current_test(
+                        "Skip test: current system does not support "
+                        "creating RAID1 volume")
+                    return
+
+ # Find two free disks
+ free_disks = []
+ for disk in self.c.disks():
+ if len(free_disks) == 2:
+ break
+ if disk.status & lsm.Disk.STATUS_FREE:
+ free_disks.append(disk)
+
+ if len(free_disks) != 2:
+ self._skip_current_test(
+ "Skip test: Failed to find two free disks for RAID 1")
+ return
+
+ new_vol = self.c.volume_create_raid(
+ rs('v'), lsm.Volume.RAID_TYPE_RAID1, free_disks,
+ lsm.Volume.VCR_STRIP_SIZE_DEFAULT)
+
+ self.assertTrue(new_vol is not None)
+
+ # TODO(Gris Ge): Use volume_raid_info() and pool_raid_info()
+ # to verify size, raid_type, member type, member ids.
+
+ if supported(cap, [Cap.VOLUME_DELETE]):
+ self._volume_delete(new_vol)
+
+ else:
+ self._skip_current_test(
+                    "Skip test: no support of VOLUME_CREATE_RAID")
+
def dump_results():
"""
unittest.main exits when done so we need to register this handler to
diff --git a/test/tester.c b/test/tester.c
index 63c2b5d..842d4e0 100644
--- a/test/tester.c
+++ b/test/tester.c
@@ -2916,6 +2916,105 @@ START_TEST(test_pool_raid_info)
}
END_TEST

+START_TEST(test_volume_create_raid_cap_get)
+{
+ if (which_plugin == 1){
+ // silently skip on simc which does not support this method yet.
+ return;
+ }
+ int rc;
+ lsm_system **sys = NULL;
+ uint32_t sys_count = 0;
+ lsm_storage_capabilities *cap = NULL;
+
+ G(rc, lsm_system_list, c, &sys, &sys_count, LSM_CLIENT_FLAG_RSVD);
+ fail_unless( sys_count >= 1, "count = %d", sys_count);
+ printf("sys_count: %d\n", sys_count);
+
+ if( sys_count > 0 ) {
+ uint32_t *supported_raid_types;
+ uint32_t supported_raid_type_count = 0;
+ uint32_t *supported_strip_sizes;
+ uint32_t supported_strip_size_count = 0;
+
+ G(
+ rc, lsm_volume_create_raid_cap_get, c, sys[0],
+ &supported_raid_types, &supported_raid_type_count,
+ &supported_strip_sizes, &supported_strip_size_count,
+ 0);
+
+ free(supported_raid_types);
+ free(supported_strip_sizes);
+
+ }
+ G(rc, lsm_system_record_array_free, sys, sys_count);
+}
+END_TEST
+
+START_TEST(test_volume_create_raid)
+{
+ if (which_plugin == 1){
+ // silently skip on simc which does not support this method yet.
+ return;
+ }
+ int rc;
+
+ lsm_disk **disks = NULL;
+ uint32_t disk_count = 0;
+
+ G(rc, lsm_disk_list, c, NULL, NULL, &disks, &disk_count, 0);
+
+ // Try to create two disks RAID 1.
+ uint32_t free_disk_count = 0;
+ lsm_disk *free_disks[2];
+ int i;
+    for (i = 0; i < disk_count; i++){
+ if (lsm_disk_status_get(disks[i]) & LSM_DISK_STATUS_FREE){
+ free_disks[free_disk_count++] = disks[i];
+ if (free_disk_count == 2){
+ break;
+ }
+ }
+ }
+ fail_unless(free_disk_count == 2, "Failed to find two free disks");
+
+ lsm_volume *new_volume = NULL;
+
+ G(rc, lsm_volume_create_raid, c, "test_volume_create_raid",
+ LSM_VOLUME_RAID_TYPE_RAID1, free_disks, free_disk_count,
+ LSM_VOLUME_VCR_STRIP_SIZE_DEFAULT, &new_volume, LSM_CLIENT_FLAG_RSVD);
+
+ char *job_del = NULL;
+ int del_rc = lsm_volume_delete(
+ c, new_volume, &job_del, LSM_CLIENT_FLAG_RSVD);
+
+ fail_unless( del_rc == LSM_ERR_OK || del_rc == LSM_ERR_JOB_STARTED,
+ "lsm_volume_delete %d (%s)", rc, error(lsm_error_last_get(c)));
+
+ if( LSM_ERR_JOB_STARTED == del_rc ) {
+ wait_for_job_vol(c, &job_del);
+ }
+
+ G(rc, lsm_disk_record_array_free, disks, disk_count);
+    // The new pool should automatically be deleted when the volume gets
+    // deleted.
+ lsm_pool **pools = NULL;
+ uint32_t count = 0;
+ G(
+ rc, lsm_pool_list, c, "id", lsm_volume_pool_id_get(new_volume),
+ &pools, &count, LSM_CLIENT_FLAG_RSVD);
+
+ fail_unless(
+ count == 0,
+ "New HW RAID pool still exists, it should be deleted along with "
+ "lsm_volume_delete()");
+
+ lsm_pool_record_array_free(pools, count);
+
+ G(rc, lsm_volume_record_free, new_volume);
+}
+END_TEST
+
Suite * lsm_suite(void)
{
Suite *s = suite_create("libStorageMgmt");
@@ -2953,6 +3052,8 @@ Suite * lsm_suite(void)
tcase_add_test(basic, test_invalid_input);
tcase_add_test(basic, test_volume_raid_info);
tcase_add_test(basic, test_pool_raid_info);
+ tcase_add_test(basic, test_volume_create_raid_cap_get);
+ tcase_add_test(basic, test_volume_create_raid);

suite_add_tcase(s, basic);
return s;
--
1.8.3.1
Gris Ge
2015-04-16 15:05:32 UTC
Permalink
* When a pool is RAID0 or another RAID type with multiple data disks,
the free space generated by the 'pools_view' sqlite view will be
incorrect, as the volume consumed size is counted several times (once
per joined disk row: a RAID0 pool with two data disks, for example,
counts each volume's consumed_size twice).
The fix is to generate temporary tables for total space, volume consumed
space, fs consumed space, and sub-pool consumed space.

* Since we changed the 'pools_view' view, bumped the version to 3.3.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/sim/simarray.py | 98 +++++++++++++++++++++++++++++++++-----------------
1 file changed, 66 insertions(+), 32 deletions(-)

diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index b03a146..5f1c1f8 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -123,7 +123,7 @@ def data_disk_count(raid_type, disk_count):


class BackStore(object):
- VERSION = "3.2"
+ VERSION = "3.3"
VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
JOB_DEFAULT_DURATION = 1
JOB_DATA_TYPE_VOL = 1
@@ -353,40 +353,74 @@ def __init__(self, statefile, timeout):
"""
CREATE VIEW pools_view AS
SELECT
- pool.id,
- pool.name,
- pool.status,
- pool.status_info,
- pool.element_type,
- pool.unsupported_actions,
- pool.raid_type,
- pool.member_type,
- pool.parent_pool_id,
- ifnull(pool.total_space,
- ifnull(SUM(disk.total_space), 0))
- total_space,
- ifnull(pool.total_space,
- ifnull(SUM(disk.total_space), 0)) -
- ifnull(SUM(volume.consumed_size), 0) -
- ifnull(SUM(fs.consumed_size), 0) -
- ifnull(SUM(pool2.total_space), 0)
- free_space
-
+ pool0.id,
+ pool0.name,
+ pool0.status,
+ pool0.status_info,
+ pool0.element_type,
+ pool0.unsupported_actions,
+ pool0.raid_type,
+ pool0.member_type,
+ pool0.parent_pool_id,
+ pool1.total_space total_space,
+ pool1.total_space -
+ pool2.vol_consumed_size -
+ pool3.fs_consumed_size -
+ pool4.sub_pool_consumed_size free_space
FROM
- pools pool
- LEFT JOIN disks disk
- ON pool.id = disk.owner_pool_id AND
- disk.role = 'DATA'
- LEFT JOIN volumes volume
- ON volume.pool_id = pool.id
- LEFT JOIN fss fs
- ON fs.pool_id = pool.id
- LEFT JOIN pools pool2
- ON pool2.parent_pool_id = pool.id
+ pools pool0
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(pool.total_space,
+ ifnull(SUM(disk.total_space), 0))
+ total_space
+ FROM pools pool
+ LEFT JOIN disks disk
+ ON pool.id = disk.owner_pool_id AND
+ disk.role = 'DATA'
+ GROUP BY
+ pool.id
+ ) pool1 ON pool0.id = pool1.id
+
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(SUM(volume.consumed_size), 0)
+ vol_consumed_size
+ FROM pools pool
+ LEFT JOIN volumes volume
+ ON volume.pool_id = pool.id
+ GROUP BY
+ pool.id
+ ) pool2 ON pool0.id = pool2.id
+
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(SUM(fs.consumed_size), 0)
+ fs_consumed_size
+ FROM pools pool
+ LEFT JOIN fss fs
+ ON fs.pool_id = pool.id
+ GROUP BY
+ pool.id
+ ) pool3 ON pool0.id = pool3.id
+
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(SUM(sub_pool.total_space), 0)
+ sub_pool_consumed_size
+ FROM pools pool
+ LEFT JOIN pools sub_pool
+ ON sub_pool.parent_pool_id = pool.id
+ GROUP BY
+ pool.id
+ ) pool4 ON pool0.id = pool4.id

GROUP BY
- pool.id
- ;
+ pool0.id;
""")
sql_cmd += (
"""
--
1.8.3.1
Gris Ge
2015-04-16 15:05:38 UTC
Permalink
* Add support of these methods:
* volume_create_raid()
* volume_create_raid_cap_get()

* Using hpssacli command to create RAID group volume:

hpssacli ctrl slot=0 create type=ld \
drives=1i:1:13,1i:1:14 size=max raid=1+0 ss=64
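
Annotated for reference (per hpssacli's documented syntax):

    ctrl slot=0               controller in PCI slot 0
    create type=ld            create a logical drive
    drives=1i:1:13,1i:1:14    port:box:bay IDs of the member disks
    size=max                  use all available space
    raid=1+0                  RAID level
    ss=64                     strip size in KiB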

* Add new capability lsm.Capabilities.VOLUME_CREATE_RAID.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/hpsa/hpsa.py | 175 +++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 172 insertions(+), 3 deletions(-)

diff --git a/plugin/hpsa/hpsa.py b/plugin/hpsa/hpsa.py
index 743cbcc..0a24219 100644
--- a/plugin/hpsa/hpsa.py
+++ b/plugin/hpsa/hpsa.py
@@ -195,7 +195,7 @@ def _disk_status_of(hp_disk, flag_free):
return disk_status


-def _hp_raid_type_to_lsm(hp_ld):
+def _hp_raid_level_to_lsm(hp_ld):
"""
Based on this property:
Fault Tolerance: 0/1/5/6/1+0
@@ -217,6 +217,27 @@ def _hp_raid_type_to_lsm(hp_ld):
return Volume.RAID_TYPE_UNKNOWN


+def _lsm_raid_type_to_hp(raid_type):
+    if raid_type == Volume.RAID_TYPE_RAID0:
+        return '0'
+    elif raid_type == Volume.RAID_TYPE_RAID1:
+        return '1'
+    elif raid_type == Volume.RAID_TYPE_RAID10:
+        return '1+0'
+    elif raid_type == Volume.RAID_TYPE_RAID5:
+        return '5'
+    elif raid_type == Volume.RAID_TYPE_RAID6:
+        return '6'
+    elif raid_type == Volume.RAID_TYPE_RAID50:
+        return '50'
+    elif raid_type == Volume.RAID_TYPE_RAID60:
+        return '60'
+    else:
+        raise LsmError(
+            ErrorNumber.NO_SUPPORT,
+            "Not supported raid type %d" % raid_type)
+
+
class SmartArray(IPlugin):
_DEFAULT_BIN_PATHS = [
"/usr/sbin/hpssacli", "/opt/hp/hpssacli/bld/hpssacli"]
@@ -286,6 +307,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUME_RAID_INFO)
cap.set(Capabilities.POOL_RAID_INFO)
+ cap.set(Capabilities.VOLUME_CREATE_RAID)
return cap

def _sacli_exec(self, sacli_cmds, flag_convert=True):
@@ -511,7 +533,7 @@ def volume_raid_info(self, volume, flags=Client.FLAG_RSVD):
for array_key_name in ctrl_data[key_name].keys():
if array_key_name == "Logical Drive: %s" % ld_num:
hp_ld = ctrl_data[key_name][array_key_name]
- raid_type = _hp_raid_type_to_lsm(hp_ld)
+ raid_type = _hp_raid_level_to_lsm(hp_ld)
strip_size = _hp_size_to_lsm(hp_ld['Strip Size'])
stripe_size = _hp_size_to_lsm(hp_ld['Full Stripe Size'])
elif array_key_name.startswith("physicaldrive"):
@@ -556,7 +578,7 @@ def pool_raid_info(self, pool, flags=Client.FLAG_RSVD):
for array_key_name in ctrl_data[key_name].keys():
if array_key_name.startswith("Logical Drive: ") and \
raid_type == Volume.RAID_TYPE_UNKNOWN:
- raid_type = _hp_raid_type_to_lsm(
+ raid_type = _hp_raid_level_to_lsm(
ctrl_data[key_name][array_key_name])
elif array_key_name.startswith("physicaldrive"):
hp_disk = ctrl_data[key_name][array_key_name]
@@ -570,3 +592,150 @@ def pool_raid_info(self, pool, flags=Client.FLAG_RSVD):
"Pool not found")

return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
+
+ def _vcr_cap_get(self, ctrl_num):
+ supported_raid_types = [
+ Volume.RAID_TYPE_RAID0, Volume.RAID_TYPE_RAID1,
+ Volume.RAID_TYPE_RAID5, Volume.RAID_TYPE_RAID50,
+ Volume.RAID_TYPE_RAID10]
+
+ supported_strip_sizes = [
+ 8 * 1024, 16 * 1024, 32 * 1024, 64 * 1024,
+ 128 * 1024, 256 * 1024, 512 * 1024, 1024 * 1024]
+
+ ctrl_conf = self._sacli_exec([
+ "ctrl", "slot=%s" % ctrl_num, "show", "config", "detail"]
+ ).values()[0]
+
+ if 'RAID 6 (ADG) Status' in ctrl_conf and \
+ ctrl_conf['RAID 6 (ADG) Status'] == 'Enabled':
+            supported_raid_types.extend(
+                [Volume.RAID_TYPE_RAID6, Volume.RAID_TYPE_RAID60])
+
+ return supported_raid_types, supported_strip_sizes
+
+ @_handle_errors
+ def volume_create_raid_cap_get(self, system, flags=Client.FLAG_RSVD):
+ """
+ Depends on this command:
+ hpssacli ctrl slot=0 show config detail
+        All hpsa cards support RAID 0, 1, 10, 5, 50.
+        If "RAID 6 (ADG) Status: Enabled", it will support RAID 6 and 60.
+        For HP triple mirror (RAID 1adm and RAID 10adm), LSM does not
+        support that yet.
+        No command output or documentation indicates special or
+        exceptional strip size support, so we assume all hpsa cards
+        support the documented strip sizes.
+ """
+ if not system.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input system argument: missing plugin_data property")
+ return self._vcr_cap_get(system.plugin_data)
+
+ @_handle_errors
+ def volume_create_raid(self, name, raid_type, disks, strip_size,
+ flags=Client.FLAG_RSVD):
+ """
+ Depends on these commands:
+ 1. Create LD
+ hpssacli ctrl slot=0 create type=ld \
+ drives=1i:1:13,1i:1:14 size=max raid=1+0 ss=64
+
+ 2. Find out the system ID.
+
+        3. Find out the pool the first disk belongs to.
+ hpssacli ctrl slot=0 pd 1i:1:13 show
+
+ 4. List all volumes for this new pool.
+ self.volumes(search_key='pool_id', search_value=pool_id)
+
+ The 'name' argument will be ignored.
+        TODO(Gris Ge): This code has only been tested for creating a
+        1-disk RAID 0.
+ """
+ hp_raid_level = _lsm_raid_type_to_hp(raid_type)
+ hp_disk_ids = []
+ ctrl_num = None
+ for disk in disks:
+ if not disk.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: missing plugin_data "
+ "property")
+ (cur_ctrl_num, hp_disk_id) = disk.plugin_data.split(':', 1)
+ if ctrl_num and cur_ctrl_num != ctrl_num:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: disks are not from the "
+ "same controller/system.")
+
+ ctrl_num = cur_ctrl_num
+ hp_disk_ids.append(hp_disk_id)
+
+ cmds = [
+ "ctrl", "slot=%s" % ctrl_num, "create", "type=ld",
+ "drives=%s" % ','.join(hp_disk_ids), 'size=max',
+ 'raid=%s' % hp_raid_level]
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT:
+ cmds.append("ss=%d" % int(strip_size / 1024))
+
+ try:
+ self._sacli_exec(cmds, flag_convert=False)
+ except ExecError:
+ # Check whether disk is free
+ requested_disk_ids = [d.id for d in disks]
+ for cur_disk in self.disks():
+ if cur_disk.id in requested_disk_ids and \
+ not cur_disk.status & Disk.STATUS_FREE:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+ "Disk %s is not in STATUS_FREE state" % cur_disk.id)
+
+ # Check whether got unsupported raid type or strip size
+ supported_raid_types, supported_strip_sizes = \
+ self._vcr_cap_get(ctrl_num)
+
+ if raid_type not in supported_raid_types:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided raid_type is not supported")
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT and \
+ strip_size not in supported_strip_sizes:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided strip_size is not supported")
+
+ raise
+
+        # Find out the system ID to generate pool_id
+ sys_output = self._sacli_exec(
+ ['ctrl', "slot=%s" % ctrl_num, 'show'])
+
+ sys_id = sys_output.values()[0]['Serial Number']
+ # API code already checked empty 'disks', we will for sure get
+ # valid 'ctrl_num' and 'hp_disk_ids'.
+
+ pd_output = self._sacli_exec(
+ ['ctrl', "slot=%s" % ctrl_num, 'pd', hp_disk_ids[0], 'show'])
+
+ if pd_output.values()[0].keys()[0].lower().startswith("array "):
+ hp_array_id = pd_output.values()[0].keys()[0][len("array "):]
+ hp_array_id = "Array:%s" % hp_array_id
+ else:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_create_raid(): Failed to find out the array ID of "
+ "new array: %s" % pd_output.items())
+
+ pool_id = _pool_id_of(sys_id, hp_array_id)
+
+ lsm_vols = self.volumes(search_key='pool_id', search_value=pool_id)
+
+ if len(lsm_vols) != 1:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+                "volume_create_raid(): Got unexpected count (not 1) of "
+                "new volumes: %s" % lsm_vols)
+ return lsm_vols[0]
--
1.8.3.1
Tony Asleson
2015-04-16 21:56:55 UTC
Permalink
Hi Gris,

Initial overall comments, individual code comments on github. Need to
try it out on some hardware.
Post by Gris Ge
* Putting two patch sets into one. As `pool_raid_info()` patch set is
just preparation for `volume_create_raid()` patch set. It's better to explain
design choice with both.
* RAID type
* Member type: disk RAID pool or sub-pool.
* Members: disks or parent pools.
This function is named similarly to volume_raid_info, but it returns
entirely different data. Should we rename it to pool_member_info so that
if we later add a method returning data similar to volume_raid_info, the
naming scheme won't be confusing?
Post by Gris Ge
* Why we need pool_raid_info() when we already have volume_raid_info()?
The `pool_raid_info()` is focus on RAID disk or sub-pool layout.
What is the use case for pool_raid_info?

I see the use case for creating a raid as:

- retrieve all disks, only use the ones that are empty
- query different raid types and strip sizes
- create the volume, using one of the available raid types and stripe
sizes

Thanks!

Regards,
Tony
Gris Ge
2015-04-17 03:31:54 UTC
Permalink
Post by Tony Asleson
Hi Gris,
Initial overall comments, individual code comments on github. Need to
try it out on some hardware.
Post by Gris Ge
* RAID type
* Member type: disk RAID pool or sub-pool.
* Members: disks or parent pools.
This function is named similarly to volume_raid_info, but it
returns entirely different data. Should we rename it to
pool_member_info so that if we later add a method returning data
similar to volume_raid_info, the naming scheme won't be confusing?
Yeah, makes sense. I will change it to pool_member_info.
Post by Tony Asleson
Post by Gris Ge
* Why we need pool_raid_info() when we already have volume_raid_info()?
The `pool_raid_info()` is focus on RAID disk or sub-pool layout.
What is the use case for pool_raid_info?
Initially, I just wanted to make sure the changes made by
volume_create_raid() are queryable and also suitable for SAN/NAS.

Other use case:
* Query pool data availability (RAID) and performance
(disk type, RAID type, disk count), as sketched below:
* Call pool_member_info().
* If we got Pool.MEMBER_TYPE_POOL, query the parent pool.
* Call disks() to get the disk type. [1]

[1] Maybe we could find a way to expose disk properties such as
rotations per minute, link speed, etc.
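
A minimal sketch of that flow (illustrative only; assumes a connected
lsm.Client 'c' and an lsm.Pool 'pool'):

    from lsm import Pool

    raid_type, member_type, member_ids = c.pool_member_info(pool)
    if member_type == Pool.MEMBER_TYPE_DISK:
        # Members are whole disks; look up their properties.
        member_disks = [d for d in c.disks() if d.id in member_ids]
    elif member_type == Pool.MEMBER_TYPE_POOL:
        # Sub-pool; walk up to the parent pool(s).
        parent_pools = [p for p in c.pools() if p.id in member_ids]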
--
Gris Ge
Gris Ge
2015-04-18 11:23:40 UTC
Permalink
* Putting two patch sets into one. As `pool_raid_info()` patch set is
just preparation for `volume_create_raid()` patch set. It's better to explain
design choice with both.

* New method `pool_raid_info()` to query pool member information:
* RAID type
* Member type: disk RAID pool or sub-pool.
* Members: disks or parent pools.

* New method `volume_create_raid()` to create RAID group volume.

* New method `volume_create_raid_cap_get()` to query RAID group volume create
supported RAID type and strip sizes.

* Design notes:
* Why do we need pool_raid_info() when we already have volume_raid_info()?
The `pool_raid_info()` is focused on the RAID disk or sub-pool layout.

The `volume_raid_info()` is focused on providing information about host
I/O performance and availability (RAID).

* Why not create a single sharable method for SAN/NAS/DAS RAID
creation?
Please refer to the design note of patch 'New feature: Create RAID volume
on hardware RAID.'

* Why is there no RAID creation method for SAN and NAS?
As tried before, due to the complex RAID settings of SAN and NAS
storage, I didn't find a simple design that supports all of them.

* Please also refer to the design note of patch 'New feature: Create RAID
volume'.

* Please comment on the GitHub page, thank you.
https://github.com/libstorage/libstoragemgmt/pull/11

Changes in V3:

* Rename pool_raid_info to pool_member_info.
* Rebased patches for the rename.
* Fix C code warning about comparing signed and unsigned integer.
* Initialized pointers in lsm_plugin_ipc.cpp and tester.c.
* Removed unused variables in tester.c.
* volume_create_raid() tested on:
* MegaRAID: RAID 0/1/5/10. # No RAID 6 license.
* HP SmartArray: RAID 0. # No additional disk.
* Document pull request:
https://github.com/libstorage/libstoragemgmt-doc/pull/3

Gris Ge (17):
New method: lsm.Client.pool_raid_info()/lsm_pool_raid_info()
lsmcli: Add new command pool-raid-info
Tests: Add test for lsm.Client.pool_raid_info() and
lsm_pool_raid_info().
Simulator Plugin: Add lsm.Client.pool_raid_info() support.
Simulator C plugin: Add lsm_pool_raid_info() support.
ONTAP Plugin: Add lsm.Client.pool_raid_info() support.
MegaRAID Plugin: Add lsm.Client.pool_raid_info() support.
HP Smart Array Plugin: Add lsm.Client.pool_raid_info() support.
Simulator Plugin: Fix pool free space calculation.
MegaRAID Plugin: Change Disk.plugin_data to parse friendly format.
New feature: Create RAID volume on hardware RAID.
lsmcli: New commands for hardware RAID volume creation.
Simulator Plugin: Add hardware RAID volume creation support.
Simulator C plugin: Sync changes of lsm_ops_v1_2 about
lsm_plug_volume_create_raid_cap_get
HP SmartArray Plugin: Add volume creation support.
MegaRAID Plugin: Add volume creation support.
Test: Hardware RAID volume creation.

c_binding/include/libstoragemgmt/libstoragemgmt.h | 89 ++++++
.../libstoragemgmt/libstoragemgmt_capabilities.h | 5 +-
.../include/libstoragemgmt/libstoragemgmt_error.h | 2 +
.../libstoragemgmt/libstoragemgmt_plug_interface.h | 84 ++++++
.../include/libstoragemgmt/libstoragemgmt_types.h | 20 ++
c_binding/lsm_convert.cpp | 41 +++
c_binding/lsm_convert.hpp | 12 +
c_binding/lsm_mgmt.cpp | 167 +++++++++++
c_binding/lsm_plugin_ipc.cpp | 135 ++++++++-
doc/man/lsmcli.1.in | 41 +++
plugin/hpsa/hpsa.py | 212 +++++++++++++-
plugin/megaraid/megaraid.py | 251 ++++++++++++++++-
plugin/ontap/ontap.py | 21 ++
plugin/sim/simarray.py | 309 +++++++++++++--------
plugin/sim/simulator.py | 11 +
plugin/simc/simc_lsmplugin.c | 44 ++-
python_binding/lsm/_client.py | 288 +++++++++++++++++++
python_binding/lsm/_common.py | 3 +
python_binding/lsm/_data.py | 9 +
test/cmdtest.py | 75 +++++
test/plugin_test.py | 60 ++++
test/tester.c | 128 +++++++++
tools/lsmcli/cmdline.py | 84 +++++-
tools/lsmcli/data_display.py | 77 ++++-
24 files changed, 2050 insertions(+), 118 deletions(-)
--
1.8.3.1
Gris Ge
2015-04-18 11:23:42 UTC
Permalink
* New command 'pool-raid-info' to utilize lsm.Client.pool_raid_info() method.

* Manpage updated.

* New alias 'pri' for 'pool-raid-info' command.

Changes in V2:

* Renamed command to 'pool-member-info' and alias 'pmi'

Signed-off-by: Gris Ge <***@redhat.com>
---
doc/man/lsmcli.1.in | 9 +++++++++
tools/lsmcli/cmdline.py | 19 ++++++++++++++++++-
tools/lsmcli/data_display.py | 41 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 68 insertions(+), 1 deletion(-)

diff --git a/doc/man/lsmcli.1.in b/doc/man/lsmcli.1.in
index 381d676..6faf690 100644
--- a/doc/man/lsmcli.1.in
+++ b/doc/man/lsmcli.1.in
@@ -351,6 +351,13 @@ Query RAID information for given volume.
\fB--vol\fR \fI<VOL_ID>\fR
Required. The ID of volume to query.

+.SS pool-member-info
+.TP 15
+Query RAID information for given pool.
+.TP
+\fB--pool\fR \fI<POOL_ID>\fR
+Required. The ID of pool to query.
+
.SS access-group-create
.TP 15
Create an access group.
@@ -589,6 +596,8 @@ Alias of 'volume-mask'
Alias of 'volume-unmask'
.SS vri
Alias of 'volume-raid-info'
+.SS pmi
+Alias of 'pool-member-info'
.SS ac
Alias of 'access-group-create'
.SS aa
diff --git a/tools/lsmcli/cmdline.py b/tools/lsmcli/cmdline.py
index 9b2b682..d26da80 100644
--- a/tools/lsmcli/cmdline.py
+++ b/tools/lsmcli/cmdline.py
@@ -39,7 +39,8 @@

from lsm.lsmcli.data_display import (
DisplayData, PlugData, out,
- vol_provision_str_to_type, vol_rep_type_str_to_type, VolumeRAIDInfo)
+ vol_provision_str_to_type, vol_rep_type_str_to_type, VolumeRAIDInfo,
+ PoolRAIDInfo)


## Wraps the invocation to the command line
@@ -376,6 +377,14 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
),

dict(
+ name='pool-member-info',
+        help='Query Pool membership information',
+ args=[
+ dict(pool_id_opt),
+ ],
+ ),
+
+ dict(
name='access-group-create',
help='Create an access group',
args=[
@@ -637,6 +646,7 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
['ar', 'access-group-remove'],
['ad', 'access-group-delete'],
['vri', 'volume-raid-info'],
+ ['pmi', 'pool-member-info'],
)


@@ -1334,6 +1344,13 @@ def volume_raid_info(self, args):
VolumeRAIDInfo(
lsm_vol.id, *self.c.volume_raid_info(lsm_vol))])

+ def pool_member_info(self, args):
+ lsm_pool = _get_item(self.c.pools(), args.pool, "Pool")
+ self.display_data(
+ [
+ PoolRAIDInfo(
+ lsm_pool.id, *self.c.pool_member_info(lsm_pool))])
+
## Displays file system dependants
def fs_dependants(self, args):
fs = _get_item(self.c.fs(), args.fs, "File System")
diff --git a/tools/lsmcli/data_display.py b/tools/lsmcli/data_display.py
index 19f7114..115f15a 100644
--- a/tools/lsmcli/data_display.py
+++ b/tools/lsmcli/data_display.py
@@ -279,6 +279,26 @@ def raid_type_to_str(raid_type):
return _enum_type_to_str(raid_type, VolumeRAIDInfo._RAID_TYPE_MAP)


+class PoolRAIDInfo(object):
+ _MEMBER_TYPE_MAP = {
+ Pool.MEMBER_TYPE_UNKNOWN: 'Unknown',
+ Pool.MEMBER_TYPE_OTHER: 'Unknown',
+ Pool.MEMBER_TYPE_POOL: 'Pool',
+ Pool.MEMBER_TYPE_DISK: 'Disk',
+ }
+
+ def __init__(self, pool_id, raid_type, member_type, member_ids):
+ self.pool_id = pool_id
+ self.raid_type = raid_type
+ self.member_type = member_type
+ self.member_ids = member_ids
+
+ @staticmethod
+ def member_type_to_str(member_type):
+ return _enum_type_to_str(
+ member_type, PoolRAIDInfo._MEMBER_TYPE_MAP)
+
+
class DisplayData(object):

def __init__(self):
@@ -557,6 +577,27 @@ def __init__(self):
'value_conv_human': VOL_RAID_INFO_VALUE_CONV_HUMAN,
}

+ POOL_RAID_INFO_HEADER = OrderedDict()
+ POOL_RAID_INFO_HEADER['pool_id'] = 'Pool ID'
+ POOL_RAID_INFO_HEADER['raid_type'] = 'RAID Type'
+ POOL_RAID_INFO_HEADER['member_type'] = 'Member Type'
+ POOL_RAID_INFO_HEADER['member_ids'] = 'Member IDs'
+
+ POOL_RAID_INFO_COLUMN_SKIP_KEYS = []
+
+ POOL_RAID_INFO_VALUE_CONV_ENUM = {
+ 'raid_type': VolumeRAIDInfo.raid_type_to_str,
+ 'member_type': PoolRAIDInfo.member_type_to_str,
+ }
+ POOL_RAID_INFO_VALUE_CONV_HUMAN = []
+
+ VALUE_CONVERT[PoolRAIDInfo] = {
+ 'headers': POOL_RAID_INFO_HEADER,
+ 'column_skip_keys': POOL_RAID_INFO_COLUMN_SKIP_KEYS,
+ 'value_conv_enum': POOL_RAID_INFO_VALUE_CONV_ENUM,
+ 'value_conv_human': POOL_RAID_INFO_VALUE_CONV_HUMAN,
+ }
+
@staticmethod
def _get_man_pro_value(obj, key, value_conv_enum, value_conv_human,
flag_human, flag_enum):
--
1.8.3.1
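
Illustrative usage of the new command, with a placeholder pool ID:

    lsmcli pool-member-info --pool <POOL_ID>
    lsmcli pmi --pool <POOL_ID>          # same command via its alias

The output columns are Pool ID, RAID Type, Member Type and Member IDs,
per the POOL_RAID_INFO_HEADER definition above.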
Gris Ge
2015-04-18 11:23:41 UTC
Permalink
* Python part:
* New python method: lsm.Client.pool_raid_info(pool,flags)
Query the RAID information of the given pool.
Returns [raid_type, member_type, member_ids]
* For a sub-pool which allocates space from another pool (like a
NetApp ONTAP volume allocating space from an ONTAP aggregate), the
member_type will be lsm.Pool.MEMBER_TYPE_POOL.
* For a disk RAID pool which uses whole disks as RAID members (like
a NetApp ONTAP aggregate or an EMC VNX pool), the member_type will
be lsm.Pool.MEMBER_TYPE_DISK.

* New constants:
* lsm.Pool.MEMBER_TYPE_POOL
* lsm.Pool.MEMBER_TYPE_DISK
* lsm.Pool.MEMBER_TYPE_OTHER
* lsm.Pool.MEMBER_TYPE_UNKNOWN

* New capability for this new method:
* lsm.Capabilities.POOL_RAID_INFO

* C part:
* New C functions:
* lsm_pool_raid_info()
For API user.
* lsm_plug_pool_raid_info()
For plugin.

* New constants:
* LSM_POOL_MEMBER_TYPE_POOL
* LSM_POOL_MEMBER_TYPE_DISK
* LSM_POOL_MEMBER_TYPE_OTHER
* LSM_POOL_MEMBER_TYPE_UNKNOWN
* LSM_CAP_POOL_RAID_INFO

* For detailed information, please refer to the docstring of the
lsm.Client.pool_raid_info() method in the python_binding/lsm/_client.py file
and the lsm_pool_raid_info() function in
c_binding/include/libstoragemgmt/libstoragemgmt.h.

Changes in V2:

* Use lsm_string_list instead of 'char **' for 'member_ids' argument in C
library.
* Removed 'member_count' argument as lsm_string_list has
lsm_string_list_size() function.
* Fix the incorrect comment of plugin interface 'lsm_plug_volume_raid_info'.

Changes in V3:

* Method name changed to lsm.Client.pool_member_info() and
lsm_pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
c_binding/include/libstoragemgmt/libstoragemgmt.h | 27 +++++++
.../libstoragemgmt/libstoragemgmt_capabilities.h | 4 +-
.../libstoragemgmt/libstoragemgmt_plug_interface.h | 27 +++++++
.../include/libstoragemgmt/libstoragemgmt_types.h | 17 +++++
c_binding/lsm_mgmt.cpp | 49 +++++++++++++
c_binding/lsm_plugin_ipc.cpp | 43 ++++++++++-
python_binding/lsm/_client.py | 85 ++++++++++++++++++++++
python_binding/lsm/_data.py | 6 ++
8 files changed, 256 insertions(+), 2 deletions(-)

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt.h b/c_binding/include/libstoragemgmt/libstoragemgmt.h
index 3d34f75..a717961 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt.h
@@ -864,6 +864,33 @@ int LSM_DLL_EXPORT lsm_volume_raid_info(
uint32_t *strip_size, uint32_t *disk_count,
uint32_t *min_io_size, uint32_t *opt_io_size, lsm_flag flags);

+/**
+ * Retrieves the membership of the given pool. New in version 1.2.
+ * @param[in] c Valid connection
+ * @param[in] pool The lsm_pool ptr.
+ * @param[out] raid_type
+ * Enum of lsm_volume_raid_type.
+ * @param[out] member_type
+ * Enum of lsm_pool_member_type.
+ * @param[out] member_ids
+ * The pointer to lsm_string_list pointer.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_POOL,
+ * the 'member_ids' will contain a list of parent Pool
+ * IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_DISK,
+ * the 'member_ids' will contain a list of disk IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_OTHER or
+ * LSM_POOL_MEMBER_TYPE_UNKNOWN, the member_ids should
+ * be NULL.
+ * Need to use lsm_string_list_free() to free this memory.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+int LSM_DLL_EXPORT lsm_pool_member_info(
+ lsm_connect *c, lsm_pool *pool, lsm_volume_raid_type *raid_type,
+ lsm_pool_member_type *member_type, lsm_string_list **member_ids,
+ lsm_flag flags);
+
#ifdef __cplusplus
}
#endif
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
index 18490f3..943f4a4 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
@@ -113,7 +113,9 @@ typedef enum {
LSM_CAP_TARGET_PORTS = 216, /**< List target ports */
LSM_CAP_TARGET_PORTS_QUICK_SEARCH = 217, /**< Filtering occurs on array */

- LSM_CAP_DISKS = 220 /**< List disk drives */
+ LSM_CAP_DISKS = 220, /**< List disk drives */
+ LSM_CAP_POOL_MEMBER_INFO = 221,
+ /**^ Query pool member information */

} lsm_capability_type;

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
index b36586c..cdc8dc9 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
@@ -827,6 +827,32 @@ typedef int (*lsm_plug_volume_raid_info)(lsm_plugin_ptr c, lsm_volume *volume,
uint32_t *disk_count, uint32_t *min_io_size, uint32_t *opt_io_size,
lsm_flag flags);

+/**
+ * Retrieves the membership of the given pool. New in version 1.2.
+ * @param[in] c Valid lsm plug-in pointer
+ * @param[in] pool The lsm_pool ptr.
+ * @param[out] raid_type
+ * Enum of lsm_volume_raid_type.
+ * @param[out] member_type
+ * Enum of lsm_pool_member_type.
+ * @param[out] member_ids
+ * The pointer of lsm_string_list pointer.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_POOL,
+ * the 'member_ids' will contain a list of parent Pool
+ * IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_DISK,
+ * the 'member_ids' will contain a list of disk IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_OTHER or
+ * LSM_POOL_MEMBER_TYPE_UNKNOWN, the member_ids should
+ * be NULL.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+typedef int (*lsm_plug_pool_member_info)(
+ lsm_plugin_ptr c, lsm_pool *pool, lsm_volume_raid_type *raid_type,
+ lsm_pool_member_type *member_type, lsm_string_list **member_ids,
+ lsm_flag flags);
+
/** \struct lsm_ops_v1_2
* \brief Functions added in version 1.2
 * NOTE: This structure will change during the development until version 1.2
@@ -835,6 +861,7 @@ typedef int (*lsm_plug_volume_raid_info)(lsm_plugin_ptr c, lsm_volume *volume,
struct lsm_ops_v1_2 {
lsm_plug_volume_raid_info vol_raid_info;
/**^ Query volume RAID information*/
+ lsm_plug_pool_member_info pool_member_info;
};

/**
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
index c5607c1..115ec5a 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
@@ -169,6 +169,23 @@ typedef enum {
/**^ Vendor specific RAID type */
} lsm_volume_raid_type;

+/**< \enum lsm_pool_member_type Different types of Pool member*/
+typedef enum {
+ LSM_POOL_MEMBER_TYPE_UNKNOWN = 0,
+ /**^ Plugin failed to detect the RAID member type. */
+ LSM_POOL_MEMBER_TYPE_OTHER = 1,
+ /**^ Vendor specific RAID member type. */
+ LSM_POOL_MEMBER_TYPE_DISK = 2,
+ /**^ Pool is created from RAID group using whole disks. */
+ LSM_POOL_MEMBER_TYPE_POOL = 3,
+ /**^
+ * Current pool(also known as sub-pool) is allocated from other
+ * pool(parent pool).
+ * The 'raid_type' will be set to RAID_TYPE_OTHER unless the RAID system
+ * supports RAID using the space of parent pools.
+ */
+} lsm_pool_member_type;
+
#define LSM_VOLUME_STRIP_SIZE_UNKNOWN 0
#define LSM_VOLUME_DISK_COUNT_UNKNOWN 0
#define LSM_VOLUME_MIN_IO_SIZE_UNKNOWN 0
diff --git a/c_binding/lsm_mgmt.cpp b/c_binding/lsm_mgmt.cpp
index e6a254f..aa610cf 100644
--- a/c_binding/lsm_mgmt.cpp
+++ b/c_binding/lsm_mgmt.cpp
@@ -802,6 +802,55 @@ int lsm_pool_list(lsm_connect *c, char *search_key, char *search_value,
return rc;
}

+int lsm_pool_member_info(lsm_connect *c, lsm_pool *pool,
+ lsm_volume_raid_type * raid_type,
+ lsm_pool_member_type *member_type,
+ lsm_string_list **member_ids, lsm_flag flags)
+{
+ if( LSM_FLAG_UNUSED_CHECK(flags) ) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ int rc = LSM_ERR_OK;
+ CONN_SETUP(c);
+
+ if( !LSM_IS_POOL(pool) ) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ if( !raid_type || !member_type || !member_ids) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ std::map<std::string, Value> p;
+ p["pool"] = pool_to_value(pool);
+ p["flags"] = Value(flags);
+ Value parameters(p);
+ try {
+
+ Value response;
+
+ rc = rpc(c, "pool_member_info", parameters, response);
+ if( LSM_ERR_OK == rc ) {
+ std::vector<Value> j = response.asArray();
+ *raid_type = (lsm_volume_raid_type) j[0].asInt32_t();
+ *member_type = (lsm_pool_member_type) j[1].asInt32_t();
+ if (Value::array_t == j[2].valueType()){
+ if (j[2].asArray().size()){
+ *member_ids = value_to_string_list(j[2]);
+ }else{
+ *member_ids = NULL;
+ }
+ }
+ }
+ } catch( const ValueException &ve ) {
+ rc = logException(c, LSM_ERR_LIB_BUG, "Unexpected type",
+ ve.what());
+ }
+ return rc;
+
+}
+
int lsm_target_port_list(lsm_connect *c, const char *search_key,
const char *search_value,
lsm_target_port **target_ports[],
diff --git a/c_binding/lsm_plugin_ipc.cpp b/c_binding/lsm_plugin_ipc.cpp
index d2a43d4..27e44a9 100644
--- a/c_binding/lsm_plugin_ipc.cpp
+++ b/c_binding/lsm_plugin_ipc.cpp
@@ -1015,6 +1015,46 @@ static int handle_volume_raid_info(lsm_plugin_ptr p, Value &params,
return rc;
}

+static int handle_pool_member_info(lsm_plugin_ptr p, Value &params,
+ Value &response)
+{
+ int rc = LSM_ERR_NO_SUPPORT;
+ if( p && p->ops_v1_2 && p->ops_v1_2->pool_member_info) {
+ Value v_pool = params["pool"];
+
+ if(IS_CLASS_POOL(v_pool) &&
+ LSM_FLAG_EXPECTED_TYPE(params) ) {
+ lsm_pool *pool = value_to_pool(v_pool);
+ std::vector<Value> result;
+
+ if( pool ) {
+ lsm_volume_raid_type raid_type;
+ lsm_pool_member_type member_type;
+ lsm_string_list *member_ids;
+
+ rc = p->ops_v1_2->pool_member_info(
+ p, pool, &raid_type, &member_type,
+ &member_ids, LSM_FLAG_GET_VALUE(params));
+
+ if( LSM_ERR_OK == rc ) {
+ result.push_back(Value((int32_t)raid_type));
+ result.push_back(Value((int32_t)member_type));
+ result.push_back(string_list_to_value(member_ids));
+ response = Value(result);
+ }
+
+ lsm_pool_record_free(pool);
+ } else {
+ rc = LSM_ERR_NO_MEMORY;
+ }
+
+ } else {
+ rc = LSM_ERR_TRANSPORT_INVALID_ARG;
+ }
+ }
+ return rc;
+}
+
static int ag_list(lsm_plugin_ptr p, Value &params, Value &response)
{
int rc = LSM_ERR_NO_SUPPORT;
@@ -2213,7 +2253,8 @@ static std::map<std::string,handler> dispatch = static_map<std::string,handler>
("volume_resize", handle_volume_resize)
("volumes_accessible_by_access_group", vol_accessible_by_ag)
("volumes", handle_volumes)
- ("volume_raid_info", handle_volume_raid_info);
+ ("volume_raid_info", handle_volume_raid_info)
+ ("pool_member_info", handle_pool_member_info);

static int process_request(lsm_plugin_ptr p, const std::string &method, Value &request,
Value &response)
diff --git a/python_binding/lsm/_client.py b/python_binding/lsm/_client.py
index a641b1d..5679108 100644
--- a/python_binding/lsm/_client.py
+++ b/python_binding/lsm/_client.py
@@ -1074,3 +1074,88 @@ def volume_raid_info(self, volume, flags=FLAG_RSVD):
No support.
"""
return self._tp.rpc('volume_raid_info', _del_self(locals()))
+
+ @_return_requires([int, int, [unicode]])
+ def pool_member_info(self, pool, flags=FLAG_RSVD):
+ """Query the membership information of certain pool.
+
+ New in version 1.2.
+
+ Query the RAID type, member type and member ids.
+
+ This method requires this capability:
+ lsm.Capabilities.POOL_MEMBER_INFO
+
+ Args:
+ pool (lsm.Pool object): Pool to query
+ flags (int): Reserved for future use. Should be set as
+ lsm.Client.FLAG_RSVD
+ Returns:
+ [raid_type, member_type, member_ids]
+
+ raid_type (int): RAID Type of requested pool.
+ Could be one of these values:
+ Volume.RAID_TYPE_RAID0
+ Stripe
+ Volume.RAID_TYPE_RAID1
+ Two disks Mirror
+ Volume.RAID_TYPE_RAID3
+ Byte-level striping with dedicated parity
+ Volume.RAID_TYPE_RAID4
+ Block-level striping with dedicated parity
+ Volume.RAID_TYPE_RAID5
+ Block-level striping with distributed parity
+ Volume.RAID_TYPE_RAID6
+ Block-level striping with two distributed parities,
+ aka, RAID-DP
+ Volume.RAID_TYPE_RAID10
+ Stripe of mirrors
+ Volume.RAID_TYPE_RAID15
+ Parity of mirrors
+ Volume.RAID_TYPE_RAID16
+ Dual parity of mirrors
+ Volume.RAID_TYPE_RAID50
+                    Stripe of parities
+                Volume.RAID_TYPE_RAID60
+ Stripe of dual parities
+ Volume.RAID_TYPE_RAID51
+ Mirror of parities
+ Volume.RAID_TYPE_RAID61
+ Mirror of dual parities
+ Volume.RAID_TYPE_JBOD
+ Just bunch of disks, no parity, no striping.
+ Volume.RAID_TYPE_UNKNOWN
+ The plugin failed to detect the volume's RAID type.
+ Volume.RAID_TYPE_MIXED
+ This volume contains multiple RAID settings.
+ Volume.RAID_TYPE_OTHER
+ Vendor specific RAID type
+ member_type(int):
+ Could be one of these values:
+ Pool.MEMBER_TYPE_POOL
+                    # Current pool (also known as sub-pool) is allocated
+                    # from another pool (parent pool).
+                    # The 'raid_type' will be set to RAID_TYPE_OTHER
+                    # unless the RAID system supports RAID using the
+                    # space of parent pools.
+ Pool.MEMBER_TYPE_DISK
+ # Pool is created from RAID group using whole disks.
+ Pool.MEMBER_TYPE_OTHER
+ # Vendor specific RAID member type.
+ Pool.MEMBER_TYPE_UNKNOWN
+ # Plugin failed to detect the RAID member type.
+ member_ids(list(string)):
+ When 'member_type' is Pool.MEMBER_TYPE_POOL,
+ the 'member_ids' will contain a list of parent Pool IDs.
+ When 'member_type' is Pool.MEMBER_TYPE_DISK,
+ the 'member_ids' will contain a list of disk IDs.
+ When 'member_type' is Pool.MEMBER_TYPE_OTHER or
+ Pool.MEMBER_TYPE_UNKNOWN, the member_ids should be an
+ empty list.
+ Raises:
+ LsmError:
+ ErrorNumber.NO_SUPPORT
+ No support.
+ ErrorNumber.NOT_FOUND_POOL
+ Pool not found.
+ """
+ return self._tp.rpc('pool_member_info', _del_self(locals()))
diff --git a/python_binding/lsm/_data.py b/python_binding/lsm/_data.py
index 23681dd..2f42eaf 100644
--- a/python_binding/lsm/_data.py
+++ b/python_binding/lsm/_data.py
@@ -412,6 +412,11 @@ class Pool(IData):
STATUS_INITIALIZING = 1 << 14
STATUS_GROWING = 1 << 15

+ MEMBER_TYPE_UNKNOWN = 0
+ MEMBER_TYPE_OTHER = 1
+ MEMBER_TYPE_DISK = 2
+ MEMBER_TYPE_POOL = 3
+
def __init__(self, _id, _name, _element_type, _unsupported_actions,
_total_space, _free_space,
_status, _status_info, _system_id, _plugin_data=None):
@@ -754,6 +759,7 @@ class Capabilities(IData):
TARGET_PORTS_QUICK_SEARCH = 217

DISKS = 220
+ POOL_MEMBER_INFO = 221

def _to_dict(self):
return {'class': self.__class__.__name__,
--
1.8.3.1
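
A small caller-side sketch (not part of the patch) pairing the new method
with its documented errors; assumes a connected lsm.Client 'c' and an
lsm.Pool 'pool':

    from lsm import ErrorNumber, LsmError

    try:
        raid_type, member_type, member_ids = c.pool_member_info(pool)
    except LsmError as lsm_err:
        if lsm_err.code == ErrorNumber.NO_SUPPORT:
            pass  # plugin does not implement pool_member_info()
        elif lsm_err.code == ErrorNumber.NOT_FOUND_POOL:
            pass  # pool was deleted between listing and query
        else:
            raise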
Gris Ge
2015-04-18 11:23:45 UTC
Permalink
* Add a simple lsm_pool_raid_info() that returns unknown data.

* Set LSM_CAP_POOL_RAID_INFO.

Changes in V2:

* Use lsm_string_list for member_ids of lsm_pool_raid_info().

Changes in V3:

* Renamed to lsm_pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/simc/simc_lsmplugin.c | 24 +++++++++++++++++++++++-
1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/plugin/simc/simc_lsmplugin.c b/plugin/simc/simc_lsmplugin.c
index 422a064..6aa7bdb 100644
--- a/plugin/simc/simc_lsmplugin.c
+++ b/plugin/simc/simc_lsmplugin.c
@@ -392,6 +392,7 @@ static int cap(lsm_plugin_ptr c, lsm_system *system,
LSM_CAP_EXPORT_FS,
LSM_CAP_EXPORT_REMOVE,
LSM_CAP_VOLUME_RAID_INFO,
+ LSM_CAP_POOL_MEMBER_INFO,
-1
);

@@ -980,8 +981,29 @@ static int volume_raid_info(lsm_plugin_ptr c, lsm_volume *volume,
return rc;
}

+static int pool_member_info(
+ lsm_plugin_ptr c, lsm_pool *pool,
+ lsm_volume_raid_type *raid_type, lsm_pool_member_type *member_type,
+ lsm_string_list **member_ids, lsm_flag flags)
+{
+ int rc = LSM_ERR_OK;
+ struct plugin_data *pd = (struct plugin_data*)lsm_private_data_get(c);
+ lsm_pool *p = find_pool(pd, lsm_pool_id_get(pool));
+
+ if( !p) {
+ rc = lsm_log_error_basic(c, LSM_ERR_NOT_FOUND_POOL,
+ "Pool not found!");
+ }
+
+ *raid_type = LSM_VOLUME_RAID_TYPE_UNKNOWN;
+ *member_type = LSM_POOL_MEMBER_TYPE_UNKNOWN;
+ *member_ids = NULL;
+ return rc;
+}
+
static struct lsm_ops_v1_2 ops_v1_2 = {
- volume_raid_info
+ volume_raid_info,
+ pool_member_info,
};

static int volume_enable_disable(lsm_plugin_ptr c, lsm_volume *v,
--
1.8.3.1
Gris Ge
2015-04-18 11:23:46 UTC
Permalink
* Add lsm.Client.pool_raid_info() support:
* Treat NetApp ONTAP aggregate as disk RAID group.
Use na_disk['aggregate'] property to determine disk ownership.
* Treat NetApp ONTAP volume as sub-pool.
Use na_vol['containing-aggregate'] property to determine sub-pool
ownership.

* Set lsm.Capabilities.POOL_RAID_INFO.

Changes in V3:

* Renamed to pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/ontap/ontap.py | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

diff --git a/plugin/ontap/ontap.py b/plugin/ontap/ontap.py
index f9f1f56..5a01494 100644
--- a/plugin/ontap/ontap.py
+++ b/plugin/ontap/ontap.py
@@ -545,6 +545,7 @@ def capabilities(self, system, flags=0):
cap.set(Capabilities.TARGET_PORTS)
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUME_RAID_INFO)
+ cap.set(Capabilities.POOL_MEMBER_INFO)
return cap

@handle_ontap_errors
@@ -1330,3 +1331,23 @@ def volume_raid_info(self, volume, flags=0):
return [
raid_type, Ontap._STRIP_SIZE, disk_count, Ontap._STRIP_SIZE,
Ontap._OPT_IO_SIZE]
+
+ @handle_ontap_errors
+ def pool_member_info(self, pool, flags=0):
+ if pool.element_type & Pool.ELEMENT_TYPE_VOLUME:
+ # We got a NetApp volume
+ raid_type = Volume.RAID_TYPE_OTHER
+ member_type = Pool.MEMBER_TYPE_POOL
+ na_vol = self.f.volumes(volume_name=pool.name)[0]
+            member_ids = [na_vol['containing-aggregate']]
+ else:
+ # We got a NetApp aggregate
+ member_type = Pool.MEMBER_TYPE_DISK
+ na_aggr = self.f.aggregates(aggr_name=pool.name)[0]
+ raid_type = Ontap._raid_type_of_na_aggr(na_aggr)
+            member_ids = list(
+ Ontap._disk_id(d)
+ for d in self.f.disks()
+ if 'aggregate' in d and d['aggregate'] == pool.name)
+
+        return raid_type, member_type, member_ids
--
1.8.3.1
Gris Ge
2015-04-18 11:23:47 UTC
Permalink
* Add lsm.Client.pool_raid_info() support:
* Use command 'storcli /c0/d0 show all J'.
* Use 'DG Drive LIST' section of output for disk_ids.
* Use 'TOPOLOGY' section of output for raid_type.
Since the command used here does not share the same data layout as the
"storcli /c0/v0 show all J" command used by the 'volume_raid_info()'
method, we now have two sets of code for detecting RAID type.

* Set lsm.Capabilities.POOL_RAID_INFO.

Changes in V3:

* Renamed to pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/megaraid/megaraid.py | 54 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)

diff --git a/plugin/megaraid/megaraid.py b/plugin/megaraid/megaraid.py
index 78603fc..256ddd6 100644
--- a/plugin/megaraid/megaraid.py
+++ b/plugin/megaraid/megaraid.py
@@ -259,6 +259,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUMES)
cap.set(Capabilities.VOLUME_RAID_INFO)
+ cap.set(Capabilities.POOL_MEMBER_INFO)
return cap

def _storcli_exec(self, storcli_cmds, flag_json=True):
@@ -546,3 +547,56 @@ def volume_raid_info(self, volume, flags=Client.FLAG_RSVD):
return [
raid_type, strip_size, disk_count, strip_size,
strip_size * strip_count]
+
+ @_handle_errors
+ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
+ lsi_dg_path = pool.plugin_data
+ # Check whether pool exists.
+ try:
+ dg_show_all_output = self._storcli_exec(
+                [lsi_dg_path, "show", "all"])
+ except ExecError as exec_error:
+ try:
+ json_output = json.loads(exec_error.stdout)
+ detail_error = json_output[
+ 'Controllers'][0]['Command Status']['Detailed Status']
+ except Exception:
+ raise exec_error
+
+ if detail_error and detail_error[0]['Status'] == 'Not found':
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_POOL,
+ "Pool not found")
+ raise
+
+ ctrl_num = lsi_dg_path.split('/')[1][1:]
+ lsm_disk_map = {}
+ disk_ids = []
+ for lsm_disk in self.disks():
+ lsm_disk_map[lsm_disk.plugin_data] = lsm_disk.id
+
+ for dg_disk_info in dg_show_all_output['DG Drive LIST']:
+ cur_lsi_disk_path = "/c%s" % ctrl_num + "/e%s/s%s" % tuple(
+ dg_disk_info['EID:Slt'].split(':'))
+ if cur_lsi_disk_path in lsm_disk_map.keys():
+ disk_ids.append(lsm_disk_map[cur_lsi_disk_path])
+ else:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "pool_member_info(): Failed to find disk id of %s" %
+ cur_lsi_disk_path)
+
+ raid_type = Volume.RAID_TYPE_UNKNOWN
+ dg_num = lsi_dg_path.split('/')[2][1:]
+ for dg_top in dg_show_all_output['TOPOLOGY']:
+ if dg_top['Arr'] == '-' and \
+ dg_top['Row'] == '-' and \
+ int(dg_top['DG']) == int(dg_num):
+ raid_type = _RAID_TYPE_MAP.get(
+ dg_top['Type'], Volume.RAID_TYPE_UNKNOWN)
+ break
+
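+        # storcli reports spanned RAID 1 (i.e. RAID 10) as 'RAID1'; with
+        # 4 or more member disks, treat it as RAID 10.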
+ if raid_type == Volume.RAID_TYPE_RAID1 and len(disk_ids) >= 4:
+ raid_type = Volume.RAID_TYPE_RAID10
+
+ return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
--
1.8.3.1
Gris Ge
2015-04-18 11:23:48 UTC
Permalink
* Add lsm.Client.pool_raid_info() support:
* Base on 'hpssacli ctrl slot=0 show config detail' command.

* Set lsm.Capabilities.POOL_RAID_INFO.

Changes in V3:

* Renamed to pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/hpsa/hpsa.py | 39 +++++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)

diff --git a/plugin/hpsa/hpsa.py b/plugin/hpsa/hpsa.py
index 76eb4a7..75f507f 100644
--- a/plugin/hpsa/hpsa.py
+++ b/plugin/hpsa/hpsa.py
@@ -285,6 +285,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.VOLUMES)
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUME_RAID_INFO)
+ cap.set(Capabilities.POOL_MEMBER_INFO)
return cap

def _sacli_exec(self, sacli_cmds, flag_convert=True):
@@ -531,3 +532,41 @@ def volume_raid_info(self, volume, flags=Client.FLAG_RSVD):
"Volume not found")

return [raid_type, strip_size, disk_count, strip_size, stripe_size]
+
+ @_handle_errors
+ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
+ """
+ Depend on command:
+ hpssacli ctrl slot=0 show config detail
+ """
+ if not pool.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Ilegal input volume argument: missing plugin_data property")
+
+ (ctrl_num, array_num) = pool.plugin_data.split(":")
+ ctrl_data = self._sacli_exec(
+ ["ctrl", "slot=%s" % ctrl_num, "show", "config", "detail"]
+ ).values()[0]
+
+ disk_ids = []
+ raid_type = Volume.RAID_TYPE_UNKNOWN
+ for key_name in ctrl_data.keys():
+ if key_name == "Array: %s" % array_num:
+ for array_key_name in ctrl_data[key_name].keys():
+ if array_key_name.startswith("Logical Drive: ") and \
+ raid_type == Volume.RAID_TYPE_UNKNOWN:
+ raid_type = _hp_raid_type_to_lsm(
+ ctrl_data[key_name][array_key_name])
+ elif array_key_name.startswith("physicaldrive"):
+ hp_disk = ctrl_data[key_name][array_key_name]
+ if hp_disk['Drive Type'] == 'Data Drive':
+ disk_ids.append(hp_disk['Serial Number'])
+ break
+
+ if len(disk_ids) == 0:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_POOL,
+ "Pool not found")
+
+ return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
--
1.8.3.1
Gris Ge
2015-04-18 11:23:49 UTC
Permalink
* When a pool is RAID0 or another RAID type with multiple data disks,
the free space generated by the 'pools_view' sqlite view will be
incorrect, as the volume consumed size is counted several times: the
old view joined a pool's disks against its volumes, repeating each
volume row once per data disk (a demonstration sketch follows this
patch).
The fix is to generate temporary tables for total space, volume
consumed space, fs consumed space, and sub-pool consumed space.

* Since we changed the 'pools_view' view, bump the version to 3.3.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/sim/simarray.py | 98 +++++++++++++++++++++++++++++++++-----------------
1 file changed, 66 insertions(+), 32 deletions(-)

diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index de43701..a0809f6 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -123,7 +123,7 @@ def data_disk_count(raid_type, disk_count):


class BackStore(object):
- VERSION = "3.2"
+ VERSION = "3.3"
VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
JOB_DEFAULT_DURATION = 1
JOB_DATA_TYPE_VOL = 1
@@ -353,40 +353,74 @@ def __init__(self, statefile, timeout):
"""
CREATE VIEW pools_view AS
SELECT
- pool.id,
- pool.name,
- pool.status,
- pool.status_info,
- pool.element_type,
- pool.unsupported_actions,
- pool.raid_type,
- pool.member_type,
- pool.parent_pool_id,
- ifnull(pool.total_space,
- ifnull(SUM(disk.total_space), 0))
- total_space,
- ifnull(pool.total_space,
- ifnull(SUM(disk.total_space), 0)) -
- ifnull(SUM(volume.consumed_size), 0) -
- ifnull(SUM(fs.consumed_size), 0) -
- ifnull(SUM(pool2.total_space), 0)
- free_space
-
+ pool0.id,
+ pool0.name,
+ pool0.status,
+ pool0.status_info,
+ pool0.element_type,
+ pool0.unsupported_actions,
+ pool0.raid_type,
+ pool0.member_type,
+ pool0.parent_pool_id,
+ pool1.total_space total_space,
+ pool1.total_space -
+ pool2.vol_consumed_size -
+ pool3.fs_consumed_size -
+ pool4.sub_pool_consumed_size free_space
FROM
- pools pool
- LEFT JOIN disks disk
- ON pool.id = disk.owner_pool_id AND
- disk.role = 'DATA'
- LEFT JOIN volumes volume
- ON volume.pool_id = pool.id
- LEFT JOIN fss fs
- ON fs.pool_id = pool.id
- LEFT JOIN pools pool2
- ON pool2.parent_pool_id = pool.id
+ pools pool0
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(pool.total_space,
+ ifnull(SUM(disk.total_space), 0))
+ total_space
+ FROM pools pool
+ LEFT JOIN disks disk
+ ON pool.id = disk.owner_pool_id AND
+ disk.role = 'DATA'
+ GROUP BY
+ pool.id
+ ) pool1 ON pool0.id = pool1.id
+
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(SUM(volume.consumed_size), 0)
+ vol_consumed_size
+ FROM pools pool
+ LEFT JOIN volumes volume
+ ON volume.pool_id = pool.id
+ GROUP BY
+ pool.id
+ ) pool2 ON pool0.id = pool2.id
+
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(SUM(fs.consumed_size), 0)
+ fs_consumed_size
+ FROM pools pool
+ LEFT JOIN fss fs
+ ON fs.pool_id = pool.id
+ GROUP BY
+ pool.id
+ ) pool3 ON pool0.id = pool3.id
+
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(SUM(sub_pool.total_space), 0)
+ sub_pool_consumed_size
+ FROM pools pool
+ LEFT JOIN pools sub_pool
+ ON sub_pool.parent_pool_id = pool.id
+ GROUP BY
+ pool.id
+ ) pool4 ON pool0.id = pool4.id

GROUP BY
- pool.id
- ;
+ pool0.id;
""")
sql_cmd += (
"""
--
1.8.3.1
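
A minimal sqlite3 sketch (not part of the patch) of the double-counting
described above, using a made-up two-disk RAID0 pool:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE disks (owner_pool_id INTEGER, total_space INTEGER);
        CREATE TABLE volumes (pool_id INTEGER, consumed_size INTEGER);
        INSERT INTO disks VALUES (1, 100), (1, 100);  -- two data disks
        INSERT INTO volumes VALUES (1, 50);           -- one volume
    """)
    free_space = conn.execute("""
        SELECT SUM(disk.total_space) - SUM(volume.consumed_size)
        FROM disks disk
        LEFT JOIN volumes volume ON volume.pool_id = disk.owner_pool_id
    """).fetchone()[0]
    # The JOIN repeats the volume row once per disk, so consumed size
    # is counted twice: prints 100 instead of the correct 150.
    print(free_space)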
Gris Ge
2015-04-18 11:23:50 UTC
Permalink
* Change Disk.plugin_data from '/c0/e252/s1' to '0:252:1'.

* Updated pool_raid_info() method for this change.

Changes in V3:

* Rebased on pool_raid_info to pool_member_info rename change.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/megaraid/megaraid.py | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/plugin/megaraid/megaraid.py b/plugin/megaraid/megaraid.py
index 256ddd6..46e7916 100644
--- a/plugin/megaraid/megaraid.py
+++ b/plugin/megaraid/megaraid.py
@@ -29,6 +29,7 @@
# Naming scheme
# mega_sys_path /c0
# mega_disk_path /c0/e64/s0
+# lsi_disk_id 0:64:0


def _handle_errors(method):
@@ -393,7 +394,8 @@ def disks(self, search_key=None, search_value=None,
status = _disk_status_of(
disk_show_basic_dict, disk_show_stat_dict)

- plugin_data = mega_disk_path
+ plugin_data = "%s:%s" % (
+ ctrl_num, disk_show_basic_dict['EID:Slt'])

rc_lsm_disks.append(
Disk(
@@ -576,15 +578,14 @@ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
lsm_disk_map[lsm_disk.plugin_data] = lsm_disk.id

for dg_disk_info in dg_show_all_output['DG Drive LIST']:
- cur_lsi_disk_path = "/c%s" % ctrl_num + "/e%s/s%s" % tuple(
- dg_disk_info['EID:Slt'].split(':'))
- if cur_lsi_disk_path in lsm_disk_map.keys():
- disk_ids.append(lsm_disk_map[cur_lsi_disk_path])
+ cur_lsi_disk_id = "%s:%s" % (ctrl_num, dg_disk_info['EID:Slt'])
+ if cur_lsi_disk_id in lsm_disk_map.keys():
+ disk_ids.append(lsm_disk_map[cur_lsi_disk_id])
else:
raise LsmError(
ErrorNumber.PLUGIN_BUG,
"pool_member_info(): Failed to find disk id of %s" %
- cur_lsi_disk_path)
+ cur_lsi_disk_id)

raid_type = Volume.RAID_TYPE_UNKNOWN
dg_num = lsi_dg_path.split('/')[2][1:]
--
1.8.3.1
Gris Ge
2015-04-18 11:23:52 UTC
Permalink
* New commands:
* volume-create-raid
Create RAID group and allocate all space to single new volume.

* volume-create-raid-cap
Query supported RAID types and strip sizes of volume-create-raid
command.

* Manpage updated.

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

Signed-off-by: Gris Ge <***@redhat.com>
---
doc/man/lsmcli.1.in | 32 +++++++++++++++++++++
tools/lsmcli/cmdline.py | 67 +++++++++++++++++++++++++++++++++++++++++++-
tools/lsmcli/data_display.py | 36 +++++++++++++++++++++++-
3 files changed, 133 insertions(+), 2 deletions(-)

diff --git a/doc/man/lsmcli.1.in b/doc/man/lsmcli.1.in
index 6faf690..05deb21 100644
--- a/doc/man/lsmcli.1.in
+++ b/doc/man/lsmcli.1.in
@@ -232,6 +232,34 @@ Optional. Provisioning type. Valid values are: DEFAULT, THIN, FULL.
Provisioning enabled volume. \fBFULL\fR means requiring a fully allocated
volume.

+.SS volume-create-raid
+Creates a volume on hardware RAID on given disks.
+.TP 15
+\fB--name\fR \fI<NAME>\fR
+Required. Volume name. Might be altered or ignored due to hardware RAID card
+vendor limitations.
+.TP
+\fB--raid-type\fR \fI<RAID_TYPE>\fR
+Required. Could be one of these values: \fBRAID0\fR, \fBRAID1\fR, \fBRAID5\fR,
+\fBRAID6\fR, \fBRAID10\fR, \fBRAID50\fR, \fBRAID60\fR. The supported RAID
+types of the current RAID card can be queried via the command
+"\fBvolume-create-raid-cap\fR".
+.TP
+\fB--disk\fR \fI<DISK_ID>\fR
+Required. Repeatable. The disk ID for new RAID group.
+.TP
+\fB--strip-size\fR \fI<STRIP_SIZE>\fR
+Optional. The strip size in bytes on each disk. If not defined, the hardware
+card will use the vendor default value. The supported strip sizes of the
+current RAID card can be queried via the command "\fBvolume-create-raid-cap\fR".
+
+.SS volume-create-raid-cap
+Query support status of the volume-create-raid command for the current
+hardware RAID card.
+.TP 15
+\fB--sys\fR \fI<SYS_ID>\fR
+Required. ID of the system to query for capabilities.
+
.SS volume-delete
.TP 15
Delete a volume given its ID
@@ -586,6 +614,10 @@ Alias of 'list --type target_ports'
Alias of 'plugin-info'
.SS vc
Alias of 'volume-create'
+.SS vcr
+Alias of 'volume-create-raid'
+.SS vcrc
+Alias of 'volume-create-raid-cap'
.SS vd
Alias of 'volume-delete'
.SS vr
diff --git a/tools/lsmcli/cmdline.py b/tools/lsmcli/cmdline.py
index d26da80..42c2365 100644
--- a/tools/lsmcli/cmdline.py
+++ b/tools/lsmcli/cmdline.py
@@ -40,7 +40,7 @@
from lsm.lsmcli.data_display import (
DisplayData, PlugData, out,
vol_provision_str_to_type, vol_rep_type_str_to_type, VolumeRAIDInfo,
- PoolRAIDInfo)
+ PoolRAIDInfo, VcrCap)


## Wraps the invocation to the command line
@@ -242,6 +242,37 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
),

dict(
+ name='volume-create-raid',
+ help='Creates a RAIDed volume on hardware RAID',
+ args=[
+ dict(name="--name", help='volume name', metavar='<NAME>'),
+ dict(name="--disk", metavar='<DISK>',
+ help='Free disks for new RAIDed volume.\n'
+                      'This is a repeatable argument.',
+ action='append'),
+ dict(name="--raid-type",
+ help="RAID type for the new RAID group. "
+ "Should be one of these:\n %s" %
+ "\n ".join(
+ VolumeRAIDInfo.VOL_CREATE_RAID_TYPES_STR),
+ choices=VolumeRAIDInfo.VOL_CREATE_RAID_TYPES_STR,
+ type=str.upper),
+ ],
+ optional=[
+ dict(name="--strip-size",
+ help="Strip size. " + size_help),
+ ],
+ ),
+
+ dict(
+ name='volume-create-raid-cap',
+        help='Query capability of creating a RAIDed volume on hardware RAID',
+ args=[
+ dict(sys_id_opt),
+ ],
+ ),
+
+ dict(
name='volume-delete',
help='Deletes a volume given its id',
args=[
@@ -635,6 +666,8 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
['c', 'capabilities'],
['p', 'plugin-info'],
['vc', 'volume-create'],
+ ['vcr', 'volume-create-raid'],
+ ['vcrc', 'volume-create-raid-cap'],
['vd', 'volume-delete'],
['vr', 'volume-resize'],
['vm', 'volume-mask'],
@@ -1351,6 +1384,38 @@ def pool_member_info(self, args):
PoolRAIDInfo(
lsm_pool.id, *self.c.pool_member_info(lsm_pool))])

+ def volume_create_raid(self, args):
+ raid_type = VolumeRAIDInfo.raid_type_str_to_lsm(args.raid_type)
+
+ all_lsm_disks = self.c.disks()
+ lsm_disks = [d for d in all_lsm_disks if d.id in args.disk]
+ if len(lsm_disks) != len(args.disk):
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_DISK,
+ "Disk ID %s not found" %
+ ', '.join(set(args.disk) - set(d.id for d in all_lsm_disks)))
+
+ busy_disks = [d.id for d in lsm_disks if not d.status & Disk.STATUS_FREE]
+
+ if len(busy_disks) >= 1:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+ "Disk %s is not free" % ", ".join(busy_disks))
+
+ if args.strip_size:
+ strip_size = size_human_2_size_bytes(args.strip_size)
+ else:
+ strip_size = Volume.VCR_STRIP_SIZE_DEFAULT
+
+ self.display_data([
+ self.c.volume_create_raid(
+ args.name, raid_type, lsm_disks, strip_size)])
+
+ def volume_create_raid_cap(self, args):
+ lsm_sys = _get_item(self.c.systems(), args.sys, "System")
+ self.display_data([
+ VcrCap(lsm_sys.id, *self.c.volume_create_raid_cap_get(lsm_sys))])
+
## Displays file system dependants
def fs_dependants(self, args):
fs = _get_item(self.c.fs(), args.fs, "File System")
diff --git a/tools/lsmcli/data_display.py b/tools/lsmcli/data_display.py
index 115f15a..b9edf68 100644
--- a/tools/lsmcli/data_display.py
+++ b/tools/lsmcli/data_display.py
@@ -265,6 +265,9 @@ class VolumeRAIDInfo(object):
Volume.RAID_TYPE_UNKNOWN: 'UNKNOWN',
}

+ VOL_CREATE_RAID_TYPES_STR = [
+ 'RAID0', 'RAID1', 'RAID5', 'RAID6', 'RAID10', 'RAID50', 'RAID60']
+
def __init__(self, vol_id, raid_type, strip_size, disk_count,
min_io_size, opt_io_size):
self.vol_id = vol_id
@@ -278,6 +281,10 @@ def __init__(self, vol_id, raid_type, strip_size, disk_count,
def raid_type_to_str(raid_type):
return _enum_type_to_str(raid_type, VolumeRAIDInfo._RAID_TYPE_MAP)

+ @staticmethod
+ def raid_type_str_to_lsm(raid_type_str):
+ return _str_to_enum(raid_type_str, VolumeRAIDInfo._RAID_TYPE_MAP)
+

class PoolRAIDInfo(object):
_MEMBER_TYPE_MAP = {
@@ -298,6 +305,11 @@ def member_type_to_str(member_type):
return _enum_type_to_str(
member_type, PoolRAIDInfo._MEMBER_TYPE_MAP)

+class VcrCap(object):
+ def __init__(self, system_id, raid_types, strip_sizes):
+ self.system_id = system_id
+ self.raid_types = raid_types
+ self.strip_sizes = strip_sizes

class DisplayData(object):

@@ -598,6 +610,25 @@ def __init__(self):
'value_conv_human': POOL_RAID_INFO_VALUE_CONV_HUMAN,
}

+ VCR_CAP_HEADER = OrderedDict()
+ VCR_CAP_HEADER['system_id'] = 'System ID'
+ VCR_CAP_HEADER['raid_types'] = 'Supported RAID Types'
+ VCR_CAP_HEADER['strip_sizes'] = 'Supported Strip Sizes'
+
+ VCR_CAP_COLUMN_SKIP_KEYS = []
+
+ VCR_CAP_VALUE_CONV_ENUM = {
+ 'raid_types': lambda i: [VolumeRAIDInfo.raid_type_to_str(x) for x in i]
+ }
+ VCR_CAP_VALUE_CONV_HUMAN = ['strip_sizes']
+
+ VALUE_CONVERT[VcrCap] = {
+ 'headers': VCR_CAP_HEADER,
+ 'column_skip_keys': VCR_CAP_COLUMN_SKIP_KEYS,
+ 'value_conv_enum': VCR_CAP_VALUE_CONV_ENUM,
+ 'value_conv_human': VCR_CAP_VALUE_CONV_HUMAN,
+ }
+
@staticmethod
def _get_man_pro_value(obj, key, value_conv_enum, value_conv_human,
flag_human, flag_enum):
@@ -607,7 +638,10 @@ def _get_man_pro_value(obj, key, value_conv_enum, value_conv_human,
value = value_conv_enum[key](value)
if flag_human:
if key in value_conv_human:
- value = size_bytes_2_size_human(value)
+ if type(value) is list:
+ value = list(size_bytes_2_size_human(s) for s in value)
+ else:
+ value = size_bytes_2_size_human(value)
return value

@staticmethod
--
1.8.3.1
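
Illustrative usage of the new commands (system and disk IDs are
placeholders):

    lsmcli volume-create-raid-cap --sys <SYS_ID>
    lsmcli volume-create-raid --name test_vol --raid-type RAID1 \
        --disk <DISK_ID_1> --disk <DISK_ID_2>

The first command lists the supported RAID types and strip sizes; the
second creates a RAID1 volume on two free disks using the vendor default
strip size (no --strip-size given).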
Gris Ge
2015-04-18 11:23:51 UTC
Permalink
* New methods:
For detailed information, please refer to the docstrings of the Python methods.

* Python:
* lsm.Client.volume_create_raid_cap_get(self, system,
flags=lsm.Client.FLAG_RSVD)

Query supported RAID types and strip sizes of volume_create_raid().

* lsm.Client.volume_create_raid(self, name, raid_type, disks,
strip_size,
flags=lsm.Client.FLAG_RSVD)
Create a disk RAID pool and allocate its full space to a single
volume.

* C user API:
* int lsm_volume_create_raid_cap_get(
lsm_connect *c, lsm_system *system,
uint32_t **supported_raid_types,
uint32_t *supported_raid_type_count,
uint32_t **supported_strip_sizes,
uint32_t *supported_strip_size_count,
lsm_flag flags);
* int lsm_volume_create_raid(
lsm_connect *c, const char *name, lsm_volume_raid_type raid_type,
lsm_disk *disks[], uint32_t disk_count, uint32_t strip_size,
lsm_volume **new_volume, lsm_flag flags);

* C plugin interfaces:
* typedef int (*lsm_plug_volume_create_raid_cap_get)(
lsm_plugin_ptr c, lsm_system *system,
uint32_t **supported_raid_types,
uint32_t *supported_raid_type_count,
uint32_t **supported_strip_sizes, uint32_t
*supported_strip_size_count, lsm_flag flags)

* typedef int (*lsm_plug_volume_create_raid)(
lsm_plugin_ptr c, const char *name,
lsm_volume_raid_type raid_type, lsm_disk *disks[], uint32_t
disk_count, uint32_t strip_size, lsm_volume **new_volume,
lsm_flag flags);

* Added two above into 'lsm_ops_v1_2' struct.

* C internal functions for converting C uint32 array to C++ Value:
* values_to_uint32_array(
Value &value, uint32_t **uint32_array, uint32_t *count)
* uint32_array_to_value(
uint32_t *uint32_array, uint32_t count)

* New capability:
* Python: lsm.Capabilities.VOLUME_CREATE_RAID
* C: LSM_CAP_VOLUME_CREATE_RAID

* New errors:
* Python: lsm.ErrorNumber.NOT_FOUND_DISK, lsm.ErrorNumber.DISK_NOT_FREE
* C: LSM_ERR_NOT_FOUND_DISK, LSM_ERR_DISK_NOT_FREE

* Design notes:
* Why not create the pool only and allocate volumes from it?
Neither HP Smart Array nor LSI MegaRAID supports this.
Both require at least one volume on any pool (RAID group), and a
pool (RAID group) is only created when creating a volume.
This is also the root reason that this method is dedicated to
hardware RAID cards, not to SAN or NAS.

* Why not allow creating multiple volumes instead of a single volume
consuming all space?
Even though both vendors above support creating multiple volumes in a
single command, it would complicate the API design and introduce many
new errors. In my humble opinion, the most common use case is to create
a single volume and let the OS create partitions on it.

* Why not store the supported RAID types and strip sizes in the existing
lsm.Client.capabilities() call?
RAID types could be stored in capabilities(), but both the plugin
implementation and user code would have to do a capability-to-RAID-type
conversion. Strip size is a number; even though the two vendors mentioned
above support the same list of strip sizes, there is no reason to create
this limitation, and again a conversion from capability to strip size
integer would be required if we saved it in capabilities().

* Why not store the supported RAID types returned by
volume_create_raid_cap_get() in a bitmap?
A conversion would be required to do so. We just want to make plugin and
API users' lives easier.

* Why is strip size a mandatory argument, and where is the strip size
change method?
Changing the strip size after volume creation normally takes a long
time for data migration. We will not introduce a strip size change
method without a convincing use case.
(An illustrative usage sketch of the new methods follows these notes.)
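
A minimal usage sketch of the new Python methods (illustrative only;
assumes a connected lsm.Client 'c' and an lsm.System 'sys' of a
hardware RAID card):

    from lsm import Disk, Volume

    supported_raid_types, supported_strip_sizes = \
        c.volume_create_raid_cap_get(sys)

    free_disks = [d for d in c.disks()
                  if d.system_id == sys.id and d.status & Disk.STATUS_FREE]

    if Volume.RAID_TYPE_RAID1 in supported_raid_types:
        new_vol = c.volume_create_raid(
            'test_vol', Volume.RAID_TYPE_RAID1, free_disks[:2],
            Volume.VCR_STRIP_SIZE_DEFAULT)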

Changes in V2:

* Rebased on pool_raid_info->pool_member_info rename changes.
* Fix 'comparison between signed and unsigned integer' in
lsm_volume_create_raid() of lsm_mgmt.cpp.
* Initialize pointers in handle_volume_create_raid_cap_get() of
lsm_plugin_ipc.cpp.

Signed-off-by: Gris Ge <***@redhat.com>
---
c_binding/include/libstoragemgmt/libstoragemgmt.h | 62 +++++++
.../libstoragemgmt/libstoragemgmt_capabilities.h | 1 +
.../include/libstoragemgmt/libstoragemgmt_error.h | 2 +
.../libstoragemgmt/libstoragemgmt_plug_interface.h | 57 ++++++
.../include/libstoragemgmt/libstoragemgmt_types.h | 3 +
c_binding/lsm_convert.cpp | 41 +++++
c_binding/lsm_convert.hpp | 12 ++
c_binding/lsm_mgmt.cpp | 118 ++++++++++++
c_binding/lsm_plugin_ipc.cpp | 94 +++++++++-
python_binding/lsm/_client.py | 203 +++++++++++++++++++++
python_binding/lsm/_common.py | 3 +
python_binding/lsm/_data.py | 3 +
12 files changed, 598 insertions(+), 1 deletion(-)

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt.h b/c_binding/include/libstoragemgmt/libstoragemgmt.h
index a717961..69ce5d8 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt.h
@@ -891,6 +891,68 @@ int LSM_DLL_EXPORT lsm_pool_member_info(
lsm_pool_member_type *member_type, lsm_string_list **member_ids,
lsm_flag flags);

+/**
+ * Query all supported RAID types and strip sizes which could be used
+ * in lsm_volume_create_raid() functions.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid connection
+ * @param[in] system
+ * The lsm_sys type.
+ * @param[out] supported_raid_types
+ * The pointer of uint32_t array. Containing
+ * lsm_volume_raid_type values.
+ * You need to free this memory by yourself.
+ * @param[out] supported_raid_type_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_raid_types array.
+ * @param[out] supported_strip_sizes
+ * The pointer of uint32_t array. Containing
+ * all supported strip sizes.
+ * You need to free this memory by yourself.
+ * @param[out] supported_strip_size_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_strip_sizes array.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+int LSM_DLL_EXPORT lsm_volume_create_raid_cap_get(
+ lsm_connect *c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags);
+
+/**
+ * Create a disk RAID pool and allocate its full space to a single new volume.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid connection
+ * @param[in] name String. Name for the new volume. It might be ignored or
+ *                 altered on some hardware RAID cards in order to fit
+ *                 their limitations.
+ * @param[in] raid_type
+ * Enum of lsm_volume_raid_type. Please refer to the returns
+ * of lsm_volume_create_raid_cap_get() function for
+ *                 supported RAID types.
+ * @param[in] disks
+ * An array of lsm_disk types
+ * @param[in] disk_count
+ * The count of lsm_disk in 'disks' argument.
+ * Count starts with 1.
+ * @param[in] strip_size
+ * uint32_t. The strip size in bytes. Please refer to
+ * the returns of lsm_volume_create_raid_cap_get() function
+ * for supported strip sizes.
+ * @param[out] new_volume
+ * Newly created volume, Pointer to the lsm_volume type
+ * pointer.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+int LSM_DLL_EXPORT lsm_volume_create_raid(
+ lsm_connect *c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags);
+
#ifdef __cplusplus
}
#endif
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
index 943f4a4..6c6bb58 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
@@ -117,6 +117,7 @@ typedef enum {
LSM_CAP_POOL_MEMBER_INFO = 221,
/**^ Query pool member information */

+ LSM_CAP_VOLUME_CREATE_RAID = 222,
} lsm_capability_type;

/**
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_error.h b/c_binding/include/libstoragemgmt/libstoragemgmt_error.h
index 0b28ddc..29c460a 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_error.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_error.h
@@ -63,6 +63,7 @@ typedef enum {
LSM_ERR_NOT_FOUND_VOLUME = 205, /**< Specified volume not found */
LSM_ERR_NOT_FOUND_NFS_EXPORT = 206, /**< NFS export not found */
LSM_ERR_NOT_FOUND_SYSTEM = 208, /**< System not found */
+ LSM_ERR_NOT_FOUND_DISK = 209,

LSM_ERR_NOT_LICENSED = 226, /**< Need license for feature */

@@ -90,6 +91,7 @@ typedef enum {

LSM_ERR_EMPTY_ACCESS_GROUP = 511,
LSM_ERR_POOL_NOT_READY = 512,
+ LSM_ERR_DISK_NOT_FREE = 513,

} lsm_error_number;

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
index cdc8dc9..587230b 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
@@ -853,6 +853,61 @@ typedef int (*lsm_plug_pool_member_info)(
lsm_pool_member_type *member_type, lsm_string_list **member_ids,
lsm_flag flags);

+/**
+ * Query all supported RAID types and strip sizes which could be used
+ * in lsm_volume_create_raid() functions.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid lsm plug-in pointer
+ * @param[in] system
+ * The lsm_sys type.
+ * @param[out] supported_raid_types
+ * The pointer of uint32_t array. Containing
+ * lsm_volume_raid_type values.
+ * @param[out] supported_raid_type_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_raid_types array.
+ * @param[out] supported_strip_sizes
+ * The pointer of uint32_t array. Containing
+ * all supported strip sizes.
+ * @param[out] supported_strip_size_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_strip_sizes array.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+typedef int (*lsm_plug_volume_create_raid_cap_get)(
+ lsm_plugin_ptr c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags);
+
+/**
+ * Create a disk RAID pool and allocate its full space to a single new volume.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid lsm plug-in pointer
+ * @param[in] name String. Name for the new volume. It might be ignored or
+ *                 altered on some hardware RAID cards in order to fit
+ *                 their limitations.
+ * @param[in] raid_type
+ * Enum of lsm_volume_raid_type.
+ * @param[in] disks
+ * An array of lsm_disk types
+ * @param[in] disk_count
+ * The count of lsm_disk in 'disks' argument.
+ * @param[in] strip_size
+ * uint32_t. The strip size in bytes.
+ * @param[out] new_volume
+ * Newly created volume, Pointer to the lsm_volume type
+ * pointer.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+typedef int (*lsm_plug_volume_create_raid)(
+ lsm_plugin_ptr c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags);
+
/** \struct lsm_ops_v1_2
* \brief Functions added in version 1.2
 * NOTE: This structure will change during the development until version 1.2
@@ -862,6 +917,8 @@ struct lsm_ops_v1_2 {
lsm_plug_volume_raid_info vol_raid_info;
/**^ Query volume RAID information*/
lsm_plug_pool_member_info pool_member_info;
+ lsm_plug_volume_create_raid_cap_get vol_create_raid_cap_get;
+ lsm_plug_volume_create_raid vol_create_raid;
};

/**
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
index 115ec5a..667c320 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
@@ -297,6 +297,9 @@ typedef enum {
LSM_TARGET_PORT_TYPE_ISCSI = 4
} lsm_target_port_type;

+#define LSM_VOLUME_VCR_STRIP_SIZE_DEFAULT 0
+/** ^ Plugin and hardware RAID will use their default strip size */
+
#ifdef __cplusplus
}
#endif
diff --git a/c_binding/lsm_convert.cpp b/c_binding/lsm_convert.cpp
index 7baf2e6..9b799ab 100644
--- a/c_binding/lsm_convert.cpp
+++ b/c_binding/lsm_convert.cpp
@@ -617,3 +617,44 @@ Value target_port_to_value(lsm_target_port *tp)
}
return Value();
}
+
+int values_to_uint32_array(
+ Value &value, uint32_t **uint32_array, uint32_t *count)
+{
+    int rc = LSM_ERR_OK;
+    /* Initialize outputs so the catch block never reads garbage. */
+    *uint32_array = NULL;
+    *count = 0;
+    try {
+ std::vector<Value> data = value.asArray();
+ *count = data.size();
+ if (*count){
+ *uint32_array = (uint32_t *)malloc(sizeof(uint32_t) * *count);
+ if (*uint32_array){
+ uint32_t i;
+ for( i = 0; i < *count; i++ ){
+ (*uint32_array)[i] = data[i].asUint32_t();
+ }
+ }else{
+ rc = LSM_ERR_NO_MEMORY;
+ }
+ }
+ } catch( const ValueException &ve) {
+ if( *count ) {
+ free(*uint32_array);
+ *uint32_array = NULL;
+ *count = 0;
+ }
+ rc = LSM_ERR_LIB_BUG;
+ }
+ return rc;
+}
+
+Value uint32_array_to_value(uint32_t *uint32_array, uint32_t count)
+{
+ std::vector<Value> rc;
+ if( uint32_array && count ) {
+ uint32_t i;
+ for( i = 0; i < count; i++ ) {
+ rc.push_back(uint32_array[i]);
+ }
+ }
+ return rc;
+}
diff --git a/c_binding/lsm_convert.hpp b/c_binding/lsm_convert.hpp
index fe1e91d..907a229 100644
--- a/c_binding/lsm_convert.hpp
+++ b/c_binding/lsm_convert.hpp
@@ -284,4 +284,16 @@ lsm_target_port LSM_DLL_LOCAL *value_to_target_port(Value &tp);
*/
Value LSM_DLL_LOCAL target_port_to_value(lsm_target_port *tp);

+/**
+ * Converts a value to array of uint32.
+ */
+int LSM_DLL_LOCAL values_to_uint32_array(
+ Value &value, uint32_t **uint32_array, uint32_t *count);
+
+/**
+ * Converts an array of uint32 to a value.
+ */
+Value LSM_DLL_LOCAL uint32_array_to_value(
+ uint32_t *uint32_array, uint32_t count);
+
#endif
diff --git a/c_binding/lsm_mgmt.cpp b/c_binding/lsm_mgmt.cpp
index aa610cf..c16eacf 100644
--- a/c_binding/lsm_mgmt.cpp
+++ b/c_binding/lsm_mgmt.cpp
@@ -2257,3 +2257,121 @@ int lsm_nfs_export_delete( lsm_connect *c, lsm_nfs_export *e, lsm_flag flags)
int rc = rpc(c, "export_remove", parameters, response);
return rc;
}
+
+int lsm_volume_create_raid_cap_get(
+ lsm_connect *c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags)
+{
+ CONN_SETUP(c);
+
+ if( !supported_raid_types || !supported_raid_type_count ||
+ !supported_strip_sizes || !supported_strip_size_count) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ std::map<std::string, Value> p;
+ p["system"] = system_to_value(system);
+ p["flags"] = Value(flags);
+
+ Value parameters(p);
+ Value response;
+
+ int rc = rpc(c, "volume_create_raid_cap_get", parameters, response);
+ try {
+ std::vector<Value> j = response.asArray();
+
+ rc = values_to_uint32_array(
+ j[0], supported_raid_types, supported_raid_type_count);
+
+ if (rc != LSM_ERR_OK){
+ return rc;
+ }
+
+ rc = values_to_uint32_array(
+ j[1], supported_strip_sizes, supported_strip_size_count);
+ } catch( const ValueException &ve ) {
+ rc = logException(c, LSM_ERR_LIB_BUG, "Unexpected type",
+ ve.what());
+ }
+ return rc;
+}
+
+int lsm_volume_create_raid(
+ lsm_connect *c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags)
+{
+ CONN_SETUP(c);
+
+ if (disk_count == 0){
+ return logException(
+ c, LSM_ERR_INVALID_ARGUMENT, "Require at least one disks", NULL);
+ }
+
+ if (raid_type == LSM_VOLUME_RAID_TYPE_RAID1 && disk_count != 2){
+ return logException(
+ c, LSM_ERR_INVALID_ARGUMENT, "RAID 1 only allows two disks", NULL);
+ }
+
+ if (raid_type == LSM_VOLUME_RAID_TYPE_RAID5 && disk_count < 3){
+ return logException(
+ c, LSM_ERR_INVALID_ARGUMENT, "RAID 5 require 3 or more disks",
+ NULL);
+ }
+
+ if (raid_type == LSM_VOLUME_RAID_TYPE_RAID6 && disk_count < 4){
+ return logException(
+ c, LSM_ERR_INVALID_ARGUMENT, "RAID 5 require 4 or more disks",
+ NULL);
+ }
+
+ if ( disk_count % 2 ){
+ if (raid_type == LSM_VOLUME_RAID_TYPE_RAID10 && disk_count < 4){
+ return logException(
+ c, LSM_ERR_INVALID_ARGUMENT,
+ "RAID 10 require even disks count and 4 or more disks",
+ NULL);
+ }
+ if (raid_type == LSM_VOLUME_RAID_TYPE_RAID50 && disk_count < 6){
+ return logException(
+ c, LSM_ERR_INVALID_ARGUMENT,
+ "RAID 50 require even disks count and 6 or more disks",
+ NULL);
+ }
+ if (raid_type == LSM_VOLUME_RAID_TYPE_RAID60 && disk_count < 8){
+ return logException(
+ c, LSM_ERR_INVALID_ARGUMENT,
+ "RAID 60 require even disks count and 8 or more disks",
+ NULL);
+ }
+ }
+
+
+ if (CHECK_RP(new_volume)){
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ std::map<std::string, Value> p;
+ p["name"] = Value(name);
+ p["raid_type"] = Value((int32_t)raid_type);
+ p["strip_size"] = Value((int32_t)strip_size);
+ p["flags"] = Value(flags);
+ std::vector<Value> disks_value;
+ for (uint32_t i = 0; i < disk_count; i++){
+ disks_value.push_back(disk_to_value(disks[i]));
+ }
+ p["disks"] = disks_value;
+
+ Value parameters(p);
+ Value response;
+
+ int rc = rpc(c, "volume_create_raid", parameters, response);
+ if( LSM_ERR_OK == rc ) {
+ *new_volume = value_to_volume(response);
+ }
+ return rc;
+}
diff --git a/c_binding/lsm_plugin_ipc.cpp b/c_binding/lsm_plugin_ipc.cpp
index 27e44a9..f00220f 100644
--- a/c_binding/lsm_plugin_ipc.cpp
+++ b/c_binding/lsm_plugin_ipc.cpp
@@ -2199,6 +2199,96 @@ static int iscsi_chap(lsm_plugin_ptr p, Value &params, Value &response)
}
return rc;
}
+static int handle_volume_create_raid_cap_get(
+ lsm_plugin_ptr p, Value &params, Value &response)
+{
+ int rc = LSM_ERR_NO_SUPPORT;
+ if( p && p->ops_v1_2 && p->ops_v1_2->vol_create_raid_cap_get ) {
+ Value v_system = params["system"];
+
+ if( IS_CLASS_SYSTEM(v_system) &&
+ LSM_FLAG_EXPECTED_TYPE(params) ) {
+ lsm_system *sys = value_to_system(v_system);
+
+ uint32_t *supported_raid_types = NULL;
+ uint32_t supported_raid_type_count = 0;
+
+ uint32_t *supported_strip_sizes = NULL;
+ uint32_t supported_strip_size_count = 0;
+
+ rc = p->ops_v1_2->vol_create_raid_cap_get(
+ p, sys, &supported_raid_types, &supported_raid_type_count,
+ &supported_strip_sizes, &supported_strip_size_count,
+ LSM_FLAG_GET_VALUE(params));
+
+ if( LSM_ERR_OK == rc ) {
+ std::vector<Value> result;
+ result.push_back(
+ uint32_array_to_value(
+ supported_raid_types, supported_raid_type_count));
+ result.push_back(
+ uint32_array_to_value(
+ supported_strip_sizes, supported_strip_size_count));
+ response = Value(result);
+ }
+
+ free(supported_raid_types);
+ free(supported_strip_sizes);
+
+ } else {
+ rc = LSM_ERR_TRANSPORT_INVALID_ARG;
+ }
+ }
+ return rc;
+}
+
+static int handle_volume_create_raid(
+ lsm_plugin_ptr p, Value &params, Value &response)
+{
+ int rc = LSM_ERR_NO_SUPPORT;
+ if( p && p->ops_v1_2 && p->ops_v1_2->vol_create_raid ) {
+ Value v_name = params["name"];
+ Value v_raid_type = params["raid_type"];
+ Value v_strip_size = params["strip_size"];
+ Value v_disks = params["disks"];
+
+ if( Value::string_t == v_name.valueType() &&
+ Value::numeric_t == v_raid_type.valueType() &&
+ Value::numeric_t == v_strip_size.valueType() &&
+ Value::array_t == v_disks.valueType() &&
+ LSM_FLAG_EXPECTED_TYPE(params) ) {
+
+ lsm_disk **disks = NULL;
+ uint32_t disk_count = 0;
+ rc = value_array_to_disks(v_disks, &disks, &disk_count);
+ if( LSM_ERR_OK != rc ) {
+ return rc;
+ }
+
+ const char *name = v_name.asC_str();
+ lsm_volume_raid_type raid_type =
+ (lsm_volume_raid_type) v_raid_type.asInt32_t();
+ uint32_t strip_size = (uint32_t) v_strip_size.asInt32_t();
+
+ lsm_volume *new_vol = NULL;
+
+ rc = p->ops_v1_2->vol_create_raid(
+ p, name, raid_type, disks, disk_count, strip_size,
+ &new_vol, LSM_FLAG_GET_VALUE(params));
+
+ if( LSM_ERR_OK == rc ) {
+ response = volume_to_value(new_vol);
+ }
+
+ lsm_disk_record_array_free(disks, disk_count);
+
+ } else {
+ rc = LSM_ERR_TRANSPORT_INVALID_ARG;
+ }
+ }
+ return rc;
+}

/**
* map of function pointers
@@ -2254,7 +2344,9 @@ static std::map<std::string,handler> dispatch = static_map<std::string,handler>
("volumes_accessible_by_access_group", vol_accessible_by_ag)
("volumes", handle_volumes)
("volume_raid_info", handle_volume_raid_info)
- ("pool_member_info", handle_pool_member_info);
+ ("pool_member_info", handle_pool_member_info)
+ ("volume_create_raid", handle_volume_create_raid)
+ ("volume_create_raid_cap_get", handle_volume_create_raid_cap_get);

static int process_request(lsm_plugin_ptr p, const std::string &method, Value &request,
Value &response)
diff --git a/python_binding/lsm/_client.py b/python_binding/lsm/_client.py
index 5679108..b2bc9d3 100644
--- a/python_binding/lsm/_client.py
+++ b/python_binding/lsm/_client.py
@@ -1159,3 +1159,206 @@ def pool_member_info(self, pool, flags=FLAG_RSVD):
Pool not found.
"""
return self._tp.rpc('pool_member_info', _del_self(locals()))
+
+ @_return_requires([[int], [int]])
+ def volume_create_raid_cap_get(self, system, flags=FLAG_RSVD):
+ """
+ lsm.Client.volume_create_raid_cap_get(
+ self, system, flags=lsm.Client.FLAG_RSVD)
+
+ Version:
+ 1.2
+ Usage:
+ This method is dedicated to local hardware RAID cards.
+            Query all supported RAID types and strip sizes which can be
+            used by the lsm.Client.volume_create_raid() method.
+ Parameters:
+ system (lsm.System)
+ Instance of lsm.System
+ flags (int)
+ Optional. Reserved for future use.
+ Should be set as lsm.Client.FLAG_RSVD.
+ Returns:
+ [raid_types, strip_sizes]
+ raid_types ([int])
+                    List of integers; possible values are:
+ Volume.RAID_TYPE_RAID0
+ Volume.RAID_TYPE_RAID1
+ Volume.RAID_TYPE_RAID5
+ Volume.RAID_TYPE_RAID6
+ Volume.RAID_TYPE_RAID10
+ Volume.RAID_TYPE_RAID15
+ Volume.RAID_TYPE_RAID16
+ Volume.RAID_TYPE_RAID50
+ strip_sizes ([int])
+                    List of integers. Strip sizes in bytes.
+ SpecialExceptions:
+ LsmError
+ lsm.ErrorNumber.NO_SUPPORT
+ Method not supported.
+ Sample:
+ lsm_client = lsm.Client('sim://')
+ lsm_sys = lsm_client.systems()[0]
+ disks = lsm_client.disks(
+ search_key='system_id', search_value=lsm_sys.id)
+
+ free_disks = [d for d in disks if d.status == Disk.STATUS_FREE]
+ supported_raid_types, supported_strip_sizes = \
+ lsm_client.volume_create_raid_cap_get(lsm_sys)
+ new_vol = lsm_client.volume_create_raid(
+ 'test_volume_create_raid', supported_raid_types[0],
+ free_disks, supported_strip_sizes[0])
+
+ Capability:
+ lsm.Capabilities.VOLUME_CREATE_RAID
+ This method is mandatory when volume_create_raid() is
+ supported.
+ """
+ return self._tp.rpc('volume_create_raid_cap_get', _del_self(locals()))
+
+ @_return_requires(Volume)
+ def volume_create_raid(self, name, raid_type, disks, strip_size,
+ flags=FLAG_RSVD):
+ """
+ lsm.Client.volume_create_raid(self, name, raid_type, disks,
+ strip_size, flags=lsm.Client.FLAG_RSVD)
+
+ Version:
+ 1.2
+ Usage:
+ This method is dedicated to local hardware RAID cards.
+            Create a RAID group pool from disks and allocate all of its
+            storage space to a single volume using the requested volume
+            name.
+            When dealing with RAID 10, 50 or 60, the first half of
+            'disks' will be placed in one bottom-layer RAID group.
+            The new volume and new pool will be created within the same
+            system as the provided disks.
+            This method does not allow duplicate calls; if a duplicate
+            call is issued, LsmError with ErrorNumber.DISK_NOT_FREE will
+            be raised. Users should check disk.status for
+            Disk.STATUS_FREE before invoking this method, and check for
+            ErrorNumber.NAME_CONFLICT on duplicate volume names.
+ Parameters:
+ name (string)
+ The name for new volume.
+ The requested volume name might be ignored due to restriction
+ of hardware RAID vendors.
+ raid_type (int)
+ The RAID type for the RAID group, possible values are:
+ Volume.RAID_TYPE_RAID0
+ Volume.RAID_TYPE_RAID1
+ Volume.RAID_TYPE_RAID5
+ Volume.RAID_TYPE_RAID6
+ Volume.RAID_TYPE_RAID10
+ Volume.RAID_TYPE_RAID15
+ Volume.RAID_TYPE_RAID16
+ Volume.RAID_TYPE_RAID50
+                Please check the return of volume_create_raid_cap_get()
+                for all RAID types supported by the current hardware
+                RAID card.
+            disks ([lsm.Disk])
+                A list of lsm.Disk objects. Free disks used for the new
+                RAID group.
+ strip_size (int)
+                The strip size in bytes.
+                When strip_size is set to Volume.VCR_STRIP_SIZE_DEFAULT,
+                hardware RAID cards choose their own default value.
+                Please use the volume_create_raid_cap_get() method to get
+                all supported strip sizes of the current hardware RAID
+                card.
+                Volume.VCR_STRIP_SIZE_DEFAULT is always supported when
+                lsm.Capabilities.VOLUME_CREATE_RAID is supported.
+ flags (int)
+ Optional. Reserved for future use.
+ Should be set as lsm.Client.FLAG_RSVD.
+ Returns:
+ lsm.Volume
+ The lsm.Volume object for newly created volume.
+ SpecialExceptions:
+ LsmError
+ lsm.ErrorNumber.NO_SUPPORT
+ Method not supported or RAID type not supported.
+ lsm.ErrorNumber.DISK_NOT_FREE
+ Disk is not in Disk.STATUS_FREE status.
+ lsm.ErrorNumber.NOT_FOUND_DISK
+ Disk not found
+ lsm.ErrorNumber.INVALID_ARGUMENT
+ 1. Invalid input argument data.
+ 2. Disks are not from the same system.
+ 3. Disks are not from the same enclosure.
+ 4. Invalid strip_size.
+                    5. Disk count does not meet the minimum requirement:
+ RAID1: len(disks) == 2
+ RAID5: len(disks) >= 3
+ RAID6: len(disks) >= 4
+ RAID10: len(disks) % 2 == 0 and len(disks) >= 4
+ RAID50: len(disks) % 2 == 0 and len(disks) >= 6
+ RAID60: len(disks) % 2 == 0 and len(disks) >= 8
+ lsm.ErrorNumber.NAME_CONFLICT
+                    Requested name is already used by another volume.
+ Sample:
+ lsm_client = lsm.Client('sim://')
+ disks = lsm_client.disks()
+ free_disks = [d for d in disks if d.status == Disk.STATUS_FREE]
+            new_vol = lsm_client.volume_create_raid(
+                'raid0_vol1', Volume.RAID_TYPE_RAID0, free_disks,
+                Volume.VCR_STRIP_SIZE_DEFAULT)
+ Capability:
+ lsm.Capabilities.VOLUME_CREATE_RAID
+                Indicates the current system supports the
+                volume_create_raid() method.
+ At least one RAID type should be supported.
+ The strip_size == Volume.VCR_STRIP_SIZE_DEFAULT is supported.
+ lsm.Capabilities.VOLUME_CREATE_RAID_0
+ lsm.Capabilities.VOLUME_CREATE_RAID_1
+ lsm.Capabilities.VOLUME_CREATE_RAID_5
+ lsm.Capabilities.VOLUME_CREATE_RAID_6
+ lsm.Capabilities.VOLUME_CREATE_RAID_10
+ lsm.Capabilities.VOLUME_CREATE_RAID_50
+ lsm.Capabilities.VOLUME_CREATE_RAID_60
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_8_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_16_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_32_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_64_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_128_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_256_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_512_KIB
+ lsm.Capabilities.VOLUME_CREATE_RAID_STRIP_SIZE_1024_KIB
+ """
+ if len(disks) == 0:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: no disk included")
+
+ if raid_type == Volume.RAID_TYPE_RAID1 and len(disks) != 2:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: RAID 1 only allow 2 disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID5 and len(disks) < 3:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: RAID 5 require 3 or more disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID6 and len(disks) < 4:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: RAID 6 require 4 or more disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID10:
+ if len(disks) % 2 or len(disks) < 4:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: "
+ "RAID 10 require even disks count and 4 or more disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID50:
+ if len(disks) % 2 or len(disks) < 6:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: "
+ "RAID 50 require even disks count and 6 or more disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID60:
+ if len(disks) % 2 or len(disks) < 8:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: "
+ "RAID 60 require even disks count and 8 or more disks")
+
+ return self._tp.rpc('volume_create_raid', _del_self(locals()))
diff --git a/python_binding/lsm/_common.py b/python_binding/lsm/_common.py
index 4c87661..be0c734 100644
--- a/python_binding/lsm/_common.py
+++ b/python_binding/lsm/_common.py
@@ -447,6 +447,7 @@ class ErrorNumber(object):
NOT_FOUND_VOLUME = 205
NOT_FOUND_NFS_EXPORT = 206
NOT_FOUND_SYSTEM = 208
+ NOT_FOUND_DISK = 209

NOT_LICENSED = 226

@@ -478,6 +479,8 @@ class ErrorNumber(object):

POOL_NOT_READY = 512 # Pool is not ready for create/resize/etc

+    DISK_NOT_FREE = 513  # Disk is not in Disk.STATUS_FREE status.
+
_LOCALS = locals()

@staticmethod
diff --git a/python_binding/lsm/_data.py b/python_binding/lsm/_data.py
index 2f42eaf..0aa4137 100644
--- a/python_binding/lsm/_data.py
+++ b/python_binding/lsm/_data.py
@@ -306,6 +306,8 @@ class Volume(IData):
MIN_IO_SIZE_UNKNOWN = 0
OPT_IO_SIZE_UNKNOWN = 0

+ VCR_STRIP_SIZE_DEFAULT = 0
+
def __init__(self, _id, _name, _vpd83, _block_size, _num_of_blocks,
_admin_state, _system_id, _pool_id, _plugin_data=None):
self._id = _id # Identifier
@@ -760,6 +762,7 @@ class Capabilities(IData):

DISKS = 220
POOL_MEMBER_INFO = 221
+ VOLUME_CREATE_RAID = 222

def _to_dict(self):
return {'class': self.__class__.__name__,
--
1.8.3.1
Gris Ge
2015-04-18 11:23:54 UTC
Permalink
* Updated lsm_ops_v1_2 structure by adding two methods:
* lsm_plug_volume_create_raid_cap_get()
* lsm_plug_volume_create_raid()

* These two methods simply return no support.

* Tested the C plugin interface by returning fake data, then changed it
back to no support. The real methods will be implemented in the future.
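
As a rough illustration (not part of this patch set), a Python client can
probe for these stubs and degrade gracefully; this sketch assumes a local
simc plugin reachable at 'simc://':

    import lsm

    client = lsm.Client('simc://')
    system = client.systems()[0]
    try:
        raid_types, strip_sizes = client.volume_create_raid_cap_get(system)
    except lsm.LsmError as lsm_err:
        # Until the real implementation lands, the stubs above surface as
        # a NO_SUPPORT error on the client side.
        if lsm_err.code == lsm.ErrorNumber.NO_SUPPORT:
            print "simc does not support volume_create_raid_cap_get() yet"
        else:
            raise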

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/simc/simc_lsmplugin.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)

diff --git a/plugin/simc/simc_lsmplugin.c b/plugin/simc/simc_lsmplugin.c
index 6aa7bdb..ca4815c 100644
--- a/plugin/simc/simc_lsmplugin.c
+++ b/plugin/simc/simc_lsmplugin.c
@@ -1001,9 +1001,29 @@ static int pool_member_info(
return rc;
}

+static int volume_create_raid_cap_get(
+ lsm_plugin_ptr c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags)
+{
+ return LSM_ERR_NO_SUPPORT;
+}
+
+static int volume_create_raid(
+ lsm_plugin_ptr c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags)
+{
+ return LSM_ERR_NO_SUPPORT;
+}
+
static struct lsm_ops_v1_2 ops_v1_2 = {
volume_raid_info,
pool_member_info,
+ volume_create_raid_cap_get,
+ volume_create_raid,
};

static int volume_enable_disable(lsm_plugin_ptr c, lsm_volume *v,
--
1.8.3.1
Gris Ge
2015-04-18 11:23:53 UTC
Permalink
* Add support of these methods:
* volume_create_raid()
* volume_create_raid_cap_get()

* Add 'strip_size' property into pool.
* Add 'is_hw_raid_vol' property into volume.
If true, volume_delete() will also delete parent pool.

* Bumped data version to 3.4.
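
As an illustrative sketch against the simulator (not part of the patch),
deleting the volume created by volume_create_raid() also removes its
private pool, thanks to the new 'is_hw_raid_vol' flag:

    import lsm
    from lsm import Disk, Volume

    client = lsm.Client('sim://')
    free_disks = [d for d in client.disks()
                  if d.status & Disk.STATUS_FREE][:2]
    vol = client.volume_create_raid(
        'raid1_vol', Volume.RAID_TYPE_RAID1, free_disks,
        Volume.VCR_STRIP_SIZE_DEFAULT)
    pool_id = vol.pool_id
    client.volume_delete(vol)  # async job handling elided for brevity
    # The private pool is deleted along with its hardware RAID volume.
    assert not [p for p in client.pools() if p.id == pool_id]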

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/sim/simarray.py | 119 ++++++++++++++++++++++++++++++++++++++++++------
plugin/sim/simulator.py | 8 ++++
2 files changed, 113 insertions(+), 14 deletions(-)

diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index a0809f6..40a9acb 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -123,7 +123,7 @@ def data_disk_count(raid_type, disk_count):


class BackStore(object):
- VERSION = "3.3"
+ VERSION = "3.4"
VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
JOB_DEFAULT_DURATION = 1
JOB_DATA_TYPE_VOL = 1
@@ -133,7 +133,7 @@ class BackStore(object):
SYS_ID = "sim-01"
SYS_NAME = "LSM simulated storage plug-in"
BLK_SIZE = 512
- STRIP_SIZE = 131072 # 128 KiB
+ DEFAULT_STRIP_SIZE = 128 * 1024 # 128 KiB

_LIST_SPLITTER = '#'

@@ -142,7 +142,8 @@ class BackStore(object):
POOL_KEY_LIST = [
'id', 'name', 'status', 'status_info',
'element_type', 'unsupported_actions', 'raid_type',
- 'member_type', 'parent_pool_id', 'total_space', 'free_space']
+ 'member_type', 'parent_pool_id', 'total_space', 'free_space',
+ 'strip_size']

DISK_KEY_LIST = [
'id', 'name', 'total_space', 'disk_type', 'status',
@@ -150,7 +151,7 @@ class BackStore(object):

VOL_KEY_LIST = [
'id', 'vpd83', 'name', 'total_space', 'consumed_size',
- 'pool_id', 'admin_state', 'thinp']
+ 'pool_id', 'admin_state', 'thinp', 'is_hw_raid_vol']

TGT_KEY_LIST = [
'id', 'port_type', 'service_address', 'network_address',
@@ -172,6 +173,16 @@ class BackStore(object):
'options', 'exp_root_hosts_str', 'exp_rw_hosts_str',
'exp_ro_hosts_str']

+ SUPPORTED_VCR_RAID_TYPES = [
+ Volume.RAID_TYPE_RAID0, Volume.RAID_TYPE_RAID1,
+ Volume.RAID_TYPE_RAID5, Volume.RAID_TYPE_RAID6,
+ Volume.RAID_TYPE_RAID10, Volume.RAID_TYPE_RAID50,
+ Volume.RAID_TYPE_RAID60]
+
+ SUPPORTED_VCR_STRIP_SIZES = [
+ 8 * 1024, 16 * 1024, 32 * 1024, 64 * 1024, 128 * 1024, 256 * 1024,
+ 512 * 1024, 1024 * 1024]
+
def __init__(self, statefile, timeout):
if not os.path.exists(statefile):
os.close(os.open(statefile, os.O_WRONLY | os.O_CREAT))
@@ -218,6 +229,7 @@ def __init__(self, statefile, timeout):
"parent_pool_id INTEGER, " # Indicate this pool is allocated from
# other pool
"member_type INTEGER, "
+ "strip_size INTEGER, "
"total_space LONG);\n") # total_space here is only for
# sub-pool (pool from pool)

@@ -244,6 +256,10 @@ def __init__(self, statefile, timeout):
# support.
"admin_state INTEGER, "
"thinp INTEGER NOT NULL, "
+ "is_hw_raid_vol INTEGER, " # Once its volume deleted, pool will
+ # be delete also. For HW RAID
+ # simulation only.
+
"pool_id INTEGER NOT NULL, "
"FOREIGN KEY(pool_id) "
"REFERENCES pools(id) ON DELETE CASCADE);\n")
@@ -362,6 +378,7 @@ def __init__(self, statefile, timeout):
pool0.raid_type,
pool0.member_type,
pool0.parent_pool_id,
+ pool0.strip_size,
pool1.total_space total_space,
pool1.total_space -
pool2.vol_consumed_size -
@@ -450,6 +467,7 @@ def __init__(self, statefile, timeout):
vol.pool_id,
vol.admin_state,
vol.thinp,
+ vol.is_hw_raid_vol,
vol_mask.ag_id ag_id
FROM
volumes vol
@@ -926,7 +944,11 @@ def sim_pool_of_id(self, sim_pool_id):
ErrorNumber.NOT_FOUND_POOL, "Pool")

def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
- element_type, unsupported_actions=0):
+ element_type, unsupported_actions=0,
+ strip_size=0):
+ if strip_size == 0:
+ strip_size = BackStore.DEFAULT_STRIP_SIZE
+
self._data_add(
'pools',
{
@@ -937,6 +959,7 @@ def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
'unsupported_actions': unsupported_actions,
'raid_type': raid_type,
'member_type': Pool.MEMBER_TYPE_DISK,
+ 'strip_size': strip_size,
})

data_disk_count = PoolRAID.data_disk_count(
@@ -1029,7 +1052,9 @@ def _block_rounding(size_bytes):
return (size_bytes + BackStore.BLK_SIZE - 1) / \
BackStore.BLK_SIZE * BackStore.BLK_SIZE

- def sim_vol_create(self, name, size_bytes, sim_pool_id, thinp):
+ def sim_vol_create(self, name, size_bytes, sim_pool_id, thinp,
+ is_hw_raid_vol=0):
+
size_bytes = BackStore._block_rounding(size_bytes)
self._check_pool_free_space(sim_pool_id, size_bytes)
sim_vol = dict()
@@ -1040,6 +1065,7 @@ def sim_vol_create(self, name, size_bytes, sim_pool_id, thinp):
sim_vol['total_space'] = size_bytes
sim_vol['consumed_size'] = size_bytes
sim_vol['admin_state'] = Volume.ADMIN_STATE_ENABLED
+ sim_vol['is_hw_raid_vol'] = is_hw_raid_vol

try:
self._data_add("volumes", sim_vol)
@@ -1055,7 +1081,7 @@ def sim_vol_delete(self, sim_vol_id):
This does not check whether volume exist or not.
"""
# Check existence.
- self.sim_vol_of_id(sim_vol_id)
+ sim_vol = self.sim_vol_of_id(sim_vol_id)

if self._sim_ag_ids_of_masked_vol(sim_vol_id):
raise LsmError(
@@ -1070,8 +1096,11 @@ def sim_vol_delete(self, sim_vol_id):
raise LsmError(
ErrorNumber.PLUGIN_BUG,
"Requested volume is a replication source")
-
- self._data_delete("volumes", 'id="%s"' % sim_vol_id)
+ if sim_vol['is_hw_raid_vol']:
+            # Delete the parent pool instead when deleting a HW RAID volume.
+ self._data_delete("pools", 'id="%s"' % sim_vol['pool_id'])
+ else:
+ self._data_delete("volumes", 'id="%s"' % sim_vol_id)

def sim_vol_mask(self, sim_vol_id, sim_ag_id):
self.sim_vol_of_id(sim_vol_id)
@@ -1759,7 +1788,7 @@ def disks(self):

@_handle_errors
def volume_create(self, pool_id, vol_name, size_bytes, thinp, flags=0,
- _internal_use=False):
+ _internal_use=False, _is_hw_raid_vol=0):
"""
The '_internal_use' parameter is only for SimArray internal use.
This method will return the new sim_vol id instead of job_id when
@@ -1769,7 +1798,8 @@ def volume_create(self, pool_id, vol_name, size_bytes, thinp, flags=0,
self.bs_obj.trans_begin()

new_sim_vol_id = self.bs_obj.sim_vol_create(
- vol_name, size_bytes, SimArray._sim_pool_id_of(pool_id), thinp)
+ vol_name, size_bytes, SimArray._sim_pool_id_of(pool_id),
+ thinp, is_hw_raid_vol=_is_hw_raid_vol)

if _internal_use:
return new_sim_vol_id
@@ -2251,9 +2281,9 @@ def volume_raid_info(self, lsm_vol):
min_io_size = BackStore.BLK_SIZE
opt_io_size = BackStore.BLK_SIZE
else:
- strip_size = BackStore.STRIP_SIZE
- min_io_size = BackStore.STRIP_SIZE
- opt_io_size = int(data_disk_count * BackStore.STRIP_SIZE)
+ strip_size = sim_pool['strip_size']
+ min_io_size = strip_size
+ opt_io_size = int(data_disk_count * strip_size)

return [raid_type, strip_size, disk_count, min_io_size, opt_io_size]

@@ -2278,3 +2308,64 @@ def pool_member_info(self, lsm_pool):
member_type = Pool.MEMBER_TYPE_UNKNOWN

return sim_pool['raid_type'], member_type, member_ids
+
+ @_handle_errors
+ def volume_create_raid_cap_get(self, system):
+ if system.id != BackStore.SYS_ID:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_SYSTEM,
+ "System not found")
+ return (
+ BackStore.SUPPORTED_VCR_RAID_TYPES,
+ BackStore.SUPPORTED_VCR_STRIP_SIZES)
+
+ @_handle_errors
+ def volume_create_raid(self, name, raid_type, disks, strip_size):
+ if raid_type not in BackStore.SUPPORTED_VCR_RAID_TYPES:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'raid_type' is not supported")
+
+ if strip_size == Volume.VCR_STRIP_SIZE_DEFAULT:
+ strip_size = BackStore.DEFAULT_STRIP_SIZE
+ elif strip_size not in BackStore.SUPPORTED_VCR_STRIP_SIZES:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'strip_size' is not supported")
+
+ self.bs_obj.trans_begin()
+ pool_name = "Pool for volume %s" % name
+ sim_disk_ids = [
+ SimArray._lsm_id_to_sim_id(
+ d.id,
+ LsmError(ErrorNumber.NOT_FOUND_DISK, "Disk not found"))
+ for d in disks]
+
+ for disk in disks:
+ if not disk.status & Disk.STATUS_FREE:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+ "Disk %s is not in DISK.STATUS_FREE mode" % disk.id)
+ try:
+ sim_pool_id = self.bs_obj.sim_pool_create_from_disk(
+ name=pool_name,
+ raid_type=raid_type,
+ sim_disk_ids=sim_disk_ids,
+ element_type=Pool.ELEMENT_TYPE_VOLUME,
+ unsupported_actions=Pool.UNSUPPORTED_VOLUME_GROW |
+ Pool.UNSUPPORTED_VOLUME_SHRINK,
+ strip_size=strip_size)
+ except sqlite3.IntegrityError as sql_error:
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+ "Name '%s' is already in use by other volume" % name)
+
+
+ sim_pool = self.bs_obj.sim_pool_of_id(sim_pool_id)
+ sim_vol_id = self.volume_create(
+ SimArray._sim_id_to_lsm_id(sim_pool_id, 'POOL'), name,
+ sim_pool['free_space'], Volume.PROVISION_FULL,
+ _internal_use=True, _is_hw_raid_vol=1)
+ sim_vol = self.bs_obj.sim_vol_of_id(sim_vol_id)
+ self.bs_obj.trans_commit()
+ return SimArray._sim_vol_2_lsm(sim_vol)
diff --git a/plugin/sim/simulator.py b/plugin/sim/simulator.py
index 724574f..2b2a775 100644
--- a/plugin/sim/simulator.py
+++ b/plugin/sim/simulator.py
@@ -295,3 +295,11 @@ def volume_raid_info(self, volume, flags=0):

def pool_member_info(self, pool, flags=0):
return self.sim_array.pool_member_info(pool)
+
+ def volume_create_raid_cap_get(self, system, flags=0):
+ return self.sim_array.volume_create_raid_cap_get(system)
+
+ def volume_create_raid(self, name, raid_type, disks, strip_size,
+ flags=0):
+ return self.sim_array.volume_create_raid(
+ name, raid_type, disks, strip_size)
--
1.8.3.1
Gris Ge
2015-04-18 11:23:55 UTC
Permalink
* Add support of these methods:
* volume_create_raid()
* volume_create_raid_cap_get()

* Using hpssacli command to create RAID group volume:

hpssacli ctrl slot=0 create type=ld \
drives=1i:1:13,1i:1:14 size=max raid=1+0 ss=64

* Add new capability lsm.Capabilities.VOLUME_CREATE_RAID.
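
As an illustrative sketch (not part of the patch), the hpssacli command
above corresponds to an LSM client call along these lines; 'client' and
'free_disks' are assumed to exist, and the 64 KiB strip size is a
made-up value:

    # Two free disks on the same Smart Array controller become a
    # RAID 1+0 logical drive with a 64 KiB strip.
    new_vol = client.volume_create_raid(
        'hpsa_vol', lsm.Volume.RAID_TYPE_RAID10, free_disks, 64 * 1024)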

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/hpsa/hpsa.py | 175 +++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 172 insertions(+), 3 deletions(-)

diff --git a/plugin/hpsa/hpsa.py b/plugin/hpsa/hpsa.py
index 75f507f..b6b9708 100644
--- a/plugin/hpsa/hpsa.py
+++ b/plugin/hpsa/hpsa.py
@@ -195,7 +195,7 @@ def _disk_status_of(hp_disk, flag_free):
return disk_status


-def _hp_raid_type_to_lsm(hp_ld):
+def _hp_raid_level_to_lsm(hp_ld):
"""
Based on this property:
Fault Tolerance: 0/1/5/6/1+0
@@ -217,6 +217,27 @@ def _hp_raid_type_to_lsm(hp_ld):
return Volume.RAID_TYPE_UNKNOWN


+def _lsm_raid_type_to_hp(raid_type):
+    if raid_type == Volume.RAID_TYPE_RAID0:
+        return '0'
+    elif raid_type == Volume.RAID_TYPE_RAID1:
+        return '1'
+    elif raid_type == Volume.RAID_TYPE_RAID10:
+        return '1+0'
+    elif raid_type == Volume.RAID_TYPE_RAID5:
+        return '5'
+    elif raid_type == Volume.RAID_TYPE_RAID6:
+        return '6'
+    elif raid_type == Volume.RAID_TYPE_RAID50:
+        return '50'
+    elif raid_type == Volume.RAID_TYPE_RAID60:
+        return '60'
+    else:
+        raise LsmError(
+            ErrorNumber.NO_SUPPORT,
+            "Not supported raid type %d" % raid_type)
+
+
class SmartArray(IPlugin):
_DEFAULT_BIN_PATHS = [
"/usr/sbin/hpssacli", "/opt/hp/hpssacli/bld/hpssacli"]
@@ -286,6 +307,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUME_RAID_INFO)
cap.set(Capabilities.POOL_MEMBER_INFO)
+ cap.set(Capabilities.VOLUME_CREATE_RAID)
return cap

def _sacli_exec(self, sacli_cmds, flag_convert=True):
@@ -511,7 +533,7 @@ def volume_raid_info(self, volume, flags=Client.FLAG_RSVD):
for array_key_name in ctrl_data[key_name].keys():
if array_key_name == "Logical Drive: %s" % ld_num:
hp_ld = ctrl_data[key_name][array_key_name]
- raid_type = _hp_raid_type_to_lsm(hp_ld)
+ raid_type = _hp_raid_level_to_lsm(hp_ld)
strip_size = _hp_size_to_lsm(hp_ld['Strip Size'])
stripe_size = _hp_size_to_lsm(hp_ld['Full Stripe Size'])
elif array_key_name.startswith("physicaldrive"):
@@ -556,7 +578,7 @@ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
for array_key_name in ctrl_data[key_name].keys():
if array_key_name.startswith("Logical Drive: ") and \
raid_type == Volume.RAID_TYPE_UNKNOWN:
- raid_type = _hp_raid_type_to_lsm(
+ raid_type = _hp_raid_level_to_lsm(
ctrl_data[key_name][array_key_name])
elif array_key_name.startswith("physicaldrive"):
hp_disk = ctrl_data[key_name][array_key_name]
@@ -570,3 +592,150 @@ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
"Pool not found")

return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
+
+ def _vcr_cap_get(self, ctrl_num):
+ supported_raid_types = [
+ Volume.RAID_TYPE_RAID0, Volume.RAID_TYPE_RAID1,
+ Volume.RAID_TYPE_RAID5, Volume.RAID_TYPE_RAID50,
+ Volume.RAID_TYPE_RAID10]
+
+ supported_strip_sizes = [
+ 8 * 1024, 16 * 1024, 32 * 1024, 64 * 1024,
+ 128 * 1024, 256 * 1024, 512 * 1024, 1024 * 1024]
+
+ ctrl_conf = self._sacli_exec([
+ "ctrl", "slot=%s" % ctrl_num, "show", "config", "detail"]
+ ).values()[0]
+
+ if 'RAID 6 (ADG) Status' in ctrl_conf and \
+ ctrl_conf['RAID 6 (ADG) Status'] == 'Enabled':
+        supported_raid_types.extend(
+            [Volume.RAID_TYPE_RAID6, Volume.RAID_TYPE_RAID60])
+
+ return supported_raid_types, supported_strip_sizes
+
+ @_handle_errors
+ def volume_create_raid_cap_get(self, system, flags=Client.FLAG_RSVD):
+ """
+ Depends on this command:
+ hpssacli ctrl slot=0 show config detail
+        All hpsa cards support RAID 0, 1, 10, 5 and 50.
+        If "RAID 6 (ADG) Status: Enabled", it will support RAID 6 and 60.
+        For HP triple mirror (RAID 1adm and RAID 10adm), LSM does not
+        support that yet.
+        No command output or documentation indicates special or
+        exceptional strip size support, so we assume all hpsa cards
+        support the documented strip sizes.
+ """
+ if not system.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Ilegal input system argument: missing plugin_data property")
+ return self._vcr_cap_get(system.plugin_data)
+
+ @_handle_errors
+ def volume_create_raid(self, name, raid_type, disks, strip_size,
+ flags=Client.FLAG_RSVD):
+ """
+ Depends on these commands:
+ 1. Create LD
+ hpssacli ctrl slot=0 create type=ld \
+ drives=1i:1:13,1i:1:14 size=max raid=1+0 ss=64
+
+ 2. Find out the system ID.
+
+        3. Find out the pool the first disk belongs to.
+ hpssacli ctrl slot=0 pd 1i:1:13 show
+
+ 4. List all volumes for this new pool.
+ self.volumes(search_key='pool_id', search_value=pool_id)
+
+ The 'name' argument will be ignored.
+        TODO(Gris Ge): This code has only been tested for creating a
+        1-disk RAID 0.
+ """
+ hp_raid_level = _lsm_raid_type_to_hp(raid_type)
+ hp_disk_ids = []
+ ctrl_num = None
+ for disk in disks:
+ if not disk.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: missing plugin_data "
+ "property")
+ (cur_ctrl_num, hp_disk_id) = disk.plugin_data.split(':', 1)
+ if ctrl_num and cur_ctrl_num != ctrl_num:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: disks are not from the "
+ "same controller/system.")
+
+ ctrl_num = cur_ctrl_num
+ hp_disk_ids.append(hp_disk_id)
+
+ cmds = [
+ "ctrl", "slot=%s" % ctrl_num, "create", "type=ld",
+ "drives=%s" % ','.join(hp_disk_ids), 'size=max',
+ 'raid=%s' % hp_raid_level]
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT:
+ cmds.append("ss=%d" % int(strip_size / 1024))
+
+ try:
+ self._sacli_exec(cmds, flag_convert=False)
+ except ExecError:
+ # Check whether disk is free
+ requested_disk_ids = [d.id for d in disks]
+ for cur_disk in self.disks():
+ if cur_disk.id in requested_disk_ids and \
+ not cur_disk.status & Disk.STATUS_FREE:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+ "Disk %s is not in STATUS_FREE state" % cur_disk.id)
+
+ # Check whether got unsupported raid type or strip size
+ supported_raid_types, supported_strip_sizes = \
+ self._vcr_cap_get(ctrl_num)
+
+ if raid_type not in supported_raid_types:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided raid_type is not supported")
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT and \
+ strip_size not in supported_strip_sizes:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided strip_size is not supported")
+
+ raise
+
+        # Find out the system id to generate pool_id
+ sys_output = self._sacli_exec(
+ ['ctrl', "slot=%s" % ctrl_num, 'show'])
+
+ sys_id = sys_output.values()[0]['Serial Number']
+ # API code already checked empty 'disks', we will for sure get
+ # valid 'ctrl_num' and 'hp_disk_ids'.
+
+ pd_output = self._sacli_exec(
+ ['ctrl', "slot=%s" % ctrl_num, 'pd', hp_disk_ids[0], 'show'])
+
+ if pd_output.values()[0].keys()[0].lower().startswith("array "):
+ hp_array_id = pd_output.values()[0].keys()[0][len("array "):]
+ hp_array_id = "Array:%s" % hp_array_id
+ else:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_create_raid(): Failed to find out the array ID of "
+ "new array: %s" % pd_output.items())
+
+ pool_id = _pool_id_of(sys_id, hp_array_id)
+
+ lsm_vols = self.volumes(search_key='pool_id', search_value=pool_id)
+
+ if len(lsm_vols) != 1:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_create_raid(): Got unexpected count(not 1) of new "
+ "volumes: %s" % lsm_vols)
+ return lsm_vols[0]
--
1.8.3.1
Gris Ge
2015-04-18 11:23:57 UTC
Permalink
* lsmcli test: new test function volume_create_raid_test().

* Plugin test: new test function test_volume_create_raid().

* C unit test: test_volume_create_raid_cap_get() and
test_volume_create_raid()

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

* Initialize pointers in test_volume_create_raid_cap_get() of tester.c.

* Remove unused variables in tester.c.

Signed-off-by: Gris Ge <***@redhat.com>
---
test/cmdtest.py | 55 ++++++++++++++++++++++++++++
test/plugin_test.py | 50 ++++++++++++++++++++++++++
test/tester.c | 102 ++++++++++++++++++++++++++++++++++++++++++++++++++--
3 files changed, 205 insertions(+), 2 deletions(-)

diff --git a/test/cmdtest.py b/test/cmdtest.py
index 60bef93..f0b9bee 100755
--- a/test/cmdtest.py
+++ b/test/cmdtest.py
@@ -713,6 +713,59 @@ def pool_member_info_test(cap, system_id):
exit(10)
return

+def volume_create_raid_test(cap, system_id):
+ if cap['VOLUME_CREATE_RAID']:
+ out = call(
+ [cmd, '-t' + sep, 'volume-create-raid-cap', '--sys', system_id])[1]
+
+ if 'RAID1' not in [r[1] for r in parse(out)]:
+ return
+
+ out = call([cmd, '-t' + sep, 'list', '--type', 'disks'])[1]
+ free_disk_ids = []
+ disk_list = parse(out)
+ for disk in disk_list:
+ if 'Free' in disk:
+ if len(free_disk_ids) == 2:
+ break
+ free_disk_ids.append(disk[0])
+
+ if len(free_disk_ids) != 2:
+ print "Require two free disks to test volume-create-raid"
+ exit(10)
+
+ out = call([
+ cmd, '-t' + sep, 'volume-create-raid', '--disk', free_disk_ids[0],
+ '--disk', free_disk_ids[1], '--name', 'test_volume_create_raid',
+ '--raid-type', 'raid1'])[1]
+
+ volume = parse(out)
+ vol_id = volume[0][0]
+ pool_id = volume[0][-2]
+
+ if cap['VOLUME_RAID_INFO']:
+ out = call(
+ [cmd, '-t' + sep, 'volume-raid-info', '--vol', vol_id])[1]
+ if parse(out)[0][1] != 'RAID1':
+ print "New volume is not RAID 1"
+ exit(10)
+
+ if cap['POOL_MEMBER_INFO']:
+ out = call(
+ [cmd, '-t' + sep, 'pool-member-info', '--pool', pool_id])[1]
+ if parse(out)[0][1] != 'RAID1':
+ print "New pool is not RAID 1"
+ exit(10)
+ for disk_id in free_disk_ids:
+ if disk_id not in [p[3] for p in parse(out)]:
+ print "New pool does not contain requested disks"
+ exit(10)
+
+ if cap['VOLUME_DELETE']:
+ volume_delete(vol_id)
+
+ return
+

def run_all_tests(cap, system_id):
test_display(cap, system_id)
@@ -729,6 +782,8 @@ def run_all_tests(cap, system_id):

pool_member_info_test(cap, system_id)

+ volume_create_raid_test(cap, system_id)
+
if __name__ == "__main__":
parser = OptionParser()
parser.add_option("-c", "--command", action="store", type="string",
diff --git a/test/plugin_test.py b/test/plugin_test.py
index 5cb48ae..32bdc6b 100755
--- a/test/plugin_test.py
+++ b/test/plugin_test.py
@@ -1294,6 +1294,56 @@ def test_pool_member_info(self):
self.assertTrue(type(member_type) is int)
self.assertTrue(type(member_ids) is list)

+    def _skip_current_test(self, message):
+        """
+        If skipTest is supported, skip this test with the provided message.
+        Silently return if not supported.
+        """
+        if hasattr(unittest.TestCase, 'skipTest') is True:
+            self.skipTest(message)
+        return
+
+ def test_volume_create_raid(self):
+ for s in self.systems:
+ cap = self.c.capabilities(s)
+ # TODO(Gris Ge): Add test code for other RAID type and strip size
+ if supported(cap, [Cap.VOLUME_CREATE_RAID]):
+ supported_raid_types, supported_strip_sizes = \
+ self.c.volume_create_raid_cap_get(s)
+ if lsm.Volume.RAID_TYPE_RAID1 not in supported_raid_types:
+ self._skip_current_test(
+ "Skip test: current system does not support "
+ "creating RAID1 volume")
+
+ # Find two free disks
+ free_disks = []
+ for disk in self.c.disks():
+ if len(free_disks) == 2:
+ break
+ if disk.status & lsm.Disk.STATUS_FREE:
+ free_disks.append(disk)
+
+ if len(free_disks) != 2:
+ self._skip_current_test(
+ "Skip test: Failed to find two free disks for RAID 1")
+ return
+
+ new_vol = self.c.volume_create_raid(
+ rs('v'), lsm.Volume.RAID_TYPE_RAID1, free_disks,
+ lsm.Volume.VCR_STRIP_SIZE_DEFAULT)
+
+ self.assertTrue(new_vol is not None)
+
+ # TODO(Gris Ge): Use volume_raid_info() and pool_raid_info()
+ # to verify size, raid_type, member type, member ids.
+
+ if supported(cap, [Cap.VOLUME_DELETE]):
+ self._volume_delete(new_vol)
+
+ else:
+ self._skip_current_test(
+ "Skip test: not support of VOLUME_CREATE_RAID")
+
def dump_results():
"""
unittest.main exits when done so we need to register this handler to
diff --git a/test/tester.c b/test/tester.c
index 47a802b..e6313e6 100644
--- a/test/tester.c
+++ b/test/tester.c
@@ -2907,8 +2907,7 @@ START_TEST(test_pool_member_info)
&member_ids, LSM_CLIENT_FLAG_RSVD);
for(y = 0; y < lsm_string_list_size(member_ids); y++){
// Simulator user reading the member id.
- const char *cur_member_id = lsm_string_list_elem_get(
- member_ids, y);
+ lsm_string_list_elem_get(member_ids, y);
}
lsm_string_list_free(member_ids);
G(rc, lsm_pool_record_free, pools[i]);
@@ -2916,6 +2915,103 @@ START_TEST(test_pool_member_info)
}
END_TEST

+START_TEST(test_volume_create_raid_cap_get)
+{
+ if (which_plugin == 1){
+ // silently skip on simc which does not support this method yet.
+ return;
+ }
+ int rc;
+ lsm_system **sys = NULL;
+ uint32_t sys_count = 0;
+
+ G(rc, lsm_system_list, c, &sys, &sys_count, LSM_CLIENT_FLAG_RSVD);
+ fail_unless( sys_count >= 1, "count = %d", sys_count);
+
+ if( sys_count > 0 ) {
+ uint32_t *supported_raid_types = NULL;
+ uint32_t supported_raid_type_count = 0;
+ uint32_t *supported_strip_sizes = NULL;
+ uint32_t supported_strip_size_count = 0;
+
+ G(
+ rc, lsm_volume_create_raid_cap_get, c, sys[0],
+ &supported_raid_types, &supported_raid_type_count,
+ &supported_strip_sizes, &supported_strip_size_count,
+ 0);
+
+ free(supported_raid_types);
+ free(supported_strip_sizes);
+
+ }
+ G(rc, lsm_system_record_array_free, sys, sys_count);
+}
+END_TEST
+
+START_TEST(test_volume_create_raid)
+{
+ if (which_plugin == 1){
+ // silently skip on simc which does not support this method yet.
+ return;
+ }
+ int rc;
+
+ lsm_disk **disks = NULL;
+ uint32_t disk_count = 0;
+
+ G(rc, lsm_disk_list, c, NULL, NULL, &disks, &disk_count, 0);
+
+ // Try to create two disks RAID 1.
+ uint32_t free_disk_count = 0;
+ lsm_disk *free_disks[2];
+ int i;
+ for (i = 0; i< disk_count; i++){
+ if (lsm_disk_status_get(disks[i]) & LSM_DISK_STATUS_FREE){
+ free_disks[free_disk_count++] = disks[i];
+ if (free_disk_count == 2){
+ break;
+ }
+ }
+ }
+ fail_unless(free_disk_count == 2, "Failed to find two free disks");
+
+ lsm_volume *new_volume = NULL;
+
+ G(rc, lsm_volume_create_raid, c, "test_volume_create_raid",
+ LSM_VOLUME_RAID_TYPE_RAID1, free_disks, free_disk_count,
+ LSM_VOLUME_VCR_STRIP_SIZE_DEFAULT, &new_volume, LSM_CLIENT_FLAG_RSVD);
+
+ char *job_del = NULL;
+ int del_rc = lsm_volume_delete(
+ c, new_volume, &job_del, LSM_CLIENT_FLAG_RSVD);
+
+    fail_unless( del_rc == LSM_ERR_OK || del_rc == LSM_ERR_JOB_STARTED,
+                "lsm_volume_delete %d (%s)", del_rc, error(lsm_error_last_get(c)));
+
+ if( LSM_ERR_JOB_STARTED == del_rc ) {
+ wait_for_job_vol(c, &job_del);
+ }
+
+ G(rc, lsm_disk_record_array_free, disks, disk_count);
+ // The new pool should be automatically be deleted when volume got
+ // deleted.
+ lsm_pool **pools = NULL;
+ uint32_t count = 0;
+ G(
+ rc, lsm_pool_list, c, "id", lsm_volume_pool_id_get(new_volume),
+ &pools, &count, LSM_CLIENT_FLAG_RSVD);
+
+ fail_unless(
+ count == 0,
+ "New HW RAID pool still exists, it should be deleted along with "
+ "lsm_volume_delete()");
+
+ lsm_pool_record_array_free(pools, count);
+
+ G(rc, lsm_volume_record_free, new_volume);
+}
+END_TEST
+
Suite * lsm_suite(void)
{
Suite *s = suite_create("libStorageMgmt");
@@ -2953,6 +3049,8 @@ Suite * lsm_suite(void)
tcase_add_test(basic, test_invalid_input);
tcase_add_test(basic, test_volume_raid_info);
tcase_add_test(basic, test_pool_member_info);
+ tcase_add_test(basic, test_volume_create_raid_cap_get);
+ tcase_add_test(basic, test_volume_create_raid);

suite_add_tcase(s, basic);
return s;
--
1.8.3.1
Gris Ge
2015-04-18 11:23:44 UTC
Permalink
* Add lsm.Client.pool_raid_info() support.

* Use lsm.Pool.MEMBER_TYPE_XXX to replace PoolRAID.MEMBER_TYPE_XXX.

* Removed unused disk_type_to_member_type() and member_type_is_disk()
methods.

* Data version bumped to 3.2.
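
For context, a minimal sketch (not part of the patch) of how the renamed
call is consumed; it assumes the simulator URI 'sim://':

    import lsm
    from lsm import Pool

    client = lsm.Client('sim://')
    for pool in client.pools():
        raid_type, member_type, member_ids = client.pool_member_info(pool)
        if member_type == Pool.MEMBER_TYPE_DISK:
            # member_ids holds the IDs of the disks backing this pool.
            print "%s: raid_type=%d disks=%s" % (
                pool.id, raid_type, ','.join(member_ids))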

Changes in V3:

* Renamed to pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/sim/simarray.py | 98 ++++++++++++++++---------------------------------
plugin/sim/simulator.py | 3 ++
2 files changed, 35 insertions(+), 66 deletions(-)

diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index ae14848..de43701 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -67,54 +67,6 @@ def _random_vpd():


class PoolRAID(object):
- MEMBER_TYPE_UNKNOWN = 0
- MEMBER_TYPE_DISK = 1
- MEMBER_TYPE_DISK_MIX = 10
- MEMBER_TYPE_DISK_ATA = 11
- MEMBER_TYPE_DISK_SATA = 12
- MEMBER_TYPE_DISK_SAS = 13
- MEMBER_TYPE_DISK_FC = 14
- MEMBER_TYPE_DISK_SOP = 15
- MEMBER_TYPE_DISK_SCSI = 16
- MEMBER_TYPE_DISK_NL_SAS = 17
- MEMBER_TYPE_DISK_HDD = 18
- MEMBER_TYPE_DISK_SSD = 19
- MEMBER_TYPE_DISK_HYBRID = 110
- MEMBER_TYPE_DISK_LUN = 111
-
- MEMBER_TYPE_POOL = 2
-
- _MEMBER_TYPE_2_DISK_TYPE = {
- MEMBER_TYPE_DISK: Disk.TYPE_UNKNOWN,
- MEMBER_TYPE_DISK_MIX: Disk.TYPE_UNKNOWN,
- MEMBER_TYPE_DISK_ATA: Disk.TYPE_ATA,
- MEMBER_TYPE_DISK_SATA: Disk.TYPE_SATA,
- MEMBER_TYPE_DISK_SAS: Disk.TYPE_SAS,
- MEMBER_TYPE_DISK_FC: Disk.TYPE_FC,
- MEMBER_TYPE_DISK_SOP: Disk.TYPE_SOP,
- MEMBER_TYPE_DISK_SCSI: Disk.TYPE_SCSI,
- MEMBER_TYPE_DISK_NL_SAS: Disk.TYPE_NL_SAS,
- MEMBER_TYPE_DISK_HDD: Disk.TYPE_HDD,
- MEMBER_TYPE_DISK_SSD: Disk.TYPE_SSD,
- MEMBER_TYPE_DISK_HYBRID: Disk.TYPE_HYBRID,
- MEMBER_TYPE_DISK_LUN: Disk.TYPE_LUN,
- }
-
- @staticmethod
- def member_type_is_disk(member_type):
- """
- Returns True if defined 'member_type' is disk.
- False when else.
- """
- return member_type in PoolRAID._MEMBER_TYPE_2_DISK_TYPE
-
- @staticmethod
- def disk_type_to_member_type(disk_type):
- for m_type, d_type in PoolRAID._MEMBER_TYPE_2_DISK_TYPE.items():
- if disk_type == d_type:
- return m_type
- return PoolRAID.MEMBER_TYPE_UNKNOWN
-
_RAID_DISK_CHK = {
Volume.RAID_TYPE_JBOD: lambda x: x > 0,
Volume.RAID_TYPE_RAID0: lambda x: x > 0,
@@ -171,7 +123,7 @@ def data_disk_count(raid_type, disk_count):


class BackStore(object):
- VERSION = "3.1"
+ VERSION = "3.2"
VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
JOB_DEFAULT_DURATION = 1
JOB_DATA_TYPE_VOL = 1
@@ -916,6 +868,12 @@ def sim_syss(self):
"""
return self._get_table('systems', BackStore.SYS_KEY_LIST)

+ def sim_disk_ids_of_pool(self, sim_pool_id):
+ return list(
+ d['id']
+ for d in self._data_find(
+ 'disks', 'owner_pool_id="%s"' % sim_pool_id, ['id']))
+
def sim_disks(self):
"""
Return a list of sim_disk dict.
@@ -935,20 +893,6 @@ def sim_pool_of_id(self, sim_pool_id):

def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
element_type, unsupported_actions=0):
- # Detect disk type
- disk_type = None
- for sim_disk in self.sim_disks():
- if sim_disk['id'] not in sim_disk_ids:
- continue
- if disk_type is None:
- disk_type = sim_disk['disk_type']
- elif disk_type != sim_disk['disk_type']:
- disk_type = None
- break
- member_type = PoolRAID.MEMBER_TYPE_DISK
- if disk_type is not None:
- member_type = PoolRAID.disk_type_to_member_type(disk_type)
-
self._data_add(
'pools',
{
@@ -958,7 +902,7 @@ def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
'element_type': element_type,
'unsupported_actions': unsupported_actions,
'raid_type': raid_type,
- 'member_type': member_type,
+ 'member_type': Pool.MEMBER_TYPE_DISK,
})

data_disk_count = PoolRAID.data_disk_count(
@@ -991,7 +935,7 @@ def sim_pool_create_sub_pool(self, name, parent_pool_id, size,
'element_type': element_type,
'unsupported_actions': unsupported_actions,
'raid_type': Volume.RAID_TYPE_OTHER,
- 'member_type': PoolRAID.MEMBER_TYPE_POOL,
+ 'member_type': Pool.MEMBER_TYPE_POOL,
'parent_pool_id': parent_pool_id,
'total_space': size,
})
@@ -2240,7 +2184,7 @@ def volume_raid_info(self, lsm_vol):
opt_io_size = Volume.OPT_IO_SIZE_UNKNOWN
disk_count = Volume.DISK_COUNT_UNKNOWN

- if sim_pool['member_type'] == PoolRAID.MEMBER_TYPE_POOL:
+ if sim_pool['member_type'] == Pool.MEMBER_TYPE_POOL:
parent_sim_pool = self.bs_obj.sim_pool_of_id(
sim_pool['parent_pool_id'])
raid_type = parent_sim_pool['raid_type']
@@ -2278,3 +2222,25 @@ def volume_raid_info(self, lsm_vol):
opt_io_size = int(data_disk_count * BackStore.STRIP_SIZE)

return [raid_type, strip_size, disk_count, min_io_size, opt_io_size]
+
+ @_handle_errors
+ def pool_member_info(self, lsm_pool):
+ sim_pool = self.bs_obj.sim_pool_of_id(
+ SimArray._lsm_id_to_sim_id(
+ lsm_pool.id,
+ LsmError(ErrorNumber.NOT_FOUND_POOL, "Pool not found")))
+ member_type = sim_pool['member_type']
+ member_ids = []
+ if member_type == Pool.MEMBER_TYPE_POOL:
+ member_ids = [
+ SimArray._sim_id_to_lsm_id(
+ sim_pool['parent_pool_id'], 'POOL')]
+ elif member_type == Pool.MEMBER_TYPE_DISK:
+ member_ids = list(
+ SimArray._sim_id_to_lsm_id(sim_disk_id, 'DISK')
+ for sim_disk_id in self.bs_obj.sim_disk_ids_of_pool(
+ sim_pool['id']))
+ else:
+ member_type = Pool.MEMBER_TYPE_UNKNOWN
+
+ return sim_pool['raid_type'], member_type, member_ids
diff --git a/plugin/sim/simulator.py b/plugin/sim/simulator.py
index d562cd6..724574f 100644
--- a/plugin/sim/simulator.py
+++ b/plugin/sim/simulator.py
@@ -292,3 +292,6 @@ def target_ports(self, search_key=None, search_value=None, flags=0):

def volume_raid_info(self, volume, flags=0):
return self.sim_array.volume_raid_info(volume)
+
+ def pool_member_info(self, pool, flags=0):
+ return self.sim_array.pool_member_info(pool)
--
1.8.3.1
Gris Ge
2015-04-18 11:23:43 UTC
Permalink
* cmdtest.py: Test lsmcli 'pool-raid-info' command.
* plugin_test.py: Capability based python API test.
* tester.c: C API test for both simulator plugin and simulator C plugin.

Changes in V2:

* Updated tester.c to use lsm_string_list for 'member_ids' argument.

Changes in V3:

* Sync with pool_raid_info -> pool_member_info rename.

Signed-off-by: Gris Ge <***@redhat.com>
---
test/cmdtest.py | 20 ++++++++++++++++++++
test/plugin_test.py | 10 ++++++++++
test/tester.c | 30 ++++++++++++++++++++++++++++++
3 files changed, 60 insertions(+)

diff --git a/test/cmdtest.py b/test/cmdtest.py
index e80e027..60bef93 100755
--- a/test/cmdtest.py
+++ b/test/cmdtest.py
@@ -696,6 +696,24 @@ def volume_raid_info_test(cap, system_id):
exit(10)
return

+def pool_member_info_test(cap, system_id):
+ if cap['POOL_MEMBER_INFO']:
+ out = call([cmd, '-t' + sep, 'list', '--type', 'POOLS'])[1]
+ pool_list = parse(out)
+ for pool in pool_list:
+ out = call(
+ [cmd, '-t' + sep, 'pool-member-info', '--pool', pool[0]])[1]
+ r = parse(out)
+ if len(r[0]) != 4:
+ print "pool-member-info got expected output: %s" % out
+ exit(10)
+ if r[0][0] != pool[0]:
+ print "pool-member-info output pool ID is not requested " \
+ "pool ID %s" % out
+ exit(10)
+ return
+
+
def run_all_tests(cap, system_id):
test_display(cap, system_id)
test_plugin_list(cap, system_id)
@@ -709,6 +727,8 @@ def run_all_tests(cap, system_id):

volume_raid_info_test(cap, system_id)

+    pool_member_info_test(cap, system_id)
+
if __name__ == "__main__":
parser = OptionParser()
parser.add_option("-c", "--command", action="store", type="string",
diff --git a/test/plugin_test.py b/test/plugin_test.py
index 69a45b7..5cb48ae 100755
--- a/test/plugin_test.py
+++ b/test/plugin_test.py
@@ -1283,6 +1283,16 @@ def test_create_delete_exports(self):
self.c.export_remove(exp)
self._fs_delete(fs)

+ def test_pool_member_info(self):
+ for s in self.systems:
+ cap = self.c.capabilities(s)
+ if supported(cap, [Cap.POOL_MEMBER_INFO]):
+ for pool in self.c.pools():
+ (raid_type, member_type, member_ids) = \
+ self.c.pool_member_info(pool)
+ self.assertTrue(type(raid_type) is int)
+ self.assertTrue(type(member_type) is int)
+ self.assertTrue(type(member_ids) is list)

def dump_results():
"""
diff --git a/test/tester.c b/test/tester.c
index 1622a75..47a802b 100644
--- a/test/tester.c
+++ b/test/tester.c
@@ -2887,6 +2887,35 @@ START_TEST(test_volume_raid_info)
}
END_TEST

+START_TEST(test_pool_member_info)
+{
+ int rc;
+ lsm_pool **pools = NULL;
+ uint32_t poolCount = 0;
+ G(rc, lsm_pool_list, c, NULL, NULL, &pools, &poolCount,
+ LSM_CLIENT_FLAG_RSVD);
+
+ lsm_volume_raid_type raid_type;
+ lsm_pool_member_type member_type;
+ lsm_string_list *member_ids;
+
+ int i;
+ uint32_t y;
+ for (i = 0; i < poolCount; i++) {
+ G(
+ rc, lsm_pool_member_info, c, pools[i], &raid_type, &member_type,
+ &member_ids, LSM_CLIENT_FLAG_RSVD);
+ for(y = 0; y < lsm_string_list_size(member_ids); y++){
+ // Simulator user reading the member id.
+ const char *cur_member_id = lsm_string_list_elem_get(
+ member_ids, y);
+ }
+ lsm_string_list_free(member_ids);
+ G(rc, lsm_pool_record_free, pools[i]);
+ }
+}
+END_TEST
+
Suite * lsm_suite(void)
{
Suite *s = suite_create("libStorageMgmt");
@@ -2923,6 +2952,7 @@ Suite * lsm_suite(void)
tcase_add_test(basic, test_nfs_exports);
tcase_add_test(basic, test_invalid_input);
tcase_add_test(basic, test_volume_raid_info);
+ tcase_add_test(basic, test_pool_member_info);

suite_add_tcase(s, basic);
return s;
--
1.8.3.1
Gris Ge
2015-04-18 11:23:56 UTC
Permalink
* Add support of these methods:
* volume_create_raid()
* volume_create_raid_cap_get()

* Using storcli command to create RAID group volume:

storcli /c0 add vd RAID10 drives=252:1-4 pdperarray=2 J

* Fixed incorrect lsm.System.plugin_data.

* Add new capability lsm.Capabilities.VOLUME_CREATE_RAID.
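
As an illustrative sketch (not part of the patch), the storcli command
above corresponds to an LSM client call such as the following; 'client'
and 'free_disks' are assumed to exist:

    # Four free disks from the same enclosure become a RAID 10 virtual
    # drive; the plugin derives pdperarray=2 from len(disks) / 2.
    new_vol = client.volume_create_raid(
        'mega_vol', lsm.Volume.RAID_TYPE_RAID10, free_disks,
        lsm.Volume.VCR_STRIP_SIZE_DEFAULT)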

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/megaraid/megaraid.py | 194 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 193 insertions(+), 1 deletion(-)

diff --git a/plugin/megaraid/megaraid.py b/plugin/megaraid/megaraid.py
index 46e7916..990548b 100644
--- a/plugin/megaraid/megaraid.py
+++ b/plugin/megaraid/megaraid.py
@@ -19,6 +19,7 @@
import json
import re
import errno
+import math

from lsm import (uri_parse, search_property, size_human_2_size_bytes,
Capabilities, LsmError, ErrorNumber, System, Client,
@@ -188,6 +189,27 @@ def _mega_raid_type_to_lsm(vd_basic_info, vd_prop_info):
return raid_type


+def _lsm_raid_type_to_mega(lsm_raid_type):
+ if lsm_raid_type == Volume.RAID_TYPE_RAID0:
+ return 'RAID0'
+ elif lsm_raid_type == Volume.RAID_TYPE_RAID1:
+ return 'RAID1'
+ elif lsm_raid_type == Volume.RAID_TYPE_RAID5:
+ return 'RAID5'
+ elif lsm_raid_type == Volume.RAID_TYPE_RAID6:
+ return 'RAID6'
+ elif lsm_raid_type == Volume.RAID_TYPE_RAID10:
+ return 'RAID10'
+ elif lsm_raid_type == Volume.RAID_TYPE_RAID50:
+ return 'RAID50'
+ elif lsm_raid_type == Volume.RAID_TYPE_RAID60:
+ return 'RAID60'
+ else:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "RAID type %d not supported" % lsm_raid_type)
+
+
class MegaRAID(IPlugin):
_DEFAULT_BIN_PATHS = [
"/opt/MegaRAID/storcli/storcli64", "/opt/MegaRAID/storcli/storcli"]
@@ -261,6 +283,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.VOLUMES)
cap.set(Capabilities.VOLUME_RAID_INFO)
cap.set(Capabilities.POOL_MEMBER_INFO)
+ cap.set(Capabilities.VOLUME_CREATE_RAID)
return cap

def _storcli_exec(self, storcli_cmds, flag_json=True):
@@ -347,7 +370,7 @@ def systems(self, flags=Client.FLAG_RSVD):
)
(status, status_info) = self._lsm_status_of_ctrl(
ctrl_show_all_output)
- plugin_data = "/c%d"
+ plugin_data = "/c%d" % ctrl_num
# Since PCI slot sequence might change.
# This string just stored for quick system verification.

@@ -601,3 +624,172 @@ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
raid_type = Volume.RAID_TYPE_RAID10

return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
+
+ def _vcr_cap_get(self, mega_sys_path):
+ cap_output = self._storcli_exec(
+ [mega_sys_path, "show", "all"])['Capabilities']
+
+ mega_raid_types = \
+ cap_output['RAID Level Supported'].replace(', \n', '').split(', ')
+
+ supported_raid_types = []
+ if 'RAID0' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID0)
+
+ if 'RAID1' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID1)
+
+ if 'RAID5' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID5)
+
+ if 'RAID6' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID6)
+
+ if 'RAID10' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID10)
+
+ if 'RAID50' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID50)
+
+ if 'RAID60' in mega_raid_types:
+ supported_raid_types.append(Volume.RAID_TYPE_RAID60)
+
+ min_strip_size = _mega_size_to_lsm(cap_output['Min Strip Size'])
+ max_strip_size = _mega_size_to_lsm(cap_output['Max Strip Size'])
+
+ supported_strip_sizes = list(
+ min_strip_size * (2 ** i)
+ for i in range(
+ 0,
+ int(
+ math.log(int(max_strip_size / min_strip_size)) /
+ math.log(2)) + 1))
+
+        # ^^ The math above generates the power-of-two multiples of
+        # min_strip_size up to max_strip_size:
+        #   min_strip_size, min_strip_size * 2, ..., max_strip_size
+
+ return supported_raid_types, supported_strip_sizes
+
+ @_handle_errors
+ def volume_create_raid_cap_get(self, system, flags=Client.FLAG_RSVD):
+ """
+        Depends on the 'Capabilities' section of "storcli /c0 show all" output.
+ """
+ cur_lsm_syss = list(s for s in self.systems() if s.id == system.id)
+ if len(cur_lsm_syss) != 1:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_SYSTEM,
+ "System not found")
+
+ lsm_sys = cur_lsm_syss[0]
+ return self._vcr_cap_get(lsm_sys.plugin_data)
+
+ @_handle_errors
+ def volume_create_raid(self, name, raid_type, disks, strip_size,
+ flags=Client.FLAG_RSVD):
+ """
+ Work flow:
+ 1. Create RAID volume
+ storcli /c0 add vd RAID10 drives=252:1-4 pdperarray=2 J
+            2. Find out the pool/DG based on one disk:
+ storcli /c0/e252/s1 show J
+            3. Find out the volume/VD based on the pool/DG via self.volumes()
+ """
+ mega_raid_type = _lsm_raid_type_to_mega(raid_type)
+ ctrl_num = None
+ slot_nums = []
+ enclosure_num = None
+
+ for disk in disks:
+ if not disk.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: missing plugin_data "
+ "property")
+            # Disks should come from the same controller.
+ (cur_ctrl_num, cur_enclosure_num, slot_num) = \
+ disk.plugin_data.split(':')
+
+            if ctrl_num is not None and int(cur_ctrl_num) != ctrl_num:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: disks are not from the "
+ "same controller/system.")
+
+ if enclosure_num and cur_enclosure_num != enclosure_num:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: disks are not from the "
+ "same disk enclosure.")
+
+ ctrl_num = int(cur_ctrl_num)
+ enclosure_num = cur_enclosure_num
+ slot_nums.append(slot_num)
+
+        # Handle requested volume name; LSI only allows 15 characters.
+ name = re.sub('[^0-9a-zA-Z_\-]+', '', name)[:15]
+
+ cmds = [
+ "/c%s" % ctrl_num, "add", "vd", mega_raid_type,
+ 'size=all', "name=%s" % name,
+ "drives=%s:%s" % (enclosure_num, ','.join(slot_nums))]
+
+ if raid_type == Volume.RAID_TYPE_RAID10 or \
+ raid_type == Volume.RAID_TYPE_RAID50 or \
+ raid_type == Volume.RAID_TYPE_RAID60:
+ cmds.append("pdperarray=%d" % int(len(disks) / 2))
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT:
+ cmds.append("strip=%d" % int(strip_size / 1024))
+
+ try:
+ self._storcli_exec(cmds)
+ except ExecError:
+ req_disk_ids = [d.id for d in disks]
+ for cur_disk in self.disks():
+ if cur_disk.id in req_disk_ids and \
+ not cur_disk.status & Disk.STATUS_FREE:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+ "Disk %s is not in STATUS_FREE state" % cur_disk.id)
+            # Check whether an unsupported RAID type or strip size was used
+ supported_raid_types, supported_strip_sizes = \
+ self._vcr_cap_get("/c%s" % ctrl_num)
+
+ if raid_type not in supported_raid_types:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'raid_type' is not supported")
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT and \
+ strip_size not in supported_strip_sizes:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'strip_size' is not supported")
+
+ raise
+
+ # Find out the DG ID from one disk.
+ dg_show_output = self._storcli_exec(
+ ["/c%s/e%s/s%s" % tuple(disks[0].plugin_data.split(":")), "show"])
+
+ dg_id = dg_show_output['Drive Information'][0]['DG']
+ if dg_id == '-':
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_create_raid(): No error found in output, "
+ "but RAID is not created: %s" % dg_show_output.items())
+ else:
+ dg_id = int(dg_id)
+
+ pool_id = _pool_id_of(dg_id, self._sys_id_of_ctrl_num(ctrl_num))
+
+ lsm_vols = self.volumes(search_key='pool_id', search_value=pool_id)
+ if len(lsm_vols) != 1:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_create_raid(): Got unexpected volume count(not 1) "
+ "when creating RAID volume")
+
+ return lsm_vols[0]
--
1.8.3.1
Tony Asleson
2015-04-20 23:04:05 UTC
Permalink
Hi Gris,

Patch set looks pretty good, please see my comments thus far on github.
Disclaimer, I ran out of time while reviewing, will review more tomorrow.

Thanks,
Tony
Post by Gris Ge
* Putting two patch sets into one. As `pool_raid_info()` patch set is
just preparation for `volume_create_raid()` patch set. It's better to explain
design choice with both.
* RAID type
* Member type: disk RAID pool or sub-pool.
* Members: disks or parent pools.
* New method `volume_create_raid()` to create RAID group volume.
* New method `volume_create_raid_cap_get()` to query RAID group volume create
supported RAID type and strip sizes.
* Why we need pool_raid_info() when we already have volume_raid_info()?
The `pool_raid_info()` is focus on RAID disk or sub-pool layout.
The `volume_raid_info()` is focus on providing information about host
I/O performance and availability(raid).
* Why don't create single sharable method for SAN/NAS/DAS about RAID
creation?
Please refer to design note of patch 'New feature: Create RAID volume
on hardware RAID.'
* Why no RAID creation method for SAN and NAS?
As we tried before, due to complex RAID settings of SAN and NAS
storage, I didn't find a simple design to support all of them.
* Please refer to design note of patch 'New feature: Create RAID volume'
also.
Tony Asleson
2015-04-21 23:16:05 UTC
Permalink
Hi Gris,

I completed my review and added some more comments. I found some memory
leaks and some potential memory leaks; please look them over.

Thanks,
Tony
Post by Tony Asleson
Hi Gris,
Patch set looks pretty good, please see my comments thus far on github.
Disclaimer, I ran out of time while reviewing, will review more tomorrow.
Thanks,
Tony
Gris Ge
2015-04-22 15:20:31 UTC
Permalink
* Putting two patch sets into one. As `pool_raid_info()` patch set is
just preparation for `volume_create_raid()` patch set. It's better to explain
design choice with both.

* New method `pool_raid_info()` to query pool member information:
* RAID type
* Member type: disk RAID pool or sub-pool.
* Members: disks or parent pools.

* New method `volume_create_raid()` to create RAID group volume.

* New method `volume_create_raid_cap_get()` to query RAID group volume create
supported RAID type and strip sizes.

* Design notes:
* Why we need pool_raid_info() when we already have volume_raid_info()?
The `pool_raid_info()` is focus on RAID disk or sub-pool layout.

The `volume_raid_info()` is focus on providing information about host
I/O performance and availability(raid).

* Why don't create single sharable method for SAN/NAS/DAS about RAID
creation?
Please refer to design note of patch 'New feature: Create RAID volume
on hardware RAID.'

* Why no RAID creation method for SAN and NAS?
As we tried before, due to complex RAID settings of SAN and NAS
storage, I didn't find a simple design to support all of them.

* Please refer to design note of patch 'New feature: Create RAID volume'
also.

* Please comment in github page, thank you.
https://github.com/libstorage/libstoragemgmt/pull/11

Changes in V3:

* Rename pool_raid_info to pool_member_info.
* Rebased patches for the rename.
* Fix C code warning about comparing signed and unsigned integer.
* Initialized pointers in lsm_plugin_ipc.cpp and tester.c.
* Removed unused variables in tester.c.
* volume_create_raid() tested on:
* MegaRAID: RAID 0/1/5/10. # No RAID 6 license.
* HP SmartArray: RAID 0. # No additional disk.
* Document pull request: https://github.com/libstorage/libstoragemgmt-doc/pull/3

Changes in V4:

* Rename method 'volume_create_raid' to 'volume_raid_create'.
* Fix memory leak in C API.
* Initialize pointers in C code.
* Make sure output pointers are set to sane values if an error occurred in C code.
* Add bash auto completion for new lsmcli commands and aliases.
* Fix typo in comments of C and Python library.
* Use a hash table to convert vendor RAID strings to LSM types in the hpsa and
megaraid plugins (see the sketch below).
* Tested MegaRAID with a 4-disk RAID 10 using the default strip size.
Query commands were tested on MegaRAID and Smart Array.
* Document updated and C API user guide for these new methods also included:
https://github.com/libstorage/libstoragemgmt-doc/pull/3
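
* A sketch of the hash-table conversion mentioned above (the table name here
  is illustrative; the real tables live in the hpsa and megaraid patches):

      _LSM_RAID_TYPE_CONV = {
          Volume.RAID_TYPE_RAID0: 'RAID0',
          Volume.RAID_TYPE_RAID1: 'RAID1',
          Volume.RAID_TYPE_RAID5: 'RAID5',
          Volume.RAID_TYPE_RAID6: 'RAID6',
          Volume.RAID_TYPE_RAID10: 'RAID10',
          Volume.RAID_TYPE_RAID50: 'RAID50',
          Volume.RAID_TYPE_RAID60: 'RAID60',
      }

      def _lsm_raid_type_to_mega(lsm_raid_type):
          try:
              return _LSM_RAID_TYPE_CONV[lsm_raid_type]
          except KeyError:
              raise LsmError(
                  ErrorNumber.NO_SUPPORT,
                  "RAID type %d not supported" % lsm_raid_type)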

Gris Ge (17):
New method: lsm.Client.pool_raid_info()/lsm_pool_raid_info()
lsmcli: Add new command pool-raid-info
Tests: Add test for lsm.Client.pool_raid_info() and
lsm_pool_raid_info().
Simulator Plugin: Add lsm.Client.pool_raid_info() support.
Simulator C plugin: Add lsm_pool_raid_info() support.
ONTAP Plugin: Add lsm.Client.pool_raid_info() support.
MegaRAID Plugin: Add lsm.Client.pool_raid_info() support.
HP Smart Array Plugin: Add lsm.Client.pool_raid_info() support.
Simulator Plugin: Fix pool free space calculation.
MegaRAID Plugin: Change Disk.plugin_data to parse friendly format.
New feature: Create RAID volume on hardware RAID.
lsmcli: New commands for hardware RAID volume creation.
Simulator Plugin: Add hardware RAID volume creation support.
Simulator C plugin: Sync changes of lsm_ops_v1_2 about
lsm_plug_volume_create_raid_cap_get
HP SmartArray Plugin: Add volume creation support.
MegaRAID Plugin: Add volume creation support.
Test: Hardware RAID volume creation.

c_binding/include/libstoragemgmt/libstoragemgmt.h | 89 ++++++
.../libstoragemgmt/libstoragemgmt_capabilities.h | 5 +-
.../include/libstoragemgmt/libstoragemgmt_error.h | 2 +
.../libstoragemgmt/libstoragemgmt_plug_interface.h | 84 ++++++
.../include/libstoragemgmt/libstoragemgmt_types.h | 20 ++
c_binding/lsm_convert.cpp | 41 +++
c_binding/lsm_convert.hpp | 12 +
c_binding/lsm_mgmt.cpp | 188 +++++++++++++
c_binding/lsm_plugin_ipc.cpp | 138 ++++++++-
doc/man/lsmcli.1.in | 41 +++
plugin/hpsa/hpsa.py | 239 ++++++++++++++--
plugin/megaraid/megaraid.py | 232 +++++++++++++++-
plugin/ontap/ontap.py | 21 ++
plugin/sim/simarray.py | 309 +++++++++++++--------
plugin/sim/simulator.py | 11 +
plugin/simc/simc_lsmplugin.c | 44 ++-
python_binding/lsm/_client.py | 285 +++++++++++++++++++
python_binding/lsm/_common.py | 3 +
python_binding/lsm/_data.py | 9 +
test/cmdtest.py | 75 +++++
test/plugin_test.py | 60 ++++
test/tester.c | 130 +++++++++
tools/bash_completion/lsmcli | 21 +-
tools/lsmcli/cmdline.py | 84 +++++-
tools/lsmcli/data_display.py | 77 ++++-
25 files changed, 2087 insertions(+), 133 deletions(-)
--
1.8.3.1
Gris Ge
2015-04-22 15:20:32 UTC
Permalink
* Python part:
* New python method: lsm.Client.pool_raid_info(pool,flags)
Query the RAID information of a given pool.
Returns [raid_type, member_type, member_ids]
* For a sub-pool which allocates space from another pool (like a NetApp
ONTAP volume, which allocates space from an ONTAP aggregate), the
member_type will be lsm.Pool.MEMBER_TYPE_POOL.
* For a disk RAID pool which uses whole disks as RAID members (like a
NetApp ONTAP aggregate or an EMC VNX pool), the member_type will be
lsm.Pool.MEMBER_TYPE_DISK.

* New constants:
* lsm.Pool.MEMBER_TYPE_POOL
* lsm.Pool.MEMBER_TYPE_DISK
* lsm.Pool.MEMBER_TYPE_OTHER
* lsm.Pool.MEMBER_TYPE_UNKNOWN

* New capability for this new method:
* lsm.Capabilities.POOL_RAID_INFO

* C part:
* New C functions:
* lsm_pool_raid_info()
For API user.
* lsm_plug_pool_raid_info()
For plugin.

* New constants:
* LSM_POOL_MEMBER_TYPE_POOL
* LSM_POOL_MEMBER_TYPE_DISK
* LSM_POOL_MEMBER_TYPE_OTHER
* LSM_POOL_MEMBER_TYPE_UNKNOWN
* LSM_CAP_POOL_RAID_INFO

* For detailed information, please refer to the docstring of the
lsm.Client.pool_raid_info() method in python_binding/lsm/_client.py
and of the lsm_pool_raid_info() function in
c_binding/include/libstoragemgmt/libstoragemgmt.h.

Changes in V2:

* Use lsm_string_list instead of 'char **' for 'member_ids' argument in C
library.
* Removed 'member_count' argument as lsm_string_list has
lsm_string_list_size() function.
* Fix the incorrect comment of plugin interface 'lsm_plug_volume_raid_info'.

Changes in V3:

* Method name changed to lsm.Client.pool_member_info() and
lsm_pool_member_info().

Changes in V4:

* Fix typo 'memership' in libstoragemgmt.h.

* Handle 'member_ids' conversion errors in lsm_pool_member_info() of
lsm_mgmt.cpp:
* Raise LSM_ERR_NO_MEMORY when value_to_string_list() fails to
convert the value to a string list.
* Raise LSM_ERR_LIB_BUG if the received member_ids is not an array.

* Initialized return data just in case the plugin didn't touch the values
in handle_pool_member_info() of lsm_plugin_ipc.cpp.

* Update docstring of lsm.Client.pool_member_info().
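
* For quick reference, a minimal usage sketch of the new Python method (the
  URI and the printing are illustrative):

      from lsm import Client, Pool

      c = Client('sim://')
      for pool in c.pools():
          raid_type, member_type, member_ids = c.pool_member_info(pool)
          if member_type == Pool.MEMBER_TYPE_DISK:
              print "Pool %s is a disk RAID group of: %s" % (
                  pool.id, ', '.join(member_ids))
          elif member_type == Pool.MEMBER_TYPE_POOL:
              print "Pool %s is a sub-pool of: %s" % (
                  pool.id, ', '.join(member_ids))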

Signed-off-by: Gris Ge <***@redhat.com>
---
c_binding/include/libstoragemgmt/libstoragemgmt.h | 27 ++++++
.../libstoragemgmt/libstoragemgmt_capabilities.h | 4 +-
.../libstoragemgmt/libstoragemgmt_plug_interface.h | 27 ++++++
.../include/libstoragemgmt/libstoragemgmt_types.h | 17 ++++
c_binding/lsm_mgmt.cpp | 60 +++++++++++++
c_binding/lsm_plugin_ipc.cpp | 44 +++++++++-
python_binding/lsm/_client.py | 98 ++++++++++++++++++++++
python_binding/lsm/_data.py | 6 ++
8 files changed, 281 insertions(+), 2 deletions(-)

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt.h b/c_binding/include/libstoragemgmt/libstoragemgmt.h
index 3d34f75..85022b4 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt.h
@@ -864,6 +864,33 @@ int LSM_DLL_EXPORT lsm_volume_raid_info(
uint32_t *strip_size, uint32_t *disk_count,
uint32_t *min_io_size, uint32_t *opt_io_size, lsm_flag flags);

+/**
+ * Retrieves the membership of given pool. New in version 1.2.
+ * @param[in] c Valid connection
+ * @param[in] pool The lsm_pool ptr.
+ * @param[out] raid_type
+ * Enum of lsm_volume_raid_type.
+ * @param[out] member_type
+ * Enum of lsm_pool_member_type.
+ * @param[out] member_ids
+ * The pointer to lsm_string_list pointer.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_POOL,
+ * the 'member_ids' will contain a list of parent Pool
+ * IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_DISK,
+ * the 'member_ids' will contain a list of disk IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_OTHER or
+ * LSM_POOL_MEMBER_TYPE_UNKNOWN, the member_ids should
+ * be NULL.
+ * Need to use lsm_string_list_free() to free this memory.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+int LSM_DLL_EXPORT lsm_pool_member_info(
+ lsm_connect *c, lsm_pool *pool, lsm_volume_raid_type *raid_type,
+ lsm_pool_member_type *member_type, lsm_string_list **member_ids,
+ lsm_flag flags);
+
#ifdef __cplusplus
}
#endif
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
index 18490f3..943f4a4 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
@@ -113,7 +113,9 @@ typedef enum {
LSM_CAP_TARGET_PORTS = 216, /**< List target ports */
LSM_CAP_TARGET_PORTS_QUICK_SEARCH = 217, /**< Filtering occurs on array */

- LSM_CAP_DISKS = 220 /**< List disk drives */
+ LSM_CAP_DISKS = 220, /**< List disk drives */
+ LSM_CAP_POOL_MEMBER_INFO = 221,
+ /**^ Query pool member information */

} lsm_capability_type;

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
index b36586c..cdc8dc9 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
@@ -827,6 +827,32 @@ typedef int (*lsm_plug_volume_raid_info)(lsm_plugin_ptr c, lsm_volume *volume,
uint32_t *disk_count, uint32_t *min_io_size, uint32_t *opt_io_size,
lsm_flag flags);

+/**
+ * Retrieves the membership of given pool. New in version 1.2.
+ * @param[in] c Valid lsm plug-in pointer
+ * @param[in] pool The lsm_pool ptr.
+ * @param[out] raid_type
+ * Enum of lsm_volume_raid_type.
+ * @param[out] member_type
+ * Enum of lsm_pool_member_type.
+ * @param[out] member_ids
+ * The pointer of lsm_string_list pointer.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_POOL,
+ * the 'member_ids' will contain a list of parent Pool
+ * IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_DISK,
+ * the 'member_ids' will contain a list of disk IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_OTHER or
+ * LSM_POOL_MEMBER_TYPE_UNKNOWN, the member_ids should
+ * be NULL.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+typedef int (*lsm_plug_pool_member_info)(
+ lsm_plugin_ptr c, lsm_pool *pool, lsm_volume_raid_type *raid_type,
+ lsm_pool_member_type *member_type, lsm_string_list **member_ids,
+ lsm_flag flags);
+
/** \struct lsm_ops_v1_2
* \brief Functions added in version 1.2
 * NOTE: This structure will change during development until version 1.2
@@ -835,6 +861,7 @@ typedef int (*lsm_plug_volume_raid_info)(lsm_plugin_ptr c, lsm_volume *volume,
struct lsm_ops_v1_2 {
lsm_plug_volume_raid_info vol_raid_info;
/**^ Query volume RAID information*/
+ lsm_plug_pool_member_info pool_member_info;
};

/**
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
index c5607c1..115ec5a 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
@@ -169,6 +169,23 @@ typedef enum {
/**^ Vendor specific RAID type */
} lsm_volume_raid_type;

+/**< \enum lsm_pool_member_type Different types of Pool member*/
+typedef enum {
+ LSM_POOL_MEMBER_TYPE_UNKNOWN = 0,
+ /**^ Plugin failed to detect the RAID member type. */
+ LSM_POOL_MEMBER_TYPE_OTHER = 1,
+ /**^ Vendor specific RAID member type. */
+ LSM_POOL_MEMBER_TYPE_DISK = 2,
+ /**^ Pool is created from RAID group using whole disks. */
+ LSM_POOL_MEMBER_TYPE_POOL = 3,
+ /**^
+     * The current pool (also known as a sub-pool) is allocated from
+     * another pool (the parent pool).
+     * The 'raid_type' will be set to RAID_TYPE_OTHER unless the RAID
+     * system supports RAID using the space of parent pools.
+ */
+} lsm_pool_member_type;
+
#define LSM_VOLUME_STRIP_SIZE_UNKNOWN 0
#define LSM_VOLUME_DISK_COUNT_UNKNOWN 0
#define LSM_VOLUME_MIN_IO_SIZE_UNKNOWN 0
diff --git a/c_binding/lsm_mgmt.cpp b/c_binding/lsm_mgmt.cpp
index e6a254f..7469127 100644
--- a/c_binding/lsm_mgmt.cpp
+++ b/c_binding/lsm_mgmt.cpp
@@ -802,6 +802,66 @@ int lsm_pool_list(lsm_connect *c, char *search_key, char *search_value,
return rc;
}

+int lsm_pool_member_info(lsm_connect *c, lsm_pool *pool,
+ lsm_volume_raid_type * raid_type,
+ lsm_pool_member_type *member_type,
+ lsm_string_list **member_ids, lsm_flag flags)
+{
+ if( LSM_FLAG_UNUSED_CHECK(flags) ) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ int rc = LSM_ERR_OK;
+ CONN_SETUP(c);
+
+ if( !LSM_IS_POOL(pool) ) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ if( !raid_type || !member_type || !member_ids) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ std::map<std::string, Value> p;
+ p["pool"] = pool_to_value(pool);
+ p["flags"] = Value(flags);
+ Value parameters(p);
+ try {
+
+ Value response;
+
+ rc = rpc(c, "pool_member_info", parameters, response);
+ if( LSM_ERR_OK == rc ) {
+ std::vector<Value> j = response.asArray();
+ *raid_type = (lsm_volume_raid_type) j[0].asInt32_t();
+ *member_type = (lsm_pool_member_type) j[1].asInt32_t();
+ *member_ids = NULL;
+ if (Value::array_t == j[2].valueType()){
+ if (j[2].asArray().size()){
+ *member_ids = value_to_string_list(j[2]);
+ if ( *member_ids == NULL ){
+ return LSM_ERR_NO_MEMORY;
+ }else if( lsm_string_list_size(*member_ids) !=
+ j[2].asArray().size() ){
+
+ lsm_string_list_free(*member_ids);
+ return LSM_ERR_NO_MEMORY;
+ }
+ }
+ }else{
+ rc = logException(
+ c, LSM_ERR_LIB_BUG, "member_ids data is not an array",
+ "member_ids data is not an array");
+ }
+ }
+ } catch( const ValueException &ve ) {
+ rc = logException(c, LSM_ERR_LIB_BUG, "Unexpected type",
+ ve.what());
+ }
+ return rc;
+
+}
+
int lsm_target_port_list(lsm_connect *c, const char *search_key,
const char *search_value,
lsm_target_port **target_ports[],
diff --git a/c_binding/lsm_plugin_ipc.cpp b/c_binding/lsm_plugin_ipc.cpp
index d2a43d4..89b7674 100644
--- a/c_binding/lsm_plugin_ipc.cpp
+++ b/c_binding/lsm_plugin_ipc.cpp
@@ -1015,6 +1015,47 @@ static int handle_volume_raid_info(lsm_plugin_ptr p, Value &params,
return rc;
}

+static int handle_pool_member_info(lsm_plugin_ptr p, Value &params,
+ Value &response)
+{
+ int rc = LSM_ERR_NO_SUPPORT;
+ if( p && p->ops_v1_2 && p->ops_v1_2->pool_member_info) {
+ Value v_pool = params["pool"];
+
+ if(IS_CLASS_POOL(v_pool) &&
+ LSM_FLAG_EXPECTED_TYPE(params) ) {
+ lsm_pool *pool = value_to_pool(v_pool);
+ std::vector<Value> result;
+
+ if( pool ) {
+ lsm_volume_raid_type raid_type = LSM_VOLUME_RAID_TYPE_UNKNOWN;
+ lsm_pool_member_type member_type =
+ LSM_POOL_MEMBER_TYPE_UNKNOWN;
+ lsm_string_list *member_ids = NULL;
+
+ rc = p->ops_v1_2->pool_member_info(
+ p, pool, &raid_type, &member_type,
+ &member_ids, LSM_FLAG_GET_VALUE(params));
+
+ if( LSM_ERR_OK == rc ) {
+ result.push_back(Value((int32_t)raid_type));
+ result.push_back(Value((int32_t)member_type));
+ result.push_back(string_list_to_value(member_ids));
+ response = Value(result);
+ }
+
+ lsm_pool_record_free(pool);
+ } else {
+ rc = LSM_ERR_NO_MEMORY;
+ }
+
+ } else {
+ rc = LSM_ERR_TRANSPORT_INVALID_ARG;
+ }
+ }
+ return rc;
+}
+
static int ag_list(lsm_plugin_ptr p, Value &params, Value &response)
{
int rc = LSM_ERR_NO_SUPPORT;
@@ -2213,7 +2254,8 @@ static std::map<std::string,handler> dispatch = static_map<std::string,handler>
("volume_resize", handle_volume_resize)
("volumes_accessible_by_access_group", vol_accessible_by_ag)
("volumes", handle_volumes)
- ("volume_raid_info", handle_volume_raid_info);
+ ("volume_raid_info", handle_volume_raid_info)
+ ("pool_member_info", handle_pool_member_info);

static int process_request(lsm_plugin_ptr p, const std::string &method, Value &request,
Value &response)
diff --git a/python_binding/lsm/_client.py b/python_binding/lsm/_client.py
index a641b1d..f3c9e6c 100644
--- a/python_binding/lsm/_client.py
+++ b/python_binding/lsm/_client.py
@@ -1074,3 +1074,101 @@ def volume_raid_info(self, volume, flags=FLAG_RSVD):
No support.
"""
return self._tp.rpc('volume_raid_info', _del_self(locals()))
+
+ @_return_requires([int, int, [unicode]])
+ def pool_member_info(self, pool, flags=FLAG_RSVD):
+ """
+ lsm.Client.pool_member_info(self, pool, flags=lsm.Client.FLAG_RSVD)
+
+ Version:
+ 1.2
+ Usage:
+ Query the membership information of certain pool:
+ RAID type, member type and member ids.
+ Currently, LibStorageMgmt supports two types of pool:
+ * Sub-pool -- Pool.MEMBER_TYPE_POOL
+ Pool space is allocated from parent pool.
+ Example:
+ * NetApp ONTAP volume
+
+ * Disk RAID pool -- Pool.MEMBER_TYPE_DISK
+ Pool is a RAID group assembled by disks.
+ Example:
+ * LSI MegaRAID disk group
+ * EMC VNX pool
+ * NetApp ONTAP aggregate
+ Parameters:
+ pool (lsm.Pool object)
+ Pool to query
+ flags (int)
+ Optional. Reserved for future use.
+ Should be set as lsm.Client.FLAG_RSVD.
+ Returns:
+ [raid_type, member_type, member_ids]
+
+ raid_type (int)
+ RAID Type of requested pool.
+ Could be one of these values:
+ Volume.RAID_TYPE_RAID0
+ Stripe
+ Volume.RAID_TYPE_RAID1
+ Two disks Mirror
+ Volume.RAID_TYPE_RAID3
+ Byte-level striping with dedicated parity
+ Volume.RAID_TYPE_RAID4
+ Block-level striping with dedicated parity
+ Volume.RAID_TYPE_RAID5
+ Block-level striping with distributed parity
+ Volume.RAID_TYPE_RAID6
+ Block-level striping with two distributed parities,
+ aka, RAID-DP
+ Volume.RAID_TYPE_RAID10
+ Stripe of mirrors
+ Volume.RAID_TYPE_RAID15
+ Parity of mirrors
+ Volume.RAID_TYPE_RAID16
+ Dual parity of mirrors
+ Volume.RAID_TYPE_RAID50
+                    Stripe of parities
+                Volume.RAID_TYPE_RAID60
+                    Stripe of dual parities
+ Volume.RAID_TYPE_RAID51
+ Mirror of parities
+ Volume.RAID_TYPE_RAID61
+ Mirror of dual parities
+ Volume.RAID_TYPE_JBOD
+ Just bunch of disks, no parity, no striping.
+ Volume.RAID_TYPE_UNKNOWN
+ The plugin failed to detect the volume's RAID type.
+ Volume.RAID_TYPE_MIXED
+ This pool contains multiple RAID settings.
+ Volume.RAID_TYPE_OTHER
+ Vendor specific RAID type
+ member_type (int)
+ Could be one of these values:
+ Pool.MEMBER_TYPE_POOL
+                    The current pool (also known as a sub-pool) is allocated
+                    from another pool (the parent pool). The 'raid_type' will
+                    be set to RAID_TYPE_OTHER unless the RAID system supports
+                    RAID using the space of parent pools.
+ Pool.MEMBER_TYPE_DISK
+ Pool is created from RAID group using whole disks.
+ Pool.MEMBER_TYPE_OTHER
+ Vendor specific RAID member type.
+ Pool.MEMBER_TYPE_UNKNOWN
+ Plugin failed to detect the RAID member type.
+ member_ids (list of strings)
+ When 'member_type' is Pool.MEMBER_TYPE_POOL,
+ the 'member_ids' will contain a list of parent Pool IDs.
+ When 'member_type' is Pool.MEMBER_TYPE_DISK,
+ the 'member_ids' will contain a list of disk IDs.
+ When 'member_type' is Pool.MEMBER_TYPE_OTHER or
+ Pool.MEMBER_TYPE_UNKNOWN, the member_ids should be an empty
+ list.
+ SpecialExceptions:
+ LsmError
+ ErrorNumber.NO_SUPPORT
+ ErrorNumber.NOT_FOUND_POOL
+ Capability:
+ lsm.Capabilities.POOL_MEMBER_INFO
+ """
+ return self._tp.rpc('pool_member_info', _del_self(locals()))
diff --git a/python_binding/lsm/_data.py b/python_binding/lsm/_data.py
index 23681dd..2f42eaf 100644
--- a/python_binding/lsm/_data.py
+++ b/python_binding/lsm/_data.py
@@ -412,6 +412,11 @@ class Pool(IData):
STATUS_INITIALIZING = 1 << 14
STATUS_GROWING = 1 << 15

+ MEMBER_TYPE_UNKNOWN = 0
+ MEMBER_TYPE_OTHER = 1
+ MEMBER_TYPE_DISK = 2
+ MEMBER_TYPE_POOL = 3
+
def __init__(self, _id, _name, _element_type, _unsupported_actions,
_total_space, _free_space,
_status, _status_info, _system_id, _plugin_data=None):
@@ -754,6 +759,7 @@ class Capabilities(IData):
TARGET_PORTS_QUICK_SEARCH = 217

DISKS = 220
+ POOL_MEMBER_INFO = 221

def _to_dict(self):
return {'class': self.__class__.__name__,
--
1.8.3.1
Gris Ge
2015-04-22 15:20:33 UTC
Permalink
* New command 'pool-raid-info' to utilize the lsm.Client.pool_raid_info() method.

* Manpage updated.

* New alias 'pri' for 'pool-raid-info' command.

Changes in V2:

* Renamed command to 'pool-member-info' and alias 'pmi'

Changes in V4:

* Add bash auto completion of lsmcli for 'pool-member-info' command and its
alias.
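
* Example invocation (the pool ID is illustrative; the output columns are
  the Pool ID, RAID Type, Member Type and Member IDs headers defined in
  data_display.py below):

      $ lsmcli pool-member-info --pool POOL_ID_0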

Signed-off-by: Gris Ge <***@redhat.com>
---
doc/man/lsmcli.1.in | 9 +++++++++
tools/bash_completion/lsmcli | 8 +++++++-
tools/lsmcli/cmdline.py | 19 ++++++++++++++++++-
tools/lsmcli/data_display.py | 41 +++++++++++++++++++++++++++++++++++++++++
4 files changed, 75 insertions(+), 2 deletions(-)

diff --git a/doc/man/lsmcli.1.in b/doc/man/lsmcli.1.in
index 381d676..6faf690 100644
--- a/doc/man/lsmcli.1.in
+++ b/doc/man/lsmcli.1.in
@@ -351,6 +351,13 @@ Query RAID information for given volume.
\fB--vol\fR \fI<VOL_ID>\fR
Required. The ID of volume to query.

+.SS pool-member-info
+.TP 15
+Query RAID information for given pool.
+.TP
+\fB--pool\fR \fI<POOL_ID>\fR
+Required. The ID of pool to query.
+
.SS access-group-create
.TP 15
Create an access group.
@@ -589,6 +596,8 @@ Alias of 'volume-mask'
Alias of 'volume-unmask'
.SS vri
Alias of 'volume-raid-info'
+.SS pmi
+Alias of 'pool-member-info'
.SS ac
Alias of 'access-group-create'
.SS aa
diff --git a/tools/bash_completion/lsmcli b/tools/bash_completion/lsmcli
index a67083b..164604f 100644
--- a/tools/bash_completion/lsmcli
+++ b/tools/bash_completion/lsmcli
@@ -76,7 +76,7 @@ function _lsm()
fs-resize fs-export fs-unexport fs-clone fs-snap-create \
fs-snap-delete fs-snap-restore fs-dependants fs-dependants-rm \
file-clone ls lp lv ld la lf lt c p vc vd vr vm vi ve vi ac \
- aa ar ad vri"
+ aa ar ad vri pool-member-info pmi"

list_args="--type"
list_type_args="volumes pools fs snapshots exports nfs_client_auth \
@@ -124,6 +124,7 @@ function _lsm()
fs_snap_restore_args="--snap --fs --file --fileas --force"
fs_dependants_args="--fs"
file_clone_args="--fs --src --dst --backing-snapshot"
+ pool_member_info_args="--pool"

# These operations can potentially be slow and cause hangs depending on plugin and configuration
if [[ ${NO_VALUE_LOOKUP} -ne 0 ]] ; then
@@ -396,6 +397,11 @@ function _lsm()
COMPREPLY=( $(compgen -W "${potential_args}" -- ${cur}) )
return 0
;;
+ pool-member-info|pmi)
+ possible_args "${pool_member_info_args}"
+ COMPREPLY=( $(compgen -W "${potential_args}" -- ${cur}) )
+ return 0
+ ;;
*)
;;
esac
diff --git a/tools/lsmcli/cmdline.py b/tools/lsmcli/cmdline.py
index 9b2b682..d26da80 100644
--- a/tools/lsmcli/cmdline.py
+++ b/tools/lsmcli/cmdline.py
@@ -39,7 +39,8 @@

from lsm.lsmcli.data_display import (
DisplayData, PlugData, out,
- vol_provision_str_to_type, vol_rep_type_str_to_type, VolumeRAIDInfo)
+ vol_provision_str_to_type, vol_rep_type_str_to_type, VolumeRAIDInfo,
+ PoolRAIDInfo)


## Wraps the invocation to the command line
@@ -376,6 +377,14 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
),

dict(
+ name='pool-member-info',
+        help='Query pool membership information',
+ args=[
+ dict(pool_id_opt),
+ ],
+ ),
+
+ dict(
name='access-group-create',
help='Create an access group',
args=[
@@ -637,6 +646,7 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
['ar', 'access-group-remove'],
['ad', 'access-group-delete'],
['vri', 'volume-raid-info'],
+ ['pmi', 'pool-member-info'],
)


@@ -1334,6 +1344,13 @@ def volume_raid_info(self, args):
VolumeRAIDInfo(
lsm_vol.id, *self.c.volume_raid_info(lsm_vol))])

+ def pool_member_info(self, args):
+ lsm_pool = _get_item(self.c.pools(), args.pool, "Pool")
+ self.display_data(
+ [
+ PoolRAIDInfo(
+ lsm_pool.id, *self.c.pool_member_info(lsm_pool))])
+
## Displays file system dependants
def fs_dependants(self, args):
fs = _get_item(self.c.fs(), args.fs, "File System")
diff --git a/tools/lsmcli/data_display.py b/tools/lsmcli/data_display.py
index 19f7114..115f15a 100644
--- a/tools/lsmcli/data_display.py
+++ b/tools/lsmcli/data_display.py
@@ -279,6 +279,26 @@ def raid_type_to_str(raid_type):
return _enum_type_to_str(raid_type, VolumeRAIDInfo._RAID_TYPE_MAP)


+class PoolRAIDInfo(object):
+ _MEMBER_TYPE_MAP = {
+ Pool.MEMBER_TYPE_UNKNOWN: 'Unknown',
+ Pool.MEMBER_TYPE_OTHER: 'Unknown',
+ Pool.MEMBER_TYPE_POOL: 'Pool',
+ Pool.MEMBER_TYPE_DISK: 'Disk',
+ }
+
+ def __init__(self, pool_id, raid_type, member_type, member_ids):
+ self.pool_id = pool_id
+ self.raid_type = raid_type
+ self.member_type = member_type
+ self.member_ids = member_ids
+
+ @staticmethod
+ def member_type_to_str(member_type):
+ return _enum_type_to_str(
+ member_type, PoolRAIDInfo._MEMBER_TYPE_MAP)
+
+
class DisplayData(object):

def __init__(self):
@@ -557,6 +577,27 @@ def __init__(self):
'value_conv_human': VOL_RAID_INFO_VALUE_CONV_HUMAN,
}

+ POOL_RAID_INFO_HEADER = OrderedDict()
+ POOL_RAID_INFO_HEADER['pool_id'] = 'Pool ID'
+ POOL_RAID_INFO_HEADER['raid_type'] = 'RAID Type'
+ POOL_RAID_INFO_HEADER['member_type'] = 'Member Type'
+ POOL_RAID_INFO_HEADER['member_ids'] = 'Member IDs'
+
+ POOL_RAID_INFO_COLUMN_SKIP_KEYS = []
+
+ POOL_RAID_INFO_VALUE_CONV_ENUM = {
+ 'raid_type': VolumeRAIDInfo.raid_type_to_str,
+ 'member_type': PoolRAIDInfo.member_type_to_str,
+ }
+ POOL_RAID_INFO_VALUE_CONV_HUMAN = []
+
+ VALUE_CONVERT[PoolRAIDInfo] = {
+ 'headers': POOL_RAID_INFO_HEADER,
+ 'column_skip_keys': POOL_RAID_INFO_COLUMN_SKIP_KEYS,
+ 'value_conv_enum': POOL_RAID_INFO_VALUE_CONV_ENUM,
+ 'value_conv_human': POOL_RAID_INFO_VALUE_CONV_HUMAN,
+ }
+
@staticmethod
def _get_man_pro_value(obj, key, value_conv_enum, value_conv_human,
flag_human, flag_enum):
--
1.8.3.1
Gris Ge
2015-04-22 15:20:36 UTC
Permalink
* Add a simple lsm_pool_raid_info() that returns unknown data.

* Set LSM_CAP_POOL_RAID_INFO.

Changes in V2:

* Use lsm_string_list for member_ids of lsm_pool_raid_info().

Changes in V3:

* Renamed to lsm_pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/simc/simc_lsmplugin.c | 24 +++++++++++++++++++++++-
1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/plugin/simc/simc_lsmplugin.c b/plugin/simc/simc_lsmplugin.c
index 422a064..6aa7bdb 100644
--- a/plugin/simc/simc_lsmplugin.c
+++ b/plugin/simc/simc_lsmplugin.c
@@ -392,6 +392,7 @@ static int cap(lsm_plugin_ptr c, lsm_system *system,
LSM_CAP_EXPORT_FS,
LSM_CAP_EXPORT_REMOVE,
LSM_CAP_VOLUME_RAID_INFO,
+ LSM_CAP_POOL_MEMBER_INFO,
-1
);

@@ -980,8 +981,29 @@ static int volume_raid_info(lsm_plugin_ptr c, lsm_volume *volume,
return rc;
}

+static int pool_member_info(
+ lsm_plugin_ptr c, lsm_pool *pool,
+ lsm_volume_raid_type *raid_type, lsm_pool_member_type *member_type,
+ lsm_string_list **member_ids, lsm_flag flags)
+{
+ int rc = LSM_ERR_OK;
+ struct plugin_data *pd = (struct plugin_data*)lsm_private_data_get(c);
+ lsm_pool *p = find_pool(pd, lsm_pool_id_get(pool));
+
+ if( !p) {
+ rc = lsm_log_error_basic(c, LSM_ERR_NOT_FOUND_POOL,
+ "Pool not found!");
+ }
+
+ *raid_type = LSM_VOLUME_RAID_TYPE_UNKNOWN;
+ *member_type = LSM_POOL_MEMBER_TYPE_UNKNOWN;
+ *member_ids = NULL;
+ return rc;
+}
+
static struct lsm_ops_v1_2 ops_v1_2 = {
- volume_raid_info
+ volume_raid_info,
+ pool_member_info,
};

static int volume_enable_disable(lsm_plugin_ptr c, lsm_volume *v,
--
1.8.3.1
Gris Ge
2015-04-22 15:20:35 UTC
Permalink
* Add lsm.Client.pool_raid_info() support.

* Use lsm.Pool.MEMBER_TYPE_XXX to replace PoolRAID.MEMBER_TYPE_XXX.

* Removed unused disk_type_to_member_type() and member_type_is_disk()
methods.

* Data version bumped to 3.2.

Changes in V3:

* Renamed to pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/sim/simarray.py | 98 ++++++++++++++++---------------------------------
plugin/sim/simulator.py | 3 ++
2 files changed, 35 insertions(+), 66 deletions(-)

diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index ae14848..de43701 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -67,54 +67,6 @@ def _random_vpd():


class PoolRAID(object):
- MEMBER_TYPE_UNKNOWN = 0
- MEMBER_TYPE_DISK = 1
- MEMBER_TYPE_DISK_MIX = 10
- MEMBER_TYPE_DISK_ATA = 11
- MEMBER_TYPE_DISK_SATA = 12
- MEMBER_TYPE_DISK_SAS = 13
- MEMBER_TYPE_DISK_FC = 14
- MEMBER_TYPE_DISK_SOP = 15
- MEMBER_TYPE_DISK_SCSI = 16
- MEMBER_TYPE_DISK_NL_SAS = 17
- MEMBER_TYPE_DISK_HDD = 18
- MEMBER_TYPE_DISK_SSD = 19
- MEMBER_TYPE_DISK_HYBRID = 110
- MEMBER_TYPE_DISK_LUN = 111
-
- MEMBER_TYPE_POOL = 2
-
- _MEMBER_TYPE_2_DISK_TYPE = {
- MEMBER_TYPE_DISK: Disk.TYPE_UNKNOWN,
- MEMBER_TYPE_DISK_MIX: Disk.TYPE_UNKNOWN,
- MEMBER_TYPE_DISK_ATA: Disk.TYPE_ATA,
- MEMBER_TYPE_DISK_SATA: Disk.TYPE_SATA,
- MEMBER_TYPE_DISK_SAS: Disk.TYPE_SAS,
- MEMBER_TYPE_DISK_FC: Disk.TYPE_FC,
- MEMBER_TYPE_DISK_SOP: Disk.TYPE_SOP,
- MEMBER_TYPE_DISK_SCSI: Disk.TYPE_SCSI,
- MEMBER_TYPE_DISK_NL_SAS: Disk.TYPE_NL_SAS,
- MEMBER_TYPE_DISK_HDD: Disk.TYPE_HDD,
- MEMBER_TYPE_DISK_SSD: Disk.TYPE_SSD,
- MEMBER_TYPE_DISK_HYBRID: Disk.TYPE_HYBRID,
- MEMBER_TYPE_DISK_LUN: Disk.TYPE_LUN,
- }
-
- @staticmethod
- def member_type_is_disk(member_type):
- """
- Returns True if defined 'member_type' is disk.
- False when else.
- """
- return member_type in PoolRAID._MEMBER_TYPE_2_DISK_TYPE
-
- @staticmethod
- def disk_type_to_member_type(disk_type):
- for m_type, d_type in PoolRAID._MEMBER_TYPE_2_DISK_TYPE.items():
- if disk_type == d_type:
- return m_type
- return PoolRAID.MEMBER_TYPE_UNKNOWN
-
_RAID_DISK_CHK = {
Volume.RAID_TYPE_JBOD: lambda x: x > 0,
Volume.RAID_TYPE_RAID0: lambda x: x > 0,
@@ -171,7 +123,7 @@ def data_disk_count(raid_type, disk_count):


class BackStore(object):
- VERSION = "3.1"
+ VERSION = "3.2"
VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
JOB_DEFAULT_DURATION = 1
JOB_DATA_TYPE_VOL = 1
@@ -916,6 +868,12 @@ def sim_syss(self):
"""
return self._get_table('systems', BackStore.SYS_KEY_LIST)

+ def sim_disk_ids_of_pool(self, sim_pool_id):
+ return list(
+ d['id']
+ for d in self._data_find(
+ 'disks', 'owner_pool_id="%s"' % sim_pool_id, ['id']))
+
def sim_disks(self):
"""
Return a list of sim_disk dict.
@@ -935,20 +893,6 @@ def sim_pool_of_id(self, sim_pool_id):

def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
element_type, unsupported_actions=0):
- # Detect disk type
- disk_type = None
- for sim_disk in self.sim_disks():
- if sim_disk['id'] not in sim_disk_ids:
- continue
- if disk_type is None:
- disk_type = sim_disk['disk_type']
- elif disk_type != sim_disk['disk_type']:
- disk_type = None
- break
- member_type = PoolRAID.MEMBER_TYPE_DISK
- if disk_type is not None:
- member_type = PoolRAID.disk_type_to_member_type(disk_type)
-
self._data_add(
'pools',
{
@@ -958,7 +902,7 @@ def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
'element_type': element_type,
'unsupported_actions': unsupported_actions,
'raid_type': raid_type,
- 'member_type': member_type,
+ 'member_type': Pool.MEMBER_TYPE_DISK,
})

data_disk_count = PoolRAID.data_disk_count(
@@ -991,7 +935,7 @@ def sim_pool_create_sub_pool(self, name, parent_pool_id, size,
'element_type': element_type,
'unsupported_actions': unsupported_actions,
'raid_type': Volume.RAID_TYPE_OTHER,
- 'member_type': PoolRAID.MEMBER_TYPE_POOL,
+ 'member_type': Pool.MEMBER_TYPE_POOL,
'parent_pool_id': parent_pool_id,
'total_space': size,
})
@@ -2240,7 +2184,7 @@ def volume_raid_info(self, lsm_vol):
opt_io_size = Volume.OPT_IO_SIZE_UNKNOWN
disk_count = Volume.DISK_COUNT_UNKNOWN

- if sim_pool['member_type'] == PoolRAID.MEMBER_TYPE_POOL:
+ if sim_pool['member_type'] == Pool.MEMBER_TYPE_POOL:
parent_sim_pool = self.bs_obj.sim_pool_of_id(
sim_pool['parent_pool_id'])
raid_type = parent_sim_pool['raid_type']
@@ -2278,3 +2222,25 @@ def volume_raid_info(self, lsm_vol):
opt_io_size = int(data_disk_count * BackStore.STRIP_SIZE)

return [raid_type, strip_size, disk_count, min_io_size, opt_io_size]
+
+ @_handle_errors
+ def pool_member_info(self, lsm_pool):
+ sim_pool = self.bs_obj.sim_pool_of_id(
+ SimArray._lsm_id_to_sim_id(
+ lsm_pool.id,
+ LsmError(ErrorNumber.NOT_FOUND_POOL, "Pool not found")))
+ member_type = sim_pool['member_type']
+ member_ids = []
+ if member_type == Pool.MEMBER_TYPE_POOL:
+ member_ids = [
+ SimArray._sim_id_to_lsm_id(
+ sim_pool['parent_pool_id'], 'POOL')]
+ elif member_type == Pool.MEMBER_TYPE_DISK:
+ member_ids = list(
+ SimArray._sim_id_to_lsm_id(sim_disk_id, 'DISK')
+ for sim_disk_id in self.bs_obj.sim_disk_ids_of_pool(
+ sim_pool['id']))
+ else:
+ member_type = Pool.MEMBER_TYPE_UNKNOWN
+
+ return sim_pool['raid_type'], member_type, member_ids
diff --git a/plugin/sim/simulator.py b/plugin/sim/simulator.py
index d562cd6..724574f 100644
--- a/plugin/sim/simulator.py
+++ b/plugin/sim/simulator.py
@@ -292,3 +292,6 @@ def target_ports(self, search_key=None, search_value=None, flags=0):

def volume_raid_info(self, volume, flags=0):
return self.sim_array.volume_raid_info(volume)
+
+ def pool_member_info(self, pool, flags=0):
+ return self.sim_array.pool_member_info(pool)
--
1.8.3.1
Gris Ge
2015-04-22 15:20:34 UTC
Permalink
* cmdtest.py: Test lsmcli 'pool-raid-info' command.
* plugin_test.py: Capability based python API test.
* tester.c: C API test for both simulator plugin and simulator C plugin.

Changes in V2:

* Updated tester.c to use lsm_string_list for 'member_ids' argument.

Changes in V3:

* Sync with pool_raid_info -> pool_member_info rename.

Changes in V4:

* Initialized the 'member_ids' in pool_member_info test.
* Make sure the retrieved 'member_id' is not an empty string.

Signed-off-by: Gris Ge <***@redhat.com>
---
test/cmdtest.py | 20 ++++++++++++++++++++
test/plugin_test.py | 10 ++++++++++
test/tester.c | 31 +++++++++++++++++++++++++++++++
3 files changed, 61 insertions(+)

diff --git a/test/cmdtest.py b/test/cmdtest.py
index e80e027..60bef93 100755
--- a/test/cmdtest.py
+++ b/test/cmdtest.py
@@ -696,6 +696,24 @@ def volume_raid_info_test(cap, system_id):
exit(10)
return

+def pool_member_info_test(cap, system_id):
+ if cap['POOL_MEMBER_INFO']:
+ out = call([cmd, '-t' + sep, 'list', '--type', 'POOLS'])[1]
+ pool_list = parse(out)
+ for pool in pool_list:
+ out = call(
+ [cmd, '-t' + sep, 'pool-member-info', '--pool', pool[0]])[1]
+ r = parse(out)
+ if len(r[0]) != 4:
+            print "pool-member-info got unexpected output: %s" % out
+ exit(10)
+ if r[0][0] != pool[0]:
+ print "pool-member-info output pool ID is not requested " \
+ "pool ID %s" % out
+ exit(10)
+ return
+
+
def run_all_tests(cap, system_id):
test_display(cap, system_id)
test_plugin_list(cap, system_id)
@@ -709,6 +727,8 @@ def run_all_tests(cap, system_id):

volume_raid_info_test(cap, system_id)

+    pool_member_info_test(cap, system_id)
+
if __name__ == "__main__":
parser = OptionParser()
parser.add_option("-c", "--command", action="store", type="string",
diff --git a/test/plugin_test.py b/test/plugin_test.py
index 69a45b7..5cb48ae 100755
--- a/test/plugin_test.py
+++ b/test/plugin_test.py
@@ -1283,6 +1283,16 @@ def test_create_delete_exports(self):
self.c.export_remove(exp)
self._fs_delete(fs)

+ def test_pool_member_info(self):
+ for s in self.systems:
+ cap = self.c.capabilities(s)
+ if supported(cap, [Cap.POOL_MEMBER_INFO]):
+ for pool in self.c.pools():
+ (raid_type, member_type, member_ids) = \
+ self.c.pool_member_info(pool)
+ self.assertTrue(type(raid_type) is int)
+ self.assertTrue(type(member_type) is int)
+ self.assertTrue(type(member_ids) is list)

def dump_results():
"""
diff --git a/test/tester.c b/test/tester.c
index 1622a75..3b04022 100644
--- a/test/tester.c
+++ b/test/tester.c
@@ -2887,6 +2887,36 @@ START_TEST(test_volume_raid_info)
}
END_TEST

+START_TEST(test_pool_member_info)
+{
+ int rc;
+ lsm_pool **pools = NULL;
+ uint32_t poolCount = 0;
+ G(rc, lsm_pool_list, c, NULL, NULL, &pools, &poolCount,
+ LSM_CLIENT_FLAG_RSVD);
+
+ lsm_volume_raid_type raid_type;
+ lsm_pool_member_type member_type;
+ lsm_string_list *member_ids = NULL;
+
+ int i;
+ uint32_t y;
+ for (i = 0; i < poolCount; i++) {
+ G(
+ rc, lsm_pool_member_info, c, pools[i], &raid_type, &member_type,
+ &member_ids, LSM_CLIENT_FLAG_RSVD);
+ for(y = 0; y < lsm_string_list_size(member_ids); y++){
+ // Simulator user reading the member id.
+ const char *cur_member_id = lsm_string_list_elem_get(
+ member_ids, y);
+ fail_unless( strlen(cur_member_id) );
+ }
+ lsm_string_list_free(member_ids);
+ G(rc, lsm_pool_record_free, pools[i]);
+ }
+}
+END_TEST
+
Suite * lsm_suite(void)
{
Suite *s = suite_create("libStorageMgmt");
@@ -2923,6 +2953,7 @@ Suite * lsm_suite(void)
tcase_add_test(basic, test_nfs_exports);
tcase_add_test(basic, test_invalid_input);
tcase_add_test(basic, test_volume_raid_info);
+ tcase_add_test(basic, test_pool_member_info);

suite_add_tcase(s, basic);
return s;
--
1.8.3.1
Gris Ge
2015-04-22 15:20:37 UTC
Permalink
* Add lsm.Client.pool_raid_info() support:
* Treat a NetApp ONTAP aggregate as a disk RAID group.
Use the na_disk['aggregate'] property to determine disk ownership.
* Treat a NetApp ONTAP volume as a sub-pool.
Use the na_vol['containing-aggregate'] property to determine sub-pool
ownership.

* Set lsm.Capabilities.POOL_RAID_INFO.

Changes in V3:

* Renamed to pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/ontap/ontap.py | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

diff --git a/plugin/ontap/ontap.py b/plugin/ontap/ontap.py
index f9f1f56..5a01494 100644
--- a/plugin/ontap/ontap.py
+++ b/plugin/ontap/ontap.py
@@ -545,6 +545,7 @@ def capabilities(self, system, flags=0):
cap.set(Capabilities.TARGET_PORTS)
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUME_RAID_INFO)
+ cap.set(Capabilities.POOL_MEMBER_INFO)
return cap

@handle_ontap_errors
@@ -1330,3 +1331,23 @@ def volume_raid_info(self, volume, flags=0):
return [
raid_type, Ontap._STRIP_SIZE, disk_count, Ontap._STRIP_SIZE,
Ontap._OPT_IO_SIZE]
+
+ @handle_ontap_errors
+ def pool_member_info(self, pool, flags=0):
+ if pool.element_type & Pool.ELEMENT_TYPE_VOLUME:
+ # We got a NetApp volume
+ raid_type = Volume.RAID_TYPE_OTHER
+ member_type = Pool.MEMBER_TYPE_POOL
+ na_vol = self.f.volumes(volume_name=pool.name)[0]
+ disk_ids = [na_vol['containing-aggregate']]
+ else:
+ # We got a NetApp aggregate
+ member_type = Pool.MEMBER_TYPE_DISK
+ na_aggr = self.f.aggregates(aggr_name=pool.name)[0]
+ raid_type = Ontap._raid_type_of_na_aggr(na_aggr)
+ disk_ids = list(
+ Ontap._disk_id(d)
+ for d in self.f.disks()
+ if 'aggregate' in d and d['aggregate'] == pool.name)
+
+ return raid_type, member_type, disk_ids
--
1.8.3.1
Gris Ge
2015-04-22 15:20:39 UTC
Permalink
* Add lsm.Client.pool_raid_info() support:
* Based on the 'hpssacli ctrl slot=0 show config detail' command.

* Set lsm.Capabilities.POOL_RAID_INFO.

Changes in V3:

* Renamed to pool_member_info().
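
* The parser below assumes a parsed hpssacli layout roughly like this
  (abbreviated and hypothetical; only the keys the code reads are shown):

      # _sacli_exec(["ctrl", "slot=0", "show", "config", "detail"]):
      # {'Array: A': {
      #      'Logical Drive: 1': ...,   # RAID type is parsed from here
      #      'physicaldrive 1I:1:1': {
      #          'Drive Type': 'Data Drive',
      #          'Serial Number': 'XXXX'}}}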

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/hpsa/hpsa.py | 39 +++++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)

diff --git a/plugin/hpsa/hpsa.py b/plugin/hpsa/hpsa.py
index 76eb4a7..75f507f 100644
--- a/plugin/hpsa/hpsa.py
+++ b/plugin/hpsa/hpsa.py
@@ -285,6 +285,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.VOLUMES)
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUME_RAID_INFO)
+ cap.set(Capabilities.POOL_MEMBER_INFO)
return cap

def _sacli_exec(self, sacli_cmds, flag_convert=True):
@@ -531,3 +532,41 @@ def volume_raid_info(self, volume, flags=Client.FLAG_RSVD):
"Volume not found")

return [raid_type, strip_size, disk_count, strip_size, stripe_size]
+
+ @_handle_errors
+ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
+ """
+ Depend on command:
+ hpssacli ctrl slot=0 show config detail
+ """
+ if not pool.plugin_data:
+            raise LsmError(
+                ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input pool argument: missing plugin_data property")
+
+ (ctrl_num, array_num) = pool.plugin_data.split(":")
+ ctrl_data = self._sacli_exec(
+ ["ctrl", "slot=%s" % ctrl_num, "show", "config", "detail"]
+ ).values()[0]
+
+ disk_ids = []
+ raid_type = Volume.RAID_TYPE_UNKNOWN
+ for key_name in ctrl_data.keys():
+ if key_name == "Array: %s" % array_num:
+ for array_key_name in ctrl_data[key_name].keys():
+ if array_key_name.startswith("Logical Drive: ") and \
+ raid_type == Volume.RAID_TYPE_UNKNOWN:
+ raid_type = _hp_raid_type_to_lsm(
+ ctrl_data[key_name][array_key_name])
+ elif array_key_name.startswith("physicaldrive"):
+ hp_disk = ctrl_data[key_name][array_key_name]
+ if hp_disk['Drive Type'] == 'Data Drive':
+ disk_ids.append(hp_disk['Serial Number'])
+ break
+
+ if len(disk_ids) == 0:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_POOL,
+ "Pool not found")
+
+ return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
--
1.8.3.1
Gris Ge
2015-04-22 15:20:41 UTC
Permalink
* Change Disk.plugin_data from '/c0/e252/s1' to '0:252:1'.

* Updated pool_raid_info() method for this change.

Changes in V3:

* Rebased on pool_raid_info to pool_member_info rename change.
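
* With the new format, consumers can recover the components with a simple
  split (a sketch):

      # '0:252:1' -> controller 0, enclosure 252, slot 1
      (ctrl_num, enclosure_num, slot_num) = lsm_disk.plugin_data.split(':')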

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/megaraid/megaraid.py | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/plugin/megaraid/megaraid.py b/plugin/megaraid/megaraid.py
index 256ddd6..46e7916 100644
--- a/plugin/megaraid/megaraid.py
+++ b/plugin/megaraid/megaraid.py
@@ -29,6 +29,7 @@
# Naming scheme
# mega_sys_path /c0
# mega_disk_path /c0/e64/s0
+# lsi_disk_id 0:64:0


def _handle_errors(method):
@@ -393,7 +394,8 @@ def disks(self, search_key=None, search_value=None,
status = _disk_status_of(
disk_show_basic_dict, disk_show_stat_dict)

- plugin_data = mega_disk_path
+ plugin_data = "%s:%s" % (
+ ctrl_num, disk_show_basic_dict['EID:Slt'])

rc_lsm_disks.append(
Disk(
@@ -576,15 +578,14 @@ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
lsm_disk_map[lsm_disk.plugin_data] = lsm_disk.id

for dg_disk_info in dg_show_all_output['DG Drive LIST']:
- cur_lsi_disk_path = "/c%s" % ctrl_num + "/e%s/s%s" % tuple(
- dg_disk_info['EID:Slt'].split(':'))
- if cur_lsi_disk_path in lsm_disk_map.keys():
- disk_ids.append(lsm_disk_map[cur_lsi_disk_path])
+ cur_lsi_disk_id = "%s:%s" % (ctrl_num, dg_disk_info['EID:Slt'])
+ if cur_lsi_disk_id in lsm_disk_map.keys():
+ disk_ids.append(lsm_disk_map[cur_lsi_disk_id])
else:
raise LsmError(
ErrorNumber.PLUGIN_BUG,
"pool_member_info(): Failed to find disk id of %s" %
- cur_lsi_disk_path)
+ cur_lsi_disk_id)

raid_type = Volume.RAID_TYPE_UNKNOWN
dg_num = lsi_dg_path.split('/')[2][1:]
--
1.8.3.1
Gris Ge
2015-04-22 15:20:40 UTC
Permalink
* When a pool is RAID0 or another RAID type with multiple data disks,
the free space generated by the 'pools_view' sqlite view will be incorrect,
as each volume's consumed size is counted multiple times: the old view
joined disks and volumes on the same pool, so a pool with N data disks
subtracted every volume's consumed size N times.
The fix is to generate temporary tables for total space, volume consumed
space, fs consumed space and sub-pool consumed space, and join them once
per pool.

* Since we changed the 'pools_view' view, bumped the version to 3.3.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/sim/simarray.py | 98 +++++++++++++++++++++++++++++++++-----------------
1 file changed, 66 insertions(+), 32 deletions(-)

diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index de43701..a0809f6 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -123,7 +123,7 @@ def data_disk_count(raid_type, disk_count):


class BackStore(object):
- VERSION = "3.2"
+ VERSION = "3.3"
VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
JOB_DEFAULT_DURATION = 1
JOB_DATA_TYPE_VOL = 1
@@ -353,40 +353,74 @@ def __init__(self, statefile, timeout):
"""
CREATE VIEW pools_view AS
SELECT
- pool.id,
- pool.name,
- pool.status,
- pool.status_info,
- pool.element_type,
- pool.unsupported_actions,
- pool.raid_type,
- pool.member_type,
- pool.parent_pool_id,
- ifnull(pool.total_space,
- ifnull(SUM(disk.total_space), 0))
- total_space,
- ifnull(pool.total_space,
- ifnull(SUM(disk.total_space), 0)) -
- ifnull(SUM(volume.consumed_size), 0) -
- ifnull(SUM(fs.consumed_size), 0) -
- ifnull(SUM(pool2.total_space), 0)
- free_space
-
+ pool0.id,
+ pool0.name,
+ pool0.status,
+ pool0.status_info,
+ pool0.element_type,
+ pool0.unsupported_actions,
+ pool0.raid_type,
+ pool0.member_type,
+ pool0.parent_pool_id,
+ pool1.total_space total_space,
+ pool1.total_space -
+ pool2.vol_consumed_size -
+ pool3.fs_consumed_size -
+ pool4.sub_pool_consumed_size free_space
FROM
- pools pool
- LEFT JOIN disks disk
- ON pool.id = disk.owner_pool_id AND
- disk.role = 'DATA'
- LEFT JOIN volumes volume
- ON volume.pool_id = pool.id
- LEFT JOIN fss fs
- ON fs.pool_id = pool.id
- LEFT JOIN pools pool2
- ON pool2.parent_pool_id = pool.id
+ pools pool0
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(pool.total_space,
+ ifnull(SUM(disk.total_space), 0))
+ total_space
+ FROM pools pool
+ LEFT JOIN disks disk
+ ON pool.id = disk.owner_pool_id AND
+ disk.role = 'DATA'
+ GROUP BY
+ pool.id
+ ) pool1 ON pool0.id = pool1.id
+
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(SUM(volume.consumed_size), 0)
+ vol_consumed_size
+ FROM pools pool
+ LEFT JOIN volumes volume
+ ON volume.pool_id = pool.id
+ GROUP BY
+ pool.id
+ ) pool2 ON pool0.id = pool2.id
+
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(SUM(fs.consumed_size), 0)
+ fs_consumed_size
+ FROM pools pool
+ LEFT JOIN fss fs
+ ON fs.pool_id = pool.id
+ GROUP BY
+ pool.id
+ ) pool3 ON pool0.id = pool3.id
+
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(SUM(sub_pool.total_space), 0)
+ sub_pool_consumed_size
+ FROM pools pool
+ LEFT JOIN pools sub_pool
+ ON sub_pool.parent_pool_id = pool.id
+ GROUP BY
+ pool.id
+ ) pool4 ON pool0.id = pool4.id

GROUP BY
- pool.id
- ;
+ pool0.id;
""")
sql_cmd += (
"""
--
1.8.3.1
Gris Ge
2015-04-22 15:20:38 UTC
Permalink
* Add lsm.Client.pool_raid_info() support:
* Use command 'storcli /c0/d0 show all J'.
* Use 'DG Drive LIST' section of output for disk_ids.
* Use 'TOPOLOGY' section of output for raid_type.
Since the command used here does not share the same data layout as the
"storcli /c0/v0 show all J" command used by the 'volume_raid_info()'
method, we now have two sets of code for detecting the RAID type.

* Set lsm.Capabilities.POOL_RAID_INFO.
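
* The code assumes a storcli JSON layout roughly like this (abbreviated and
  hypothetical; only the keys being read are shown):

      # 'storcli /c0/d0 show all J', relevant fragments:
      # 'DG Drive LIST': [{'EID:Slt': '252:1', ...}, ...]
      # 'TOPOLOGY':      [{'Arr': '-', 'Row': '-', 'DG': '0',
      #                    'Type': 'RAID1', ...}, ...]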

Changes in V3:

* Renamed to pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/megaraid/megaraid.py | 54 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)

diff --git a/plugin/megaraid/megaraid.py b/plugin/megaraid/megaraid.py
index 78603fc..256ddd6 100644
--- a/plugin/megaraid/megaraid.py
+++ b/plugin/megaraid/megaraid.py
@@ -259,6 +259,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUMES)
cap.set(Capabilities.VOLUME_RAID_INFO)
+ cap.set(Capabilities.POOL_MEMBER_INFO)
return cap

def _storcli_exec(self, storcli_cmds, flag_json=True):
@@ -546,3 +547,56 @@ def volume_raid_info(self, volume, flags=Client.FLAG_RSVD):
return [
raid_type, strip_size, disk_count, strip_size,
strip_size * strip_count]
+
+ @_handle_errors
+ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
+ lsi_dg_path = pool.plugin_data
+ # Check whether pool exists.
+ try:
+ dg_show_all_output = self._storcli_exec(
+                [lsi_dg_path, "show", "all"])
+ except ExecError as exec_error:
+ try:
+ json_output = json.loads(exec_error.stdout)
+ detail_error = json_output[
+ 'Controllers'][0]['Command Status']['Detailed Status']
+ except Exception:
+ raise exec_error
+
+ if detail_error and detail_error[0]['Status'] == 'Not found':
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_POOL,
+ "Pool not found")
+ raise
+
+ ctrl_num = lsi_dg_path.split('/')[1][1:]
+ lsm_disk_map = {}
+ disk_ids = []
+ for lsm_disk in self.disks():
+ lsm_disk_map[lsm_disk.plugin_data] = lsm_disk.id
+
+ for dg_disk_info in dg_show_all_output['DG Drive LIST']:
+ cur_lsi_disk_path = "/c%s" % ctrl_num + "/e%s/s%s" % tuple(
+ dg_disk_info['EID:Slt'].split(':'))
+ if cur_lsi_disk_path in lsm_disk_map.keys():
+ disk_ids.append(lsm_disk_map[cur_lsi_disk_path])
+ else:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "pool_member_info(): Failed to find disk id of %s" %
+ cur_lsi_disk_path)
+
+ raid_type = Volume.RAID_TYPE_UNKNOWN
+ dg_num = lsi_dg_path.split('/')[2][1:]
+ for dg_top in dg_show_all_output['TOPOLOGY']:
+ if dg_top['Arr'] == '-' and \
+ dg_top['Row'] == '-' and \
+ int(dg_top['DG']) == int(dg_num):
+ raid_type = _RAID_TYPE_MAP.get(
+ dg_top['Type'], Volume.RAID_TYPE_UNKNOWN)
+ break
+
+ if raid_type == Volume.RAID_TYPE_RAID1 and len(disk_ids) >= 4:
+ raid_type = Volume.RAID_TYPE_RAID10
+
+ return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
--
1.8.3.1
Gris Ge
2015-04-22 15:20:43 UTC
Permalink
* New commands:
* volume-create-raid
Create a RAID group and allocate all of its space to a single new volume.

* volume-create-raid-cap
Query the supported RAID types and strip sizes of the volume-create-raid
command.

* Manpage updated.

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

Changes in V4:

* Renamed to 'volume-raid-create' and 'volume-raid-create-cap' commands.

* Add bash auto completion for these commands and their aliases.
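
* Example invocations (system and disk IDs are illustrative):

      $ lsmcli volume-raid-create-cap --sys SYS_ID_0
      $ lsmcli volume-raid-create --name test_vol --raid-type RAID1 \
            --disk DISK_ID_0 --disk DISK_ID_1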

Signed-off-by: Gris Ge <***@redhat.com>
---
doc/man/lsmcli.1.in | 32 +++++++++++++++++++++
tools/bash_completion/lsmcli | 13 +++++++++
tools/lsmcli/cmdline.py | 67 +++++++++++++++++++++++++++++++++++++++++++-
tools/lsmcli/data_display.py | 36 +++++++++++++++++++++++-
4 files changed, 146 insertions(+), 2 deletions(-)

diff --git a/doc/man/lsmcli.1.in b/doc/man/lsmcli.1.in
index 6faf690..b1ef119 100644
--- a/doc/man/lsmcli.1.in
+++ b/doc/man/lsmcli.1.in
@@ -232,6 +232,34 @@ Optional. Provisioning type. Valid values are: DEFAULT, THIN, FULL.
Provisioning enabled volume. \fBFULL\fR means requiring a fully allocated
volume.

+.SS volume-raid-create
+Creates a volume on hardware RAID on given disks.
+.TP 15
+\fB--name\fR \fI<NAME>\fR
+Required. Volume name. It might be altered or ignored due to hardware RAID
+card vendor limitations.
+.TP
+\fB--raid-type\fR \fI<RAID_TYPE>\fR
+Required. Could be one of these values: \fBRAID0\fR, \fBRAID1\fR, \fBRAID5\fR,
+\fBRAID6\fR, \fBRAID10\fR, \fBRAID50\fR, \fBRAID60\fR. The supported RAID
+types of the current RAID card can be queried via the
+"\fBvolume-raid-create-cap\fR" command.
+.TP
+\fB--disk\fR \fI<DISK_ID>\fR
+Required. Repeatable. The disk ID for new RAID group.
+.TP
+\fB--strip-size\fR \fI<STRIP_SIZE>\fR
+Optional. The strip size in bytes on each disk. If not defined, the hardware
+card will use its vendor default value. The supported strip sizes of the
+current RAID card can be queried via "\fBvolume-raid-create-cap\fR".
+
+.SS volume-raid-create-cap
+Query support status of volume-raid-create command for current hardware RAID
+card.
+.TP 15
+\fB--sys\fR \fI<SYS_ID>\fR
+Required. ID of the system to query for capabilities.
+
.SS volume-delete
.TP 15
Delete a volume given its ID
@@ -586,6 +614,10 @@ Alias of 'list --type target_ports'
Alias of 'plugin-info'
.SS vc
Alias of 'volume-create'
+.SS vrc
+Alias of 'volume-raid-create'
+.SS vrcc
+Alias of 'volume-raid-create-cap'
.SS vd
Alias of 'volume-delete'
.SS vr
diff --git a/tools/bash_completion/lsmcli b/tools/bash_completion/lsmcli
index 164604f..eb8fe8e 100644
--- a/tools/bash_completion/lsmcli
+++ b/tools/bash_completion/lsmcli
@@ -125,6 +125,8 @@ function _lsm()
fs_dependants_args="--fs"
file_clone_args="--fs --src --dst --backing-snapshot"
pool_member_info_args="--pool"
+ volume_raid_create_args="--name --disk --raid-type --strip-size"
+ volume_raid_create_cap_args="--sys"

# These operations can potentially be slow and cause hangs depending on plugin and configuration
if [[ ${NO_VALUE_LOOKUP} -ne 0 ]] ; then
@@ -403,6 +405,17 @@ function _lsm()
return 0
;;
+        volume-raid-create|vrc)
+            possible_args "${volume_raid_create_args}"
+            COMPREPLY=( $(compgen -W "${potential_args}" -- ${cur}) )
+            return 0
+            ;;
+        volume-raid-create-cap|vrcc)
+            possible_args "${volume_raid_create_cap_args}"
+            COMPREPLY=( $(compgen -W "${potential_args}" -- ${cur}) )
+            return 0
+            ;;
         *)
;;
esac

diff --git a/tools/lsmcli/cmdline.py b/tools/lsmcli/cmdline.py
index d26da80..cf0f3d6 100644
--- a/tools/lsmcli/cmdline.py
+++ b/tools/lsmcli/cmdline.py
@@ -40,7 +40,7 @@
from lsm.lsmcli.data_display import (
DisplayData, PlugData, out,
vol_provision_str_to_type, vol_rep_type_str_to_type, VolumeRAIDInfo,
- PoolRAIDInfo)
+ PoolRAIDInfo, VcrCap)


## Wraps the invocation to the command line
@@ -242,6 +242,37 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
),

dict(
+ name='volume-raid-create',
+ help='Creates a RAIDed volume on hardware RAID',
+ args=[
+ dict(name="--name", help='volume name', metavar='<NAME>'),
+ dict(name="--disk", metavar='<DISK>',
+ help='Free disks for new RAIDed volume.\n'
+                      'This is a repeatable argument.',
+ action='append'),
+ dict(name="--raid-type",
+ help="RAID type for the new RAID group. "
+ "Should be one of these:\n %s" %
+ "\n ".join(
+ VolumeRAIDInfo.VOL_CREATE_RAID_TYPES_STR),
+ choices=VolumeRAIDInfo.VOL_CREATE_RAID_TYPES_STR,
+ type=str.upper),
+ ],
+ optional=[
+ dict(name="--strip-size",
+ help="Strip size. " + size_help),
+ ],
+ ),
+
+ dict(
+ name='volume-raid-create-cap',
+        help='Query capability of creating a RAIDed volume on hardware RAID',
+ args=[
+ dict(sys_id_opt),
+ ],
+ ),
+
+ dict(
name='volume-delete',
help='Deletes a volume given its id',
args=[
@@ -635,6 +666,8 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
['c', 'capabilities'],
['p', 'plugin-info'],
['vc', 'volume-create'],
+ ['vrc', 'volume-raid-create'],
+ ['vrcc', 'volume-raid-create-cap'],
['vd', 'volume-delete'],
['vr', 'volume-resize'],
['vm', 'volume-mask'],
@@ -1351,6 +1384,38 @@ def pool_member_info(self, args):
PoolRAIDInfo(
lsm_pool.id, *self.c.pool_member_info(lsm_pool))])

+ def volume_raid_create(self, args):
+ raid_type = VolumeRAIDInfo.raid_type_str_to_lsm(args.raid_type)
+
+ all_lsm_disks = self.c.disks()
+ lsm_disks = [d for d in all_lsm_disks if d.id in args.disk]
+ if len(lsm_disks) != len(args.disk):
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_DISK,
+ "Disk ID %s not found" %
+ ', '.join(set(args.disk) - set(d.id for d in all_lsm_disks)))
+
+ busy_disks = [d.id for d in lsm_disks if not d.status & Disk.STATUS_FREE]
+
+ if len(busy_disks) >= 1:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+ "Disk %s is not free" % ", ".join(busy_disks))
+
+ if args.strip_size:
+ strip_size = size_human_2_size_bytes(args.strip_size)
+ else:
+ strip_size = Volume.VCR_STRIP_SIZE_DEFAULT
+
+ self.display_data([
+ self.c.volume_raid_create(
+ args.name, raid_type, lsm_disks, strip_size)])
+
+ def volume_raid_create_cap(self, args):
+ lsm_sys = _get_item(self.c.systems(), args.sys, "System")
+ self.display_data([
+ VcrCap(lsm_sys.id, *self.c.volume_raid_create_cap_get(lsm_sys))])
+
## Displays file system dependants
def fs_dependants(self, args):
fs = _get_item(self.c.fs(), args.fs, "File System")
diff --git a/tools/lsmcli/data_display.py b/tools/lsmcli/data_display.py
index 115f15a..b9edf68 100644
--- a/tools/lsmcli/data_display.py
+++ b/tools/lsmcli/data_display.py
@@ -265,6 +265,9 @@ class VolumeRAIDInfo(object):
Volume.RAID_TYPE_UNKNOWN: 'UNKNOWN',
}

+ VOL_CREATE_RAID_TYPES_STR = [
+ 'RAID0', 'RAID1', 'RAID5', 'RAID6', 'RAID10', 'RAID50', 'RAID60']
+
def __init__(self, vol_id, raid_type, strip_size, disk_count,
min_io_size, opt_io_size):
self.vol_id = vol_id
@@ -278,6 +281,10 @@ def __init__(self, vol_id, raid_type, strip_size, disk_count,
def raid_type_to_str(raid_type):
return _enum_type_to_str(raid_type, VolumeRAIDInfo._RAID_TYPE_MAP)

+ @staticmethod
+ def raid_type_str_to_lsm(raid_type_str):
+ return _str_to_enum(raid_type_str, VolumeRAIDInfo._RAID_TYPE_MAP)
+

class PoolRAIDInfo(object):
_MEMBER_TYPE_MAP = {
@@ -298,6 +305,11 @@ def member_type_to_str(member_type):
return _enum_type_to_str(
member_type, PoolRAIDInfo._MEMBER_TYPE_MAP)

+class VcrCap(object):
+ def __init__(self, system_id, raid_types, strip_sizes):
+ self.system_id = system_id
+ self.raid_types = raid_types
+ self.strip_sizes = strip_sizes

class DisplayData(object):

@@ -598,6 +610,25 @@ def __init__(self):
'value_conv_human': POOL_RAID_INFO_VALUE_CONV_HUMAN,
}

+ VCR_CAP_HEADER = OrderedDict()
+ VCR_CAP_HEADER['system_id'] = 'System ID'
+ VCR_CAP_HEADER['raid_types'] = 'Supported RAID Types'
+ VCR_CAP_HEADER['strip_sizes'] = 'Supported Strip Sizes'
+
+ VCR_CAP_COLUMN_SKIP_KEYS = []
+
+ VCR_CAP_VALUE_CONV_ENUM = {
+ 'raid_types': lambda i: [VolumeRAIDInfo.raid_type_to_str(x) for x in i]
+ }
+ VCR_CAP_VALUE_CONV_HUMAN = ['strip_sizes']
+
+ VALUE_CONVERT[VcrCap] = {
+ 'headers': VCR_CAP_HEADER,
+ 'column_skip_keys': VCR_CAP_COLUMN_SKIP_KEYS,
+ 'value_conv_enum': VCR_CAP_VALUE_CONV_ENUM,
+ 'value_conv_human': VCR_CAP_VALUE_CONV_HUMAN,
+ }
+
@staticmethod
def _get_man_pro_value(obj, key, value_conv_enum, value_conv_human,
flag_human, flag_enum):
@@ -607,7 +638,10 @@ def _get_man_pro_value(obj, key, value_conv_enum, value_conv_human,
value = value_conv_enum[key](value)
if flag_human:
if key in value_conv_human:
- value = size_bytes_2_size_human(value)
+ if type(value) is list:
+ value = list(size_bytes_2_size_human(s) for s in value)
+ else:
+ value = size_bytes_2_size_human(value)
return value

@staticmethod
--
1.8.3.1
Gris Ge
2015-04-22 15:20:42 UTC
Permalink
* New methods:
    For detailed information, please refer to the docstrings of the Python
    methods.

* Python:
* lsm.Client.volume_raid_create_cap_get(self, system,
flags=lsm.Client.FLAG_RSVD)

Query supported RAID types and strip sizes of volume_create_raid().

* lsm.Client.volume_raid_create(self, name, raid_type, disks,
strip_size,
flags=lsm.Client.FLAG_RSVD)
      Create a disk RAID pool and allocate its full space to a single
      volume.

* C user API:
* int lsm_volume_create_raid_cap_get(
lsm_connect *c, lsm_system *system,
uint32_t **supported_raid_types,
uint32_t *supported_raid_type_count,
uint32_t **supported_strip_sizes,
uint32_t *supported_strip_size_count,
lsm_flag flags);
* int lsm_volume_create_raid(
lsm_connect *c, const char *name, lsm_volume_raid_type raid_type,
lsm_disk *disks[], uint32_t disk_count, uint32_t strip_size,
lsm_volume **new_volume, lsm_flag flags);

* C plugin interfaces:
* typedef int (*lsm_plug_volume_create_raid_cap_get)(
lsm_plugin_ptr c, lsm_system *system,
uint32_t **supported_raid_types,
uint32_t *supported_raid_type_count,
uint32_t **supported_strip_sizes, uint32_t
*supported_strip_size_count, lsm_flag flags)

* typedef int (*lsm_plug_volume_create_raid)(
lsm_plugin_ptr c, const char *name,
lsm_volume_raid_type raid_type, lsm_disk *disks[], uint32_t
disk_count, uint32_t strip_size, lsm_volume **new_volume,
lsm_flag flags);

* Added two above into 'lsm_ops_v1_2' struct.

* C internal functions for converting C uint32 array to C++ Value:
* values_to_uint32_array(
Value &value, uint32_t **uint32_array, uint32_t *count)
* uint32_array_to_value(
uint32_t *uint32_array, uint32_t count)

* New capability:
* Python: lsm.Capabilities.VOLUME_CREATE_RAID
* C: LSM_CAP_VOLUME_CREATE_RAID

* New errors:
* Python: lsm.ErrorNumber.NOT_FOUND_DISK, lsm.ErrorNumber.DISK_NOT_FREE
* C: LSM_ERR_NOT_FOUND_DISK, LSM_ERR_DISK_NOT_FREE

* Design notes:
    * Why not create pool only and allocate volume from it?
      Neither HP Smart Array nor LSI MegaRAID support this.
      Both require at least one volume on any pool (RAID group), and a
      pool (RAID group) is only created when creating a volume.
      This is also the root reason that this method is dedicated to
      hardware RAID cards, not to SAN or NAS.

    * Why not allow creating multiple volumes instead of one volume
      consuming all space?
      Even though both vendors above support creating multiple volumes
      in a single command, doing so would complicate the API design and
      introduce many new errors. In my humble opinion, the most common
      use case is to create a single volume and let the OS create
      partitions on it.

    * Why not store the supported RAID types and strip sizes in the
      existing lsm.Client.capabilities() call?
      The RAID type could be stored in capabilities(), but both the
      plugin implementation and the user code would then have to convert
      capabilities to RAID types. The strip size is a number; even though
      the two vendors mentioned above support the same list of strip
      sizes, there is no reason to hard-code that limitation, and again,
      a conversion from capability to strip size integer would be
      required if it were saved in capabilities().

    * Why not store the supported RAID types returned by
      volume_raid_create_cap_get() in a bitmap?
      A conversion would be required to do so. We just want to make
      life easier for plugin and API users.

    * Why is strip size a mandatory argument, and where is the strip
      size change method?
      Changing the strip size after volume creation will normally take a
      long time for data migration. We will not introduce a strip size
      change method without a convincing use case.
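
  As a rough illustration of the intended calling sequence on the Python
  side (a sketch only; it assumes the first reported RAID type accepts a
  two-disk group, as the simulator's RAID0 does):

      lsm_client = lsm.Client('sim://')
      lsm_sys = lsm_client.systems()[0]
      raid_types, strip_sizes = \
          lsm_client.volume_raid_create_cap_get(lsm_sys)
      free_disks = [d for d in lsm_client.disks()
                    if d.status & lsm.Disk.STATUS_FREE]
      new_vol = lsm_client.volume_raid_create(
          'test_vol', raid_types[0], free_disks[:2], strip_sizes[0])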

Changes in V2:

* Rebased on pool_raid_info->pool_member_info rename changes.
* Fix 'comparison between signed and unsigned integer' in
lsm_volume_create_raid() of lsm_mgmt.cpp.
* Initialize pointers in handle_volume_create_raid_cap_get() of
lsm_plugin_ipc.cpp.

Changes in V4:

* Use 'volume_raid_create' instead:
* lsm.Client.volume_raid_create()
* lsm.Client.volume_raid_create_cap_get()
* lsm_volume_raid_create_cap_get()
* lsm_volume_raid_create()
* lsm_plug_volume_raid_create()
* lsm_plug_volume_raid_create_cap_get()

* Use this capability instead:
* LSM_CAP_VOLUME_RAID_CREATE/lsm.Capabilities.VOLUME_RAID_CREATE

* Changed method description to 'Create a disk RAID pool and allocate its
  entire space to the new volume.'

* Free members and set output values to sane defaults if an error occurred
  in lsm_volume_raid_create_cap_get() of lsm_mgmt.cpp.

* Only free uint32 array if not NULL in handle_volume_create_raid_cap_get()
of lsm_plugin_ipc.cpp.

* Free lsm_sys memory before quit in handle_volume_create_raid_cap_get() of
lsm_plugin_ipc.cpp.

* Fix incorrect strip_size data conversion from JSON to C in
handle_volume_create_raid() of lsm_plugin_ipc.cpp.

* Free lsm_volume memory before quit in handle_volume_create_raid() of
lsm_plugin_ipc.cpp.

* Updated docstring of lsm.Client.volume_raid_create_cap_get() by
  removing incorrect RAID types (RAID 15/16/51/61) as no plugin supports
  them yet.

* Updated docstring of lsm.Client.volume_raid_create() by removing incorrect
capabilities.

* Updated docstring of lsm.Client.volume_raid_create() by removing the
  NAME_CONFLICT error for duplicate calls. A duplicate call should always
  raise the DISK_NOT_FREE error.

* Updated docstring of lsm.Client.volume_raid_create() by mentioning that
  the name of the new pool should be automatically chosen by the plugin.

Signed-off-by: Gris Ge <***@redhat.com>
---
c_binding/include/libstoragemgmt/libstoragemgmt.h | 62 +++++++
.../libstoragemgmt/libstoragemgmt_capabilities.h | 1 +
.../include/libstoragemgmt/libstoragemgmt_error.h | 2 +
.../libstoragemgmt/libstoragemgmt_plug_interface.h | 57 +++++++
.../include/libstoragemgmt/libstoragemgmt_types.h | 3 +
c_binding/lsm_convert.cpp | 41 +++++
c_binding/lsm_convert.hpp | 12 ++
c_binding/lsm_mgmt.cpp | 128 ++++++++++++++
c_binding/lsm_plugin_ipc.cpp | 96 ++++++++++-
python_binding/lsm/_client.py | 187 +++++++++++++++++++++
python_binding/lsm/_common.py | 3 +
python_binding/lsm/_data.py | 3 +
12 files changed, 594 insertions(+), 1 deletion(-)

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt.h b/c_binding/include/libstoragemgmt/libstoragemgmt.h
index 85022b4..869d5fd 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt.h
@@ -891,6 +891,68 @@ int LSM_DLL_EXPORT lsm_pool_member_info(
lsm_pool_member_type *member_type, lsm_string_list **member_ids,
lsm_flag flags);

+/**
+ * Query all supported RAID types and strip sizes which could be used
+ * in lsm_volume_raid_create() functions.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid connection
+ * @param[in] system
+ * The lsm_sys type.
+ * @param[out] supported_raid_types
+ * The pointer of uint32_t array. Containing
+ * lsm_volume_raid_type values.
+ * You need to free this memory by yourself.
+ * @param[out] supported_raid_type_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_raid_types array.
+ * @param[out] supported_strip_sizes
+ * The pointer of uint32_t array. Containing
+ * all supported strip sizes.
+ * You need to free this memory by yourself.
+ * @param[out] supported_strip_size_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_strip_sizes array.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+int LSM_DLL_EXPORT lsm_volume_raid_create_cap_get(
+ lsm_connect *c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags);
+
+/**
+ * Create a disk RAID pool and allocate its entire space to the new volume.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid connection
+ * @param[in] name String. Name for the new volume. It might be ignored or
+ *                 altered on some hardware RAID cards in order to fit
+ *                 their limitations.
+ * @param[in] raid_type
+ * Enum of lsm_volume_raid_type. Please refer to the returns
+ * of lsm_volume_raid_create_cap_get() function for
+ *                 supported RAID types.
+ * @param[in] disks
+ * An array of lsm_disk types
+ * @param[in] disk_count
+ * The count of lsm_disk in 'disks' argument.
+ *                 Must be 1 or greater.
+ * @param[in] strip_size
+ * uint32_t. The strip size in bytes. Please refer to
+ * the returns of lsm_volume_raid_create_cap_get() function
+ * for supported strip sizes.
+ * @param[out] new_volume
+ * Newly created volume, Pointer to the lsm_volume type
+ * pointer.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+int LSM_DLL_EXPORT lsm_volume_raid_create(
+ lsm_connect *c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags);
+
#ifdef __cplusplus
}
#endif
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
index 943f4a4..07bf8b0 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
@@ -117,6 +117,7 @@ typedef enum {
LSM_CAP_POOL_MEMBER_INFO = 221,
/**^ Query pool member information */

+ LSM_CAP_VOLUME_RAID_CREATE = 222,
} lsm_capability_type;

/**
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_error.h b/c_binding/include/libstoragemgmt/libstoragemgmt_error.h
index 0b28ddc..29c460a 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_error.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_error.h
@@ -63,6 +63,7 @@ typedef enum {
LSM_ERR_NOT_FOUND_VOLUME = 205, /**< Specified volume not found */
LSM_ERR_NOT_FOUND_NFS_EXPORT = 206, /**< NFS export not found */
LSM_ERR_NOT_FOUND_SYSTEM = 208, /**< System not found */
+ LSM_ERR_NOT_FOUND_DISK = 209,

LSM_ERR_NOT_LICENSED = 226, /**< Need license for feature */

@@ -90,6 +91,7 @@ typedef enum {

LSM_ERR_EMPTY_ACCESS_GROUP = 511,
LSM_ERR_POOL_NOT_READY = 512,
+ LSM_ERR_DISK_NOT_FREE = 513,

} lsm_error_number;

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
index cdc8dc9..460054d 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
@@ -853,6 +853,61 @@ typedef int (*lsm_plug_pool_member_info)(
lsm_pool_member_type *member_type, lsm_string_list **member_ids,
lsm_flag flags);

+/**
+ * Query all supported RAID types and strip sizes which could be used
+ * in lsm_volume_raid_create() functions.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid lsm plug-in pointer
+ * @param[in] system
+ * The lsm_sys type.
+ * @param[out] supported_raid_types
+ * The pointer of uint32_t array. Containing
+ * lsm_volume_raid_type values.
+ * @param[out] supported_raid_type_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_raid_types array.
+ * @param[out] supported_strip_sizes
+ * The pointer of uint32_t array. Containing
+ * all supported strip sizes.
+ * @param[out] supported_strip_size_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_strip_sizes array.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+typedef int (*lsm_plug_volume_raid_create_cap_get)(
+ lsm_plugin_ptr c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags);
+
+/**
+ * Create a disk RAID pool and allocate its entire space to the new volume.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid lsm plug-in pointer
+ * @param[in] name String. Name for the new volume. It might be ignored or
+ *                 altered on some hardware RAID cards in order to fit
+ *                 their limitations.
+ * @param[in] raid_type
+ * Enum of lsm_volume_raid_type.
+ * @param[in] disks
+ * An array of lsm_disk types
+ * @param[in] disk_count
+ * The count of lsm_disk in 'disks' argument.
+ * @param[in] strip_size
+ * uint32_t. The strip size in bytes.
+ * @param[out] new_volume
+ * Newly created volume, Pointer to the lsm_volume type
+ * pointer.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+typedef int (*lsm_plug_volume_raid_create)(
+ lsm_plugin_ptr c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags);
+
/** \struct lsm_ops_v1_2
* \brief Functions added in version 1.2
* NOTE: This structure will change during the developement util version 1.2
@@ -862,6 +917,8 @@ struct lsm_ops_v1_2 {
lsm_plug_volume_raid_info vol_raid_info;
/**^ Query volume RAID information*/
lsm_plug_pool_member_info pool_member_info;
+ lsm_plug_volume_raid_create_cap_get vol_create_raid_cap_get;
+ lsm_plug_volume_raid_create vol_create_raid;
};

/**
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
index 115ec5a..667c320 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
@@ -297,6 +297,9 @@ typedef enum {
LSM_TARGET_PORT_TYPE_ISCSI = 4
} lsm_target_port_type;

+#define LSM_VOLUME_VCR_STRIP_SIZE_DEFAULT 0
+/** ^ Plugin and hardware RAID will use their default strip size */
+
#ifdef __cplusplus
}
#endif
diff --git a/c_binding/lsm_convert.cpp b/c_binding/lsm_convert.cpp
index 7baf2e6..9b799ab 100644
--- a/c_binding/lsm_convert.cpp
+++ b/c_binding/lsm_convert.cpp
@@ -617,3 +617,44 @@ Value target_port_to_value(lsm_target_port *tp)
}
return Value();
}
+
+int values_to_uint32_array(
+ Value &value, uint32_t **uint32_array, uint32_t *count)
+{
+ int rc = LSM_ERR_OK;
+ try {
+ std::vector<Value> data = value.asArray();
+ *count = data.size();
+ if (*count){
+ *uint32_array = (uint32_t *)malloc(sizeof(uint32_t) * *count);
+ if (*uint32_array){
+ uint32_t i;
+ for( i = 0; i < *count; i++ ){
+ (*uint32_array)[i] = data[i].asUint32_t();
+ }
+ }else{
+ rc = LSM_ERR_NO_MEMORY;
+ }
+ }
+ } catch( const ValueException &ve) {
+ if( *count ) {
+ free(*uint32_array);
+ *uint32_array = NULL;
+ *count = 0;
+ }
+ rc = LSM_ERR_LIB_BUG;
+ }
+ return rc;
+}
+
+Value uint32_array_to_value(uint32_t *uint32_array, uint32_t count)
+{
+ std::vector<Value> rc;
+ if( uint32_array && count ) {
+ uint32_t i;
+ for( i = 0; i < count; i++ ) {
+ rc.push_back(uint32_array[i]);
+ }
+ }
+ return rc;
+}
diff --git a/c_binding/lsm_convert.hpp b/c_binding/lsm_convert.hpp
index fe1e91d..907a229 100644
--- a/c_binding/lsm_convert.hpp
+++ b/c_binding/lsm_convert.hpp
@@ -284,4 +284,16 @@ lsm_target_port LSM_DLL_LOCAL *value_to_target_port(Value &tp);
*/
Value LSM_DLL_LOCAL target_port_to_value(lsm_target_port *tp);

+/**
+ * Converts a value to array of uint32.
+ */
+int LSM_DLL_LOCAL values_to_uint32_array(
+ Value &value, uint32_t **uint32_array, uint32_t *count);
+
+/**
+ * Converts an array of uint32 to a value.
+ */
+Value LSM_DLL_LOCAL uint32_array_to_value(
+ uint32_t *uint32_array, uint32_t count);
+
#endif
diff --git a/c_binding/lsm_mgmt.cpp b/c_binding/lsm_mgmt.cpp
index 7469127..d57edcc 100644
--- a/c_binding/lsm_mgmt.cpp
+++ b/c_binding/lsm_mgmt.cpp
@@ -2268,3 +2268,131 @@ int lsm_nfs_export_delete( lsm_connect *c, lsm_nfs_export *e, lsm_flag flags)
int rc = rpc(c, "export_remove", parameters, response);
return rc;
}
+
+int lsm_volume_raid_create_cap_get(
+ lsm_connect *c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags)
+{
+ CONN_SETUP(c);
+
+ if( !supported_raid_types || !supported_raid_type_count ||
+ !supported_strip_sizes || !supported_strip_size_count) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ std::map<std::string, Value> p;
+ p["system"] = system_to_value(system);
+ p["flags"] = Value(flags);
+
+ Value parameters(p);
+ Value response;
+
+ int rc = rpc(c, "volume_raid_create_cap_get", parameters, response);
+ try {
+ std::vector<Value> j = response.asArray();
+
+ rc = values_to_uint32_array(
+ j[0], supported_raid_types, supported_raid_type_count);
+
+ if( rc != LSM_ERR_OK ){
+ *supported_raid_types = NULL;
+ *supported_raid_type_count = 0;
+ *supported_strip_sizes = NULL;
+ *supported_strip_size_count = 0;
+ return rc;
+ }
+
+ rc = values_to_uint32_array(
+ j[1], supported_strip_sizes, supported_strip_size_count);
+ if( rc != LSM_ERR_OK ){
+ free(*supported_raid_types);
+ *supported_raid_types = NULL;
+ *supported_raid_type_count = 0;
+ *supported_strip_sizes = NULL;
+ *supported_strip_size_count = 0;
+ }
+ } catch( const ValueException &ve ) {
+ rc = logException(c, LSM_ERR_LIB_BUG, "Unexpected type",
+ ve.what());
+ }
+ return rc;
+}
+
+int lsm_volume_raid_create(
+ lsm_connect *c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags)
+{
+ CONN_SETUP(c);
+
+ if (disk_count == 0){
+ return logException(
+            c, LSM_ERR_INVALID_ARGUMENT, "Requires at least one disk", NULL);
+ }
+
+ if (raid_type == LSM_VOLUME_RAID_TYPE_RAID1 && disk_count != 2){
+ return logException(
+ c, LSM_ERR_INVALID_ARGUMENT, "RAID 1 only allows two disks", NULL);
+ }
+
+ if (raid_type == LSM_VOLUME_RAID_TYPE_RAID5 && disk_count < 3){
+ return logException(
+            c, LSM_ERR_INVALID_ARGUMENT, "RAID 5 requires 3 or more disks",
+ NULL);
+ }
+
+ if (raid_type == LSM_VOLUME_RAID_TYPE_RAID6 && disk_count < 4){
+ return logException(
+            c, LSM_ERR_INVALID_ARGUMENT, "RAID 6 requires 4 or more disks",
+ NULL);
+ }
+
+    if (raid_type == LSM_VOLUME_RAID_TYPE_RAID10 &&
+        (disk_count % 2 || disk_count < 4)){
+        return logException(
+            c, LSM_ERR_INVALID_ARGUMENT,
+            "RAID 10 requires an even disk count and 4 or more disks",
+            NULL);
+    }
+    if (raid_type == LSM_VOLUME_RAID_TYPE_RAID50 &&
+        (disk_count % 2 || disk_count < 6)){
+        return logException(
+            c, LSM_ERR_INVALID_ARGUMENT,
+            "RAID 50 requires an even disk count and 6 or more disks",
+            NULL);
+    }
+    if (raid_type == LSM_VOLUME_RAID_TYPE_RAID60 &&
+        (disk_count % 2 || disk_count < 8)){
+        return logException(
+            c, LSM_ERR_INVALID_ARGUMENT,
+            "RAID 60 requires an even disk count and 8 or more disks",
+            NULL);
+    }
+
+ if (CHECK_RP(new_volume)){
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ std::map<std::string, Value> p;
+ p["name"] = Value(name);
+ p["raid_type"] = Value((int32_t)raid_type);
+ p["strip_size"] = Value((int32_t)strip_size);
+ p["flags"] = Value(flags);
+ std::vector<Value> disks_value;
+ for (uint32_t i = 0; i < disk_count; i++){
+ disks_value.push_back(disk_to_value(disks[i]));
+ }
+ p["disks"] = disks_value;
+
+ Value parameters(p);
+ Value response;
+
+ int rc = rpc(c, "volume_raid_create", parameters, response);
+ if( LSM_ERR_OK == rc ) {
+ *new_volume = value_to_volume(response);
+ }
+ return rc;
+
+}
diff --git a/c_binding/lsm_plugin_ipc.cpp b/c_binding/lsm_plugin_ipc.cpp
index 89b7674..b4d7982 100644
--- a/c_binding/lsm_plugin_ipc.cpp
+++ b/c_binding/lsm_plugin_ipc.cpp
@@ -2200,6 +2200,98 @@ static int iscsi_chap(lsm_plugin_ptr p, Value &params, Value &response)
}
return rc;
}
+static int handle_volume_raid_create_cap_get(
+ lsm_plugin_ptr p, Value &params, Value &response)
+{
+ int rc = LSM_ERR_NO_SUPPORT;
+ if( p && p->ops_v1_2 && p->ops_v1_2->vol_create_raid_cap_get ) {
+ Value v_system = params["system"];
+
+ if( IS_CLASS_SYSTEM(v_system) &&
+ LSM_FLAG_EXPECTED_TYPE(params) ) {
+ lsm_system *sys = value_to_system(v_system);
+
+ uint32_t *supported_raid_types = NULL;
+ uint32_t supported_raid_type_count = 0;
+
+ uint32_t *supported_strip_sizes = NULL;
+ uint32_t supported_strip_size_count = 0;
+
+ rc = p->ops_v1_2->vol_create_raid_cap_get(
+ p, sys, &supported_raid_types, &supported_raid_type_count,
+ &supported_strip_sizes, &supported_strip_size_count,
+ LSM_FLAG_GET_VALUE(params));
+
+ if( LSM_ERR_OK == rc ) {
+ std::vector<Value> result;
+ result.push_back(
+ uint32_array_to_value(
+ supported_raid_types, supported_raid_type_count));
+ result.push_back(
+ uint32_array_to_value(
+ supported_strip_sizes, supported_strip_size_count));
+ response = Value(result);
+ free(supported_raid_types);
+ free(supported_strip_sizes);
+ }
+
+ lsm_system_record_free(sys);
+
+ } else {
+ rc = LSM_ERR_TRANSPORT_INVALID_ARG;
+ }
+ }
+ return rc;
+
+}
+
+static int handle_volume_raid_create(
+ lsm_plugin_ptr p, Value &params, Value &response)
+{
+ int rc = LSM_ERR_NO_SUPPORT;
+ if( p && p->ops_v1_2 && p->ops_v1_2->vol_create_raid ) {
+ Value v_name = params["name"];
+ Value v_raid_type = params["raid_type"];
+ Value v_strip_size = params["strip_size"];
+ Value v_disks = params["disks"];
+
+ if( Value::string_t == v_name.valueType() &&
+ Value::numeric_t == v_raid_type.valueType() &&
+ Value::numeric_t == v_strip_size.valueType() &&
+ Value::array_t == v_disks.valueType() &&
+ LSM_FLAG_EXPECTED_TYPE(params) ) {
+
+ lsm_disk **disks = NULL;
+ uint32_t disk_count = 0;
+ rc = value_array_to_disks(v_disks, &disks, &disk_count);
+ if( LSM_ERR_OK != rc ) {
+ return rc;
+ }
+
+ const char *name = v_name.asC_str();
+ lsm_volume_raid_type raid_type =
+ (lsm_volume_raid_type) v_raid_type.asInt32_t();
+ uint32_t strip_size = v_strip_size.asUint32_t();
+
+ lsm_volume *new_vol = NULL;
+
+ rc = p->ops_v1_2->vol_create_raid(
+ p, name, raid_type, disks, disk_count, strip_size,
+ &new_vol, LSM_FLAG_GET_VALUE(params));
+
+ if( LSM_ERR_OK == rc ) {
+ response = volume_to_value(new_vol);
+ lsm_volume_record_free(new_vol);
+ }
+
+ lsm_disk_record_array_free(disks, disk_count);
+
+ } else {
+ rc = LSM_ERR_TRANSPORT_INVALID_ARG;
+ }
+ }
+ return rc;
+}

/**
* map of function pointers
@@ -2255,7 +2347,9 @@ static std::map<std::string,handler> dispatch = static_map<std::string,handler>
("volumes_accessible_by_access_group", vol_accessible_by_ag)
("volumes", handle_volumes)
("volume_raid_info", handle_volume_raid_info)
- ("pool_member_info", handle_pool_member_info);
+ ("pool_member_info", handle_pool_member_info)
+ ("volume_raid_create", handle_volume_raid_create)
+ ("volume_raid_create_cap_get", handle_volume_raid_create_cap_get);

static int process_request(lsm_plugin_ptr p, const std::string &method, Value &request,
Value &response)
diff --git a/python_binding/lsm/_client.py b/python_binding/lsm/_client.py
index f3c9e6c..16929a5 100644
--- a/python_binding/lsm/_client.py
+++ b/python_binding/lsm/_client.py
@@ -1172,3 +1172,190 @@ def pool_member_info(self, pool, flags=FLAG_RSVD):
lsm.Capabilities.POOL_MEMBER_INFO
"""
return self._tp.rpc('pool_member_info', _del_self(locals()))
+
+ @_return_requires([[int], [int]])
+ def volume_raid_create_cap_get(self, system, flags=FLAG_RSVD):
+ """
+ lsm.Client.volume_raid_create_cap_get(
+ self, system, flags=lsm.Client.FLAG_RSVD)
+
+ Version:
+ 1.2
+ Usage:
+ This method is dedicated to local hardware RAID cards.
+ Query out all supported RAID types and strip sizes which could
+ be used by lsm.Client.volume_raid_create() method.
+ Parameters:
+ system (lsm.System)
+ Instance of lsm.System
+ flags (int)
+ Optional. Reserved for future use.
+ Should be set as lsm.Client.FLAG_RSVD.
+ Returns:
+ [raid_types, strip_sizes]
+ raid_types ([int])
+ List of integer, possible values are:
+ Volume.RAID_TYPE_RAID0
+ Volume.RAID_TYPE_RAID1
+ Volume.RAID_TYPE_RAID5
+ Volume.RAID_TYPE_RAID6
+ Volume.RAID_TYPE_RAID10
+ Volume.RAID_TYPE_RAID50
+ Volume.RAID_TYPE_RAID60
+ strip_sizes ([int])
+ List of integer. Stripe size in bytes.
+ SpecialExceptions:
+ LsmError
+ lsm.ErrorNumber.NO_SUPPORT
+ Method not supported.
+ Sample:
+ lsm_client = lsm.Client('sim://')
+ lsm_sys = lsm_client.systems()[0]
+ disks = lsm_client.disks(
+ search_key='system_id', search_value=lsm_sys.id)
+
+ free_disks = [d for d in disks if d.status == Disk.STATUS_FREE]
+ supported_raid_types, supported_strip_sizes = \
+ lsm_client.volume_raid_create_cap_get(lsm_sys)
+ new_vol = lsm_client.volume_raid_create(
+ 'test_volume_raid_create', supported_raid_types[0],
+ free_disks, supported_strip_sizes[0])
+
+ Capability:
+            lsm.Capabilities.VOLUME_RAID_CREATE
+ This method is mandatory when volume_raid_create() is
+ supported.
+ """
+ return self._tp.rpc('volume_raid_create_cap_get', _del_self(locals()))
+
+ @_return_requires(Volume)
+ def volume_raid_create(self, name, raid_type, disks, strip_size,
+ flags=FLAG_RSVD):
+ """
+ lsm.Client.volume_raid_create(self, name, raid_type, disks,
+ strip_size, flags=lsm.Client.FLAG_RSVD)
+
+ Version:
+ 1.2
+ Usage:
+ This method is dedicated to local hardware RAID cards.
+ Create a disk RAID pool and allocate entire storage space to
+ new volume using requested volume name.
+ When dealing with RAID10, 50 or 60, the first half part of
+            'disks' will be located in one bottom layer RAID group.
+            The new volume and new pool will be created within the same
+            system as the provided disks.
+            This method does not allow duplicate calls; when a duplicate
+            call is issued, an LsmError with ErrorNumber.DISK_NOT_FREE
+            will be raised. Users should check disk.status for
+            Disk.STATUS_FREE before invoking this method.
+ Parameters:
+ name (string)
+                The name for the new volume.
+                The requested volume name might be ignored due to
+                restrictions of hardware RAID vendors.
+                The pool name will be automatically chosen by the plugin.
+ raid_type (int)
+ The RAID type for the RAID group, possible values are:
+ Volume.RAID_TYPE_RAID0
+ Volume.RAID_TYPE_RAID1
+ Volume.RAID_TYPE_RAID5
+ Volume.RAID_TYPE_RAID6
+                    Volume.RAID_TYPE_RAID10
+                    Volume.RAID_TYPE_RAID50
+                    Volume.RAID_TYPE_RAID60
+                Please check the returns of volume_raid_create_cap_get()
+                for all supported RAID types of the current hardware RAID
+                card.
+            disks ([lsm.Disk])
+                A list of lsm.Disk objects; free disks for the new RAID group.
+ strip_size (int)
+                The strip size in bytes.
+                Setting strip_size to Volume.VCR_STRIP_SIZE_DEFAULT
+                allows hardware RAID cards to choose their default value.
+                Please use the volume_raid_create_cap_get() method to get
+                all supported strip sizes of the current hardware RAID card.
+                Volume.VCR_STRIP_SIZE_DEFAULT is always supported when
+                lsm.Capabilities.VOLUME_RAID_CREATE is supported.
+ flags (int)
+ Optional. Reserved for future use.
+ Should be set as lsm.Client.FLAG_RSVD.
+ Returns:
+ lsm.Volume
+ The lsm.Volume object for newly created volume.
+ SpecialExceptions:
+ LsmError
+ lsm.ErrorNumber.NO_SUPPORT
+ Method not supported or RAID type not supported.
+ lsm.ErrorNumber.DISK_NOT_FREE
+ Disk is not in Disk.STATUS_FREE status.
+ lsm.ErrorNumber.NOT_FOUND_DISK
+ Disk not found
+ lsm.ErrorNumber.INVALID_ARGUMENT
+ 1. Invalid input argument data.
+ 2. Disks are not from the same system.
+ 3. Disks are not from the same enclosure.
+ 4. Invalid strip_size.
+                    5. Disk count does not meet the minimum requirement:
+ RAID1: len(disks) == 2
+ RAID5: len(disks) >= 3
+ RAID6: len(disks) >= 4
+ RAID10: len(disks) % 2 == 0 and len(disks) >= 4
+ RAID50: len(disks) % 2 == 0 and len(disks) >= 6
+ RAID60: len(disks) % 2 == 0 and len(disks) >= 8
+ lsm.ErrorNumber.NAME_CONFLICT
+                    Requested name is already used by another volume.
+ Sample:
+ lsm_client = lsm.Client('sim://')
+ disks = lsm_client.disks()
+ free_disks = [d for d in disks if d.status == Disk.STATUS_FREE]
+            new_vol = lsm_client.volume_raid_create(
+                'raid0_vol1', Volume.RAID_TYPE_RAID0, free_disks,
+                Volume.VCR_STRIP_SIZE_DEFAULT)
+ Capability:
+            lsm.Capabilities.VOLUME_RAID_CREATE
+                Indicates the current system supports volume_raid_create().
+ At least one RAID type should be supported.
+ The strip_size == Volume.VCR_STRIP_SIZE_DEFAULT is supported.
+ """
+ if len(disks) == 0:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: no disk included")
+
+ if raid_type == Volume.RAID_TYPE_RAID1 and len(disks) != 2:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input disks argument: RAID 1 only allows 2 disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID5 and len(disks) < 3:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input disks argument: RAID 5 requires 3 or more "
+                "disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID6 and len(disks) < 4:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input disks argument: RAID 6 requires 4 or more "
+                "disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID10:
+ if len(disks) % 2 or len(disks) < 4:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: "
+                    "RAID 10 requires an even disk count and 4 or more disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID50:
+ if len(disks) % 2 or len(disks) < 6:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: "
+                    "RAID 50 requires an even disk count and 6 or more disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID60:
+ if len(disks) % 2 or len(disks) < 8:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: "
+                    "RAID 60 requires an even disk count and 8 or more disks")
+
+ return self._tp.rpc('volume_raid_create', _del_self(locals()))
diff --git a/python_binding/lsm/_common.py b/python_binding/lsm/_common.py
index 4c87661..be0c734 100644
--- a/python_binding/lsm/_common.py
+++ b/python_binding/lsm/_common.py
@@ -447,6 +447,7 @@ class ErrorNumber(object):
NOT_FOUND_VOLUME = 205
NOT_FOUND_NFS_EXPORT = 206
NOT_FOUND_SYSTEM = 208
+ NOT_FOUND_DISK = 209

NOT_LICENSED = 226

@@ -478,6 +479,8 @@ class ErrorNumber(object):

POOL_NOT_READY = 512 # Pool is not ready for create/resize/etc

+    DISK_NOT_FREE = 513  # Disk is not in Disk.STATUS_FREE status.
+
_LOCALS = locals()

@staticmethod
diff --git a/python_binding/lsm/_data.py b/python_binding/lsm/_data.py
index 2f42eaf..0e16311 100644
--- a/python_binding/lsm/_data.py
+++ b/python_binding/lsm/_data.py
@@ -306,6 +306,8 @@ class Volume(IData):
MIN_IO_SIZE_UNKNOWN = 0
OPT_IO_SIZE_UNKNOWN = 0

+ VCR_STRIP_SIZE_DEFAULT = 0
+
def __init__(self, _id, _name, _vpd83, _block_size, _num_of_blocks,
_admin_state, _system_id, _pool_id, _plugin_data=None):
self._id = _id # Identifier
@@ -760,6 +762,7 @@ class Capabilities(IData):

DISKS = 220
POOL_MEMBER_INFO = 221
+ VOLUME_RAID_CREATE = 222

def _to_dict(self):
return {'class': self.__class__.__name__,
--
1.8.3.1
Gris Ge
2015-04-22 15:20:45 UTC
Permalink
* Updated lsm_ops_v1_2 structure by adding two methods:
* lsm_plug_volume_create_raid_cap_get()
* lsm_plug_volume_create_raid()

* These two methods simply return no support.

* Tested the C plugin interface by returning fake data, then changed it
  back to no support. We will implement the real methods in the future.

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

Changes in V4:

* Method renamed to volume_raid_create().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/simc/simc_lsmplugin.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)

diff --git a/plugin/simc/simc_lsmplugin.c b/plugin/simc/simc_lsmplugin.c
index 6aa7bdb..5b9be3f 100644
--- a/plugin/simc/simc_lsmplugin.c
+++ b/plugin/simc/simc_lsmplugin.c
@@ -1001,9 +1001,29 @@ static int pool_member_info(
return rc;
}

+static int volume_raid_create_cap_get(
+ lsm_plugin_ptr c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags)
+{
+ return LSM_ERR_NO_SUPPORT;
+}
+
+static int volume_raid_create(
+ lsm_plugin_ptr c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags)
+{
+ return LSM_ERR_NO_SUPPORT;
+}
+
static struct lsm_ops_v1_2 ops_v1_2 = {
volume_raid_info,
pool_member_info,
+ volume_raid_create_cap_get,
+ volume_raid_create,
};

static int volume_enable_disable(lsm_plugin_ptr c, lsm_volume *v,
--
1.8.3.1
Gris Ge
2015-04-22 15:20:44 UTC
Permalink
* Add support of these methods:
* volume_create_raid()
* volume_create_raid_cap_get()

* Add 'strip_size' property into pool.
* Add 'is_hw_raid_vol' property into volume.
  If true, volume_delete() will also delete the parent pool (see the
  sketch below).

* Bumped data version to 3.4.
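
A minimal sketch of the behaviour this enables in the simulator (assumes at
least two free disks; like other calls, volume_delete() may return a job ID
to poll):

    import lsm

    c = lsm.Client('sim://')
    free_disks = [d for d in c.disks() if d.status & lsm.Disk.STATUS_FREE]
    vol = c.volume_raid_create(
        'hw_raid_vol', lsm.Volume.RAID_TYPE_RAID1, free_disks[:2],
        lsm.Volume.VCR_STRIP_SIZE_DEFAULT)
    c.volume_delete(vol)  # is_hw_raid_vol is set, so the pool is removed too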

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

Changes in V4:

* Method renamed to volume_raid_create().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/sim/simarray.py | 119 ++++++++++++++++++++++++++++++++++++++++++------
plugin/sim/simulator.py | 8 ++++
2 files changed, 113 insertions(+), 14 deletions(-)

diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index a0809f6..3b4ec2a 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -123,7 +123,7 @@ def data_disk_count(raid_type, disk_count):


class BackStore(object):
- VERSION = "3.3"
+ VERSION = "3.4"
VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
JOB_DEFAULT_DURATION = 1
JOB_DATA_TYPE_VOL = 1
@@ -133,7 +133,7 @@ class BackStore(object):
SYS_ID = "sim-01"
SYS_NAME = "LSM simulated storage plug-in"
BLK_SIZE = 512
- STRIP_SIZE = 131072 # 128 KiB
+ DEFAULT_STRIP_SIZE = 128 * 1024 # 128 KiB

_LIST_SPLITTER = '#'

@@ -142,7 +142,8 @@ class BackStore(object):
POOL_KEY_LIST = [
'id', 'name', 'status', 'status_info',
'element_type', 'unsupported_actions', 'raid_type',
- 'member_type', 'parent_pool_id', 'total_space', 'free_space']
+ 'member_type', 'parent_pool_id', 'total_space', 'free_space',
+ 'strip_size']

DISK_KEY_LIST = [
'id', 'name', 'total_space', 'disk_type', 'status',
@@ -150,7 +151,7 @@ class BackStore(object):

VOL_KEY_LIST = [
'id', 'vpd83', 'name', 'total_space', 'consumed_size',
- 'pool_id', 'admin_state', 'thinp']
+ 'pool_id', 'admin_state', 'thinp', 'is_hw_raid_vol']

TGT_KEY_LIST = [
'id', 'port_type', 'service_address', 'network_address',
@@ -172,6 +173,16 @@ class BackStore(object):
'options', 'exp_root_hosts_str', 'exp_rw_hosts_str',
'exp_ro_hosts_str']

+ SUPPORTED_VCR_RAID_TYPES = [
+ Volume.RAID_TYPE_RAID0, Volume.RAID_TYPE_RAID1,
+ Volume.RAID_TYPE_RAID5, Volume.RAID_TYPE_RAID6,
+ Volume.RAID_TYPE_RAID10, Volume.RAID_TYPE_RAID50,
+ Volume.RAID_TYPE_RAID60]
+
+ SUPPORTED_VCR_STRIP_SIZES = [
+ 8 * 1024, 16 * 1024, 32 * 1024, 64 * 1024, 128 * 1024, 256 * 1024,
+ 512 * 1024, 1024 * 1024]
+
def __init__(self, statefile, timeout):
if not os.path.exists(statefile):
os.close(os.open(statefile, os.O_WRONLY | os.O_CREAT))
@@ -218,6 +229,7 @@ def __init__(self, statefile, timeout):
"parent_pool_id INTEGER, " # Indicate this pool is allocated from
# other pool
"member_type INTEGER, "
+ "strip_size INTEGER, "
"total_space LONG);\n") # total_space here is only for
# sub-pool (pool from pool)

@@ -244,6 +256,10 @@ def __init__(self, statefile, timeout):
# support.
"admin_state INTEGER, "
"thinp INTEGER NOT NULL, "
+            "is_hw_raid_vol INTEGER, " # Once its volume is deleted, the
+                                       # pool is deleted as well. For HW
+                                       # RAID simulation only.
+
"pool_id INTEGER NOT NULL, "
"FOREIGN KEY(pool_id) "
"REFERENCES pools(id) ON DELETE CASCADE);\n")
@@ -362,6 +378,7 @@ def __init__(self, statefile, timeout):
pool0.raid_type,
pool0.member_type,
pool0.parent_pool_id,
+ pool0.strip_size,
pool1.total_space total_space,
pool1.total_space -
pool2.vol_consumed_size -
@@ -450,6 +467,7 @@ def __init__(self, statefile, timeout):
vol.pool_id,
vol.admin_state,
vol.thinp,
+ vol.is_hw_raid_vol,
vol_mask.ag_id ag_id
FROM
volumes vol
@@ -926,7 +944,11 @@ def sim_pool_of_id(self, sim_pool_id):
ErrorNumber.NOT_FOUND_POOL, "Pool")

def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
- element_type, unsupported_actions=0):
+ element_type, unsupported_actions=0,
+ strip_size=0):
+ if strip_size == 0:
+ strip_size = BackStore.DEFAULT_STRIP_SIZE
+
self._data_add(
'pools',
{
@@ -937,6 +959,7 @@ def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
'unsupported_actions': unsupported_actions,
'raid_type': raid_type,
'member_type': Pool.MEMBER_TYPE_DISK,
+ 'strip_size': strip_size,
})

data_disk_count = PoolRAID.data_disk_count(
@@ -1029,7 +1052,9 @@ def _block_rounding(size_bytes):
return (size_bytes + BackStore.BLK_SIZE - 1) / \
BackStore.BLK_SIZE * BackStore.BLK_SIZE

- def sim_vol_create(self, name, size_bytes, sim_pool_id, thinp):
+ def sim_vol_create(self, name, size_bytes, sim_pool_id, thinp,
+ is_hw_raid_vol=0):
+
size_bytes = BackStore._block_rounding(size_bytes)
self._check_pool_free_space(sim_pool_id, size_bytes)
sim_vol = dict()
@@ -1040,6 +1065,7 @@ def sim_vol_create(self, name, size_bytes, sim_pool_id, thinp):
sim_vol['total_space'] = size_bytes
sim_vol['consumed_size'] = size_bytes
sim_vol['admin_state'] = Volume.ADMIN_STATE_ENABLED
+ sim_vol['is_hw_raid_vol'] = is_hw_raid_vol

try:
self._data_add("volumes", sim_vol)
@@ -1055,7 +1081,7 @@ def sim_vol_delete(self, sim_vol_id):
This does not check whether volume exist or not.
"""
# Check existence.
- self.sim_vol_of_id(sim_vol_id)
+ sim_vol = self.sim_vol_of_id(sim_vol_id)

if self._sim_ag_ids_of_masked_vol(sim_vol_id):
raise LsmError(
@@ -1070,8 +1096,11 @@ def sim_vol_delete(self, sim_vol_id):
raise LsmError(
ErrorNumber.PLUGIN_BUG,
"Requested volume is a replication source")
-
- self._data_delete("volumes", 'id="%s"' % sim_vol_id)
+ if sim_vol['is_hw_raid_vol']:
+            # For a HW RAID volume, delete the parent pool instead.
+ self._data_delete("pools", 'id="%s"' % sim_vol['pool_id'])
+ else:
+ self._data_delete("volumes", 'id="%s"' % sim_vol_id)

def sim_vol_mask(self, sim_vol_id, sim_ag_id):
self.sim_vol_of_id(sim_vol_id)
@@ -1759,7 +1788,7 @@ def disks(self):

@_handle_errors
def volume_create(self, pool_id, vol_name, size_bytes, thinp, flags=0,
- _internal_use=False):
+ _internal_use=False, _is_hw_raid_vol=0):
"""
The '_internal_use' parameter is only for SimArray internal use.
This method will return the new sim_vol id instead of job_id when
@@ -1769,7 +1798,8 @@ def volume_create(self, pool_id, vol_name, size_bytes, thinp, flags=0,
self.bs_obj.trans_begin()

new_sim_vol_id = self.bs_obj.sim_vol_create(
- vol_name, size_bytes, SimArray._sim_pool_id_of(pool_id), thinp)
+ vol_name, size_bytes, SimArray._sim_pool_id_of(pool_id),
+ thinp, is_hw_raid_vol=_is_hw_raid_vol)

if _internal_use:
return new_sim_vol_id
@@ -2251,9 +2281,9 @@ def volume_raid_info(self, lsm_vol):
min_io_size = BackStore.BLK_SIZE
opt_io_size = BackStore.BLK_SIZE
else:
- strip_size = BackStore.STRIP_SIZE
- min_io_size = BackStore.STRIP_SIZE
- opt_io_size = int(data_disk_count * BackStore.STRIP_SIZE)
+ strip_size = sim_pool['strip_size']
+ min_io_size = strip_size
+ opt_io_size = int(data_disk_count * strip_size)

return [raid_type, strip_size, disk_count, min_io_size, opt_io_size]

@@ -2278,3 +2308,64 @@ def pool_member_info(self, lsm_pool):
member_type = Pool.MEMBER_TYPE_UNKNOWN

return sim_pool['raid_type'], member_type, member_ids
+
+ @_handle_errors
+ def volume_raid_create_cap_get(self, system):
+ if system.id != BackStore.SYS_ID:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_SYSTEM,
+ "System not found")
+ return (
+ BackStore.SUPPORTED_VCR_RAID_TYPES,
+ BackStore.SUPPORTED_VCR_STRIP_SIZES)
+
+ @_handle_errors
+ def volume_raid_create(self, name, raid_type, disks, strip_size):
+ if raid_type not in BackStore.SUPPORTED_VCR_RAID_TYPES:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'raid_type' is not supported")
+
+ if strip_size == Volume.VCR_STRIP_SIZE_DEFAULT:
+ strip_size = BackStore.DEFAULT_STRIP_SIZE
+ elif strip_size not in BackStore.SUPPORTED_VCR_STRIP_SIZES:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'strip_size' is not supported")
+
+ self.bs_obj.trans_begin()
+ pool_name = "Pool for volume %s" % name
+ sim_disk_ids = [
+ SimArray._lsm_id_to_sim_id(
+ d.id,
+ LsmError(ErrorNumber.NOT_FOUND_DISK, "Disk not found"))
+ for d in disks]
+
+ for disk in disks:
+ if not disk.status & Disk.STATUS_FREE:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+                    "Disk %s is not in Disk.STATUS_FREE status" % disk.id)
+ try:
+ sim_pool_id = self.bs_obj.sim_pool_create_from_disk(
+ name=pool_name,
+ raid_type=raid_type,
+ sim_disk_ids=sim_disk_ids,
+ element_type=Pool.ELEMENT_TYPE_VOLUME,
+ unsupported_actions=Pool.UNSUPPORTED_VOLUME_GROW |
+ Pool.UNSUPPORTED_VOLUME_SHRINK,
+ strip_size=strip_size)
+ except sqlite3.IntegrityError as sql_error:
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+                "Name '%s' is already in use by another volume" % name)
+
+ sim_pool = self.bs_obj.sim_pool_of_id(sim_pool_id)
+ sim_vol_id = self.volume_create(
+ SimArray._sim_id_to_lsm_id(sim_pool_id, 'POOL'), name,
+ sim_pool['free_space'], Volume.PROVISION_FULL,
+ _internal_use=True, _is_hw_raid_vol=1)
+ sim_vol = self.bs_obj.sim_vol_of_id(sim_vol_id)
+ self.bs_obj.trans_commit()
+ return SimArray._sim_vol_2_lsm(sim_vol)
diff --git a/plugin/sim/simulator.py b/plugin/sim/simulator.py
index 724574f..610ab81 100644
--- a/plugin/sim/simulator.py
+++ b/plugin/sim/simulator.py
@@ -295,3 +295,11 @@ def volume_raid_info(self, volume, flags=0):

def pool_member_info(self, pool, flags=0):
return self.sim_array.pool_member_info(pool)
+
+ def volume_raid_create_cap_get(self, system, flags=0):
+ return self.sim_array.volume_raid_create_cap_get(system)
+
+ def volume_raid_create(self, name, raid_type, disks, strip_size,
+ flags=0):
+ return self.sim_array.volume_raid_create(
+ name, raid_type, disks, strip_size)
--
1.8.3.1
Gris Ge
2015-04-22 15:20:46 UTC
Permalink
* Add support of these methods:
* volume_create_raid()
* volume_create_raid_cap_get()

* Using hpssacli command to create RAID group volume:

hpssacli ctrl slot=0 create type=ld \
drives=1i:1:13,1i:1:14 size=max raid=1+0 ss=64

* Add new capability lsm.Capabilities.VOLUME_CREATE_RAID.

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

Changes in V4:

* Method renamed to volume_raid_create().

* Use hash table lookup for _hp_raid_level_to_lsm() and
_lsm_raid_type_to_hp().

* Mark '1adm' (3-disk mirror) and '1+0adm' as Volume.RAID_TYPE_OTHER
  based on HP documentation.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/hpsa/hpsa.py | 202 +++++++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 185 insertions(+), 17 deletions(-)

diff --git a/plugin/hpsa/hpsa.py b/plugin/hpsa/hpsa.py
index 75f507f..d1a580e 100644
--- a/plugin/hpsa/hpsa.py
+++ b/plugin/hpsa/hpsa.py
@@ -195,26 +195,46 @@ def _disk_status_of(hp_disk, flag_free):
return disk_status


-def _hp_raid_type_to_lsm(hp_ld):
+_HP_RAID_LEVEL_CONV = {
+ '0': Volume.RAID_TYPE_RAID0,
+ # TODO(Gris Ge): Investigate whether HP has 4 disks RAID 1.
+ # In LSM, that's RAID10.
+ '1': Volume.RAID_TYPE_RAID1,
+ '5': Volume.RAID_TYPE_RAID5,
+ '6': Volume.RAID_TYPE_RAID6,
+ '1+0': Volume.RAID_TYPE_RAID10,
+ '50': Volume.RAID_TYPE_RAID50,
+ '60': Volume.RAID_TYPE_RAID60,
+}
+
+
+_HP_VENDOR_RAID_LEVELS = ['1adm', '1+0adm']
+
+
+_LSM_RAID_TYPE_CONV = dict(
+ zip(_HP_RAID_LEVEL_CONV.values(), _HP_RAID_LEVEL_CONV.keys()))
+
+
+def _hp_raid_level_to_lsm(hp_ld):
"""
Based on this property:
Fault Tolerance: 0/1/5/6/1+0
"""
hp_raid_level = hp_ld['Fault Tolerance']
- if hp_raid_level == '0':
- return Volume.RAID_TYPE_RAID0
- elif hp_raid_level == '1':
- # TODO(Gris Ge): Investigate whether HP has 4 disks RAID 1.
- # In LSM, that's RAID10.
- return Volume.RAID_TYPE_RAID1
- elif hp_raid_level == '5':
- return Volume.RAID_TYPE_RAID5
- elif hp_raid_level == '6':
- return Volume.RAID_TYPE_RAID6
- elif hp_raid_level == '1+0':
- return Volume.RAID_TYPE_RAID10
-
- return Volume.RAID_TYPE_UNKNOWN
+
+ if hp_raid_level in _HP_VENDOR_RAID_LEVELS:
+ return Volume.RAID_TYPE_OTHER
+
+ return _HP_RAID_LEVEL_CONV.get(hp_raid_level, Volume.RAID_TYPE_UNKNOWN)
+
+
+def _lsm_raid_type_to_hp(raid_type):
+ try:
+ return _LSM_RAID_TYPE_CONV[raid_type]
+ except KeyError:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Not supported raid type %d" % raid_type)


class SmartArray(IPlugin):
@@ -286,6 +306,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUME_RAID_INFO)
cap.set(Capabilities.POOL_MEMBER_INFO)
+ cap.set(Capabilities.VOLUME_RAID_CREATE)
return cap

def _sacli_exec(self, sacli_cmds, flag_convert=True):
@@ -511,7 +532,7 @@ def volume_raid_info(self, volume, flags=Client.FLAG_RSVD):
for array_key_name in ctrl_data[key_name].keys():
if array_key_name == "Logical Drive: %s" % ld_num:
hp_ld = ctrl_data[key_name][array_key_name]
- raid_type = _hp_raid_type_to_lsm(hp_ld)
+ raid_type = _hp_raid_level_to_lsm(hp_ld)
strip_size = _hp_size_to_lsm(hp_ld['Strip Size'])
stripe_size = _hp_size_to_lsm(hp_ld['Full Stripe Size'])
elif array_key_name.startswith("physicaldrive"):
@@ -556,7 +577,7 @@ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
for array_key_name in ctrl_data[key_name].keys():
if array_key_name.startswith("Logical Drive: ") and \
raid_type == Volume.RAID_TYPE_UNKNOWN:
- raid_type = _hp_raid_type_to_lsm(
+ raid_type = _hp_raid_level_to_lsm(
ctrl_data[key_name][array_key_name])
elif array_key_name.startswith("physicaldrive"):
hp_disk = ctrl_data[key_name][array_key_name]
@@ -570,3 +591,150 @@ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
"Pool not found")

return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
+
+ def _vrc_cap_get(self, ctrl_num):
+ supported_raid_types = [
+ Volume.RAID_TYPE_RAID0, Volume.RAID_TYPE_RAID1,
+ Volume.RAID_TYPE_RAID5, Volume.RAID_TYPE_RAID50,
+ Volume.RAID_TYPE_RAID10]
+
+ supported_strip_sizes = [
+ 8 * 1024, 16 * 1024, 32 * 1024, 64 * 1024,
+ 128 * 1024, 256 * 1024, 512 * 1024, 1024 * 1024]
+
+ ctrl_conf = self._sacli_exec([
+ "ctrl", "slot=%s" % ctrl_num, "show", "config", "detail"]
+ ).values()[0]
+
+ if 'RAID 6 (ADG) Status' in ctrl_conf and \
+ ctrl_conf['RAID 6 (ADG) Status'] == 'Enabled':
+ supported_raid_types.extend(
+ [Volume.RAID_TYPE_RAID6, Volume.RAID_TYPE_RAID60])
+
+ return supported_raid_types, supported_strip_sizes
+
+ @_handle_errors
+ def volume_raid_create_cap_get(self, system, flags=Client.FLAG_RSVD):
+ """
+ Depends on this command:
+ hpssacli ctrl slot=0 show config detail
+        All hpsa controllers support RAID 0, 1, 10, 5 and 50.
+        If "RAID 6 (ADG) Status: Enabled", it will support RAID 6 and 60.
+        For HP triple mirror (RAID 1adm and RAID 1+0adm), LSM does not
+        support that yet.
+        Neither command output nor documentation indicates special or
+        exceptional strip size support, so we assume all hpsa cards
+        support the documented strip sizes.
+ """
+ if not system.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input system argument: missing plugin_data property")
+ return self._vrc_cap_get(system.plugin_data)
+
+ @_handle_errors
+ def volume_raid_create(self, name, raid_type, disks, strip_size,
+ flags=Client.FLAG_RSVD):
+ """
+ Depends on these commands:
+ 1. Create LD
+ hpssacli ctrl slot=0 create type=ld \
+ drives=1i:1:13,1i:1:14 size=max raid=1+0 ss=64
+
+ 2. Find out the system ID.
+
+            3. Find out the pool the first disk belongs to.
+ hpssacli ctrl slot=0 pd 1i:1:13 show
+
+ 4. List all volumes for this new pool.
+ self.volumes(search_key='pool_id', search_value=pool_id)
+
+ The 'name' argument will be ignored.
+        TODO(Gris Ge): Only tested for creating a 1-disk RAID 0 so far.
+ """
+ hp_raid_level = _lsm_raid_type_to_hp(raid_type)
+ hp_disk_ids = []
+ ctrl_num = None
+ for disk in disks:
+ if not disk.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: missing plugin_data "
+ "property")
+ (cur_ctrl_num, hp_disk_id) = disk.plugin_data.split(':', 1)
+ if ctrl_num and cur_ctrl_num != ctrl_num:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: disks are not from the "
+ "same controller/system.")
+
+ ctrl_num = cur_ctrl_num
+ hp_disk_ids.append(hp_disk_id)
+
+ cmds = [
+ "ctrl", "slot=%s" % ctrl_num, "create", "type=ld",
+ "drives=%s" % ','.join(hp_disk_ids), 'size=max',
+ 'raid=%s' % hp_raid_level]
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT:
+ cmds.append("ss=%d" % int(strip_size / 1024))
+
+ try:
+ self._sacli_exec(cmds, flag_convert=False)
+ except ExecError:
+ # Check whether disk is free
+ requested_disk_ids = [d.id for d in disks]
+ for cur_disk in self.disks():
+ if cur_disk.id in requested_disk_ids and \
+ not cur_disk.status & Disk.STATUS_FREE:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+ "Disk %s is not in STATUS_FREE state" % cur_disk.id)
+
+ # Check whether got unsupported raid type or strip size
+ supported_raid_types, supported_strip_sizes = \
+ self._vrc_cap_get(ctrl_num)
+
+ if raid_type not in supported_raid_types:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided raid_type is not supported")
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT and \
+ strip_size not in supported_strip_sizes:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided strip_size is not supported")
+
+ raise
+
+        # Find out the system id to generate pool_id
+ sys_output = self._sacli_exec(
+ ['ctrl', "slot=%s" % ctrl_num, 'show'])
+
+ sys_id = sys_output.values()[0]['Serial Number']
+        # The API code already rejected an empty 'disks' list, so
+        # 'ctrl_num' and 'hp_disk_ids' are guaranteed to be valid.
+
+ pd_output = self._sacli_exec(
+ ['ctrl', "slot=%s" % ctrl_num, 'pd', hp_disk_ids[0], 'show'])
+
+ if pd_output.values()[0].keys()[0].lower().startswith("array "):
+ hp_array_id = pd_output.values()[0].keys()[0][len("array "):]
+ hp_array_id = "Array:%s" % hp_array_id
+ else:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_raid_create(): Failed to find out the array ID of "
+ "new array: %s" % pd_output.items())
+
+ pool_id = _pool_id_of(sys_id, hp_array_id)
+
+ lsm_vols = self.volumes(search_key='pool_id', search_value=pool_id)
+
+ if len(lsm_vols) != 1:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_raid_create(): Got unexpected count(not 1) of new "
+ "volumes: %s" % lsm_vols)
+ return lsm_vols[0]
--
1.8.3.1
Gris Ge
2015-04-22 15:20:47 UTC
Permalink
* Add support of these methods:
* volume_raid_create()
* volume_raid_create_cap_get()

* Using storcli command to create RAID group volume:

storcli /c0 add vd RAID10 drives=252:1-4 pdperarray=2 J

* Fixed incorrect lsm.System.plugin_data.

* Add new capability lsm.Capabilities.VOLUME_RAID_CREATE.
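* Illustration (hypothetical values, matching the argument mapping in
  the patch below): a volume_raid_create() call with RAID 10, four disks
  in enclosure 252 slots 1-4, and a 65536 bytes strip size is translated
  into:

    storcli /c0 add vd RAID10 size=all name=<name> drives=252:1,2,3,4 pdperarray=2 strip=64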

Changes in V4:

* Method renamed to volume_raid_create().

* Use hash table lookup in _lsm_raid_type_to_mega() and _vcr_cap_get().

* Use math.log(x, 2) instead of log(x,10)/log(2, 10) in _vcr_cap_get().
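* Worked example of the strip size list built in _vcr_cap_get()
  (hypothetical limits): with min_strip_size = 64 KiB and
  max_strip_size = 1 MiB, int(math.log(16, 2) + 1) == 5, so range(0, 5)
  yields supported strip sizes of 64, 128, 256, 512 and 1024 KiB.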

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/megaraid/megaraid.py | 175 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 174 insertions(+), 1 deletion(-)

diff --git a/plugin/megaraid/megaraid.py b/plugin/megaraid/megaraid.py
index 46e7916..62ad028 100644
--- a/plugin/megaraid/megaraid.py
+++ b/plugin/megaraid/megaraid.py
@@ -19,6 +19,7 @@
import json
import re
import errno
+import math

from lsm import (uri_parse, search_property, size_human_2_size_bytes,
Capabilities, LsmError, ErrorNumber, System, Client,
@@ -175,6 +176,16 @@ def _pool_id_of(dg_id, sys_id):
'RAID60': Volume.RAID_TYPE_RAID60,
}

+_LSM_RAID_TYPE_CONV = {
+ Volume.RAID_TYPE_RAID0: 'RAID0',
+ Volume.RAID_TYPE_RAID1: 'RAID1',
+ Volume.RAID_TYPE_RAID5: 'RAID5',
+ Volume.RAID_TYPE_RAID6: 'RAID6',
+ Volume.RAID_TYPE_RAID50: 'RAID50',
+ Volume.RAID_TYPE_RAID60: 'RAID60',
+ Volume.RAID_TYPE_RAID10: 'RAID10',
+}
+

def _mega_raid_type_to_lsm(vd_basic_info, vd_prop_info):
raid_type = _RAID_TYPE_MAP.get(
@@ -188,6 +199,15 @@ def _mega_raid_type_to_lsm(vd_basic_info, vd_prop_info):
return raid_type


+def _lsm_raid_type_to_mega(lsm_raid_type):
+ try:
+ return _LSM_RAID_TYPE_CONV[lsm_raid_type]
+ except KeyError:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "RAID type %d not supported" % lsm_raid_type)
+
+
class MegaRAID(IPlugin):
_DEFAULT_BIN_PATHS = [
"/opt/MegaRAID/storcli/storcli64", "/opt/MegaRAID/storcli/storcli"]
@@ -261,6 +281,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.VOLUMES)
cap.set(Capabilities.VOLUME_RAID_INFO)
cap.set(Capabilities.POOL_MEMBER_INFO)
+ cap.set(Capabilities.VOLUME_RAID_CREATE)
return cap

def _storcli_exec(self, storcli_cmds, flag_json=True):
@@ -347,7 +368,7 @@ def systems(self, flags=Client.FLAG_RSVD):
)
(status, status_info) = self._lsm_status_of_ctrl(
ctrl_show_all_output)
- plugin_data = "/c%d"
+ plugin_data = "/c%d" % ctrl_num
# Since PCI slot sequence might change.
# This string just stored for quick system verification.

@@ -601,3 +622,155 @@ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
raid_type = Volume.RAID_TYPE_RAID10

return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
+
+ def _vcr_cap_get(self, mega_sys_path):
+ cap_output = self._storcli_exec(
+ [mega_sys_path, "show", "all"])['Capabilities']
+
+ mega_raid_types = \
+ cap_output['RAID Level Supported'].replace(', \n', '').split(', ')
+
+ supported_raid_types = []
+ for cur_mega_raid_type in _RAID_TYPE_MAP.keys():
+ if cur_mega_raid_type in mega_raid_types:
+ supported_raid_types.append(
+ _RAID_TYPE_MAP[cur_mega_raid_type])
+
+ supported_raid_types = sorted(list(set(supported_raid_types)))
+
+ min_strip_size = _mega_size_to_lsm(cap_output['Min Strip Size'])
+ max_strip_size = _mega_size_to_lsm(cap_output['Max Strip Size'])
+
+ supported_strip_sizes = list(
+ min_strip_size * (2 ** i)
+ for i in range(
+ 0, int(math.log(max_strip_size / min_strip_size, 2) + 1)))
+
+ # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ # The math above generates the power-of-two series:
+ # min_strip_size, min_strip_size * 2, ..., max_strip_size
+
+ return supported_raid_types, supported_strip_sizes
+
+ @_handle_errors
+ def volume_raid_create_cap_get(self, system, flags=Client.FLAG_RSVD):
+ """
+ Depends on the 'Capabilities' section of "storcli /c0 show all" output.
+ """
+ cur_lsm_syss = list(s for s in self.systems() if s.id == system.id)
+ if len(cur_lsm_syss) != 1:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_SYSTEM,
+ "System not found")
+
+ lsm_sys = cur_lsm_syss[0]
+ return self._vcr_cap_get(lsm_sys.plugin_data)
+
+ @_handle_errors
+ def volume_raid_create(self, name, raid_type, disks, strip_size,
+ flags=Client.FLAG_RSVD):
+ """
+ Work flow:
+ 1. Create RAID volume
+ storcli /c0 add vd RAID10 drives=252:1-4 pdperarray=2 J
+ 2. Find out the pool/DG based on one disk.
+ storcli /c0/e252/s1 show J
+ 3. Find out the volume/VD based on the pool/DG using self.volumes()
+ """
+ mega_raid_type = _lsm_raid_type_to_mega(raid_type)
+ ctrl_num = None
+ slot_nums = []
+ enclosure_num = None
+
+ for disk in disks:
+ if not disk.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: missing plugin_data "
+ "property")
+ # Disks should be from the same controller.
+ (cur_ctrl_num, cur_enclosure_num, slot_num) = \
+ disk.plugin_data.split(':')
+
+ if ctrl_num and cur_ctrl_num != ctrl_num:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: disks are not from the "
+ "same controller/system.")
+
+ if enclosure_num and cur_enclosure_num != enclosure_num:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: disks are not from the "
+ "same disk enclosure.")
+
+ ctrl_num = int(cur_ctrl_num)
+ enclosure_num = cur_enclosure_num
+ slot_nums.append(slot_num)
+
+ # Handle the requested volume name; LSI only allows 15 characters.
+ name = re.sub('[^0-9a-zA-Z_\-]+', '', name)[:15]
+
+ cmds = [
+ "/c%s" % ctrl_num, "add", "vd", mega_raid_type,
+ 'size=all', "name=%s" % name,
+ "drives=%s:%s" % (enclosure_num, ','.join(slot_nums))]
+
+ if raid_type == Volume.RAID_TYPE_RAID10 or \
+ raid_type == Volume.RAID_TYPE_RAID50 or \
+ raid_type == Volume.RAID_TYPE_RAID60:
+ cmds.append("pdperarray=%d" % int(len(disks) / 2))
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT:
+ cmds.append("strip=%d" % int(strip_size / 1024))
+
+ try:
+ self._storcli_exec(cmds)
+ except ExecError:
+ req_disk_ids = [d.id for d in disks]
+ for cur_disk in self.disks():
+ if cur_disk.id in req_disk_ids and \
+ not cur_disk.status & Disk.STATUS_FREE:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+ "Disk %s is not in STATUS_FREE state" % cur_disk.id)
+ # Check whether an unsupported RAID type or strip size was requested
+ supported_raid_types, supported_strip_sizes = \
+ self._vcr_cap_get("/c%s" % ctrl_num)
+
+ if raid_type not in supported_raid_types:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'raid_type' is not supported")
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT and \
+ strip_size not in supported_strip_sizes:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'strip_size' is not supported")
+
+ raise
+
+ # Find out the DG ID from one disk.
+ dg_show_output = self._storcli_exec(
+ ["/c%s/e%s/s%s" % tuple(disks[0].plugin_data.split(":")), "show"])
+
+ dg_id = dg_show_output['Drive Information'][0]['DG']
+ if dg_id == '-':
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_raid_create(): No error found in output, "
+ "but RAID is not created: %s" % dg_show_output.items())
+ else:
+ dg_id = int(dg_id)
+
+ pool_id = _pool_id_of(dg_id, self._sys_id_of_ctrl_num(ctrl_num))
+
+ lsm_vols = self.volumes(search_key='pool_id', search_value=pool_id)
+ if len(lsm_vols) != 1:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_raid_create(): Got unexpected volume count(not 1) "
+ "when creating RAID volume")
+
+ return lsm_vols[0]
--
1.8.3.1
Gris Ge
2015-04-22 15:20:48 UTC
Permalink
* lsmcli test: new test function volume_raid_create_test().

* Plugin test: new test function test_volume_raid_create().

* C unit test: test_volume_raid_create_cap_get() and
test_volume_raid_create().

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

* Initialize pointers in test_volume_create_raid_cap_get() of tester.c.

* Remove unused variables in tester.c.

Changes in V4:

* Method renamed to volume_raid_create().

Signed-off-by: Gris Ge <***@redhat.com>
---
test/cmdtest.py | 55 +++++++++++++++++++++++++++++
test/plugin_test.py | 50 +++++++++++++++++++++++++++
test/tester.c | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 204 insertions(+)

diff --git a/test/cmdtest.py b/test/cmdtest.py
index 60bef93..247a291 100755
--- a/test/cmdtest.py
+++ b/test/cmdtest.py
@@ -713,6 +713,59 @@ def pool_member_info_test(cap, system_id):
exit(10)
return

+def volume_raid_create_test(cap, system_id):
+ if cap['VOLUME_RAID_CREATE']:
+ out = call(
+ [cmd, '-t' + sep, 'volume-create-raid-cap', '--sys', system_id])[1]
+
+ if 'RAID1' not in [r[1] for r in parse(out)]:
+ return
+
+ out = call([cmd, '-t' + sep, 'list', '--type', 'disks'])[1]
+ free_disk_ids = []
+ disk_list = parse(out)
+ for disk in disk_list:
+ if 'Free' in disk:
+ if len(free_disk_ids) == 2:
+ break
+ free_disk_ids.append(disk[0])
+
+ if len(free_disk_ids) != 2:
+ print "Require two free disks to test volume-create-raid"
+ exit(10)
+
+ out = call([
+ cmd, '-t' + sep, 'volume-create-raid', '--disk', free_disk_ids[0],
+ '--disk', free_disk_ids[1], '--name', 'test_volume_raid_create',
+ '--raid-type', 'raid1'])[1]
+
+ volume = parse(out)
+ vol_id = volume[0][0]
+ pool_id = volume[0][-2]
+
+ if cap['VOLUME_RAID_INFO']:
+ out = call(
+ [cmd, '-t' + sep, 'volume-raid-info', '--vol', vol_id])[1]
+ if parse(out)[0][1] != 'RAID1':
+ print "New volume is not RAID 1"
+ exit(10)
+
+ if cap['POOL_MEMBER_INFO']:
+ out = call(
+ [cmd, '-t' + sep, 'pool-member-info', '--pool', pool_id])[1]
+ if parse(out)[0][1] != 'RAID1':
+ print "New pool is not RAID 1"
+ exit(10)
+ for disk_id in free_disk_ids:
+ if disk_id not in [p[3] for p in parse(out)]:
+ print "New pool does not contain requested disks"
+ exit(10)
+
+ if cap['VOLUME_DELETE']:
+ volume_delete(vol_id)
+
+ return
+

def run_all_tests(cap, system_id):
test_display(cap, system_id)
@@ -729,6 +782,8 @@ def run_all_tests(cap, system_id):

pool_member_info_test(cap,system_id)

+ volume_raid_create_test(cap, system_id)
+
if __name__ == "__main__":
parser = OptionParser()
parser.add_option("-c", "--command", action="store", type="string",
diff --git a/test/plugin_test.py b/test/plugin_test.py
index 5cb48ae..2ec698c 100755
--- a/test/plugin_test.py
+++ b/test/plugin_test.py
@@ -1294,6 +1294,56 @@ def test_pool_member_info(self):
self.assertTrue(type(member_type) is int)
self.assertTrue(type(member_ids) is list)

+ def _skip_current_test(self, message):
+ """
+ If skipTest is supported, skip this test with the provided message.
+ Silently return if not supported.
+ """
+ if hasattr(unittest.TestCase, 'skipTest') is True:
+ self.skipTest(message)
+ return
+
+ def test_volume_raid_create(self):
+ for s in self.systems:
+ cap = self.c.capabilities(s)
+ # TODO(Gris Ge): Add test code for other RAID type and strip size
+ if supported(cap, [Cap.VOLUME_RAID_CREATE]):
+ supported_raid_types, supported_strip_sizes = \
+ self.c.volume_raid_create_cap_get(s)
+ if lsm.Volume.RAID_TYPE_RAID1 not in supported_raid_types:
+ self._skip_current_test(
+ "Skip test: current system does not support "
+ "creating RAID1 volume")
+
+ # Find two free disks
+ free_disks = []
+ for disk in self.c.disks():
+ if len(free_disks) == 2:
+ break
+ if disk.status & lsm.Disk.STATUS_FREE:
+ free_disks.append(disk)
+
+ if len(free_disks) != 2:
+ self._skip_current_test(
+ "Skip test: Failed to find two free disks for RAID 1")
+ return
+
+ new_vol = self.c.volume_raid_create(
+ rs('v'), lsm.Volume.RAID_TYPE_RAID1, free_disks,
+ lsm.Volume.VCR_STRIP_SIZE_DEFAULT)
+
+ self.assertTrue(new_vol is not None)
+
+ # TODO(Gris Ge): Use volume_raid_info() and pool_raid_info()
+ # to verify size, raid_type, member type, member ids.
+
+ if supported(cap, [Cap.VOLUME_DELETE]):
+ self._volume_delete(new_vol)
+
+ else:
+ self._skip_current_test(
+ "Skip test: not support of VOLUME_RAID_CREATE")
+
def dump_results():
"""
unittest.main exits when done so we need to register this handler to
diff --git a/test/tester.c b/test/tester.c
index 3b04022..9af6495 100644
--- a/test/tester.c
+++ b/test/tester.c
@@ -2917,6 +2917,103 @@ START_TEST(test_pool_member_info)
}
END_TEST

+START_TEST(test_volume_raid_create_cap_get)
+{
+ if (which_plugin == 1){
+ // silently skip on simc which does not support this method yet.
+ return;
+ }
+ int rc;
+ lsm_system **sys = NULL;
+ uint32_t sys_count = 0;
+
+ G(rc, lsm_system_list, c, &sys, &sys_count, LSM_CLIENT_FLAG_RSVD);
+ fail_unless( sys_count >= 1, "count = %d", sys_count);
+
+ if( sys_count > 0 ) {
+ uint32_t *supported_raid_types = NULL;
+ uint32_t supported_raid_type_count = 0;
+ uint32_t *supported_strip_sizes = NULL;
+ uint32_t supported_strip_size_count = 0;
+
+ G(
+ rc, lsm_volume_raid_create_cap_get, c, sys[0],
+ &supported_raid_types, &supported_raid_type_count,
+ &supported_strip_sizes, &supported_strip_size_count,
+ 0);
+
+ free(supported_raid_types);
+ free(supported_strip_sizes);
+
+ }
+ G(rc, lsm_system_record_array_free, sys, sys_count);
+}
+END_TEST
+
+START_TEST(test_volume_raid_create)
+{
+ if (which_plugin == 1){
+ // silently skip on simc which does not support this method yet.
+ return;
+ }
+ int rc;
+
+ lsm_disk **disks = NULL;
+ uint32_t disk_count = 0;
+
+ G(rc, lsm_disk_list, c, NULL, NULL, &disks, &disk_count, 0);
+
+ // Try to create two disks RAID 1.
+ uint32_t free_disk_count = 0;
+ lsm_disk *free_disks[2];
+ uint32_t i;
+ for (i = 0; i < disk_count; i++){
+ if (lsm_disk_status_get(disks[i]) & LSM_DISK_STATUS_FREE){
+ free_disks[free_disk_count++] = disks[i];
+ if (free_disk_count == 2){
+ break;
+ }
+ }
+ }
+ fail_unless(free_disk_count == 2, "Failed to find two free disks");
+
+ lsm_volume *new_volume = NULL;
+
+ G(rc, lsm_volume_raid_create, c, "test_volume_raid_create",
+ LSM_VOLUME_RAID_TYPE_RAID1, free_disks, free_disk_count,
+ LSM_VOLUME_VCR_STRIP_SIZE_DEFAULT, &new_volume, LSM_CLIENT_FLAG_RSVD);
+
+ char *job_del = NULL;
+ int del_rc = lsm_volume_delete(
+ c, new_volume, &job_del, LSM_CLIENT_FLAG_RSVD);
+
+ fail_unless( del_rc == LSM_ERR_OK || del_rc == LSM_ERR_JOB_STARTED,
+ "lsm_volume_delete %d (%s)", rc, error(lsm_error_last_get(c)));
+
+ if( LSM_ERR_JOB_STARTED == del_rc ) {
+ wait_for_job_vol(c, &job_del);
+ }
+
+ G(rc, lsm_disk_record_array_free, disks, disk_count);
+ // The new pool should automatically be deleted when the volume is
+ // deleted.
+ lsm_pool **pools = NULL;
+ uint32_t count = 0;
+ G(
+ rc, lsm_pool_list, c, "id", lsm_volume_pool_id_get(new_volume),
+ &pools, &count, LSM_CLIENT_FLAG_RSVD);
+
+ fail_unless(
+ count == 0,
+ "New HW RAID pool still exists, it should be deleted along with "
+ "lsm_volume_delete()");
+
+ lsm_pool_record_array_free(pools, count);
+
+ G(rc, lsm_volume_record_free, new_volume);
+}
+END_TEST
+
Suite * lsm_suite(void)
{
Suite *s = suite_create("libStorageMgmt");
@@ -2954,6 +3051,8 @@ Suite * lsm_suite(void)
tcase_add_test(basic, test_invalid_input);
tcase_add_test(basic, test_volume_raid_info);
tcase_add_test(basic, test_pool_member_info);
+ tcase_add_test(basic, test_volume_raid_create_cap_get);
+ tcase_add_test(basic, test_volume_raid_create);

suite_add_tcase(s, basic);
return s;
--
1.8.3.1
Tony Asleson
2015-04-22 15:31:49 UTC
Permalink
Hi Gris,

Please update the commit messages to reflect the name changes, e.g.
change pool_raid_info to pool_member_info etc.

Thanks,
-Tony
Post by Gris Ge
* Putting two patch sets into one. As `pool_raid_info()` patch set is
just preparation for `volume_create_raid()` patch set. It's better to explain
design choice with both.
* RAID type
* Member type: disk RAID pool or sub-pool.
* Members: disks or parent pools.
* New method `volume_create_raid()` to create RAID group volume.
* New method `volume_create_raid_cap_get()` to query RAID group volume create
supported RAID type and strip sizes.
* Why we need pool_raid_info() when we already have volume_raid_info()?
The `pool_raid_info()` is focus on RAID disk or sub-pool layout.
The `volume_raid_info()` is focus on providing information about host
I/O performance and availability(raid).
* Why don't create single sharable method for SAN/NAS/DAS about RAID
creation?
Please refer to design note of patch 'New feature: Create RAID volume
on hardware RAID.'
* Why no RAID creation method for SAN and NAS?
As we tried before, due to complex RAID settings of SAN and NAS
storage, I didn't find a simple design to support all of them.
* Please refer to design note of patch 'New feature: Create RAID volume'
also.
* Please comment in github page, thank you.
https://github.com/libstorage/libstoragemgmt/pull/11
* Rename pool_raid_info to pool_member_info.
* Rebased patches for the rename.
* Fix C code warning about comparing signed and unsigned integer.
* Initialized pointers in lsm_plugin_ipc.cpp and tester.c.
* Removed unused variables in tester.c.
* MegaRAID: RAID 0/1/5/10. # No RAID 6 license.
* HP SmartArray: RAID 0. # No additional disk.
* Document pull request: https://github.com/libstorage/libstoragemgmt-doc/pull/3
* Rename method 'volume_create_raid' to 'volume_raid_create'.
* Fix memory leak in C API.
* Initialize pointers in C code.
* Make sure output pointer is set to sane value if error occurred in C code.
* Add bash auto completion for new lsmcli commands and aliases.
* Fix typo in comments of C and Python library.
* Use hash table to convert vendor RAID string to LSM type in hpsa and
megaraid plugins.
* Tested MegaRAID for 4 disks RAID 10 with default strip size.
Query commands is tested on MegaRAID and Smart Array.
https://github.com/libstorage/libstoragemgmt-doc/pull/3
New method: lsm.Client.pool_raid_info()/lsm_pool_raid_info()
lsmcli: Add new command pool-raid-info
Tests: Add test for lsm.Client.pool_raid_info() and
lsm_pool_raid_info().
Simulator Plugin: Add lsm.Client.pool_raid_info() support.
Simulator C plugin: Add lsm_pool_raid_info() support.
ONTAP Plugin: Add lsm.Client.pool_raid_info() support.
MegaRAID Plugin: Add lsm.Client.pool_raid_info() support.
HP Smart Array Plugin: Add lsm.Client.pool_raid_info() support.
Simulator Plugin: Fix pool free space calculation.
MegaRAID Plugin: Change Disk.plugin_data to parse friendly format.
New feature: Create RAID volume on hardware RAID.
lsmcli: New commands for hardware RAID volume creation.
Simulator Plugin: Add hardware RAID volume creation support.
Simulator C plugin: Sync changes of lsm_ops_v1_2 about
lsm_plug_volume_create_raid_cap_get
HP SmartArray Plugin: Add volume creation support.
MegaRAID Plugin: Add volume creation support.
Test: Hardware RAID volume creation.
c_binding/include/libstoragemgmt/libstoragemgmt.h | 89 ++++++
.../libstoragemgmt/libstoragemgmt_capabilities.h | 5 +-
.../include/libstoragemgmt/libstoragemgmt_error.h | 2 +
.../libstoragemgmt/libstoragemgmt_plug_interface.h | 84 ++++++
.../include/libstoragemgmt/libstoragemgmt_types.h | 20 ++
c_binding/lsm_convert.cpp | 41 +++
c_binding/lsm_convert.hpp | 12 +
c_binding/lsm_mgmt.cpp | 188 +++++++++++++
c_binding/lsm_plugin_ipc.cpp | 138 ++++++++-
doc/man/lsmcli.1.in | 41 +++
plugin/hpsa/hpsa.py | 239 ++++++++++++++--
plugin/megaraid/megaraid.py | 232 +++++++++++++++-
plugin/ontap/ontap.py | 21 ++
plugin/sim/simarray.py | 309 +++++++++++++--------
plugin/sim/simulator.py | 11 +
plugin/simc/simc_lsmplugin.c | 44 ++-
python_binding/lsm/_client.py | 285 +++++++++++++++++++
python_binding/lsm/_common.py | 3 +
python_binding/lsm/_data.py | 9 +
test/cmdtest.py | 75 +++++
test/plugin_test.py | 60 ++++
test/tester.c | 130 +++++++++
tools/bash_completion/lsmcli | 21 +-
tools/lsmcli/cmdline.py | 84 +++++-
tools/lsmcli/data_display.py | 77 ++++-
25 files changed, 2087 insertions(+), 133 deletions(-)
Gris Ge
2015-04-23 07:30:07 UTC
Permalink
Post by Tony Asleson
Hi Gris,
Please update the commit messages to reflect the name changes, eg.
change pool_raid_info to pool_member_info etc.
Thanks,
-Tony
Hi Tony,

V5 patch set uploaded in github pull request with correct commit
messages now.

I didn't email these patches to the mailing list since they are purely
commit message changes and a trivial code comment update.

Document commit message was corrected also.

https://github.com/libstorage/libstoragemgmt/pull/11
https://github.com/libstorage/libstoragemgmt-doc/pull/3

Best regards.
--
Gris Ge
Tony Asleson
2015-04-23 22:36:10 UTC
Permalink
Post by Gris Ge
Post by Tony Asleson
Hi Gris,
Please update the commit messages to reflect the name changes, eg.
change pool_raid_info to pool_member_info etc.
Thanks,
-Tony
Hi Tony,
V5 patch set uploaded in github pull request with correct commit
messages now.
Hi Gris,

I was taking a quick peek over the latest commits in your branch before
I merged... and I found the following:

* lsm_plugin_ipc.cpp (1034): Memory leak, member_ids not being freed

I don't remember if I mentioned this before and I can't seem to figure
out how to find the previous line comments at github. I may be placing
code comments in emails again for historical reference, unless there is
a way to do this in github that I am missing :-/ Andy pointed me to:
https://julien.danjou.info/blog/2013/rant-about-github-pull-request-workflow-implementation

I found this, which I think I missed the first time, sorry:

* lsm_convert.cpp (626): We should zero the count and null the array
pointer, otherwise if we throw the exception on line 626 we could have a
count != 0 and try to free memory that we didn't allocate.
Post by Gris Ge
I didn't email these patches to maillist since they are purely commit
message changes and a trivial code comment update.
Yes, this is fine

Regards,
Tony
Gris Ge
2015-04-24 12:06:15 UTC
Permalink
* Putting two patch sets into one, as the `pool_member_info()` patch set is
just preparation for the `volume_raid_create()` patch set. It's better to
explain the design choices with both.

* New method `pool_member_info()` to query pool member information:
* RAID type
* Member type: disk RAID pool or sub-pool.
* Members: disks or parent pools.

* New method `volume_raid_create()` to create RAID group volume.

* New method `volume_raid_create_cap_get()` to query the supported RAID
types and strip sizes for RAID group volume creation.

* Design notes:
* Why do we need pool_member_info() when we already have volume_raid_info()?
The `pool_member_info()` focuses on the RAID disk or sub-pool layout.

The `volume_raid_info()` focuses on providing information about host
I/O performance and availability (RAID); see the contrast sketch after
this list.

* Why not create a single sharable method for SAN/NAS/DAS RAID
creation?
Please refer to the design note of patch 'New feature: Create RAID volume
on hardware RAID.'

* Why no RAID creation method for SAN and NAS?
As we tried before, due to the complex RAID settings of SAN and NAS
storage, I didn't find a simple design that supports all of them.

* Please also refer to the design note of patch 'New feature: Create RAID
volume'.

* Please comment on the github page, thank you.
https://github.com/libstorage/libstoragemgmt/pull/11
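* Contrast sketch (hypothetical return values, for an lsm.Client
  instance c, a disk RAID pool p and a volume v on it):

    c.pool_member_info(p)
    # -> (Volume.RAID_TYPE_RAID5, Pool.MEMBER_TYPE_DISK, [disk IDs])
    c.volume_raid_info(v)
    # -> [raid_type, strip_size, disk_count, min_io_size, opt_io_size]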

Changes in V3:

* Rename pool_raid_info to pool_member_info.
* Rebased patches for the rename.
* Fix C code warning about comparing signed and unsigned integer.
* Initialized pointers in lsm_plugin_ipc.cpp and tester.c.
* Removed unused variables in tester.c.
* volume_create_raid() tested on:
* MegaRAID: RAID 0/1/5/10. # No RAID 6 license.
* HP SmartArray: RAID 0. # No additional disk.
* Document pull request: https://github.com/libstorage/libstoragemgmt-doc/pull/3

Changes in V4:

* Rename method 'volume_create_raid' to 'volume_raid_create'.
* Fix memory leak in C API.
* Initialize pointers in C code.
* Make sure output pointer is set to sane value if error occurred in C code.
* Add bash auto completion for new lsmcli commands and aliases.
* Fix typo in comments of C and Python library.
* Use hash table to convert vendor RAID string to LSM type in hpsa and
megaraid plugins.
* Tested MegaRAID with a 4-disk RAID 10 using the default strip size.
Query commands are tested on MegaRAID and Smart Array.
* Document updated and C API user guide for these new methods also included:
https://github.com/libstorage/libstoragemgmt-doc/pull/3

Changes in V5:

* Updated commit message to use current method name:
* pool_member_info
* volume_raid_create
* volume_raid_create_cap_get

Changes in V6:

* Patch 1/18, 8/18:
Fix memory leaks in C library and C unit test.

* Patch 18/18:
New patch for fixing volume_raid_create C unit test.

Gris Ge (18):
New method: lsm.Client.pool_member_info()/lsm_pool_member_info()
lsmcli: Add new command pool-member-info
Simulator Plugin: Add lsm.Client.pool_member_info() support.
Simulator C plugin: Add lsm_pool_member_info() support.
ONTAP Plugin: Add lsm.Client.pool_member_info() support.
MegaRAID Plugin: Add lsm.Client.pool_member_info() support.
HP Smart Array Plugin: Add lsm.Client.pool_member_info() support.
Tests: Add test for lsm.Client.pool_member_info() and
lsm_pool_member_info().
Simulator Plugin: Fix pool free space calculation.
MegaRAID Plugin: Change Disk.plugin_data to parse friendly format.
New feature: Create RAID volume on hardware RAID.
lsmcli: New commands for hardware RAID volume creation.
Simulator Plugin: Add hardware RAID volume creation support.
Simulator C plugin: Sync changes of lsm_ops_v1_2 about
lsm_plug_volume_raid_create_cap_get
HP SmartArray Plugin: Add volume creation support.
MegaRAID Plugin: Add volume creation support.
Test: Hardware RAID volume creation.
C Unit Test: Fix memory leak of test_volume_raid_info test.

c_binding/include/libstoragemgmt/libstoragemgmt.h | 89 ++++++
.../libstoragemgmt/libstoragemgmt_capabilities.h | 5 +-
.../include/libstoragemgmt/libstoragemgmt_error.h | 2 +
.../libstoragemgmt/libstoragemgmt_plug_interface.h | 84 ++++++
.../include/libstoragemgmt/libstoragemgmt_types.h | 20 ++
c_binding/lsm_convert.cpp | 41 +++
c_binding/lsm_convert.hpp | 12 +
c_binding/lsm_mgmt.cpp | 188 +++++++++++++
c_binding/lsm_plugin_ipc.cpp | 141 +++++++++-
doc/man/lsmcli.1.in | 41 +++
plugin/hpsa/hpsa.py | 239 ++++++++++++++--
plugin/megaraid/megaraid.py | 232 +++++++++++++++-
plugin/ontap/ontap.py | 21 ++
plugin/sim/simarray.py | 309 +++++++++++++--------
plugin/sim/simulator.py | 11 +
plugin/simc/simc_lsmplugin.c | 44 ++-
python_binding/lsm/_client.py | 285 +++++++++++++++++++
python_binding/lsm/_common.py | 3 +
python_binding/lsm/_data.py | 9 +
test/cmdtest.py | 75 +++++
test/plugin_test.py | 60 ++++
test/tester.c | 131 +++++++++
tools/bash_completion/lsmcli | 21 +-
tools/lsmcli/cmdline.py | 84 +++++-
tools/lsmcli/data_display.py | 77 ++++-
25 files changed, 2091 insertions(+), 133 deletions(-)
--
1.8.3.1
Gris Ge
2015-04-24 12:06:16 UTC
Permalink
* Python part:
* New python method: lsm.Client.pool_member_info(pool,flags)
Query the member information of a certain pool.
Returns [raid_type, member_type, member_ids]
* For a sub-pool which allocates space from another pool (like a NetApp
ONTAP volume, which allocates space from an ONTAP aggregate), the
member_type will be lsm.Pool.MEMBER_TYPE_POOL.
* For a disk RAID pool which uses whole disks as RAID members (like a
NetApp ONTAP aggregate or an EMC VNX pool), the member_type will be
lsm.Pool.MEMBER_TYPE_DISK.

* New constants:
* lsm.Pool.MEMBER_TYPE_POOL
* lsm.Pool.MEMBER_TYPE_DISK
* lsm.Pool.MEMBER_TYPE_OTHER
* lsm.Pool.MEMBER_TYPE_UNKNOWN

* New capability for this new method:
* lsm.Capabilities.POOL_MEMBER_INFO

* C part:
* New C functions:
* lsm_pool_member_info()
For API user.
* lsm_plug_pool_member_info()
For plugin.

* New constants:
* LSM_POOL_MEMBER_TYPE_POOL
* LSM_POOL_MEMBER_TYPE_DISK
* LSM_POOL_MEMBER_TYPE_OTHER
* LSM_POOL_MEMBER_TYPE_UNKNOWN
* LSM_CAP_POOL_MEMBER_INFO

* For detailed information, please refer to the docstring of the
lsm.Client.pool_member_info() method in python_binding/lsm/_client.py
and the lsm_pool_member_info() function in
c_binding/include/libstoragemgmt/libstoragemgmt.h. A short usage
sketch follows.
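* A minimal Python usage sketch (assuming the simulator URI 'sim://';
  the pool picked here is arbitrary):

    import lsm

    c = lsm.Client('sim://')
    pool = c.pools()[0]
    (raid_type, member_type, member_ids) = c.pool_member_info(pool)
    if member_type == lsm.Pool.MEMBER_TYPE_DISK:
        print "Pool %s is a disk RAID group of: %s" % (pool.id, member_ids)
    elif member_type == lsm.Pool.MEMBER_TYPE_POOL:
        print "Pool %s is allocated from pool(s): %s" % (pool.id, member_ids)
    c.close()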

Changes in V2:

* Use lsm_string_list instead of 'char **' for 'member_ids' argument in C
library.
* Removed 'member_count' argument as lsm_string_list has
lsm_string_list_size() function.
* Fix the incorrect comment of plugin interface 'lsm_plug_volume_raid_info'.

Changes in V3:

* Method name changed to lsm.Client.pool_member_info() and
lsm_pool_member_info().

Changes in V4:

* Fix typo 'memership' in libstoragemgmt.h.

* Handle 'member_ids' conversion errors in lsm_pool_member_info() of
lsm_mgmt.cpp:
* Raise LSM_ERR_NO_MEMORY when value_to_string_list() failed to
convert value to string list.
* Raise LSM_ERR_LIB_BUG if got member_ids not as an array.

* Initialized return data just in case plugin didn't touch the values
in handle_pool_member_info() of lsm_plugin_ipc.cpp.

* Update docstring of lsm.Client.pool_member_info().

Changes in V5:

* Update commit notes to use new method name: pool_member_info.

Changes in V6:

* Fix memory leak 'member_ids' in handle_pool_member_info() of
lsm_plugin_ipc.cpp.

Signed-off-by: Gris Ge <***@redhat.com>
---
c_binding/include/libstoragemgmt/libstoragemgmt.h | 27 ++++++
.../libstoragemgmt/libstoragemgmt_capabilities.h | 4 +-
.../libstoragemgmt/libstoragemgmt_plug_interface.h | 27 ++++++
.../include/libstoragemgmt/libstoragemgmt_types.h | 17 ++++
c_binding/lsm_mgmt.cpp | 60 +++++++++++++
c_binding/lsm_plugin_ipc.cpp | 47 ++++++++++-
python_binding/lsm/_client.py | 98 ++++++++++++++++++++++
python_binding/lsm/_data.py | 6 ++
8 files changed, 284 insertions(+), 2 deletions(-)

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt.h b/c_binding/include/libstoragemgmt/libstoragemgmt.h
index 3d34f75..85022b4 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt.h
@@ -864,6 +864,33 @@ int LSM_DLL_EXPORT lsm_volume_raid_info(
uint32_t *strip_size, uint32_t *disk_count,
uint32_t *min_io_size, uint32_t *opt_io_size, lsm_flag flags);

+/**
+ * Retrieves the membership of the given pool. New in version 1.2.
+ * @param[in] c Valid connection
+ * @param[in] pool The lsm_pool ptr.
+ * @param[out] raid_type
+ * Enum of lsm_volume_raid_type.
+ * @param[out] member_type
+ * Enum of lsm_pool_member_type.
+ * @param[out] member_ids
+ * The pointer to lsm_string_list pointer.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_POOL,
+ * the 'member_ids' will contain a list of parent Pool
+ * IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_DISK,
+ * the 'member_ids' will contain a list of disk IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_OTHER or
+ * LSM_POOL_MEMBER_TYPE_UNKNOWN, the member_ids should
+ * be NULL.
+ * Need to use lsm_string_list_free() to free this memory.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+int LSM_DLL_EXPORT lsm_pool_member_info(
+ lsm_connect *c, lsm_pool *pool, lsm_volume_raid_type *raid_type,
+ lsm_pool_member_type *member_type, lsm_string_list **member_ids,
+ lsm_flag flags);
+
#ifdef __cplusplus
}
#endif
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
index 18490f3..943f4a4 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
@@ -113,7 +113,9 @@ typedef enum {
LSM_CAP_TARGET_PORTS = 216, /**< List target ports */
LSM_CAP_TARGET_PORTS_QUICK_SEARCH = 217, /**< Filtering occurs on array */

- LSM_CAP_DISKS = 220 /**< List disk drives */
+ LSM_CAP_DISKS = 220, /**< List disk drives */
+ LSM_CAP_POOL_MEMBER_INFO = 221,
+ /**^ Query pool member information */

} lsm_capability_type;

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
index b36586c..cdc8dc9 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
@@ -827,6 +827,32 @@ typedef int (*lsm_plug_volume_raid_info)(lsm_plugin_ptr c, lsm_volume *volume,
uint32_t *disk_count, uint32_t *min_io_size, uint32_t *opt_io_size,
lsm_flag flags);

+/**
+ * Retrieves the membership of the given pool. New in version 1.2.
+ * @param[in] c Valid lsm plug-in pointer
+ * @param[in] pool The lsm_pool ptr.
+ * @param[out] raid_type
+ * Enum of lsm_volume_raid_type.
+ * @param[out] member_type
+ * Enum of lsm_pool_member_type.
+ * @param[out] member_ids
+ * The pointer of lsm_string_list pointer.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_POOL,
+ * the 'member_ids' will contain a list of parent Pool
+ * IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_DISK,
+ * the 'member_ids' will contain a list of disk IDs.
+ * When 'member_type' is LSM_POOL_MEMBER_TYPE_OTHER or
+ * LSM_POOL_MEMBER_TYPE_UNKNOWN, the member_ids should
+ * be NULL.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+typedef int (*lsm_plug_pool_member_info)(
+ lsm_plugin_ptr c, lsm_pool *pool, lsm_volume_raid_type *raid_type,
+ lsm_pool_member_type *member_type, lsm_string_list **member_ids,
+ lsm_flag flags);
+
/** \struct lsm_ops_v1_2
* \brief Functions added in version 1.2
* NOTE: This structure will change during development until version 1.2
@@ -835,6 +861,7 @@ typedef int (*lsm_plug_volume_raid_info)(lsm_plugin_ptr c, lsm_volume *volume,
struct lsm_ops_v1_2 {
lsm_plug_volume_raid_info vol_raid_info;
/**^ Query volume RAID information*/
+ lsm_plug_pool_member_info pool_member_info;
};

/**
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
index c5607c1..115ec5a 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
@@ -169,6 +169,23 @@ typedef enum {
/**^ Vendor specific RAID type */
} lsm_volume_raid_type;

+/**< \enum lsm_pool_member_type Different types of Pool member*/
+typedef enum {
+ LSM_POOL_MEMBER_TYPE_UNKNOWN = 0,
+ /**^ Plugin failed to detect the RAID member type. */
+ LSM_POOL_MEMBER_TYPE_OTHER = 1,
+ /**^ Vendor specific RAID member type. */
+ LSM_POOL_MEMBER_TYPE_DISK = 2,
+ /**^ Pool is created from RAID group using whole disks. */
+ LSM_POOL_MEMBER_TYPE_POOL = 3,
+ /**^
+ * Current pool (also known as sub-pool) is allocated from another
+ * pool (parent pool).
+ * The 'raid_type' will be set to RAID_TYPE_OTHER unless the RAID
+ * system supports RAID using space of parent pools.
+ */
+} lsm_pool_member_type;
+
#define LSM_VOLUME_STRIP_SIZE_UNKNOWN 0
#define LSM_VOLUME_DISK_COUNT_UNKNOWN 0
#define LSM_VOLUME_MIN_IO_SIZE_UNKNOWN 0
diff --git a/c_binding/lsm_mgmt.cpp b/c_binding/lsm_mgmt.cpp
index e6a254f..7469127 100644
--- a/c_binding/lsm_mgmt.cpp
+++ b/c_binding/lsm_mgmt.cpp
@@ -802,6 +802,66 @@ int lsm_pool_list(lsm_connect *c, char *search_key, char *search_value,
return rc;
}

+int lsm_pool_member_info(lsm_connect *c, lsm_pool *pool,
+ lsm_volume_raid_type * raid_type,
+ lsm_pool_member_type *member_type,
+ lsm_string_list **member_ids, lsm_flag flags)
+{
+ if( LSM_FLAG_UNUSED_CHECK(flags) ) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ int rc = LSM_ERR_OK;
+ CONN_SETUP(c);
+
+ if( !LSM_IS_POOL(pool) ) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ if( !raid_type || !member_type || !member_ids) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ std::map<std::string, Value> p;
+ p["pool"] = pool_to_value(pool);
+ p["flags"] = Value(flags);
+ Value parameters(p);
+ try {
+
+ Value response;
+
+ rc = rpc(c, "pool_member_info", parameters, response);
+ if( LSM_ERR_OK == rc ) {
+ std::vector<Value> j = response.asArray();
+ *raid_type = (lsm_volume_raid_type) j[0].asInt32_t();
+ *member_type = (lsm_pool_member_type) j[1].asInt32_t();
+ *member_ids = NULL;
+ if (Value::array_t == j[2].valueType()){
+ if (j[2].asArray().size()){
+ *member_ids = value_to_string_list(j[2]);
+ if ( *member_ids == NULL ){
+ return LSM_ERR_NO_MEMORY;
+ }else if( lsm_string_list_size(*member_ids) !=
+ j[2].asArray().size() ){
+
+ lsm_string_list_free(*member_ids);
+ *member_ids = NULL;
+ return LSM_ERR_NO_MEMORY;
+ }
+ }
+ }else{
+ rc = logException(
+ c, LSM_ERR_LIB_BUG, "member_ids data is not an array",
+ "member_ids data is not an array");
+ }
+ }
+ } catch( const ValueException &ve ) {
+ rc = logException(c, LSM_ERR_LIB_BUG, "Unexpected type",
+ ve.what());
+ }
+ return rc;
+
+}
+
int lsm_target_port_list(lsm_connect *c, const char *search_key,
const char *search_value,
lsm_target_port **target_ports[],
diff --git a/c_binding/lsm_plugin_ipc.cpp b/c_binding/lsm_plugin_ipc.cpp
index d2a43d4..dcae563 100644
--- a/c_binding/lsm_plugin_ipc.cpp
+++ b/c_binding/lsm_plugin_ipc.cpp
@@ -1015,6 +1015,50 @@ static int handle_volume_raid_info(lsm_plugin_ptr p, Value &params,
return rc;
}

+static int handle_pool_member_info(lsm_plugin_ptr p, Value &params,
+ Value &response)
+{
+ int rc = LSM_ERR_NO_SUPPORT;
+ if( p && p->ops_v1_2 && p->ops_v1_2->pool_member_info) {
+ Value v_pool = params["pool"];
+
+ if(IS_CLASS_POOL(v_pool) &&
+ LSM_FLAG_EXPECTED_TYPE(params) ) {
+ lsm_pool *pool = value_to_pool(v_pool);
+ std::vector<Value> result;
+
+ if( pool ) {
+ lsm_volume_raid_type raid_type = LSM_VOLUME_RAID_TYPE_UNKNOWN;
+ lsm_pool_member_type member_type =
+ LSM_POOL_MEMBER_TYPE_UNKNOWN;
+ lsm_string_list *member_ids = NULL;
+
+ rc = p->ops_v1_2->pool_member_info(
+ p, pool, &raid_type, &member_type,
+ &member_ids, LSM_FLAG_GET_VALUE(params));
+
+ if( LSM_ERR_OK == rc ) {
+ result.push_back(Value((int32_t)raid_type));
+ result.push_back(Value((int32_t)member_type));
+ result.push_back(string_list_to_value(member_ids));
+ if ( member_ids != NULL ) {
+ lsm_string_list_free(member_ids);
+ }
+ response = Value(result);
+ }
+
+ lsm_pool_record_free(pool);
+ } else {
+ rc = LSM_ERR_NO_MEMORY;
+ }
+
+ } else {
+ rc = LSM_ERR_TRANSPORT_INVALID_ARG;
+ }
+ }
+ return rc;
+}
+
static int ag_list(lsm_plugin_ptr p, Value &params, Value &response)
{
int rc = LSM_ERR_NO_SUPPORT;
@@ -2213,7 +2257,8 @@ static std::map<std::string,handler> dispatch = static_map<std::string,handler>
("volume_resize", handle_volume_resize)
("volumes_accessible_by_access_group", vol_accessible_by_ag)
("volumes", handle_volumes)
- ("volume_raid_info", handle_volume_raid_info);
+ ("volume_raid_info", handle_volume_raid_info)
+ ("pool_member_info", handle_pool_member_info);

static int process_request(lsm_plugin_ptr p, const std::string &method, Value &request,
Value &response)
diff --git a/python_binding/lsm/_client.py b/python_binding/lsm/_client.py
index a641b1d..f3c9e6c 100644
--- a/python_binding/lsm/_client.py
+++ b/python_binding/lsm/_client.py
@@ -1074,3 +1074,101 @@ def volume_raid_info(self, volume, flags=FLAG_RSVD):
No support.
"""
return self._tp.rpc('volume_raid_info', _del_self(locals()))
+
+ @_return_requires([int, int, [unicode]])
+ def pool_member_info(self, pool, flags=FLAG_RSVD):
+ """
+ lsm.Client.pool_member_info(self, pool, flags=lsm.Client.FLAG_RSVD)
+
+ Version:
+ 1.2
+ Usage:
+ Query the membership information of certain pool:
+ RAID type, member type and member ids.
+ Currently, LibStorageMgmt supports two types of pool:
+ * Sub-pool -- Pool.MEMBER_TYPE_POOL
+ Pool space is allocated from parent pool.
+ Example:
+ * NetApp ONTAP volume
+
+ * Disk RAID pool -- Pool.MEMBER_TYPE_DISK
+ Pool is a RAID group assembled by disks.
+ Example:
+ * LSI MegaRAID disk group
+ * EMC VNX pool
+ * NetApp ONTAP aggregate
+ Parameters:
+ pool (lsm.Pool object)
+ Pool to query
+ flags (int)
+ Optional. Reserved for future use.
+ Should be set as lsm.Client.FLAG_RSVD.
+ Returns:
+ [raid_type, member_type, member_ids]
+
+ raid_type (int)
+ RAID Type of requested pool.
+ Could be one of these values:
+ Volume.RAID_TYPE_RAID0
+ Stripe
+ Volume.RAID_TYPE_RAID1
+ Two disks Mirror
+ Volume.RAID_TYPE_RAID3
+ Byte-level striping with dedicated parity
+ Volume.RAID_TYPE_RAID4
+ Block-level striping with dedicated parity
+ Volume.RAID_TYPE_RAID5
+ Block-level striping with distributed parity
+ Volume.RAID_TYPE_RAID6
+ Block-level striping with two distributed parities,
+ aka, RAID-DP
+ Volume.RAID_TYPE_RAID10
+ Stripe of mirrors
+ Volume.RAID_TYPE_RAID15
+ Parity of mirrors
+ Volume.RAID_TYPE_RAID16
+ Dual parity of mirrors
+ Volume.RAID_TYPE_RAID50
+ Stripe of parities
+ Volume.RAID_TYPE_RAID60
+ Stripe of dual parities
+ Volume.RAID_TYPE_RAID51
+ Mirror of parities
+ Volume.RAID_TYPE_RAID61
+ Mirror of dual parities
+ Volume.RAID_TYPE_JBOD
+ Just bunch of disks, no parity, no striping.
+ Volume.RAID_TYPE_UNKNOWN
+ The plugin failed to detect the volume's RAID type.
+ Volume.RAID_TYPE_MIXED
+ This pool contains multiple RAID settings.
+ Volume.RAID_TYPE_OTHER
+ Vendor specific RAID type
+ member_type (int)
+ Could be one of these values:
+ Pool.MEMBER_TYPE_POOL
+ Current pool (also known as sub-pool) is allocated from
+ another pool (parent pool). The 'raid_type' will be set to
+ RAID_TYPE_OTHER unless the RAID system supports RAID
+ using space of parent pools.
+ Pool.MEMBER_TYPE_DISK
+ Pool is created from RAID group using whole disks.
+ Pool.MEMBER_TYPE_OTHER
+ Vendor specific RAID member type.
+ Pool.MEMBER_TYPE_UNKNOWN
+ Plugin failed to detect the RAID member type.
+ member_ids (list of strings)
+ When 'member_type' is Pool.MEMBER_TYPE_POOL,
+ the 'member_ids' will contain a list of parent Pool IDs.
+ When 'member_type' is Pool.MEMBER_TYPE_DISK,
+ the 'member_ids' will contain a list of disk IDs.
+ When 'member_type' is Pool.MEMBER_TYPE_OTHER or
+ Pool.MEMBER_TYPE_UNKNOWN, the member_ids should be an empty
+ list.
+ SpecialExceptions:
+ LsmError
+ ErrorNumber.NO_SUPPORT
+ ErrorNumber.NOT_FOUND_POOL
+ Capability:
+ lsm.Capabilities.POOL_MEMBER_INFO
+ """
+ return self._tp.rpc('pool_member_info', _del_self(locals()))
diff --git a/python_binding/lsm/_data.py b/python_binding/lsm/_data.py
index 23681dd..2f42eaf 100644
--- a/python_binding/lsm/_data.py
+++ b/python_binding/lsm/_data.py
@@ -412,6 +412,11 @@ class Pool(IData):
STATUS_INITIALIZING = 1 << 14
STATUS_GROWING = 1 << 15

+ MEMBER_TYPE_UNKNOWN = 0
+ MEMBER_TYPE_OTHER = 1
+ MEMBER_TYPE_DISK = 2
+ MEMBER_TYPE_POOL = 3
+
def __init__(self, _id, _name, _element_type, _unsupported_actions,
_total_space, _free_space,
_status, _status_info, _system_id, _plugin_data=None):
@@ -754,6 +759,7 @@ class Capabilities(IData):
TARGET_PORTS_QUICK_SEARCH = 217

DISKS = 220
+ POOL_MEMBER_INFO = 221

def _to_dict(self):
return {'class': self.__class__.__name__,
--
1.8.3.1
Gris Ge
2015-04-24 12:06:17 UTC
Permalink
* New command 'pool-member-info' to utilize lsm.Client.pool_member_info()
method.

* Manpage updated.

* New alias 'pmi' for 'pool-member-info' command.
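* Example invocation (POOL_ID is a placeholder for a real pool ID):

    lsmcli pool-member-info --pool POOL_ID
    lsmcli pmi --pool POOL_ID    # same command via its alias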

Changes in V2:

* Renamed command to 'pool-member-info' and alias 'pmi'

Changes in V4:

* Add bash auto completion of lsmcli for 'pool-member-info' command and its
alias.

Signed-off-by: Gris Ge <***@redhat.com>
---
doc/man/lsmcli.1.in | 9 +++++++++
tools/bash_completion/lsmcli | 8 +++++++-
tools/lsmcli/cmdline.py | 19 ++++++++++++++++++-
tools/lsmcli/data_display.py | 41 +++++++++++++++++++++++++++++++++++++++++
4 files changed, 75 insertions(+), 2 deletions(-)

diff --git a/doc/man/lsmcli.1.in b/doc/man/lsmcli.1.in
index 381d676..6faf690 100644
--- a/doc/man/lsmcli.1.in
+++ b/doc/man/lsmcli.1.in
@@ -351,6 +351,13 @@ Query RAID information for given volume.
\fB--vol\fR \fI<VOL_ID>\fR
Required. The ID of volume to query.

+.SS pool-member-info
+.TP 15
+Query RAID information for given pool.
+.TP
+\fB--pool\fR \fI<POOL_ID>\fR
+Required. The ID of pool to query.
+
.SS access-group-create
.TP 15
Create an access group.
@@ -589,6 +596,8 @@ Alias of 'volume-mask'
Alias of 'volume-unmask'
.SS vri
Alias of 'volume-raid-info'
+.SS pmi
+Alias of 'pool-member-info'
.SS ac
Alias of 'access-group-create'
.SS aa
diff --git a/tools/bash_completion/lsmcli b/tools/bash_completion/lsmcli
index a67083b..164604f 100644
--- a/tools/bash_completion/lsmcli
+++ b/tools/bash_completion/lsmcli
@@ -76,7 +76,7 @@ function _lsm()
fs-resize fs-export fs-unexport fs-clone fs-snap-create \
fs-snap-delete fs-snap-restore fs-dependants fs-dependants-rm \
file-clone ls lp lv ld la lf lt c p vc vd vr vm vi ve vi ac \
- aa ar ad vri"
+ aa ar ad vri pool-member-info pmi"

list_args="--type"
list_type_args="volumes pools fs snapshots exports nfs_client_auth \
@@ -124,6 +124,7 @@ function _lsm()
fs_snap_restore_args="--snap --fs --file --fileas --force"
fs_dependants_args="--fs"
file_clone_args="--fs --src --dst --backing-snapshot"
+ pool_member_info_args="--pool"

# These operations can potentially be slow and cause hangs depending on plugin and configuration
if [[ ${NO_VALUE_LOOKUP} -ne 0 ]] ; then
@@ -396,6 +397,11 @@ function _lsm()
COMPREPLY=( $(compgen -W "${potential_args}" -- ${cur}) )
return 0
;;
+ pool-member-info|pmi)
+ possible_args "${pool_member_info_args}"
+ COMPREPLY=( $(compgen -W "${potential_args}" -- ${cur}) )
+ return 0
+ ;;
*)
;;
esac
diff --git a/tools/lsmcli/cmdline.py b/tools/lsmcli/cmdline.py
index 9b2b682..d26da80 100644
--- a/tools/lsmcli/cmdline.py
+++ b/tools/lsmcli/cmdline.py
@@ -39,7 +39,8 @@

from lsm.lsmcli.data_display import (
DisplayData, PlugData, out,
- vol_provision_str_to_type, vol_rep_type_str_to_type, VolumeRAIDInfo)
+ vol_provision_str_to_type, vol_rep_type_str_to_type, VolumeRAIDInfo,
+ PoolRAIDInfo)


## Wraps the invocation to the command line
@@ -376,6 +377,14 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
),

dict(
+ name='pool-member-info',
+ help='Query Pool membership information',
+ args=[
+ dict(pool_id_opt),
+ ],
+ ),
+
+ dict(
name='access-group-create',
help='Create an access group',
args=[
@@ -637,6 +646,7 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
['ar', 'access-group-remove'],
['ad', 'access-group-delete'],
['vri', 'volume-raid-info'],
+ ['pmi', 'pool-member-info'],
)


@@ -1334,6 +1344,13 @@ def volume_raid_info(self, args):
VolumeRAIDInfo(
lsm_vol.id, *self.c.volume_raid_info(lsm_vol))])

+ def pool_member_info(self, args):
+ lsm_pool = _get_item(self.c.pools(), args.pool, "Pool")
+ self.display_data(
+ [
+ PoolRAIDInfo(
+ lsm_pool.id, *self.c.pool_member_info(lsm_pool))])
+
## Displays file system dependants
def fs_dependants(self, args):
fs = _get_item(self.c.fs(), args.fs, "File System")
diff --git a/tools/lsmcli/data_display.py b/tools/lsmcli/data_display.py
index 19f7114..115f15a 100644
--- a/tools/lsmcli/data_display.py
+++ b/tools/lsmcli/data_display.py
@@ -279,6 +279,26 @@ def raid_type_to_str(raid_type):
return _enum_type_to_str(raid_type, VolumeRAIDInfo._RAID_TYPE_MAP)


+class PoolRAIDInfo(object):
+ _MEMBER_TYPE_MAP = {
+ Pool.MEMBER_TYPE_UNKNOWN: 'Unknown',
+ Pool.MEMBER_TYPE_OTHER: 'Unknown',
+ Pool.MEMBER_TYPE_POOL: 'Pool',
+ Pool.MEMBER_TYPE_DISK: 'Disk',
+ }
+
+ def __init__(self, pool_id, raid_type, member_type, member_ids):
+ self.pool_id = pool_id
+ self.raid_type = raid_type
+ self.member_type = member_type
+ self.member_ids = member_ids
+
+ @staticmethod
+ def member_type_to_str(member_type):
+ return _enum_type_to_str(
+ member_type, PoolRAIDInfo._MEMBER_TYPE_MAP)
+
+
class DisplayData(object):

def __init__(self):
@@ -557,6 +577,27 @@ def __init__(self):
'value_conv_human': VOL_RAID_INFO_VALUE_CONV_HUMAN,
}

+ POOL_RAID_INFO_HEADER = OrderedDict()
+ POOL_RAID_INFO_HEADER['pool_id'] = 'Pool ID'
+ POOL_RAID_INFO_HEADER['raid_type'] = 'RAID Type'
+ POOL_RAID_INFO_HEADER['member_type'] = 'Member Type'
+ POOL_RAID_INFO_HEADER['member_ids'] = 'Member IDs'
+
+ POOL_RAID_INFO_COLUMN_SKIP_KEYS = []
+
+ POOL_RAID_INFO_VALUE_CONV_ENUM = {
+ 'raid_type': VolumeRAIDInfo.raid_type_to_str,
+ 'member_type': PoolRAIDInfo.member_type_to_str,
+ }
+ POOL_RAID_INFO_VALUE_CONV_HUMAN = []
+
+ VALUE_CONVERT[PoolRAIDInfo] = {
+ 'headers': POOL_RAID_INFO_HEADER,
+ 'column_skip_keys': POOL_RAID_INFO_COLUMN_SKIP_KEYS,
+ 'value_conv_enum': POOL_RAID_INFO_VALUE_CONV_ENUM,
+ 'value_conv_human': POOL_RAID_INFO_VALUE_CONV_HUMAN,
+ }
+
@staticmethod
def _get_man_pro_value(obj, key, value_conv_enum, value_conv_human,
flag_human, flag_enum):
--
1.8.3.1
Gris Ge
2015-04-24 12:06:18 UTC
Permalink
* Add lsm.Client.pool_member_info() support.

* Use lsm.Pool.MEMBER_TYPE_XXX to replace PoolRAID.MEMBER_TYPE_XXX.

* Removed unused disk_type_to_member_type() and member_type_is_disk()
methods.

* Data version bumped to 3.2.

Changes in V3:

* Renamed to pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/sim/simarray.py | 98 ++++++++++++++++---------------------------------
plugin/sim/simulator.py | 3 ++
2 files changed, 35 insertions(+), 66 deletions(-)

diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index ae14848..de43701 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -67,54 +67,6 @@ def _random_vpd():


class PoolRAID(object):
- MEMBER_TYPE_UNKNOWN = 0
- MEMBER_TYPE_DISK = 1
- MEMBER_TYPE_DISK_MIX = 10
- MEMBER_TYPE_DISK_ATA = 11
- MEMBER_TYPE_DISK_SATA = 12
- MEMBER_TYPE_DISK_SAS = 13
- MEMBER_TYPE_DISK_FC = 14
- MEMBER_TYPE_DISK_SOP = 15
- MEMBER_TYPE_DISK_SCSI = 16
- MEMBER_TYPE_DISK_NL_SAS = 17
- MEMBER_TYPE_DISK_HDD = 18
- MEMBER_TYPE_DISK_SSD = 19
- MEMBER_TYPE_DISK_HYBRID = 110
- MEMBER_TYPE_DISK_LUN = 111
-
- MEMBER_TYPE_POOL = 2
-
- _MEMBER_TYPE_2_DISK_TYPE = {
- MEMBER_TYPE_DISK: Disk.TYPE_UNKNOWN,
- MEMBER_TYPE_DISK_MIX: Disk.TYPE_UNKNOWN,
- MEMBER_TYPE_DISK_ATA: Disk.TYPE_ATA,
- MEMBER_TYPE_DISK_SATA: Disk.TYPE_SATA,
- MEMBER_TYPE_DISK_SAS: Disk.TYPE_SAS,
- MEMBER_TYPE_DISK_FC: Disk.TYPE_FC,
- MEMBER_TYPE_DISK_SOP: Disk.TYPE_SOP,
- MEMBER_TYPE_DISK_SCSI: Disk.TYPE_SCSI,
- MEMBER_TYPE_DISK_NL_SAS: Disk.TYPE_NL_SAS,
- MEMBER_TYPE_DISK_HDD: Disk.TYPE_HDD,
- MEMBER_TYPE_DISK_SSD: Disk.TYPE_SSD,
- MEMBER_TYPE_DISK_HYBRID: Disk.TYPE_HYBRID,
- MEMBER_TYPE_DISK_LUN: Disk.TYPE_LUN,
- }
-
- @staticmethod
- def member_type_is_disk(member_type):
- """
- Returns True if defined 'member_type' is disk.
- False when else.
- """
- return member_type in PoolRAID._MEMBER_TYPE_2_DISK_TYPE
-
- @staticmethod
- def disk_type_to_member_type(disk_type):
- for m_type, d_type in PoolRAID._MEMBER_TYPE_2_DISK_TYPE.items():
- if disk_type == d_type:
- return m_type
- return PoolRAID.MEMBER_TYPE_UNKNOWN
-
_RAID_DISK_CHK = {
Volume.RAID_TYPE_JBOD: lambda x: x > 0,
Volume.RAID_TYPE_RAID0: lambda x: x > 0,
@@ -171,7 +123,7 @@ def data_disk_count(raid_type, disk_count):


class BackStore(object):
- VERSION = "3.1"
+ VERSION = "3.2"
VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
JOB_DEFAULT_DURATION = 1
JOB_DATA_TYPE_VOL = 1
@@ -916,6 +868,12 @@ def sim_syss(self):
"""
return self._get_table('systems', BackStore.SYS_KEY_LIST)

+ def sim_disk_ids_of_pool(self, sim_pool_id):
+ return list(
+ d['id']
+ for d in self._data_find(
+ 'disks', 'owner_pool_id="%s"' % sim_pool_id, ['id']))
+
def sim_disks(self):
"""
Return a list of sim_disk dict.
@@ -935,20 +893,6 @@ def sim_pool_of_id(self, sim_pool_id):

def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
element_type, unsupported_actions=0):
- # Detect disk type
- disk_type = None
- for sim_disk in self.sim_disks():
- if sim_disk['id'] not in sim_disk_ids:
- continue
- if disk_type is None:
- disk_type = sim_disk['disk_type']
- elif disk_type != sim_disk['disk_type']:
- disk_type = None
- break
- member_type = PoolRAID.MEMBER_TYPE_DISK
- if disk_type is not None:
- member_type = PoolRAID.disk_type_to_member_type(disk_type)
-
self._data_add(
'pools',
{
@@ -958,7 +902,7 @@ def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
'element_type': element_type,
'unsupported_actions': unsupported_actions,
'raid_type': raid_type,
- 'member_type': member_type,
+ 'member_type': Pool.MEMBER_TYPE_DISK,
})

data_disk_count = PoolRAID.data_disk_count(
@@ -991,7 +935,7 @@ def sim_pool_create_sub_pool(self, name, parent_pool_id, size,
'element_type': element_type,
'unsupported_actions': unsupported_actions,
'raid_type': Volume.RAID_TYPE_OTHER,
- 'member_type': PoolRAID.MEMBER_TYPE_POOL,
+ 'member_type': Pool.MEMBER_TYPE_POOL,
'parent_pool_id': parent_pool_id,
'total_space': size,
})
@@ -2240,7 +2184,7 @@ def volume_raid_info(self, lsm_vol):
opt_io_size = Volume.OPT_IO_SIZE_UNKNOWN
disk_count = Volume.DISK_COUNT_UNKNOWN

- if sim_pool['member_type'] == PoolRAID.MEMBER_TYPE_POOL:
+ if sim_pool['member_type'] == Pool.MEMBER_TYPE_POOL:
parent_sim_pool = self.bs_obj.sim_pool_of_id(
sim_pool['parent_pool_id'])
raid_type = parent_sim_pool['raid_type']
@@ -2278,3 +2222,25 @@ def volume_raid_info(self, lsm_vol):
opt_io_size = int(data_disk_count * BackStore.STRIP_SIZE)

return [raid_type, strip_size, disk_count, min_io_size, opt_io_size]
+
+ @_handle_errors
+ def pool_member_info(self, lsm_pool):
+ sim_pool = self.bs_obj.sim_pool_of_id(
+ SimArray._lsm_id_to_sim_id(
+ lsm_pool.id,
+ LsmError(ErrorNumber.NOT_FOUND_POOL, "Pool not found")))
+ member_type = sim_pool['member_type']
+ member_ids = []
+ if member_type == Pool.MEMBER_TYPE_POOL:
+ member_ids = [
+ SimArray._sim_id_to_lsm_id(
+ sim_pool['parent_pool_id'], 'POOL')]
+ elif member_type == Pool.MEMBER_TYPE_DISK:
+ member_ids = list(
+ SimArray._sim_id_to_lsm_id(sim_disk_id, 'DISK')
+ for sim_disk_id in self.bs_obj.sim_disk_ids_of_pool(
+ sim_pool['id']))
+ else:
+ member_type = Pool.MEMBER_TYPE_UNKNOWN
+
+ return sim_pool['raid_type'], member_type, member_ids
diff --git a/plugin/sim/simulator.py b/plugin/sim/simulator.py
index d562cd6..724574f 100644
--- a/plugin/sim/simulator.py
+++ b/plugin/sim/simulator.py
@@ -292,3 +292,6 @@ def target_ports(self, search_key=None, search_value=None, flags=0):

def volume_raid_info(self, volume, flags=0):
return self.sim_array.volume_raid_info(volume)
+
+ def pool_member_info(self, pool, flags=0):
+ return self.sim_array.pool_member_info(pool)
--
1.8.3.1
Gris Ge
2015-04-24 12:06:19 UTC
Permalink
* Add a simple lsm_pool_member_info() that returns unknown data.

* Set LSM_CAP_POOL_MEMBER_INFO.

Changes in V2:

* Use lsm_string_list for member_ids of lsm_pool_member_info().

Changes in V3:

* Renamed to lsm_pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/simc/simc_lsmplugin.c | 24 +++++++++++++++++++++++-
1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/plugin/simc/simc_lsmplugin.c b/plugin/simc/simc_lsmplugin.c
index 422a064..6aa7bdb 100644
--- a/plugin/simc/simc_lsmplugin.c
+++ b/plugin/simc/simc_lsmplugin.c
@@ -392,6 +392,7 @@ static int cap(lsm_plugin_ptr c, lsm_system *system,
LSM_CAP_EXPORT_FS,
LSM_CAP_EXPORT_REMOVE,
LSM_CAP_VOLUME_RAID_INFO,
+ LSM_CAP_POOL_MEMBER_INFO,
-1
);

@@ -980,8 +981,29 @@ static int volume_raid_info(lsm_plugin_ptr c, lsm_volume *volume,
return rc;
}

+static int pool_member_info(
+ lsm_plugin_ptr c, lsm_pool *pool,
+ lsm_volume_raid_type *raid_type, lsm_pool_member_type *member_type,
+ lsm_string_list **member_ids, lsm_flag flags)
+{
+ int rc = LSM_ERR_OK;
+ struct plugin_data *pd = (struct plugin_data*)lsm_private_data_get(c);
+ lsm_pool *p = find_pool(pd, lsm_pool_id_get(pool));
+
+ if( !p) {
+ rc = lsm_log_error_basic(c, LSM_ERR_NOT_FOUND_POOL,
+ "Pool not found!");
+ }
+
+ *raid_type = LSM_VOLUME_RAID_TYPE_UNKNOWN;
+ *member_type = LSM_POOL_MEMBER_TYPE_UNKNOWN;
+ *member_ids = NULL;
+ return rc;
+}
+
static struct lsm_ops_v1_2 ops_v1_2 = {
- volume_raid_info
+ volume_raid_info,
+ pool_member_info,
};

static int volume_enable_disable(lsm_plugin_ptr c, lsm_volume *v,
--
1.8.3.1
Gris Ge
2015-04-24 12:06:20 UTC
Permalink
* Add lsm.Client.pool_member_info() support:
* Treat NetApp ONTAP aggregate as disk RAID group.
Use na_disk['aggregate'] property to determine disk ownership.
* Treat NetApp ONTAP volume as sub-pool.
Use na_vol['containing-aggregate'] property to determine sub-pool
ownership.

* Set lsm.Capabilities.POOL_MEMBER_INFO.
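* Illustration (hypothetical IDs): for a pool backed by an ONTAP
  aggregate this returns something like (RAID type of the aggregate,
  Pool.MEMBER_TYPE_DISK, [IDs of the disks owned by the aggregate]);
  for a pool backed by an ONTAP volume it returns
  (Volume.RAID_TYPE_OTHER, Pool.MEMBER_TYPE_POOL,
  ['<containing aggregate>']).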

Changes in V3:

* Renamed to pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/ontap/ontap.py | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

diff --git a/plugin/ontap/ontap.py b/plugin/ontap/ontap.py
index f9f1f56..5a01494 100644
--- a/plugin/ontap/ontap.py
+++ b/plugin/ontap/ontap.py
@@ -545,6 +545,7 @@ def capabilities(self, system, flags=0):
cap.set(Capabilities.TARGET_PORTS)
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUME_RAID_INFO)
+ cap.set(Capabilities.POOL_MEMBER_INFO)
return cap

@handle_ontap_errors
@@ -1330,3 +1331,23 @@ def volume_raid_info(self, volume, flags=0):
return [
raid_type, Ontap._STRIP_SIZE, disk_count, Ontap._STRIP_SIZE,
Ontap._OPT_IO_SIZE]
+
+ @handle_ontap_errors
+ def pool_member_info(self, pool, flags=0):
+ if pool.element_type & Pool.ELEMENT_TYPE_VOLUME:
+ # We got a NetApp volume
+ raid_type = Volume.RAID_TYPE_OTHER
+ member_type = Pool.MEMBER_TYPE_POOL
+ na_vol = self.f.volumes(volume_name=pool.name)[0]
+ disk_ids = [na_vol['containing-aggregate']]
+ else:
+ # We got a NetApp aggregate
+ member_type = Pool.MEMBER_TYPE_DISK
+ na_aggr = self.f.aggregates(aggr_name=pool.name)[0]
+ raid_type = Ontap._raid_type_of_na_aggr(na_aggr)
+ disk_ids = list(
+ Ontap._disk_id(d)
+ for d in self.f.disks()
+ if 'aggregate' in d and d['aggregate'] == pool.name)
+
+ return raid_type, member_type, disk_ids
--
1.8.3.1
Gris Ge
2015-04-24 12:06:21 UTC
Permalink
* Add lsm.Client.pool_member_info() support:
    * Use command 'storcli /c0/d0 show all J'.
    * Use the 'DG Drive LIST' section of the output for disk_ids.
    * Use the 'TOPOLOGY' section of the output for raid_type.
      Since the command used here does not share the same data layout as
      "storcli /c0/v0 show all J", which is used by the volume_raid_info()
      method, we now have two sets of code for detecting the RAID type.

* Set lsm.Capabilities.POOL_MEMBER_INFO.
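
For reference, a trimmed and hypothetical example of the two sections the
new method consumes (the shape follows the keys used below; real storcli
output carries many more fields):

    dg_show_all_output = {
        'TOPOLOGY': [
            # The entry with 'Arr' == '-' and 'Row' == '-' describes the
            # whole disk group; its 'Type' maps to an lsm RAID type.
            {'DG': '0', 'Arr': '-', 'Row': '-', 'Type': 'RAID1'},
        ],
        'DG Drive LIST': [
            # 'EID:Slt' plus the controller number is matched against
            # Disk.plugin_data to resolve lsm disk IDs.
            {'EID:Slt': '252:0'},
            {'EID:Slt': '252:1'},
        ],
    }

Note also the fix-up at the end of the new method: a 'RAID1' disk group
with four or more member disks is reported as RAID 10, since that appears
to be how storcli labels spanned mirrors.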

Changes in V3:

* Renamed to pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/megaraid/megaraid.py | 54 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)

diff --git a/plugin/megaraid/megaraid.py b/plugin/megaraid/megaraid.py
index 78603fc..256ddd6 100644
--- a/plugin/megaraid/megaraid.py
+++ b/plugin/megaraid/megaraid.py
@@ -259,6 +259,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUMES)
cap.set(Capabilities.VOLUME_RAID_INFO)
+ cap.set(Capabilities.POOL_MEMBER_INFO)
return cap

def _storcli_exec(self, storcli_cmds, flag_json=True):
@@ -546,3 +547,56 @@ def volume_raid_info(self, volume, flags=Client.FLAG_RSVD):
return [
raid_type, strip_size, disk_count, strip_size,
strip_size * strip_count]
+
+ @_handle_errors
+ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
+ lsi_dg_path = pool.plugin_data
+ # Check whether pool exists.
+ try:
+ dg_show_all_output = self._storcli_exec(
+                [lsi_dg_path, "show", "all"])
+ except ExecError as exec_error:
+ try:
+ json_output = json.loads(exec_error.stdout)
+ detail_error = json_output[
+ 'Controllers'][0]['Command Status']['Detailed Status']
+ except Exception:
+ raise exec_error
+
+ if detail_error and detail_error[0]['Status'] == 'Not found':
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_POOL,
+ "Pool not found")
+ raise
+
+ ctrl_num = lsi_dg_path.split('/')[1][1:]
+ lsm_disk_map = {}
+ disk_ids = []
+ for lsm_disk in self.disks():
+ lsm_disk_map[lsm_disk.plugin_data] = lsm_disk.id
+
+ for dg_disk_info in dg_show_all_output['DG Drive LIST']:
+ cur_lsi_disk_path = "/c%s" % ctrl_num + "/e%s/s%s" % tuple(
+ dg_disk_info['EID:Slt'].split(':'))
+ if cur_lsi_disk_path in lsm_disk_map.keys():
+ disk_ids.append(lsm_disk_map[cur_lsi_disk_path])
+ else:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "pool_member_info(): Failed to find disk id of %s" %
+ cur_lsi_disk_path)
+
+ raid_type = Volume.RAID_TYPE_UNKNOWN
+ dg_num = lsi_dg_path.split('/')[2][1:]
+ for dg_top in dg_show_all_output['TOPOLOGY']:
+ if dg_top['Arr'] == '-' and \
+ dg_top['Row'] == '-' and \
+ int(dg_top['DG']) == int(dg_num):
+ raid_type = _RAID_TYPE_MAP.get(
+ dg_top['Type'], Volume.RAID_TYPE_UNKNOWN)
+ break
+
+ if raid_type == Volume.RAID_TYPE_RAID1 and len(disk_ids) >= 4:
+ raid_type = Volume.RAID_TYPE_RAID10
+
+ return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
--
1.8.3.1
Gris Ge
2015-04-24 12:06:22 UTC
Permalink
* Add lsm.Client.pool_member_info() support:
    * Based on the 'hpssacli ctrl slot=0 show config detail' command.

* Set lsm.Capabilities.POOL_MEMBER_INFO.
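
For reference, the parsed command output is expected to look roughly like
this trimmed, hypothetical dict (only the keys the new method reads are
shown; the 'Logical Drive' value is consumed by _hp_raid_type_to_lsm()):

    ctrl_data = {
        'Array: A': {
            'Logical Drive: 1': {'Fault Tolerance': '5'},  # hypothetical
            'physicaldrive 1I:1:1': {
                'Drive Type': 'Data Drive',
                'Serial Number': 'S0M1XXXX',  # hypothetical
            },
        },
    }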

Changes in V3:

* Renamed to pool_member_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/hpsa/hpsa.py | 39 +++++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)

diff --git a/plugin/hpsa/hpsa.py b/plugin/hpsa/hpsa.py
index 76eb4a7..75f507f 100644
--- a/plugin/hpsa/hpsa.py
+++ b/plugin/hpsa/hpsa.py
@@ -285,6 +285,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.VOLUMES)
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUME_RAID_INFO)
+ cap.set(Capabilities.POOL_MEMBER_INFO)
return cap

def _sacli_exec(self, sacli_cmds, flag_convert=True):
@@ -531,3 +532,41 @@ def volume_raid_info(self, volume, flags=Client.FLAG_RSVD):
"Volume not found")

return [raid_type, strip_size, disk_count, strip_size, stripe_size]
+
+ @_handle_errors
+ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
+ """
+ Depend on command:
+ hpssacli ctrl slot=0 show config detail
+ """
+ if not pool.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input pool argument: missing plugin_data property")
+
+ (ctrl_num, array_num) = pool.plugin_data.split(":")
+ ctrl_data = self._sacli_exec(
+ ["ctrl", "slot=%s" % ctrl_num, "show", "config", "detail"]
+ ).values()[0]
+
+ disk_ids = []
+ raid_type = Volume.RAID_TYPE_UNKNOWN
+ for key_name in ctrl_data.keys():
+ if key_name == "Array: %s" % array_num:
+ for array_key_name in ctrl_data[key_name].keys():
+ if array_key_name.startswith("Logical Drive: ") and \
+ raid_type == Volume.RAID_TYPE_UNKNOWN:
+ raid_type = _hp_raid_type_to_lsm(
+ ctrl_data[key_name][array_key_name])
+ elif array_key_name.startswith("physicaldrive"):
+ hp_disk = ctrl_data[key_name][array_key_name]
+ if hp_disk['Drive Type'] == 'Data Drive':
+ disk_ids.append(hp_disk['Serial Number'])
+ break
+
+ if len(disk_ids) == 0:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_POOL,
+ "Pool not found")
+
+ return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
--
1.8.3.1
Gris Ge
2015-04-24 12:06:23 UTC
Permalink
* cmdtest.py: Test lsmcli 'pool-member-info' command.
* plugin_test.py: Capability based python API test.
* tester.c: C API test for both simulator plugin and simulator C plugin.

Changes in V2:

* Updated tester.c to use lsm_string_list for 'member_ids' argument.

Changes in V3:

* Sync with pool_raid_info -> pool_member_info rename.

Changes in V4:

* Initialized the 'member_ids' in pool_member_info test.
* Make sure retrieved 'member_id' is not an empty string.

Changes in V6:

* Fix memory leak 'pools' in test_pool_member_info of tester.c.

Signed-off-by: Gris Ge <***@redhat.com>
---
test/cmdtest.py | 20 ++++++++++++++++++++
test/plugin_test.py | 10 ++++++++++
test/tester.c | 31 +++++++++++++++++++++++++++++++
3 files changed, 61 insertions(+)

diff --git a/test/cmdtest.py b/test/cmdtest.py
index e80e027..60bef93 100755
--- a/test/cmdtest.py
+++ b/test/cmdtest.py
@@ -696,6 +696,24 @@ def volume_raid_info_test(cap, system_id):
exit(10)
return

+def pool_member_info_test(cap, system_id):
+ if cap['POOL_MEMBER_INFO']:
+ out = call([cmd, '-t' + sep, 'list', '--type', 'POOLS'])[1]
+ pool_list = parse(out)
+ for pool in pool_list:
+ out = call(
+ [cmd, '-t' + sep, 'pool-member-info', '--pool', pool[0]])[1]
+ r = parse(out)
+ if len(r[0]) != 4:
+            print "pool-member-info got unexpected output: %s" % out
+ exit(10)
+ if r[0][0] != pool[0]:
+ print "pool-member-info output pool ID is not requested " \
+ "pool ID %s" % out
+ exit(10)
+ return
+
+
def run_all_tests(cap, system_id):
test_display(cap, system_id)
test_plugin_list(cap, system_id)
@@ -709,6 +727,8 @@ def run_all_tests(cap, system_id):

volume_raid_info_test(cap, system_id)

+    pool_member_info_test(cap, system_id)
+
if __name__ == "__main__":
parser = OptionParser()
parser.add_option("-c", "--command", action="store", type="string",
diff --git a/test/plugin_test.py b/test/plugin_test.py
index 69a45b7..5cb48ae 100755
--- a/test/plugin_test.py
+++ b/test/plugin_test.py
@@ -1283,6 +1283,16 @@ def test_create_delete_exports(self):
self.c.export_remove(exp)
self._fs_delete(fs)

+ def test_pool_member_info(self):
+ for s in self.systems:
+ cap = self.c.capabilities(s)
+ if supported(cap, [Cap.POOL_MEMBER_INFO]):
+ for pool in self.c.pools():
+ (raid_type, member_type, member_ids) = \
+ self.c.pool_member_info(pool)
+ self.assertTrue(type(raid_type) is int)
+ self.assertTrue(type(member_type) is int)
+ self.assertTrue(type(member_ids) is list)

def dump_results():
"""
diff --git a/test/tester.c b/test/tester.c
index 1622a75..ee12bcc 100644
--- a/test/tester.c
+++ b/test/tester.c
@@ -2887,6 +2887,36 @@ START_TEST(test_volume_raid_info)
}
END_TEST

+START_TEST(test_pool_member_info)
+{
+ int rc;
+ lsm_pool **pools = NULL;
+ uint32_t poolCount = 0;
+ G(rc, lsm_pool_list, c, NULL, NULL, &pools, &poolCount,
+ LSM_CLIENT_FLAG_RSVD);
+
+ lsm_volume_raid_type raid_type;
+ lsm_pool_member_type member_type;
+ lsm_string_list *member_ids = NULL;
+
+    uint32_t i;
+    uint32_t y;
+ for (i = 0; i < poolCount; i++) {
+ G(
+ rc, lsm_pool_member_info, c, pools[i], &raid_type, &member_type,
+ &member_ids, LSM_CLIENT_FLAG_RSVD);
+ for(y = 0; y < lsm_string_list_size(member_ids); y++){
+ // Simulator user reading the member id.
+ const char *cur_member_id = lsm_string_list_elem_get(
+ member_ids, y);
+ fail_unless( strlen(cur_member_id) );
+ }
+ lsm_string_list_free(member_ids);
+ }
+ G(rc, lsm_pool_record_array_free, pools, poolCount);
+}
+END_TEST
+
Suite * lsm_suite(void)
{
Suite *s = suite_create("libStorageMgmt");
@@ -2923,6 +2953,7 @@ Suite * lsm_suite(void)
tcase_add_test(basic, test_nfs_exports);
tcase_add_test(basic, test_invalid_input);
tcase_add_test(basic, test_volume_raid_info);
+ tcase_add_test(basic, test_pool_member_info);

suite_add_tcase(s, basic);
return s;
--
1.8.3.1
Gris Ge
2015-04-24 12:06:24 UTC
Permalink
* When a pool is RAID0 or another RAID type with multiple data disks,
the free space generated by the 'pools_view' sqlite view will be incorrect,
as the volume consumed size is counted several times (once per joined disk
row). The fix is to generate temporary tables for total space, volume
consumed space, fs consumed space and sub-pool consumed space.

* Since we changed the 'pools_view' view, bumped the version to 3.3.
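
A hypothetical example of the miscount: a RAID0 pool built from two 1TiB
data disks and holding a single 100GiB volume. The old view LEFT JOINed
the pool to both disk rows, so SUM(volume.consumed_size) saw the volume
once per disk row and reported 2TiB - 200GiB of free space instead of
2TiB - 100GiB. Computing each consumed size in its own GROUP BY sub-query,
as done below, counts every volume, fs and sub-pool exactly once.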

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/sim/simarray.py | 98 +++++++++++++++++++++++++++++++++-----------------
1 file changed, 66 insertions(+), 32 deletions(-)

diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index de43701..a0809f6 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -123,7 +123,7 @@ def data_disk_count(raid_type, disk_count):


class BackStore(object):
- VERSION = "3.2"
+ VERSION = "3.3"
VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
JOB_DEFAULT_DURATION = 1
JOB_DATA_TYPE_VOL = 1
@@ -353,40 +353,74 @@ def __init__(self, statefile, timeout):
"""
CREATE VIEW pools_view AS
SELECT
- pool.id,
- pool.name,
- pool.status,
- pool.status_info,
- pool.element_type,
- pool.unsupported_actions,
- pool.raid_type,
- pool.member_type,
- pool.parent_pool_id,
- ifnull(pool.total_space,
- ifnull(SUM(disk.total_space), 0))
- total_space,
- ifnull(pool.total_space,
- ifnull(SUM(disk.total_space), 0)) -
- ifnull(SUM(volume.consumed_size), 0) -
- ifnull(SUM(fs.consumed_size), 0) -
- ifnull(SUM(pool2.total_space), 0)
- free_space
-
+ pool0.id,
+ pool0.name,
+ pool0.status,
+ pool0.status_info,
+ pool0.element_type,
+ pool0.unsupported_actions,
+ pool0.raid_type,
+ pool0.member_type,
+ pool0.parent_pool_id,
+ pool1.total_space total_space,
+ pool1.total_space -
+ pool2.vol_consumed_size -
+ pool3.fs_consumed_size -
+ pool4.sub_pool_consumed_size free_space
FROM
- pools pool
- LEFT JOIN disks disk
- ON pool.id = disk.owner_pool_id AND
- disk.role = 'DATA'
- LEFT JOIN volumes volume
- ON volume.pool_id = pool.id
- LEFT JOIN fss fs
- ON fs.pool_id = pool.id
- LEFT JOIN pools pool2
- ON pool2.parent_pool_id = pool.id
+ pools pool0
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(pool.total_space,
+ ifnull(SUM(disk.total_space), 0))
+ total_space
+ FROM pools pool
+ LEFT JOIN disks disk
+ ON pool.id = disk.owner_pool_id AND
+ disk.role = 'DATA'
+ GROUP BY
+ pool.id
+ ) pool1 ON pool0.id = pool1.id
+
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(SUM(volume.consumed_size), 0)
+ vol_consumed_size
+ FROM pools pool
+ LEFT JOIN volumes volume
+ ON volume.pool_id = pool.id
+ GROUP BY
+ pool.id
+ ) pool2 ON pool0.id = pool2.id
+
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(SUM(fs.consumed_size), 0)
+ fs_consumed_size
+ FROM pools pool
+ LEFT JOIN fss fs
+ ON fs.pool_id = pool.id
+ GROUP BY
+ pool.id
+ ) pool3 ON pool0.id = pool3.id
+
+ LEFT JOIN (
+ SELECT
+ pool.id,
+ ifnull(SUM(sub_pool.total_space), 0)
+ sub_pool_consumed_size
+ FROM pools pool
+ LEFT JOIN pools sub_pool
+ ON sub_pool.parent_pool_id = pool.id
+ GROUP BY
+ pool.id
+ ) pool4 ON pool0.id = pool4.id

GROUP BY
- pool.id
- ;
+ pool0.id;
""")
sql_cmd += (
"""
--
1.8.3.1
Gris Ge
2015-04-24 12:06:25 UTC
Permalink
* Change Disk.plugin_data from '/c0/e252/s1' to '0:252:1'.

* Updated pool_member_info() method for this change.

Changes in V3:

* Rebased on pool_raid_info to pool_member_info rename change.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/megaraid/megaraid.py | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/plugin/megaraid/megaraid.py b/plugin/megaraid/megaraid.py
index 256ddd6..46e7916 100644
--- a/plugin/megaraid/megaraid.py
+++ b/plugin/megaraid/megaraid.py
@@ -29,6 +29,7 @@
# Naming scheme
# mega_sys_path /c0
# mega_disk_path /c0/e64/s0
+# lsi_disk_id 0:64:0


def _handle_errors(method):
@@ -393,7 +394,8 @@ def disks(self, search_key=None, search_value=None,
status = _disk_status_of(
disk_show_basic_dict, disk_show_stat_dict)

- plugin_data = mega_disk_path
+ plugin_data = "%s:%s" % (
+ ctrl_num, disk_show_basic_dict['EID:Slt'])

rc_lsm_disks.append(
Disk(
@@ -576,15 +578,14 @@ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
lsm_disk_map[lsm_disk.plugin_data] = lsm_disk.id

for dg_disk_info in dg_show_all_output['DG Drive LIST']:
- cur_lsi_disk_path = "/c%s" % ctrl_num + "/e%s/s%s" % tuple(
- dg_disk_info['EID:Slt'].split(':'))
- if cur_lsi_disk_path in lsm_disk_map.keys():
- disk_ids.append(lsm_disk_map[cur_lsi_disk_path])
+ cur_lsi_disk_id = "%s:%s" % (ctrl_num, dg_disk_info['EID:Slt'])
+ if cur_lsi_disk_id in lsm_disk_map.keys():
+ disk_ids.append(lsm_disk_map[cur_lsi_disk_id])
else:
raise LsmError(
ErrorNumber.PLUGIN_BUG,
"pool_member_info(): Failed to find disk id of %s" %
- cur_lsi_disk_path)
+ cur_lsi_disk_id)

raid_type = Volume.RAID_TYPE_UNKNOWN
dg_num = lsi_dg_path.split('/')[2][1:]
--
1.8.3.1
Gris Ge
2015-04-24 12:06:26 UTC
Permalink
* New methods:
For detail information, please refer to docstring of python methods.

* Python:
* lsm.Client.volume_raid_create_cap_get(self, system,
flags=lsm.Client.FLAG_RSVD)

Query supported RAID types and strip sizes of volume_raid_create().

* lsm.Client.volume_raid_create(self, name, raid_type, disks,
strip_size,
flags=lsm.Client.FLAG_RSVD)
Create a disk RAID pool and allocate its full space to single
volume.

* C user API:
* int lsm_volume_raid_create_cap_get(
lsm_connect *c, lsm_system *system,
uint32_t **supported_raid_types,
uint32_t *supported_raid_type_count,
uint32_t **supported_strip_sizes,
uint32_t *supported_strip_size_count,
lsm_flag flags);
* int lsm_volume_raid_create(
lsm_connect *c, const char *name, lsm_volume_raid_type raid_type,
lsm_disk *disks[], uint32_t disk_count, uint32_t strip_size,
lsm_volume **new_volume, lsm_flag flags);

* C plugin interfaces:
* typedef int (*lsm_plug_volume_raid_create_cap_get)(
lsm_plugin_ptr c, lsm_system *system,
uint32_t **supported_raid_types,
uint32_t *supported_raid_type_count,
uint32_t **supported_strip_sizes, uint32_t
*supported_strip_size_count, lsm_flag flags)

* typedef int (*lsm_plug_volume_raid_create)(
lsm_plugin_ptr c, const char *name,
lsm_volume_raid_type raid_type, lsm_disk *disks[], uint32_t
disk_count, uint32_t strip_size, lsm_volume **new_volume,
lsm_flag flags);

* Added two above into 'lsm_ops_v1_2' struct.

* C internal functions for converting C uint32 array to C++ Value:
* values_to_uint32_array(
Value &value, uint32_t **uint32_array, uint32_t *count)
* uint32_array_to_value(
uint32_t *uint32_array, uint32_t count)

* New capability:
* Python: lsm.Capabilities.VOLUME_RAID_CREATE
* C: LSM_CAP_VOLUME_RAID_CREATE

* New errors:
* Python: lsm.ErrorNumber.NOT_FOUND_DISK, lsm.ErrorNumber.DISK_NOT_FREE
* C: LSM_ERR_NOT_FOUND_DISK, LSM_ERR_DISK_NOT_FREE

* Design notes:
    * Why not create the pool only and then allocate volumes from it?
      Neither HP Smart Array nor LSI MegaRAID supports this.
      Both require at least one volume on any pool (RAID group), and a
      pool (RAID group) is only created when creating a volume.
      This is also the root reason that this method is dedicated to
      hardware RAID cards, not to SAN or NAS.

    * Why not allow creating multiple volumes instead of a single volume
      consuming all the space?
      Even though both vendors above support creating multiple volumes in
      a single command, it would complicate the API design and introduce
      many new errors. In my humble opinion, the most common use case is
      to create a single volume and let the OS create partitions on it.

    * Why not store the supported RAID types and strip sizes in the
      existing lsm.Client.capabilities() call?
      The RAID types could be stored in capabilities(), but both the
      plugin implementation and the user code would have to convert
      capabilities to RAID types. The strip size is a number; even though
      the two vendors mentioned above support the same list of strip
      sizes, there is no reason to bake in that limitation, and again a
      capability-to-strip-size conversion would be required if it were
      saved in capabilities().

    * Why not store the supported RAID types returned by
      volume_raid_create_cap_get() in a bitmap?
      A conversion would be required to do so. We just want to make
      plugin and API users' lives easier.

    * Why is strip size a mandatory argument, and where is the strip size
      change method?
      Changing the strip size after volume creation will normally take a
      long time for data migration. We will not introduce a strip size
      change method without a convincing use case.
      (A usage sketch of the new methods follows these notes.)
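
The usage sketch mentioned above, run against the simulator (disk count
and RAID type are arbitrary; illustrative only, not normative):

    import lsm
    from lsm import Disk, Volume

    c = lsm.Client('sim://')
    system = c.systems()[0]

    # Plain arrays rather than capability bits or a bitmap, per the
    # design notes above.
    raid_types, strip_sizes = c.volume_raid_create_cap_get(system)

    free_disks = [d for d in c.disks() if d.status & Disk.STATUS_FREE]
    if Volume.RAID_TYPE_RAID1 in raid_types and len(free_disks) >= 2:
        new_vol = c.volume_raid_create(
            'raid1_vol', Volume.RAID_TYPE_RAID1, free_disks[:2],
            Volume.VCR_STRIP_SIZE_DEFAULT)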

Changes in V2:

* Rebased on pool_raid_info->pool_member_info rename changes.
* Fix 'comparison between signed and unsigned integer' in
lsm_volume_raid_create() of lsm_mgmt.cpp.
* Initialize pointers in handle_volume_raid_create_cap_get() of
lsm_plugin_ipc.cpp.

Changes in V4:

* Use 'volume_raid_create' instead:
* lsm.Client.volume_raid_create()
* lsm.Client.volume_raid_create_cap_get()
* lsm_volume_raid_create_cap_get()
* lsm_volume_raid_create()
* lsm_plug_volume_raid_create()
* lsm_plug_volume_raid_create_cap_get()

* Use this capability instead:
* LSM_CAP_VOLUME_RAID_CREATE/lsm.Capabilities.VOLUME_RAID_CREATE

* Changed method description to 'Create a disk RAID pool and allocate entire
  full space to new volume'.

* Free member and set output values to sane if error occurred in
lsm_volume_raid_create_cap_get() of lsm_mgmt.cpp.

* Only free uint32 array if not NULL in handle_volume_create_raid_cap_get()
of lsm_plugin_ipc.cpp.

* Free lsm_sys memory before quit in handle_volume_create_raid_cap_get() of
lsm_plugin_ipc.cpp.

* Fix incorrect strip_size data conversion from JSON to C in
handle_volume_raid_create() of lsm_plugin_ipc.cpp.

* Free lsm_volume memory before quit in handle_volume_create_raid() of
lsm_plugin_ipc.cpp.

* Updated docstring of lsm.Client.volume_raid_create_cap_get() by
removing incorrect RAID types(RAID 15/16/51/61) as no plugin support them
yet.

* Updated docstring of lsm.Client.volume_raid_create() by removing incorrect
capabilities.

* Updated docstring of lsm.Client.volume_raid_create() by removing
NAME_CONFLICT error of duplicate call. Duplicate call should always
raise DISK_NOT_FREE error.

* Updated docstring of lsm.Client.volume_raid_create() by mentioning
the name of new pool should be automatically chose by plugin.

Signed-off-by: Gris Ge <***@redhat.com>
---
c_binding/include/libstoragemgmt/libstoragemgmt.h | 62 +++++++
.../libstoragemgmt/libstoragemgmt_capabilities.h | 1 +
.../include/libstoragemgmt/libstoragemgmt_error.h | 2 +
.../libstoragemgmt/libstoragemgmt_plug_interface.h | 57 +++++++
.../include/libstoragemgmt/libstoragemgmt_types.h | 3 +
c_binding/lsm_convert.cpp | 41 +++++
c_binding/lsm_convert.hpp | 12 ++
c_binding/lsm_mgmt.cpp | 128 ++++++++++++++
c_binding/lsm_plugin_ipc.cpp | 96 ++++++++++-
python_binding/lsm/_client.py | 187 +++++++++++++++++++++
python_binding/lsm/_common.py | 3 +
python_binding/lsm/_data.py | 3 +
12 files changed, 594 insertions(+), 1 deletion(-)

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt.h b/c_binding/include/libstoragemgmt/libstoragemgmt.h
index 85022b4..869d5fd 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt.h
@@ -891,6 +891,68 @@ int LSM_DLL_EXPORT lsm_pool_member_info(
lsm_pool_member_type *member_type, lsm_string_list **member_ids,
lsm_flag flags);

+/**
+ * Query all supported RAID types and strip sizes which could be used
+ * in the lsm_volume_raid_create() function.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid connection
+ * @param[in] system
+ * The lsm_sys type.
+ * @param[out] supported_raid_types
+ * The pointer of uint32_t array. Containing
+ * lsm_volume_raid_type values.
+ * You need to free this memory by yourself.
+ * @param[out] supported_raid_type_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_raid_types array.
+ * @param[out] supported_strip_sizes
+ * The pointer of uint32_t array. Containing
+ * all supported strip sizes.
+ * You need to free this memory by yourself.
+ * @param[out] supported_strip_size_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_strip_sizes array.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+int LSM_DLL_EXPORT lsm_volume_raid_create_cap_get(
+ lsm_connect *c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags);
+
+/**
+ * Create a disk RAID pool and allocate entire full space to new volume.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid connection
+ * @param[in] name String. Name for the new volume. It might be ignored or
+ *                  altered on some hardware RAID cards in order to fit
+ *                  their limitations.
+ * @param[in] raid_type
+ * Enum of lsm_volume_raid_type. Please refer to the returns
+ * of lsm_volume_raid_create_cap_get() function for
+ *                  supported RAID types.
+ * @param[in] disks
+ * An array of lsm_disk types
+ * @param[in] disk_count
+ * The count of lsm_disk in 'disks' argument.
+ * Count starts with 1.
+ * @param[in] strip_size
+ * uint32_t. The strip size in bytes. Please refer to
+ * the returns of lsm_volume_raid_create_cap_get() function
+ * for supported strip sizes.
+ * @param[out] new_volume
+ * Newly created volume, Pointer to the lsm_volume type
+ * pointer.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+int LSM_DLL_EXPORT lsm_volume_raid_create(
+ lsm_connect *c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags);
+
#ifdef __cplusplus
}
#endif
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
index 943f4a4..07bf8b0 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_capabilities.h
@@ -117,6 +117,7 @@ typedef enum {
LSM_CAP_POOL_MEMBER_INFO = 221,
/**^ Query pool member information */

+ LSM_CAP_VOLUME_RAID_CREATE = 222,
} lsm_capability_type;

/**
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_error.h b/c_binding/include/libstoragemgmt/libstoragemgmt_error.h
index 0b28ddc..29c460a 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_error.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_error.h
@@ -63,6 +63,7 @@ typedef enum {
LSM_ERR_NOT_FOUND_VOLUME = 205, /**< Specified volume not found */
LSM_ERR_NOT_FOUND_NFS_EXPORT = 206, /**< NFS export not found */
LSM_ERR_NOT_FOUND_SYSTEM = 208, /**< System not found */
+ LSM_ERR_NOT_FOUND_DISK = 209,

LSM_ERR_NOT_LICENSED = 226, /**< Need license for feature */

@@ -90,6 +91,7 @@ typedef enum {

LSM_ERR_EMPTY_ACCESS_GROUP = 511,
LSM_ERR_POOL_NOT_READY = 512,
+ LSM_ERR_DISK_NOT_FREE = 513,

} lsm_error_number;

diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
index cdc8dc9..460054d 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_plug_interface.h
@@ -853,6 +853,61 @@ typedef int (*lsm_plug_pool_member_info)(
lsm_pool_member_type *member_type, lsm_string_list **member_ids,
lsm_flag flags);

+/**
+ * Query all supported RAID types and strip sizes which could be used
+ * in the lsm_volume_raid_create() function.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid lsm plug-in pointer
+ * @param[in] system
+ * The lsm_sys type.
+ * @param[out] supported_raid_types
+ * The pointer of uint32_t array. Containing
+ * lsm_volume_raid_type values.
+ * @param[out] supported_raid_type_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_raid_types array.
+ * @param[out] supported_strip_sizes
+ * The pointer of uint32_t array. Containing
+ * all supported strip sizes.
+ * @param[out] supported_strip_size_count
+ * The pointer of uint32_t. Indicate the item count of
+ * supported_strip_sizes array.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+typedef int (*lsm_plug_volume_raid_create_cap_get)(
+ lsm_plugin_ptr c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags);
+
+/**
+ * Create a disk RAID pool and allocate entire full space to new volume.
+ * New in version 1.2, only available for hardware RAID cards.
+ * @param[in] c Valid lsm plug-in pointer
+ * @param[in] name String. Name for the new volume. It might be ignored or
+ *                 altered on some hardware RAID cards in order to fit
+ *                 their limitations.
+ * @param[in] raid_type
+ * Enum of lsm_volume_raid_type.
+ * @param[in] disks
+ * An array of lsm_disk types
+ * @param[in] disk_count
+ * The count of lsm_disk in 'disks' argument.
+ * @param[in] strip_size
+ * uint32_t. The strip size in bytes.
+ * @param[out] new_volume
+ * Newly created volume, Pointer to the lsm_volume type
+ * pointer.
+ * @param[in] flags Reserved, set to 0
+ * @return LSM_ERR_OK on success else error reason.
+ */
+typedef int (*lsm_plug_volume_raid_create)(
+ lsm_plugin_ptr c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags);
+
/** \struct lsm_ops_v1_2
* \brief Functions added in version 1.2
 * NOTE: This structure will change during the development until version 1.2
@@ -862,6 +917,8 @@ struct lsm_ops_v1_2 {
lsm_plug_volume_raid_info vol_raid_info;
/**^ Query volume RAID information*/
lsm_plug_pool_member_info pool_member_info;
+ lsm_plug_volume_raid_create_cap_get vol_create_raid_cap_get;
+ lsm_plug_volume_raid_create vol_create_raid;
};

/**
diff --git a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
index 115ec5a..667c320 100644
--- a/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
+++ b/c_binding/include/libstoragemgmt/libstoragemgmt_types.h
@@ -297,6 +297,9 @@ typedef enum {
LSM_TARGET_PORT_TYPE_ISCSI = 4
} lsm_target_port_type;

+#define LSM_VOLUME_VCR_STRIP_SIZE_DEFAULT 0
+/** ^ Plugin and hardware RAID will use their default strip size */
+
#ifdef __cplusplus
}
#endif
diff --git a/c_binding/lsm_convert.cpp b/c_binding/lsm_convert.cpp
index 7baf2e6..9b799ab 100644
--- a/c_binding/lsm_convert.cpp
+++ b/c_binding/lsm_convert.cpp
@@ -617,3 +617,44 @@ Value target_port_to_value(lsm_target_port *tp)
}
return Value();
}
+
+int values_to_uint32_array(
+ Value &value, uint32_t **uint32_array, uint32_t *count)
+{
+ int rc = LSM_ERR_OK;
+ try {
+ std::vector<Value> data = value.asArray();
+ *count = data.size();
+ if (*count){
+ *uint32_array = (uint32_t *)malloc(sizeof(uint32_t) * *count);
+ if (*uint32_array){
+ uint32_t i;
+ for( i = 0; i < *count; i++ ){
+ (*uint32_array)[i] = data[i].asUint32_t();
+ }
+ }else{
+ rc = LSM_ERR_NO_MEMORY;
+ }
+ }
+ } catch( const ValueException &ve) {
+ if( *count ) {
+ free(*uint32_array);
+ *uint32_array = NULL;
+ *count = 0;
+ }
+ rc = LSM_ERR_LIB_BUG;
+ }
+ return rc;
+}
+
+Value uint32_array_to_value(uint32_t *uint32_array, uint32_t count)
+{
+ std::vector<Value> rc;
+ if( uint32_array && count ) {
+ uint32_t i;
+ for( i = 0; i < count; i++ ) {
+ rc.push_back(uint32_array[i]);
+ }
+ }
+ return rc;
+}
diff --git a/c_binding/lsm_convert.hpp b/c_binding/lsm_convert.hpp
index fe1e91d..907a229 100644
--- a/c_binding/lsm_convert.hpp
+++ b/c_binding/lsm_convert.hpp
@@ -284,4 +284,16 @@ lsm_target_port LSM_DLL_LOCAL *value_to_target_port(Value &tp);
*/
Value LSM_DLL_LOCAL target_port_to_value(lsm_target_port *tp);

+/**
+ * Converts a value to array of uint32.
+ */
+int LSM_DLL_LOCAL values_to_uint32_array(
+ Value &value, uint32_t **uint32_array, uint32_t *count);
+
+/**
+ * Converts an array of uint32 to a value.
+ */
+Value LSM_DLL_LOCAL uint32_array_to_value(
+ uint32_t *uint32_array, uint32_t count);
+
#endif
diff --git a/c_binding/lsm_mgmt.cpp b/c_binding/lsm_mgmt.cpp
index 7469127..d57edcc 100644
--- a/c_binding/lsm_mgmt.cpp
+++ b/c_binding/lsm_mgmt.cpp
@@ -2268,3 +2268,131 @@ int lsm_nfs_export_delete( lsm_connect *c, lsm_nfs_export *e, lsm_flag flags)
int rc = rpc(c, "export_remove", parameters, response);
return rc;
}
+
+int lsm_volume_raid_create_cap_get(
+ lsm_connect *c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags)
+{
+ CONN_SETUP(c);
+
+ if( !supported_raid_types || !supported_raid_type_count ||
+ !supported_strip_sizes || !supported_strip_size_count) {
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ std::map<std::string, Value> p;
+ p["system"] = system_to_value(system);
+ p["flags"] = Value(flags);
+
+ Value parameters(p);
+ Value response;
+
+ int rc = rpc(c, "volume_raid_create_cap_get", parameters, response);
+ try {
+ std::vector<Value> j = response.asArray();
+
+ rc = values_to_uint32_array(
+ j[0], supported_raid_types, supported_raid_type_count);
+
+ if( rc != LSM_ERR_OK ){
+ *supported_raid_types = NULL;
+ *supported_raid_type_count = 0;
+ *supported_strip_sizes = NULL;
+ *supported_strip_size_count = 0;
+ return rc;
+ }
+
+ rc = values_to_uint32_array(
+ j[1], supported_strip_sizes, supported_strip_size_count);
+ if( rc != LSM_ERR_OK ){
+ free(*supported_raid_types);
+ *supported_raid_types = NULL;
+ *supported_raid_type_count = 0;
+ *supported_strip_sizes = NULL;
+ *supported_strip_size_count = 0;
+ }
+ } catch( const ValueException &ve ) {
+ rc = logException(c, LSM_ERR_LIB_BUG, "Unexpected type",
+ ve.what());
+ }
+ return rc;
+}
+
+int lsm_volume_raid_create(
+ lsm_connect *c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags)
+{
+ CONN_SETUP(c);
+
+ if (disk_count == 0){
+ return logException(
+            c, LSM_ERR_INVALID_ARGUMENT, "Requires at least one disk", NULL);
+ }
+
+ if (raid_type == LSM_VOLUME_RAID_TYPE_RAID1 && disk_count != 2){
+ return logException(
+ c, LSM_ERR_INVALID_ARGUMENT, "RAID 1 only allows two disks", NULL);
+ }
+
+ if (raid_type == LSM_VOLUME_RAID_TYPE_RAID5 && disk_count < 3){
+ return logException(
+            c, LSM_ERR_INVALID_ARGUMENT, "RAID 5 requires 3 or more disks",
+ NULL);
+ }
+
+ if (raid_type == LSM_VOLUME_RAID_TYPE_RAID6 && disk_count < 4){
+ return logException(
+            c, LSM_ERR_INVALID_ARGUMENT, "RAID 6 requires 4 or more disks",
+ NULL);
+ }
+
+    /* RAID 10, 50 and 60 also require an even disk count; reject odd
+       counts as well as too-few disks. */
+    if (raid_type == LSM_VOLUME_RAID_TYPE_RAID10 &&
+        (disk_count % 2 || disk_count < 4)){
+        return logException(
+            c, LSM_ERR_INVALID_ARGUMENT,
+            "RAID 10 requires an even disk count and 4 or more disks", NULL);
+    }
+    if (raid_type == LSM_VOLUME_RAID_TYPE_RAID50 &&
+        (disk_count % 2 || disk_count < 6)){
+        return logException(
+            c, LSM_ERR_INVALID_ARGUMENT,
+            "RAID 50 requires an even disk count and 6 or more disks", NULL);
+    }
+    if (raid_type == LSM_VOLUME_RAID_TYPE_RAID60 &&
+        (disk_count % 2 || disk_count < 8)){
+        return logException(
+            c, LSM_ERR_INVALID_ARGUMENT,
+            "RAID 60 requires an even disk count and 8 or more disks", NULL);
+    }
+
+ if (CHECK_RP(new_volume)){
+ return LSM_ERR_INVALID_ARGUMENT;
+ }
+
+ std::map<std::string, Value> p;
+ p["name"] = Value(name);
+ p["raid_type"] = Value((int32_t)raid_type);
+ p["strip_size"] = Value((int32_t)strip_size);
+ p["flags"] = Value(flags);
+ std::vector<Value> disks_value;
+ for (uint32_t i = 0; i < disk_count; i++){
+ disks_value.push_back(disk_to_value(disks[i]));
+ }
+ p["disks"] = disks_value;
+
+ Value parameters(p);
+ Value response;
+
+ int rc = rpc(c, "volume_raid_create", parameters, response);
+ if( LSM_ERR_OK == rc ) {
+ *new_volume = value_to_volume(response);
+ }
+ return rc;
+
+}
diff --git a/c_binding/lsm_plugin_ipc.cpp b/c_binding/lsm_plugin_ipc.cpp
index dcae563..80baf1a 100644
--- a/c_binding/lsm_plugin_ipc.cpp
+++ b/c_binding/lsm_plugin_ipc.cpp
@@ -2203,6 +2203,98 @@ static int iscsi_chap(lsm_plugin_ptr p, Value &params, Value &response)
}
return rc;
}
+static int handle_volume_raid_create_cap_get(
+ lsm_plugin_ptr p, Value &params, Value &response)
+{
+ int rc = LSM_ERR_NO_SUPPORT;
+ if( p && p->ops_v1_2 && p->ops_v1_2->vol_create_raid_cap_get ) {
+ Value v_system = params["system"];
+
+ if( IS_CLASS_SYSTEM(v_system) &&
+ LSM_FLAG_EXPECTED_TYPE(params) ) {
+ lsm_system *sys = value_to_system(v_system);
+
+ uint32_t *supported_raid_types = NULL;
+ uint32_t supported_raid_type_count = 0;
+
+ uint32_t *supported_strip_sizes = NULL;
+ uint32_t supported_strip_size_count = 0;
+
+ rc = p->ops_v1_2->vol_create_raid_cap_get(
+ p, sys, &supported_raid_types, &supported_raid_type_count,
+ &supported_strip_sizes, &supported_strip_size_count,
+ LSM_FLAG_GET_VALUE(params));
+
+ if( LSM_ERR_OK == rc ) {
+ std::vector<Value> result;
+ result.push_back(
+ uint32_array_to_value(
+ supported_raid_types, supported_raid_type_count));
+ result.push_back(
+ uint32_array_to_value(
+ supported_strip_sizes, supported_strip_size_count));
+ response = Value(result);
+ free(supported_raid_types);
+ free(supported_strip_sizes);
+ }
+
+ lsm_system_record_free(sys);
+
+ } else {
+ rc = LSM_ERR_TRANSPORT_INVALID_ARG;
+ }
+ }
+ return rc;
+
+}
+
+static int handle_volume_raid_create(
+ lsm_plugin_ptr p, Value &params, Value &response)
+{
+ int rc = LSM_ERR_NO_SUPPORT;
+ if( p && p->ops_v1_2 && p->ops_v1_2->vol_create_raid ) {
+ Value v_name = params["name"];
+ Value v_raid_type = params["raid_type"];
+ Value v_strip_size = params["strip_size"];
+ Value v_disks = params["disks"];
+
+ if( Value::string_t == v_name.valueType() &&
+ Value::numeric_t == v_raid_type.valueType() &&
+ Value::numeric_t == v_strip_size.valueType() &&
+ Value::array_t == v_disks.valueType() &&
+ LSM_FLAG_EXPECTED_TYPE(params) ) {
+
+ lsm_disk **disks = NULL;
+ uint32_t disk_count = 0;
+ rc = value_array_to_disks(v_disks, &disks, &disk_count);
+ if( LSM_ERR_OK != rc ) {
+ return rc;
+ }
+
+ const char *name = v_name.asC_str();
+ lsm_volume_raid_type raid_type =
+ (lsm_volume_raid_type) v_raid_type.asInt32_t();
+ uint32_t strip_size = v_strip_size.asUint32_t();
+
+ lsm_volume *new_vol = NULL;
+
+ rc = p->ops_v1_2->vol_create_raid(
+ p, name, raid_type, disks, disk_count, strip_size,
+ &new_vol, LSM_FLAG_GET_VALUE(params));
+
+ if( LSM_ERR_OK == rc ) {
+ response = volume_to_value(new_vol);
+ lsm_volume_record_free(new_vol);
+ }
+
+ lsm_disk_record_array_free(disks, disk_count);
+
+ } else {
+ rc = LSM_ERR_TRANSPORT_INVALID_ARG;
+ }
+ }
+ return rc;
+}

/**
* map of function pointers
@@ -2258,7 +2350,9 @@ static std::map<std::string,handler> dispatch = static_map<std::string,handler>
("volumes_accessible_by_access_group", vol_accessible_by_ag)
("volumes", handle_volumes)
("volume_raid_info", handle_volume_raid_info)
- ("pool_member_info", handle_pool_member_info);
+ ("pool_member_info", handle_pool_member_info)
+ ("volume_raid_create", handle_volume_raid_create)
+ ("volume_raid_create_cap_get", handle_volume_raid_create_cap_get);

static int process_request(lsm_plugin_ptr p, const std::string &method, Value &request,
Value &response)
diff --git a/python_binding/lsm/_client.py b/python_binding/lsm/_client.py
index f3c9e6c..16929a5 100644
--- a/python_binding/lsm/_client.py
+++ b/python_binding/lsm/_client.py
@@ -1172,3 +1172,190 @@ def pool_member_info(self, pool, flags=FLAG_RSVD):
lsm.Capabilities.POOL_MEMBER_INFO
"""
return self._tp.rpc('pool_member_info', _del_self(locals()))
+
+ @_return_requires([[int], [int]])
+ def volume_raid_create_cap_get(self, system, flags=FLAG_RSVD):
+ """
+ lsm.Client.volume_raid_create_cap_get(
+ self, system, flags=lsm.Client.FLAG_RSVD)
+
+ Version:
+ 1.2
+ Usage:
+ This method is dedicated to local hardware RAID cards.
+            Query all supported RAID types and strip sizes which could
+            be used by the lsm.Client.volume_raid_create() method.
+ Parameters:
+ system (lsm.System)
+ Instance of lsm.System
+ flags (int)
+ Optional. Reserved for future use.
+ Should be set as lsm.Client.FLAG_RSVD.
+ Returns:
+ [raid_types, strip_sizes]
+ raid_types ([int])
+ List of integer, possible values are:
+ Volume.RAID_TYPE_RAID0
+ Volume.RAID_TYPE_RAID1
+ Volume.RAID_TYPE_RAID5
+ Volume.RAID_TYPE_RAID6
+ Volume.RAID_TYPE_RAID10
+ Volume.RAID_TYPE_RAID50
+ Volume.RAID_TYPE_RAID60
+ strip_sizes ([int])
+ List of integer. Stripe size in bytes.
+ SpecialExceptions:
+ LsmError
+ lsm.ErrorNumber.NO_SUPPORT
+ Method not supported.
+ Sample:
+ lsm_client = lsm.Client('sim://')
+ lsm_sys = lsm_client.systems()[0]
+ disks = lsm_client.disks(
+ search_key='system_id', search_value=lsm_sys.id)
+
+        free_disks = [d for d in disks if d.status & Disk.STATUS_FREE]
+ supported_raid_types, supported_strip_sizes = \
+ lsm_client.volume_raid_create_cap_get(lsm_sys)
+ new_vol = lsm_client.volume_raid_create(
+ 'test_volume_raid_create', supported_raid_types[0],
+ free_disks, supported_strip_sizes[0])
+
+ Capability:
+            lsm.Capabilities.VOLUME_RAID_CREATE
+ This method is mandatory when volume_raid_create() is
+ supported.
+ """
+ return self._tp.rpc('volume_raid_create_cap_get', _del_self(locals()))
+
+ @_return_requires(Volume)
+ def volume_raid_create(self, name, raid_type, disks, strip_size,
+ flags=FLAG_RSVD):
+ """
+ lsm.Client.volume_raid_create(self, name, raid_type, disks,
+ strip_size, flags=lsm.Client.FLAG_RSVD)
+
+ Version:
+ 1.2
+ Usage:
+ This method is dedicated to local hardware RAID cards.
+            Create a disk RAID pool and allocate its entire storage space
+            to a new volume using the requested volume name.
+            When dealing with RAID10, 50 or 60, the first half of 'disks'
+            will be located in one bottom-layer RAID group.
+            The new volume and new pool will be created within the same
+            system as the provided disks.
+            This method does not allow duplicate calls; a duplicate call
+            will raise LsmError with ErrorNumber.DISK_NOT_FREE.
+            User should check disk.status for Disk.STATUS_FREE before
+            invoking this method.
+ Parameters:
+ name (string)
+ The name for new volume.
+                The requested volume name might be ignored due to
+                restrictions of hardware RAID vendors.
+                The pool name will be automatically chosen by the plugin.
+ raid_type (int)
+ The RAID type for the RAID group, possible values are:
+ Volume.RAID_TYPE_RAID0
+ Volume.RAID_TYPE_RAID1
+ Volume.RAID_TYPE_RAID5
+ Volume.RAID_TYPE_RAID6
+ Volume.RAID_TYPE_RAID10
+                    Volume.RAID_TYPE_RAID50
+                    Volume.RAID_TYPE_RAID60
+                Please check the returns of volume_raid_create_cap_get()
+                to get all supported RAID types of the current hardware
+                RAID card.
+ disks ([lsm.Disks,])
+ A list of lsm.Disk objects. Free disks used for new RAID group.
+ strip_size (int)
+                The strip size in bytes.
+                Setting strip_size to Volume.VCR_STRIP_SIZE_DEFAULT allows
+                hardware RAID cards to choose their default value.
+                Please use the volume_raid_create_cap_get() method to get
+                all supported strip sizes of the current hardware RAID card.
+                Volume.VCR_STRIP_SIZE_DEFAULT is always supported when
+                lsm.Capabilities.VOLUME_RAID_CREATE is supported.
+ flags (int)
+ Optional. Reserved for future use.
+ Should be set as lsm.Client.FLAG_RSVD.
+ Returns:
+ lsm.Volume
+ The lsm.Volume object for newly created volume.
+ SpecialExceptions:
+ LsmError
+ lsm.ErrorNumber.NO_SUPPORT
+ Method not supported or RAID type not supported.
+ lsm.ErrorNumber.DISK_NOT_FREE
+ Disk is not in Disk.STATUS_FREE status.
+ lsm.ErrorNumber.NOT_FOUND_DISK
+ Disk not found
+ lsm.ErrorNumber.INVALID_ARGUMENT
+ 1. Invalid input argument data.
+ 2. Disks are not from the same system.
+ 3. Disks are not from the same enclosure.
+ 4. Invalid strip_size.
+                5. Disk count does not meet the minimum requirement:
+ RAID1: len(disks) == 2
+ RAID5: len(disks) >= 3
+ RAID6: len(disks) >= 4
+ RAID10: len(disks) % 2 == 0 and len(disks) >= 4
+ RAID50: len(disks) % 2 == 0 and len(disks) >= 6
+ RAID60: len(disks) % 2 == 0 and len(disks) >= 8
+ lsm.ErrorNumber.NAME_CONFLICT
+                Requested name is already used by another volume.
+ Sample:
+ lsm_client = lsm.Client('sim://')
+ disks = lsm_client.disks()
+            free_disks = [d for d in disks if d.status & Disk.STATUS_FREE]
+            new_vol = lsm_client.volume_raid_create('raid0_vol1',
+                Volume.RAID_TYPE_RAID0, free_disks, Volume.VCR_STRIP_SIZE_DEFAULT)
+ Capability:
+            lsm.Capabilities.VOLUME_RAID_CREATE
+                Indicates the current system supports the volume_raid_create()
+                method. At least one RAID type should be supported, and
+                strip_size == Volume.VCR_STRIP_SIZE_DEFAULT is supported.
+ """
+ if len(disks) == 0:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: no disk included")
+
+ if raid_type == Volume.RAID_TYPE_RAID1 and len(disks) != 2:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input disks argument: RAID 1 only allows 2 disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID5 and len(disks) < 3:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input disks argument: RAID 5 requires 3 or more disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID6 and len(disks) < 4:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input disks argument: RAID 6 requires 4 or more disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID10:
+ if len(disks) % 2 or len(disks) < 4:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: "
+                    "RAID 10 requires an even disk count and 4 or more disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID50:
+ if len(disks) % 2 or len(disks) < 6:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: "
+                    "RAID 50 requires an even disk count and 6 or more disks")
+
+ if raid_type == Volume.RAID_TYPE_RAID60:
+ if len(disks) % 2 or len(disks) < 8:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: "
+                    "RAID 60 requires an even disk count and 8 or more disks")
+
+ return self._tp.rpc('volume_raid_create', _del_self(locals()))
diff --git a/python_binding/lsm/_common.py b/python_binding/lsm/_common.py
index 4c87661..be0c734 100644
--- a/python_binding/lsm/_common.py
+++ b/python_binding/lsm/_common.py
@@ -447,6 +447,7 @@ class ErrorNumber(object):
NOT_FOUND_VOLUME = 205
NOT_FOUND_NFS_EXPORT = 206
NOT_FOUND_SYSTEM = 208
+ NOT_FOUND_DISK = 209

NOT_LICENSED = 226

@@ -478,6 +479,8 @@ class ErrorNumber(object):

POOL_NOT_READY = 512 # Pool is not ready for create/resize/etc

+ DISK_NOT_FREE = 513 # Disk is not in DISK.STATUS_FREE status.
+
_LOCALS = locals()

@staticmethod
diff --git a/python_binding/lsm/_data.py b/python_binding/lsm/_data.py
index 2f42eaf..0e16311 100644
--- a/python_binding/lsm/_data.py
+++ b/python_binding/lsm/_data.py
@@ -306,6 +306,8 @@ class Volume(IData):
MIN_IO_SIZE_UNKNOWN = 0
OPT_IO_SIZE_UNKNOWN = 0

+ VCR_STRIP_SIZE_DEFAULT = 0
+
def __init__(self, _id, _name, _vpd83, _block_size, _num_of_blocks,
_admin_state, _system_id, _pool_id, _plugin_data=None):
self._id = _id # Identifier
@@ -760,6 +762,7 @@ class Capabilities(IData):

DISKS = 220
POOL_MEMBER_INFO = 221
+ VOLUME_RAID_CREATE = 222

def _to_dict(self):
return {'class': self.__class__.__name__,
--
1.8.3.1
Gris Ge
2015-04-24 12:06:27 UTC
Permalink
* New commands:
* volume-raid-create
        Create a RAID group and allocate all of its space to a single new
        volume.

* volume-raid-create-cap
        Query supported RAID types and strip sizes of the volume-raid-create
        command.

* Manpage updated.
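
A hypothetical session (the system and disk IDs will differ per setup):

    $ lsmcli volume-raid-create-cap --sys sim-01
    $ lsmcli volume-raid-create --name raid5_vol --raid-type RAID5 \
          --disk DISK_ID_1 --disk DISK_ID_2 --disk DISK_ID_3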

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

Changes in V4:

* Renamed to 'volume-raid-create' and 'volume-raid-create-cap' commands.

* Add bash auto completion for these commands and their aliases.

Signed-off-by: Gris Ge <***@redhat.com>
---
doc/man/lsmcli.1.in | 32 +++++++++++++++++++++
tools/bash_completion/lsmcli | 13 +++++++++
tools/lsmcli/cmdline.py | 67 +++++++++++++++++++++++++++++++++++++++++++-
tools/lsmcli/data_display.py | 36 +++++++++++++++++++++++-
4 files changed, 146 insertions(+), 2 deletions(-)

diff --git a/doc/man/lsmcli.1.in b/doc/man/lsmcli.1.in
index 6faf690..b1ef119 100644
--- a/doc/man/lsmcli.1.in
+++ b/doc/man/lsmcli.1.in
@@ -232,6 +232,34 @@ Optional. Provisioning type. Valid values are: DEFAULT, THIN, FULL.
Provisioning enabled volume. \fBFULL\fR means requiring a fully allocated
volume.

+.SS volume-raid-create
+Creates a volume on hardware RAID on given disks.
+.TP 15
+\fB--name\fR \fI<NAME>\fR
+Required. Volume name. Might be altered or ignored due to hardware RAID card
+vendor limitations.
+.TP
+\fB--raid-type\fR \fI<RAID_TYPE>\fR
+Required. Could be one of these values: \fBRAID0\fR, \fBRAID1\fR, \fBRAID5\fR,
+\fBRAID6\fR, \fBRAID10\fR, \fBRAID50\fR, \fBRAID60\fR. The supported RAID
+types of current RAID card could be queried via command
+"\fBvolume-raid-create-cap\fR".
+.TP
+\fB--disk\fR \fI<DISK_ID>\fR
+Required. Repeatable. The disk ID for new RAID group.
+.TP
+\fB--strip-size\fR \fI<STRIP_SIZE>\fR
+Optional. The strip size in bytes on each disk. If not defined, the hardware
+card will use its vendor default value. The supported strip sizes of the
+current RAID card can be queried via the command "\fBvolume-raid-create-cap\fR".
+
+.SS volume-raid-create-cap
+Query the support status of the volume-raid-create command for the current
+hardware RAID card.
+.TP 15
+\fB--sys\fR \fI<SYS_ID>\fR
+Required. ID of the system to query for capabilities.
+
.SS volume-delete
.TP 15
Delete a volume given its ID
@@ -586,6 +614,10 @@ Alias of 'list --type target_ports'
Alias of 'plugin-info'
.SS vc
Alias of 'volume-create'
+.SS vrc
+Alias of 'volume-raid-create'
+.SS vrcc
+Alias of 'volume-raid-create-cap'
.SS vd
Alias of 'volume-delete'
.SS vr
diff --git a/tools/bash_completion/lsmcli b/tools/bash_completion/lsmcli
index 164604f..eb8fe8e 100644
--- a/tools/bash_completion/lsmcli
+++ b/tools/bash_completion/lsmcli
@@ -125,6 +125,8 @@ function _lsm()
fs_dependants_args="--fs"
file_clone_args="--fs --src --dst --backing-snapshot"
pool_member_info_args="--pool"
+ volume_raid_create_args="--name --disk --raid-type --strip-size"
+ volume_raid_create_cap_args="--sys"

# These operations can potentially be slow and cause hangs depending on plugin and configuration
if [[ ${NO_VALUE_LOOKUP} -ne 0 ]] ; then
@@ -403,6 +405,17 @@ function _lsm()
return 0
;;
+        volume-raid-create|vrc)
+            possible_args "${volume_raid_create_args}"
+            COMPREPLY=( $(compgen -W "${potential_args}" -- ${cur}) )
+            return 0
+            ;;
+
+        volume-raid-create-cap|vrcc)
+            possible_args "${volume_raid_create_cap_args}"
+            COMPREPLY=( $(compgen -W "${potential_args}" -- ${cur}) )
+            return 0
+            ;;
        *)
;;
esac

diff --git a/tools/lsmcli/cmdline.py b/tools/lsmcli/cmdline.py
index d26da80..cf0f3d6 100644
--- a/tools/lsmcli/cmdline.py
+++ b/tools/lsmcli/cmdline.py
@@ -40,7 +40,7 @@
from lsm.lsmcli.data_display import (
DisplayData, PlugData, out,
vol_provision_str_to_type, vol_rep_type_str_to_type, VolumeRAIDInfo,
- PoolRAIDInfo)
+ PoolRAIDInfo, VcrCap)


## Wraps the invocation to the command line
@@ -242,6 +242,37 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
),

dict(
+ name='volume-raid-create',
+ help='Creates a RAIDed volume on hardware RAID',
+ args=[
+ dict(name="--name", help='volume name', metavar='<NAME>'),
+ dict(name="--disk", metavar='<DISK>',
+ help='Free disks for new RAIDed volume.\n'
+                      'This is a repeatable argument.',
+ action='append'),
+ dict(name="--raid-type",
+ help="RAID type for the new RAID group. "
+ "Should be one of these:\n %s" %
+ "\n ".join(
+ VolumeRAIDInfo.VOL_CREATE_RAID_TYPES_STR),
+ choices=VolumeRAIDInfo.VOL_CREATE_RAID_TYPES_STR,
+ type=str.upper),
+ ],
+ optional=[
+ dict(name="--strip-size",
+ help="Strip size. " + size_help),
+ ],
+ ),
+
+ dict(
+ name='volume-raid-create-cap',
+        help='Query capability of creating a RAIDed volume on hardware RAID',
+ args=[
+ dict(sys_id_opt),
+ ],
+ ),
+
+ dict(
name='volume-delete',
help='Deletes a volume given its id',
args=[
@@ -635,6 +666,8 @@ def _get_item(l, the_id, friendly_name='item', raise_error=True):
['c', 'capabilities'],
['p', 'plugin-info'],
['vc', 'volume-create'],
+ ['vrc', 'volume-raid-create'],
+ ['vrcc', 'volume-raid-create-cap'],
['vd', 'volume-delete'],
['vr', 'volume-resize'],
['vm', 'volume-mask'],
@@ -1351,6 +1384,38 @@ def pool_member_info(self, args):
PoolRAIDInfo(
lsm_pool.id, *self.c.pool_member_info(lsm_pool))])

+ def volume_raid_create(self, args):
+ raid_type = VolumeRAIDInfo.raid_type_str_to_lsm(args.raid_type)
+
+ all_lsm_disks = self.c.disks()
+ lsm_disks = [d for d in all_lsm_disks if d.id in args.disk]
+ if len(lsm_disks) != len(args.disk):
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_DISK,
+ "Disk ID %s not found" %
+ ', '.join(set(args.disk) - set(d.id for d in all_lsm_disks)))
+
+ busy_disks = [d.id for d in lsm_disks if not d.status & Disk.STATUS_FREE]
+
+ if len(busy_disks) >= 1:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+ "Disk %s is not free" % ", ".join(busy_disks))
+
+ if args.strip_size:
+ strip_size = size_human_2_size_bytes(args.strip_size)
+ else:
+ strip_size = Volume.VCR_STRIP_SIZE_DEFAULT
+
+ self.display_data([
+ self.c.volume_raid_create(
+ args.name, raid_type, lsm_disks, strip_size)])
+
+ def volume_raid_create_cap(self, args):
+ lsm_sys = _get_item(self.c.systems(), args.sys, "System")
+ self.display_data([
+ VcrCap(lsm_sys.id, *self.c.volume_raid_create_cap_get(lsm_sys))])
+
## Displays file system dependants
def fs_dependants(self, args):
fs = _get_item(self.c.fs(), args.fs, "File System")
diff --git a/tools/lsmcli/data_display.py b/tools/lsmcli/data_display.py
index 115f15a..b9edf68 100644
--- a/tools/lsmcli/data_display.py
+++ b/tools/lsmcli/data_display.py
@@ -265,6 +265,9 @@ class VolumeRAIDInfo(object):
Volume.RAID_TYPE_UNKNOWN: 'UNKNOWN',
}

+ VOL_CREATE_RAID_TYPES_STR = [
+ 'RAID0', 'RAID1', 'RAID5', 'RAID6', 'RAID10', 'RAID50', 'RAID60']
+
def __init__(self, vol_id, raid_type, strip_size, disk_count,
min_io_size, opt_io_size):
self.vol_id = vol_id
@@ -278,6 +281,10 @@ def __init__(self, vol_id, raid_type, strip_size, disk_count,
def raid_type_to_str(raid_type):
return _enum_type_to_str(raid_type, VolumeRAIDInfo._RAID_TYPE_MAP)

+ @staticmethod
+ def raid_type_str_to_lsm(raid_type_str):
+ return _str_to_enum(raid_type_str, VolumeRAIDInfo._RAID_TYPE_MAP)
+

class PoolRAIDInfo(object):
_MEMBER_TYPE_MAP = {
@@ -298,6 +305,11 @@ def member_type_to_str(member_type):
return _enum_type_to_str(
member_type, PoolRAIDInfo._MEMBER_TYPE_MAP)

+class VcrCap(object):
+ def __init__(self, system_id, raid_types, strip_sizes):
+ self.system_id = system_id
+ self.raid_types = raid_types
+ self.strip_sizes = strip_sizes

class DisplayData(object):

@@ -598,6 +610,25 @@ def __init__(self):
'value_conv_human': POOL_RAID_INFO_VALUE_CONV_HUMAN,
}

+ VCR_CAP_HEADER = OrderedDict()
+ VCR_CAP_HEADER['system_id'] = 'System ID'
+ VCR_CAP_HEADER['raid_types'] = 'Supported RAID Types'
+ VCR_CAP_HEADER['strip_sizes'] = 'Supported Strip Sizes'
+
+ VCR_CAP_COLUMN_SKIP_KEYS = []
+
+ VCR_CAP_VALUE_CONV_ENUM = {
+ 'raid_types': lambda i: [VolumeRAIDInfo.raid_type_to_str(x) for x in i]
+ }
+ VCR_CAP_VALUE_CONV_HUMAN = ['strip_sizes']
+
+ VALUE_CONVERT[VcrCap] = {
+ 'headers': VCR_CAP_HEADER,
+ 'column_skip_keys': VCR_CAP_COLUMN_SKIP_KEYS,
+ 'value_conv_enum': VCR_CAP_VALUE_CONV_ENUM,
+ 'value_conv_human': VCR_CAP_VALUE_CONV_HUMAN,
+ }
+
@staticmethod
def _get_man_pro_value(obj, key, value_conv_enum, value_conv_human,
flag_human, flag_enum):
@@ -607,7 +638,10 @@ def _get_man_pro_value(obj, key, value_conv_enum, value_conv_human,
value = value_conv_enum[key](value)
if flag_human:
if key in value_conv_human:
- value = size_bytes_2_size_human(value)
+ if type(value) is list:
+ value = list(size_bytes_2_size_human(s) for s in value)
+ else:
+ value = size_bytes_2_size_human(value)
return value

@staticmethod
--
1.8.3.1
Gris Ge
2015-04-24 12:06:29 UTC
Permalink
* Updated lsm_ops_v1_2 structure by adding two methods:
* lsm_plug_volume_raid_create_cap_get()
* lsm_plug_volume_raid_create()

* These two methods simply return no support.

* Tested the C plugin interface by returning fake data, then changed back to
  no support. We will implement the real methods in the future.

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

Changes in V4:

* Method renamed to volume_raid_create().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/simc/simc_lsmplugin.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)

diff --git a/plugin/simc/simc_lsmplugin.c b/plugin/simc/simc_lsmplugin.c
index 6aa7bdb..5b9be3f 100644
--- a/plugin/simc/simc_lsmplugin.c
+++ b/plugin/simc/simc_lsmplugin.c
@@ -1001,9 +1001,29 @@ static int pool_member_info(
return rc;
}

+static int volume_raid_create_cap_get(
+ lsm_plugin_ptr c, lsm_system *system,
+ uint32_t **supported_raid_types, uint32_t *supported_raid_type_count,
+ uint32_t **supported_strip_sizes, uint32_t *supported_strip_size_count,
+ lsm_flag flags)
+{
+ return LSM_ERR_NO_SUPPORT;
+}
+
+static int volume_raid_create(
+ lsm_plugin_ptr c, const char *name, lsm_volume_raid_type raid_type,
+ lsm_disk *disks[], uint32_t disk_count,
+ uint32_t strip_size, lsm_volume **new_volume,
+ lsm_flag flags)
+{
+ return LSM_ERR_NO_SUPPORT;
+}
+
static struct lsm_ops_v1_2 ops_v1_2 = {
volume_raid_info,
pool_member_info,
+ volume_raid_create_cap_get,
+ volume_raid_create,
};

static int volume_enable_disable(lsm_plugin_ptr c, lsm_volume *v,
--
1.8.3.1
Gris Ge
2015-04-24 12:06:28 UTC
Permalink
* Add support for these methods:
* volume_raid_create()
* volume_raid_create_cap_get()

* Add 'strip_size' property to pool.
* Add 'is_hw_raid_vol' property to volume.
  If true, volume_delete() will also delete the parent pool.

* Bumped data version to 3.4.
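
A sketch of the resulting simulator behaviour (illustrative only):

    import lsm
    from lsm import Disk, Volume

    c = lsm.Client('sim://')
    free_disks = [d for d in c.disks() if d.status & Disk.STATUS_FREE]
    vol = c.volume_raid_create(
        'hw_raid_vol', Volume.RAID_TYPE_RAID1, free_disks[:2],
        Volume.VCR_STRIP_SIZE_DEFAULT)

    # The backing pool was created solely for this volume
    # ('is_hw_raid_vol'), so deleting the volume removes the pool too.
    c.volume_delete(vol)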

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

Changes in V4:

* Method renamed to volume_raid_create().

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/sim/simarray.py | 119 ++++++++++++++++++++++++++++++++++++++++++------
plugin/sim/simulator.py | 8 ++++
2 files changed, 113 insertions(+), 14 deletions(-)

diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index a0809f6..3b4ec2a 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -123,7 +123,7 @@ def data_disk_count(raid_type, disk_count):


class BackStore(object):
- VERSION = "3.3"
+ VERSION = "3.4"
VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
JOB_DEFAULT_DURATION = 1
JOB_DATA_TYPE_VOL = 1
@@ -133,7 +133,7 @@ class BackStore(object):
SYS_ID = "sim-01"
SYS_NAME = "LSM simulated storage plug-in"
BLK_SIZE = 512
- STRIP_SIZE = 131072 # 128 KiB
+ DEFAULT_STRIP_SIZE = 128 * 1024 # 128 KiB

_LIST_SPLITTER = '#'

@@ -142,7 +142,8 @@ class BackStore(object):
POOL_KEY_LIST = [
'id', 'name', 'status', 'status_info',
'element_type', 'unsupported_actions', 'raid_type',
- 'member_type', 'parent_pool_id', 'total_space', 'free_space']
+ 'member_type', 'parent_pool_id', 'total_space', 'free_space',
+ 'strip_size']

DISK_KEY_LIST = [
'id', 'name', 'total_space', 'disk_type', 'status',
@@ -150,7 +151,7 @@ class BackStore(object):

VOL_KEY_LIST = [
'id', 'vpd83', 'name', 'total_space', 'consumed_size',
- 'pool_id', 'admin_state', 'thinp']
+ 'pool_id', 'admin_state', 'thinp', 'is_hw_raid_vol']

TGT_KEY_LIST = [
'id', 'port_type', 'service_address', 'network_address',
@@ -172,6 +173,16 @@ class BackStore(object):
'options', 'exp_root_hosts_str', 'exp_rw_hosts_str',
'exp_ro_hosts_str']

+ SUPPORTED_VCR_RAID_TYPES = [
+ Volume.RAID_TYPE_RAID0, Volume.RAID_TYPE_RAID1,
+ Volume.RAID_TYPE_RAID5, Volume.RAID_TYPE_RAID6,
+ Volume.RAID_TYPE_RAID10, Volume.RAID_TYPE_RAID50,
+ Volume.RAID_TYPE_RAID60]
+
+ SUPPORTED_VCR_STRIP_SIZES = [
+ 8 * 1024, 16 * 1024, 32 * 1024, 64 * 1024, 128 * 1024, 256 * 1024,
+ 512 * 1024, 1024 * 1024]
+
def __init__(self, statefile, timeout):
if not os.path.exists(statefile):
os.close(os.open(statefile, os.O_WRONLY | os.O_CREAT))
@@ -218,6 +229,7 @@ def __init__(self, statefile, timeout):
"parent_pool_id INTEGER, " # Indicate this pool is allocated from
# other pool
"member_type INTEGER, "
+ "strip_size INTEGER, "
"total_space LONG);\n") # total_space here is only for
# sub-pool (pool from pool)

@@ -244,6 +256,10 @@ def __init__(self, statefile, timeout):
# support.
"admin_state INTEGER, "
"thinp INTEGER NOT NULL, "
+            "is_hw_raid_vol INTEGER, "  # Once its volume is deleted, the
+                                        # pool will be deleted too. For
+                                        # HW RAID simulation only.
+
"pool_id INTEGER NOT NULL, "
"FOREIGN KEY(pool_id) "
"REFERENCES pools(id) ON DELETE CASCADE);\n")
@@ -362,6 +378,7 @@ def __init__(self, statefile, timeout):
pool0.raid_type,
pool0.member_type,
pool0.parent_pool_id,
+ pool0.strip_size,
pool1.total_space total_space,
pool1.total_space -
pool2.vol_consumed_size -
@@ -450,6 +467,7 @@ def __init__(self, statefile, timeout):
vol.pool_id,
vol.admin_state,
vol.thinp,
+ vol.is_hw_raid_vol,
vol_mask.ag_id ag_id
FROM
volumes vol
@@ -926,7 +944,11 @@ def sim_pool_of_id(self, sim_pool_id):
ErrorNumber.NOT_FOUND_POOL, "Pool")

def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
- element_type, unsupported_actions=0):
+ element_type, unsupported_actions=0,
+ strip_size=0):
+ if strip_size == 0:
+ strip_size = BackStore.DEFAULT_STRIP_SIZE
+
self._data_add(
'pools',
{
@@ -937,6 +959,7 @@ def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
'unsupported_actions': unsupported_actions,
'raid_type': raid_type,
'member_type': Pool.MEMBER_TYPE_DISK,
+ 'strip_size': strip_size,
})

data_disk_count = PoolRAID.data_disk_count(
@@ -1029,7 +1052,9 @@ def _block_rounding(size_bytes):
return (size_bytes + BackStore.BLK_SIZE - 1) / \
BackStore.BLK_SIZE * BackStore.BLK_SIZE

- def sim_vol_create(self, name, size_bytes, sim_pool_id, thinp):
+ def sim_vol_create(self, name, size_bytes, sim_pool_id, thinp,
+ is_hw_raid_vol=0):
+
size_bytes = BackStore._block_rounding(size_bytes)
self._check_pool_free_space(sim_pool_id, size_bytes)
sim_vol = dict()
@@ -1040,6 +1065,7 @@ def sim_vol_create(self, name, size_bytes, sim_pool_id, thinp):
sim_vol['total_space'] = size_bytes
sim_vol['consumed_size'] = size_bytes
sim_vol['admin_state'] = Volume.ADMIN_STATE_ENABLED
+ sim_vol['is_hw_raid_vol'] = is_hw_raid_vol

try:
self._data_add("volumes", sim_vol)
@@ -1055,7 +1081,7 @@ def sim_vol_delete(self, sim_vol_id):
        This does not check whether the volume exists or not.
"""
# Check existence.
- self.sim_vol_of_id(sim_vol_id)
+ sim_vol = self.sim_vol_of_id(sim_vol_id)

if self._sim_ag_ids_of_masked_vol(sim_vol_id):
raise LsmError(
@@ -1070,8 +1096,11 @@ def sim_vol_delete(self, sim_vol_id):
raise LsmError(
ErrorNumber.PLUGIN_BUG,
"Requested volume is a replication source")
-
- self._data_delete("volumes", 'id="%s"' % sim_vol_id)
+ if sim_vol['is_hw_raid_vol']:
+            # Delete the parent pool instead when this is a HW RAID volume.
+ self._data_delete("pools", 'id="%s"' % sim_vol['pool_id'])
+ else:
+ self._data_delete("volumes", 'id="%s"' % sim_vol_id)

def sim_vol_mask(self, sim_vol_id, sim_ag_id):
self.sim_vol_of_id(sim_vol_id)
@@ -1759,7 +1788,7 @@ def disks(self):

@_handle_errors
def volume_create(self, pool_id, vol_name, size_bytes, thinp, flags=0,
- _internal_use=False):
+ _internal_use=False, _is_hw_raid_vol=0):
"""
The '_internal_use' parameter is only for SimArray internal use.
This method will return the new sim_vol id instead of job_id when
@@ -1769,7 +1798,8 @@ def volume_create(self, pool_id, vol_name, size_bytes, thinp, flags=0,
self.bs_obj.trans_begin()

new_sim_vol_id = self.bs_obj.sim_vol_create(
- vol_name, size_bytes, SimArray._sim_pool_id_of(pool_id), thinp)
+ vol_name, size_bytes, SimArray._sim_pool_id_of(pool_id),
+ thinp, is_hw_raid_vol=_is_hw_raid_vol)

if _internal_use:
return new_sim_vol_id
@@ -2251,9 +2281,9 @@ def volume_raid_info(self, lsm_vol):
min_io_size = BackStore.BLK_SIZE
opt_io_size = BackStore.BLK_SIZE
else:
- strip_size = BackStore.STRIP_SIZE
- min_io_size = BackStore.STRIP_SIZE
- opt_io_size = int(data_disk_count * BackStore.STRIP_SIZE)
+ strip_size = sim_pool['strip_size']
+ min_io_size = strip_size
+ opt_io_size = int(data_disk_count * strip_size)

return [raid_type, strip_size, disk_count, min_io_size, opt_io_size]

@@ -2278,3 +2308,64 @@ def pool_member_info(self, lsm_pool):
member_type = Pool.MEMBER_TYPE_UNKNOWN

return sim_pool['raid_type'], member_type, member_ids
+
+ @_handle_errors
+ def volume_raid_create_cap_get(self, system):
+ if system.id != BackStore.SYS_ID:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_SYSTEM,
+ "System not found")
+ return (
+ BackStore.SUPPORTED_VCR_RAID_TYPES,
+ BackStore.SUPPORTED_VCR_STRIP_SIZES)
+
+ @_handle_errors
+ def volume_raid_create(self, name, raid_type, disks, strip_size):
+ if raid_type not in BackStore.SUPPORTED_VCR_RAID_TYPES:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'raid_type' is not supported")
+
+ if strip_size == Volume.VCR_STRIP_SIZE_DEFAULT:
+ strip_size = BackStore.DEFAULT_STRIP_SIZE
+ elif strip_size not in BackStore.SUPPORTED_VCR_STRIP_SIZES:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'strip_size' is not supported")
+
+ self.bs_obj.trans_begin()
+ pool_name = "Pool for volume %s" % name
+ sim_disk_ids = [
+ SimArray._lsm_id_to_sim_id(
+ d.id,
+ LsmError(ErrorNumber.NOT_FOUND_DISK, "Disk not found"))
+ for d in disks]
+
+ for disk in disks:
+ if not disk.status & Disk.STATUS_FREE:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+                    "Disk %s is not in Disk.STATUS_FREE state" % disk.id)
+ try:
+ sim_pool_id = self.bs_obj.sim_pool_create_from_disk(
+ name=pool_name,
+ raid_type=raid_type,
+ sim_disk_ids=sim_disk_ids,
+ element_type=Pool.ELEMENT_TYPE_VOLUME,
+ unsupported_actions=Pool.UNSUPPORTED_VOLUME_GROW |
+ Pool.UNSUPPORTED_VOLUME_SHRINK,
+ strip_size=strip_size)
+ except sqlite3.IntegrityError as sql_error:
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+                "Name '%s' is already in use by another volume" % name)
+
+
+ sim_pool = self.bs_obj.sim_pool_of_id(sim_pool_id)
+ sim_vol_id = self.volume_create(
+ SimArray._sim_id_to_lsm_id(sim_pool_id, 'POOL'), name,
+ sim_pool['free_space'], Volume.PROVISION_FULL,
+ _internal_use=True, _is_hw_raid_vol=1)
+ sim_vol = self.bs_obj.sim_vol_of_id(sim_vol_id)
+ self.bs_obj.trans_commit()
+ return SimArray._sim_vol_2_lsm(sim_vol)
diff --git a/plugin/sim/simulator.py b/plugin/sim/simulator.py
index 724574f..610ab81 100644
--- a/plugin/sim/simulator.py
+++ b/plugin/sim/simulator.py
@@ -295,3 +295,11 @@ def volume_raid_info(self, volume, flags=0):

def pool_member_info(self, pool, flags=0):
return self.sim_array.pool_member_info(pool)
+
+ def volume_raid_create_cap_get(self, system, flags=0):
+ return self.sim_array.volume_raid_create_cap_get(system)
+
+ def volume_raid_create(self, name, raid_type, disks, strip_size,
+ flags=0):
+ return self.sim_array.volume_raid_create(
+ name, raid_type, disks, strip_size)
--
1.8.3.1
Gris Ge
2015-04-24 12:06:30 UTC
Permalink
* Add support of these methods:
* volume_raid_create()
* volume_raid_create_cap_get()

* Using hpssacli command to create RAID group volume:

hpssacli ctrl slot=0 create type=ld \
drives=1i:1:13,1i:1:14 size=max raid=1+0 ss=64

* Add new capability lsm.Capabilities.VOLUME_RAID_CREATE.
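
* For example, with the lsmcli patch of this series applied, the new
  capability can be exercised like this (system and disk IDs are
  illustrative):

    lsmcli volume-create-raid-cap --sys SYS_ID
    lsmcli volume-create-raid --name test_vol --raid-type raid1 \
        --disk DISK_ID_1 --disk DISK_ID_2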

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

Changes in V4:

* Method renamed to volume_raid_create().

* Use hash table lookup for _hp_raid_level_to_lsm() and
_lsm_raid_type_to_hp().

* Mark '1adm' (3-disk mirror) and '1+0adm' as Volume.RAID_TYPE_OTHER
  based on HP documentation.

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/hpsa/hpsa.py | 202 +++++++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 185 insertions(+), 17 deletions(-)

diff --git a/plugin/hpsa/hpsa.py b/plugin/hpsa/hpsa.py
index 75f507f..d1a580e 100644
--- a/plugin/hpsa/hpsa.py
+++ b/plugin/hpsa/hpsa.py
@@ -195,26 +195,46 @@ def _disk_status_of(hp_disk, flag_free):
return disk_status


-def _hp_raid_type_to_lsm(hp_ld):
+_HP_RAID_LEVEL_CONV = {
+ '0': Volume.RAID_TYPE_RAID0,
+ # TODO(Gris Ge): Investigate whether HP has 4 disks RAID 1.
+ # In LSM, that's RAID10.
+ '1': Volume.RAID_TYPE_RAID1,
+ '5': Volume.RAID_TYPE_RAID5,
+ '6': Volume.RAID_TYPE_RAID6,
+ '1+0': Volume.RAID_TYPE_RAID10,
+ '50': Volume.RAID_TYPE_RAID50,
+ '60': Volume.RAID_TYPE_RAID60,
+}
+
+
+_HP_VENDOR_RAID_LEVELS = ['1adm', '1+0adm']
+
+
+_LSM_RAID_TYPE_CONV = dict(
+ zip(_HP_RAID_LEVEL_CONV.values(), _HP_RAID_LEVEL_CONV.keys()))
+
+
+def _hp_raid_level_to_lsm(hp_ld):
"""
Based on this property:
Fault Tolerance: 0/1/5/6/1+0
"""
hp_raid_level = hp_ld['Fault Tolerance']
- if hp_raid_level == '0':
- return Volume.RAID_TYPE_RAID0
- elif hp_raid_level == '1':
- # TODO(Gris Ge): Investigate whether HP has 4 disks RAID 1.
- # In LSM, that's RAID10.
- return Volume.RAID_TYPE_RAID1
- elif hp_raid_level == '5':
- return Volume.RAID_TYPE_RAID5
- elif hp_raid_level == '6':
- return Volume.RAID_TYPE_RAID6
- elif hp_raid_level == '1+0':
- return Volume.RAID_TYPE_RAID10
-
- return Volume.RAID_TYPE_UNKNOWN
+
+ if hp_raid_level in _HP_VENDOR_RAID_LEVELS:
+ return Volume.RAID_TYPE_OTHER
+
+ return _HP_RAID_LEVEL_CONV.get(hp_raid_level, Volume.RAID_TYPE_UNKNOWN)
+
+
+def _lsm_raid_type_to_hp(raid_type):
+ try:
+ return _LSM_RAID_TYPE_CONV[raid_type]
+ except KeyError:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Not supported raid type %d" % raid_type)


class SmartArray(IPlugin):
@@ -286,6 +306,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.DISKS)
cap.set(Capabilities.VOLUME_RAID_INFO)
cap.set(Capabilities.POOL_MEMBER_INFO)
+ cap.set(Capabilities.VOLUME_RAID_CREATE)
return cap

def _sacli_exec(self, sacli_cmds, flag_convert=True):
@@ -511,7 +532,7 @@ def volume_raid_info(self, volume, flags=Client.FLAG_RSVD):
for array_key_name in ctrl_data[key_name].keys():
if array_key_name == "Logical Drive: %s" % ld_num:
hp_ld = ctrl_data[key_name][array_key_name]
- raid_type = _hp_raid_type_to_lsm(hp_ld)
+ raid_type = _hp_raid_level_to_lsm(hp_ld)
strip_size = _hp_size_to_lsm(hp_ld['Strip Size'])
stripe_size = _hp_size_to_lsm(hp_ld['Full Stripe Size'])
elif array_key_name.startswith("physicaldrive"):
@@ -556,7 +577,7 @@ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
for array_key_name in ctrl_data[key_name].keys():
if array_key_name.startswith("Logical Drive: ") and \
raid_type == Volume.RAID_TYPE_UNKNOWN:
- raid_type = _hp_raid_type_to_lsm(
+ raid_type = _hp_raid_level_to_lsm(
ctrl_data[key_name][array_key_name])
elif array_key_name.startswith("physicaldrive"):
hp_disk = ctrl_data[key_name][array_key_name]
@@ -570,3 +591,150 @@ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
"Pool not found")

return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
+
+ def _vrc_cap_get(self, ctrl_num):
+ supported_raid_types = [
+ Volume.RAID_TYPE_RAID0, Volume.RAID_TYPE_RAID1,
+ Volume.RAID_TYPE_RAID5, Volume.RAID_TYPE_RAID50,
+ Volume.RAID_TYPE_RAID10]
+
+ supported_strip_sizes = [
+ 8 * 1024, 16 * 1024, 32 * 1024, 64 * 1024,
+ 128 * 1024, 256 * 1024, 512 * 1024, 1024 * 1024]
+
+ ctrl_conf = self._sacli_exec([
+ "ctrl", "slot=%s" % ctrl_num, "show", "config", "detail"]
+ ).values()[0]
+
+ if 'RAID 6 (ADG) Status' in ctrl_conf and \
+ ctrl_conf['RAID 6 (ADG) Status'] == 'Enabled':
+ supported_raid_types.extend(
+ [Volume.RAID_TYPE_RAID6, Volume.RAID_TYPE_RAID60])
+
+ return supported_raid_types, supported_strip_sizes
+
+ @_handle_errors
+ def volume_raid_create_cap_get(self, system, flags=Client.FLAG_RSVD):
+ """
+ Depends on this command:
+ hpssacli ctrl slot=0 show config detail
+ All hpsa support RAID 1, 10, 5, 50.
+ If "RAID 6 (ADG) Status: Enabled", it will support RAID 6 and 60.
+        For HP triple mirror (RAID 1adm and RAID 10adm), LSM does not
+        support that yet.
+        No command output or document indicates special or exceptional
+        strip size support, so we assume all hpsa cards support the
+        documented strip sizes.
+ """
+ if not system.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+                "Illegal input system argument: missing plugin_data property")
+ return self._vrc_cap_get(system.plugin_data)
+
+ @_handle_errors
+ def volume_raid_create(self, name, raid_type, disks, strip_size,
+ flags=Client.FLAG_RSVD):
+ """
+ Depends on these commands:
+ 1. Create LD
+ hpssacli ctrl slot=0 create type=ld \
+ drives=1i:1:13,1i:1:14 size=max raid=1+0 ss=64
+
+ 2. Find out the system ID.
+
+        3. Find out the pool the first disk belongs to.
+ hpssacli ctrl slot=0 pd 1i:1:13 show
+
+ 4. List all volumes for this new pool.
+ self.volumes(search_key='pool_id', search_value=pool_id)
+
+ The 'name' argument will be ignored.
+        TODO(Gris Ge): This code has only been tested for creating 1-disk
+        RAID 0.
+ """
+ hp_raid_level = _lsm_raid_type_to_hp(raid_type)
+ hp_disk_ids = []
+ ctrl_num = None
+ for disk in disks:
+ if not disk.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: missing plugin_data "
+ "property")
+ (cur_ctrl_num, hp_disk_id) = disk.plugin_data.split(':', 1)
+ if ctrl_num and cur_ctrl_num != ctrl_num:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: disks are not from the "
+ "same controller/system.")
+
+ ctrl_num = cur_ctrl_num
+ hp_disk_ids.append(hp_disk_id)
+
+ cmds = [
+ "ctrl", "slot=%s" % ctrl_num, "create", "type=ld",
+ "drives=%s" % ','.join(hp_disk_ids), 'size=max',
+ 'raid=%s' % hp_raid_level]
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT:
+ cmds.append("ss=%d" % int(strip_size / 1024))
+
+ try:
+ self._sacli_exec(cmds, flag_convert=False)
+ except ExecError:
+ # Check whether disk is free
+ requested_disk_ids = [d.id for d in disks]
+ for cur_disk in self.disks():
+ if cur_disk.id in requested_disk_ids and \
+ not cur_disk.status & Disk.STATUS_FREE:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+ "Disk %s is not in STATUS_FREE state" % cur_disk.id)
+
+            # Check whether we got an unsupported raid type or strip size
+ supported_raid_types, supported_strip_sizes = \
+ self._vrc_cap_get(ctrl_num)
+
+ if raid_type not in supported_raid_types:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided raid_type is not supported")
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT and \
+ strip_size not in supported_strip_sizes:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided strip_size is not supported")
+
+ raise
+
+        # Find out the system ID to generate the pool_id
+ sys_output = self._sacli_exec(
+ ['ctrl', "slot=%s" % ctrl_num, 'show'])
+
+ sys_id = sys_output.values()[0]['Serial Number']
+        # The API code already rejected an empty 'disks' list, so we are
+        # guaranteed valid 'ctrl_num' and 'hp_disk_ids' here.
+
+ pd_output = self._sacli_exec(
+ ['ctrl', "slot=%s" % ctrl_num, 'pd', hp_disk_ids[0], 'show'])
+
+ if pd_output.values()[0].keys()[0].lower().startswith("array "):
+ hp_array_id = pd_output.values()[0].keys()[0][len("array "):]
+ hp_array_id = "Array:%s" % hp_array_id
+ else:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_raid_create(): Failed to find out the array ID of "
+ "new array: %s" % pd_output.items())
+
+ pool_id = _pool_id_of(sys_id, hp_array_id)
+
+ lsm_vols = self.volumes(search_key='pool_id', search_value=pool_id)
+
+ if len(lsm_vols) != 1:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_raid_create(): Got unexpected count(not 1) of new "
+ "volumes: %s" % lsm_vols)
+ return lsm_vols[0]
--
1.8.3.1
Gris Ge
2015-04-24 12:06:31 UTC
Permalink
* Add support of these methods:
* volume_raid_create()
* volume_raid_create_cap_get()

* Using storcli command to create RAID group volume:

storcli /c0 add vd RAID10 drives=252:1-4 pdperarray=2 J

* Fixed incorrect lsm.System.plugin_data.

* Add new capability lsm.Capabilities.VOLUME_RAID_CREATE.
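
* A minimal client-side sketch of the two new calls (assuming a connected
  lsm.Client handle 'client', an lsm.System 'system' and a list of free
  lsm.Disk objects 'free_disks'; all names are illustrative):

    raid_types, strip_sizes = client.volume_raid_create_cap_get(system)
    if lsm.Volume.RAID_TYPE_RAID10 in raid_types:
        vol = client.volume_raid_create(
            'vd0', lsm.Volume.RAID_TYPE_RAID10, free_disks,
            strip_sizes[0])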

Changes in V4:

* Method renamed to volume_raid_create().

* Use hash table lookup in _lsm_raid_type_to_mega() and _vcr_cap_get().

* Use math.log(x, 2) instead of log(x,10)/log(2, 10) in _vcr_cap_get().
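
  The supported strip sizes in _vcr_cap_get() form a power-of-two series
  between the controller's reported minimum and maximum; a standalone
  sketch of the same math (the function name is illustrative):

    import math

    def strip_size_series(min_strip_size, max_strip_size):
        # e.g. 64 KiB .. 1 MiB yields 65536, 131072, ..., 1048576
        count = int(math.log(max_strip_size / min_strip_size, 2) + 1)
        return [min_strip_size * (2 ** i) for i in range(count)]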

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/megaraid/megaraid.py | 175 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 174 insertions(+), 1 deletion(-)

diff --git a/plugin/megaraid/megaraid.py b/plugin/megaraid/megaraid.py
index 46e7916..62ad028 100644
--- a/plugin/megaraid/megaraid.py
+++ b/plugin/megaraid/megaraid.py
@@ -19,6 +19,7 @@
import json
import re
import errno
+import math

from lsm import (uri_parse, search_property, size_human_2_size_bytes,
Capabilities, LsmError, ErrorNumber, System, Client,
@@ -175,6 +176,16 @@ def _pool_id_of(dg_id, sys_id):
'RAID60': Volume.RAID_TYPE_RAID60,
}

+_LSM_RAID_TYPE_CONV = {
+ Volume.RAID_TYPE_RAID0: 'RAID0',
+ Volume.RAID_TYPE_RAID1: 'RAID1',
+ Volume.RAID_TYPE_RAID5: 'RAID5',
+ Volume.RAID_TYPE_RAID6: 'RAID6',
+ Volume.RAID_TYPE_RAID50: 'RAID50',
+ Volume.RAID_TYPE_RAID60: 'RAID60',
+ Volume.RAID_TYPE_RAID10: 'RAID10',
+}
+

def _mega_raid_type_to_lsm(vd_basic_info, vd_prop_info):
raid_type = _RAID_TYPE_MAP.get(
@@ -188,6 +199,15 @@ def _mega_raid_type_to_lsm(vd_basic_info, vd_prop_info):
return raid_type


+def _lsm_raid_type_to_mega(lsm_raid_type):
+ try:
+ return _LSM_RAID_TYPE_CONV[lsm_raid_type]
+ except KeyError:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "RAID type %d not supported" % lsm_raid_type)
+
+
class MegaRAID(IPlugin):
_DEFAULT_BIN_PATHS = [
"/opt/MegaRAID/storcli/storcli64", "/opt/MegaRAID/storcli/storcli"]
@@ -261,6 +281,7 @@ def capabilities(self, system, flags=Client.FLAG_RSVD):
cap.set(Capabilities.VOLUMES)
cap.set(Capabilities.VOLUME_RAID_INFO)
cap.set(Capabilities.POOL_MEMBER_INFO)
+ cap.set(Capabilities.VOLUME_RAID_CREATE)
return cap

def _storcli_exec(self, storcli_cmds, flag_json=True):
@@ -347,7 +368,7 @@ def systems(self, flags=Client.FLAG_RSVD):
)
(status, status_info) = self._lsm_status_of_ctrl(
ctrl_show_all_output)
- plugin_data = "/c%d"
+ plugin_data = "/c%d" % ctrl_num
# Since PCI slot sequence might change.
# This string just stored for quick system verification.

@@ -601,3 +622,155 @@ def pool_member_info(self, pool, flags=Client.FLAG_RSVD):
raid_type = Volume.RAID_TYPE_RAID10

return raid_type, Pool.MEMBER_TYPE_DISK, disk_ids
+
+ def _vcr_cap_get(self, mega_sys_path):
+ cap_output = self._storcli_exec(
+ [mega_sys_path, "show", "all"])['Capabilities']
+
+ mega_raid_types = \
+ cap_output['RAID Level Supported'].replace(', \n', '').split(', ')
+
+ supported_raid_types = []
+ for cur_mega_raid_type in _RAID_TYPE_MAP.keys():
+ if cur_mega_raid_type in mega_raid_types:
+ supported_raid_types.append(
+ _RAID_TYPE_MAP[cur_mega_raid_type])
+
+ supported_raid_types = sorted(list(set(supported_raid_types)))
+
+ min_strip_size = _mega_size_to_lsm(cap_output['Min Strip Size'])
+ max_strip_size = _mega_size_to_lsm(cap_output['Max Strip Size'])
+
+ supported_strip_sizes = list(
+ min_strip_size * (2 ** i)
+ for i in range(
+ 0, int(math.log(max_strip_size / min_strip_size, 2) + 1)))
+
+        # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+        # The math above generates a power-of-two series:
+        #     min_strip_size, min_strip_size * 2, ..., max_strip_size
+
+ return supported_raid_types, supported_strip_sizes
+
+ @_handle_errors
+ def volume_raid_create_cap_get(self, system, flags=Client.FLAG_RSVD):
+ """
+        Depends on the 'Capabilities' section of "storcli /c0 show all" output.
+ """
+ cur_lsm_syss = list(s for s in self.systems() if s.id == system.id)
+ if len(cur_lsm_syss) != 1:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_SYSTEM,
+ "System not found")
+
+ lsm_sys = cur_lsm_syss[0]
+ return self._vcr_cap_get(lsm_sys.plugin_data)
+
+ @_handle_errors
+ def volume_raid_create(self, name, raid_type, disks, strip_size,
+ flags=Client.FLAG_RSVD):
+ """
+ Work flow:
+ 1. Create RAID volume
+ storcli /c0 add vd RAID10 drives=252:1-4 pdperarray=2 J
+ 2. Find out pool/DG base on one disk.
+ storcli /c0/e252/s1 show J
+ 3. Find out the volume/VD base on pool/DG using self.volumes()
+ """
+ mega_raid_type = _lsm_raid_type_to_mega(raid_type)
+ ctrl_num = None
+ slot_nums = []
+ enclosure_num = None
+
+ for disk in disks:
+ if not disk.plugin_data:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: missing plugin_data "
+ "property")
+            # Disks should be from the same controller.
+ (cur_ctrl_num, cur_enclosure_num, slot_num) = \
+ disk.plugin_data.split(':')
+
+ if ctrl_num and cur_ctrl_num != ctrl_num:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: disks are not from the "
+ "same controller/system.")
+
+ if enclosure_num and cur_enclosure_num != enclosure_num:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Illegal input disks argument: disks are not from the "
+ "same disk enclosure.")
+
+ ctrl_num = int(cur_ctrl_num)
+ enclosure_num = cur_enclosure_num
+ slot_nums.append(slot_num)
+
+        # Handle requested volume name; LSI only allows 15 characters.
+ name = re.sub('[^0-9a-zA-Z_\-]+', '', name)[:15]
+
+ cmds = [
+ "/c%s" % ctrl_num, "add", "vd", mega_raid_type,
+ 'size=all', "name=%s" % name,
+ "drives=%s:%s" % (enclosure_num, ','.join(slot_nums))]
+
+ if raid_type == Volume.RAID_TYPE_RAID10 or \
+ raid_type == Volume.RAID_TYPE_RAID50 or \
+ raid_type == Volume.RAID_TYPE_RAID60:
+ cmds.append("pdperarray=%d" % int(len(disks) / 2))
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT:
+ cmds.append("strip=%d" % int(strip_size / 1024))
+
+ try:
+ self._storcli_exec(cmds)
+ except ExecError:
+ req_disk_ids = [d.id for d in disks]
+ for cur_disk in self.disks():
+ if cur_disk.id in req_disk_ids and \
+ not cur_disk.status & Disk.STATUS_FREE:
+ raise LsmError(
+ ErrorNumber.DISK_NOT_FREE,
+ "Disk %s is not in STATUS_FREE state" % cur_disk.id)
+            # Check whether we got an unsupported RAID type or strip size
+ supported_raid_types, supported_strip_sizes = \
+ self._vcr_cap_get("/c%s" % ctrl_num)
+
+ if raid_type not in supported_raid_types:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'raid_type' is not supported")
+
+ if strip_size != Volume.VCR_STRIP_SIZE_DEFAULT and \
+ strip_size not in supported_strip_sizes:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Provided 'strip_size' is not supported")
+
+ raise
+
+ # Find out the DG ID from one disk.
+ dg_show_output = self._storcli_exec(
+ ["/c%s/e%s/s%s" % tuple(disks[0].plugin_data.split(":")), "show"])
+
+ dg_id = dg_show_output['Drive Information'][0]['DG']
+ if dg_id == '-':
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_raid_create(): No error found in output, "
+ "but RAID is not created: %s" % dg_show_output.items())
+ else:
+ dg_id = int(dg_id)
+
+ pool_id = _pool_id_of(dg_id, self._sys_id_of_ctrl_num(ctrl_num))
+
+ lsm_vols = self.volumes(search_key='pool_id', search_value=pool_id)
+ if len(lsm_vols) != 1:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "volume_raid_create(): Got unexpected volume count(not 1) "
+ "when creating RAID volume")
+
+ return lsm_vols[0]
--
1.8.3.1
Gris Ge
2015-04-24 12:06:32 UTC
Permalink
* lsmcli test: new test function volume_raid_create_test().

* Plugin test: new test function test_volume_raid_create().

* C unit test: test_volume_raid_create_cap_get() and
test_volume_raid_create()

Changes in V3:

* Rebased to the pool_raid_info->pool_member_info changes.

* Initialize pointers in test_volume_raid_create_cap_get() of tester.c.

* Remove unused variables in tester.c.

Changes in V4:

* Method renamed to volume_raid_create().

Signed-off-by: Gris Ge <***@redhat.com>
---
test/cmdtest.py | 55 +++++++++++++++++++++++++++++
test/plugin_test.py | 50 +++++++++++++++++++++++++++
test/tester.c | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 204 insertions(+)

diff --git a/test/cmdtest.py b/test/cmdtest.py
index 60bef93..247a291 100755
--- a/test/cmdtest.py
+++ b/test/cmdtest.py
@@ -713,6 +713,59 @@ def pool_member_info_test(cap, system_id):
exit(10)
return

+def volume_raid_create_test(cap, system_id):
+ if cap['VOLUME_RAID_CREATE']:
+ out = call(
+ [cmd, '-t' + sep, 'volume-create-raid-cap', '--sys', system_id])[1]
+
+ if 'RAID1' not in [r[1] for r in parse(out)]:
+ return
+
+ out = call([cmd, '-t' + sep, 'list', '--type', 'disks'])[1]
+ free_disk_ids = []
+ disk_list = parse(out)
+ for disk in disk_list:
+ if 'Free' in disk:
+ if len(free_disk_ids) == 2:
+ break
+ free_disk_ids.append(disk[0])
+
+ if len(free_disk_ids) != 2:
+        print "Need two free disks to test volume-create-raid"
+ exit(10)
+
+ out = call([
+ cmd, '-t' + sep, 'volume-create-raid', '--disk', free_disk_ids[0],
+ '--disk', free_disk_ids[1], '--name', 'test_volume_raid_create',
+ '--raid-type', 'raid1'])[1]
+
+ volume = parse(out)
+ vol_id = volume[0][0]
+ pool_id = volume[0][-2]
+
+ if cap['VOLUME_RAID_INFO']:
+ out = call(
+ [cmd, '-t' + sep, 'volume-raid-info', '--vol', vol_id])[1]
+ if parse(out)[0][1] != 'RAID1':
+ print "New volume is not RAID 1"
+ exit(10)
+
+ if cap['POOL_MEMBER_INFO']:
+ out = call(
+ [cmd, '-t' + sep, 'pool-member-info', '--pool', pool_id])[1]
+ if parse(out)[0][1] != 'RAID1':
+ print "New pool is not RAID 1"
+ exit(10)
+ for disk_id in free_disk_ids:
+ if disk_id not in [p[3] for p in parse(out)]:
+ print "New pool does not contain requested disks"
+ exit(10)
+
+ if cap['VOLUME_DELETE']:
+ volume_delete(vol_id)
+
+ return
+

def run_all_tests(cap, system_id):
test_display(cap, system_id)
@@ -729,6 +782,8 @@ def run_all_tests(cap, system_id):

pool_member_info_test(cap,system_id)

+ volume_raid_create_test(cap, system_id)
+
if __name__ == "__main__":
parser = OptionParser()
parser.add_option("-c", "--command", action="store", type="string",
diff --git a/test/plugin_test.py b/test/plugin_test.py
index 5cb48ae..2147d43 100755
--- a/test/plugin_test.py
+++ b/test/plugin_test.py
@@ -1294,6 +1294,56 @@ def test_pool_member_info(self):
self.assertTrue(type(member_type) is int)
self.assertTrue(type(member_ids) is list)

+    def _skip_current_test(self, message):
+        """
+        If skipTest is supported, skip this test with the provided message.
+        Silently return if not supported.
+        """
+        if hasattr(unittest.TestCase, 'skipTest') is True:
+            self.skipTest(message)
+        return
+
+ def test_volume_raid_create(self):
+ for s in self.systems:
+ cap = self.c.capabilities(s)
+ # TODO(Gris Ge): Add test code for other RAID type and strip size
+ if supported(cap, [Cap.VOLUME_RAID_CREATE]):
+ supported_raid_types, supported_strip_sizes = \
+ self.c.volume_raid_create_cap_get(s)
+                if lsm.Volume.RAID_TYPE_RAID1 not in supported_raid_types:
+                    self._skip_current_test(
+                        "Skip test: current system does not support "
+                        "creating RAID1 volume")
+                    return
+
+ # Find two free disks
+ free_disks = []
+ for disk in self.c.disks():
+ if len(free_disks) == 2:
+ break
+ if disk.status & lsm.Disk.STATUS_FREE:
+ free_disks.append(disk)
+
+ if len(free_disks) != 2:
+ self._skip_current_test(
+ "Skip test: Failed to find two free disks for RAID 1")
+ return
+
+ new_vol = self.c.volume_raid_create(
+ rs('v'), lsm.Volume.RAID_TYPE_RAID1, free_disks,
+ lsm.Volume.VCR_STRIP_SIZE_DEFAULT)
+
+ self.assertTrue(new_vol is not None)
+
+ # TODO(Gris Ge): Use volume_raid_info() and pool_member_info()
+ # to verify size, raid_type, member type, member ids.
+
+ if supported(cap, [Cap.VOLUME_DELETE]):
+ self._volume_delete(new_vol)
+
+ else:
+ self._skip_current_test(
+ "Skip test: not support of VOLUME_RAID_CREATE")
+
def dump_results():
"""
unittest.main exits when done so we need to register this handler to
diff --git a/test/tester.c b/test/tester.c
index ee12bcc..5ac2ecb 100644
--- a/test/tester.c
+++ b/test/tester.c
@@ -2917,6 +2917,103 @@ START_TEST(test_pool_member_info)
}
END_TEST

+START_TEST(test_volume_raid_create_cap_get)
+{
+ if (which_plugin == 1){
+ // silently skip on simc which does not support this method yet.
+ return;
+ }
+ int rc;
+ lsm_system **sys = NULL;
+ uint32_t sys_count = 0;
+
+ G(rc, lsm_system_list, c, &sys, &sys_count, LSM_CLIENT_FLAG_RSVD);
+ fail_unless( sys_count >= 1, "count = %d", sys_count);
+
+ if( sys_count > 0 ) {
+ uint32_t *supported_raid_types = NULL;
+ uint32_t supported_raid_type_count = 0;
+ uint32_t *supported_strip_sizes = NULL;
+ uint32_t supported_strip_size_count = 0;
+
+ G(
+ rc, lsm_volume_raid_create_cap_get, c, sys[0],
+ &supported_raid_types, &supported_raid_type_count,
+ &supported_strip_sizes, &supported_strip_size_count,
+ 0);
+
+ free(supported_raid_types);
+ free(supported_strip_sizes);
+
+ }
+ G(rc, lsm_system_record_array_free, sys, sys_count);
+}
+END_TEST
+
+START_TEST(test_volume_raid_create)
+{
+ if (which_plugin == 1){
+ // silently skip on simc which does not support this method yet.
+ return;
+ }
+ int rc;
+
+ lsm_disk **disks = NULL;
+ uint32_t disk_count = 0;
+
+ G(rc, lsm_disk_list, c, NULL, NULL, &disks, &disk_count, 0);
+
+ // Try to create two disks RAID 1.
+ uint32_t free_disk_count = 0;
+ lsm_disk *free_disks[2];
+ int i;
+ for (i = 0; i< disk_count; i++){
+ if (lsm_disk_status_get(disks[i]) & LSM_DISK_STATUS_FREE){
+ free_disks[free_disk_count++] = disks[i];
+ if (free_disk_count == 2){
+ break;
+ }
+ }
+ }
+ fail_unless(free_disk_count == 2, "Failed to find two free disks");
+
+ lsm_volume *new_volume = NULL;
+
+ G(rc, lsm_volume_raid_create, c, "test_volume_raid_create",
+ LSM_VOLUME_RAID_TYPE_RAID1, free_disks, free_disk_count,
+ LSM_VOLUME_VCR_STRIP_SIZE_DEFAULT, &new_volume, LSM_CLIENT_FLAG_RSVD);
+
+ char *job_del = NULL;
+ int del_rc = lsm_volume_delete(
+ c, new_volume, &job_del, LSM_CLIENT_FLAG_RSVD);
+
+ fail_unless( del_rc == LSM_ERR_OK || del_rc == LSM_ERR_JOB_STARTED,
+                 "lsm_volume_delete %d (%s)", del_rc,
+                 error(lsm_error_last_get(c)));
+
+ if( LSM_ERR_JOB_STARTED == del_rc ) {
+ wait_for_job_vol(c, &job_del);
+ }
+
+ G(rc, lsm_disk_record_array_free, disks, disk_count);
+    // The new pool should be automatically deleted when the volume is
+    // deleted.
+ lsm_pool **pools = NULL;
+ uint32_t count = 0;
+ G(
+ rc, lsm_pool_list, c, "id", lsm_volume_pool_id_get(new_volume),
+ &pools, &count, LSM_CLIENT_FLAG_RSVD);
+
+ fail_unless(
+ count == 0,
+ "New HW RAID pool still exists, it should be deleted along with "
+ "lsm_volume_delete()");
+
+ lsm_pool_record_array_free(pools, count);
+
+ G(rc, lsm_volume_record_free, new_volume);
+}
+END_TEST
+
Suite * lsm_suite(void)
{
Suite *s = suite_create("libStorageMgmt");
@@ -2954,6 +3051,8 @@ Suite * lsm_suite(void)
tcase_add_test(basic, test_invalid_input);
tcase_add_test(basic, test_volume_raid_info);
tcase_add_test(basic, test_pool_member_info);
+ tcase_add_test(basic, test_volume_raid_create_cap_get);
+ tcase_add_test(basic, test_volume_raid_create);

suite_add_tcase(s, basic);
return s;
--
1.8.3.1
Gris Ge
2015-04-24 12:06:33 UTC
Permalink
* Just free the 'pool' memory before returning; fixes a small memory leak
  in test_volume_raid_info().

Signed-off-by: Gris Ge <***@redhat.com>
---
test/tester.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/test/tester.c b/test/tester.c
index 5ac2ecb..a106e9c 100644
--- a/test/tester.c
+++ b/test/tester.c
@@ -2883,6 +2883,7 @@ START_TEST(test_volume_raid_info)
&disk_count, &min_io_size, &opt_io_size, LSM_CLIENT_FLAG_RSVD);

G(rc, lsm_volume_record_free, volume);
+ G(rc, lsm_pool_record_free, pool);
volume = NULL;
}
END_TEST
--
1.8.3.1