Discussion:
[Libstoragemgmt-devel] [PATCH] [Demo] Simulator Plugin: Use sqlite3 instead of risky pickle.
Gris Ge
2015-01-08 15:33:51 UTC
* This is just a demo patch. Please don't commit.
* Tested with 100 threads of lsmcli for concurrent access (a sketch of
that kind of test follows this list).
* Tested by plugin_test.
* Keep the data layout in the database simple, in preparation for sharing
the same statefile with the 'simc://' plugin.
* Found some unclear definitions around replication (both volume and fs).
Please check the "#TODO" comments in the replication methods.
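
A minimal Python sketch of that kind of concurrency test (path and table
are hypothetical, not the actual test harness): each thread opens its own
connection to one shared sqlite statefile and takes the write lock with
BEGIN IMMEDIATE, the same locking scheme the plugin now uses.

    import sqlite3
    import threading

    STATE_FILE = '/tmp/lsm_sim_data_demo'  # hypothetical path

    def worker(num):
        # timeout=5 means "wait up to 5 seconds for the write lock",
        # mirroring the plugin's timeout/1000 handling.
        conn = sqlite3.connect(STATE_FILE, timeout=5, isolation_level=None)
        try:
            conn.execute("BEGIN IMMEDIATE TRANSACTION;")
            conn.execute("CREATE TABLE IF NOT EXISTS counters "
                         "(id INTEGER PRIMARY KEY, hits INTEGER);")
            conn.execute("INSERT INTO counters (hits) VALUES (1);")
            conn.execute("COMMIT;")
        except sqlite3.OperationalError as sql_error:
            # 'database is locked' is the error the plugin maps to
            # ErrorNumber.TIMEOUT.
            print("thread %d: %s" % (num, sql_error))
        finally:
            conn.close()

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()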

* Big TODOs:
1. 'make check' failed at tester.c.
2. Make the statefile globally readable and writable. Maybe? (One
possible approach is sketched after this list.)
3. Remove unneeded public methods of the BackStore class.
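
For TODO 2, one possible approach (illustrative only, not part of this
patch) is to force the statefile mode after creation, since the mode
passed to os.open() is masked by the caller's umask:

    import os

    def open_statefile(path):
        # Create the statefile world-readable/writable; chmod afterwards
        # because os.open()'s 0666 mode is reduced by the umask.
        fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o666)
        os.chmod(path, 0o666)
        return fd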

Signed-off-by: Gris Ge <***@redhat.com>
---
plugin/sim/simarray.py | 3066 ++++++++++++++++++++++++++++-------------------
plugin/sim/simulator.py | 7 +-
2 files changed, 1804 insertions(+), 1269 deletions(-)
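
The heart of the change is the _handle_errors decorator added below:
sqlite "database is locked" errors are translated into LSM TIMEOUT errors
and any open transaction is rolled back. A stripped-down sketch of the
same pattern (SimulatedTimeout and self.conn are stand-ins, not the
plugin's API):

    import functools
    import sqlite3

    class SimulatedTimeout(Exception):
        pass

    def handle_sql_errors(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            try:
                return method(self, *args, **kwargs)
            except sqlite3.OperationalError as sql_error:
                self.conn.rollback()  # drop the half-finished transaction
                if 'database is locked' in str(sql_error):
                    raise SimulatedTimeout(
                        "Timed out acquiring lock on state file")
                raise
        return wrapper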

diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index d733e50..9f7f6c5 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -1,10 +1,10 @@
-# Copyright (C) 2011-2014 Red Hat, Inc.
+# Copyright (C) 2011-2015 Red Hat, Inc.
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
@@ -16,23 +16,64 @@
# Author: tasleson
# Gris Ge <***@redhat.com>

-# TODO: 1. Introduce constant check by using state_to_str() converting.
-# 2. Snapshot should consume space in pool.
+# TODO: 1. Review public methods of BackStore
+# 2. Make sure the statefile is created with global write permission.

import random
-import pickle
import tempfile
import os
import time
-import fcntl
+import sqlite3

-from lsm import (size_human_2_size_bytes, size_bytes_2_size_human)
+
+from lsm import (size_human_2_size_bytes)
from lsm import (System, Volume, Disk, Pool, FileSystem, AccessGroup,
FsSnapshot, NfsExport, md5, LsmError, TargetPort,
ErrorNumber, JobStatus)

-# Used for format width for disks
-D_FMT = 5
+
+def _handle_errors(method):
+ def md_wrapper(*args, **kargs):
+ try:
+ return method(*args, **kargs)
+ except sqlite3.OperationalError as sql_error:
+ if type(args[0]) is SimArray:
+ args[0].bs_obj.trans_rollback()
+ if sql_error.message == 'database is locked':
+ raise LsmError(
+ ErrorNumber.TIMEOUT,
+ "Timeout to require lock on state file")
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Got unexpected error from sqlite3: %s" % sql_error.message)
+ except LsmError:
+ if type(args[0]) is SimArray:
+ args[0].bs_obj.trans_rollback()
+ raise
+ except Exception as base_error:
+ if type(args[0]) is SimArray:
+ args[0].bs_obj.trans_rollback()
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Got unexpected error: %s" % str(base_error))
+ return md_wrapper
+
+
+def _list_to_dict_by_id(sim_datas):
+ rc_dict = {}
+ for sim_data in sim_datas:
+ rc_dict[sim_data['id']] = sim_data
+ return rc_dict
+
+
+def _random_vpd():
+ """
+ Generate a random VPD83 NAA_Type3 ID
+ """
+ vpd = ['60']
+ for _ in range(0, 15):
+ vpd.append(str('%02x' % (random.randint(0, 255))))
+ return "".join(vpd)


class PoolRAID(object):
@@ -97,724 +138,527 @@ class PoolRAID(object):
"""
return member_type in PoolRAID._MEMBER_TYPE_2_DISK_TYPE

-
-class SimJob(object):
- """
- Simulates a longer running job, uses actual wall time. If test cases
- take too long we can reduce time by shortening time duration.
- """
-
- def _calc_progress(self):
- if self.percent < 100:
- end = self.start + self.duration
- now = time.time()
- if now >= end:
- self.percent = 100
- self.status = JobStatus.COMPLETE
- else:
- diff = now - self.start
- self.percent = int(100 * (diff / self.duration))
-
- def __init__(self, item_to_return):
- duration = os.getenv("LSM_SIM_TIME", 1)
- self.status = JobStatus.INPROGRESS
- self.percent = 0
- self.__item = item_to_return
- self.start = time.time()
- self.duration = float(random.randint(0, int(duration)))
-
- def progress(self):
- """
- Returns a tuple (status, percent, data)
- """
- self._calc_progress()
- return self.status, self.percent, self.item
-
- @property
- def item(self):
- if self.percent >= 100:
- return self.__item
- return None
-
- @item.setter
- def item(self, value):
- self.__item = value
-
-
-class SimArray(object):
- SIM_DATA_FILE = os.getenv("LSM_SIM_DATA",
- tempfile.gettempdir() + '/lsm_sim_data')
-
- @staticmethod
- def _version_error(dump_file):
- raise LsmError(ErrorNumber.INVALID_ARGUMENT,
- "Stored simulator state incompatible with "
- "simulator, please move or delete %s" %
- dump_file)
-
@staticmethod
- def _sort_by_id(resources):
- # isupper() just make sure the 'lsm_test_aggr' come as first one.
- return sorted(resources, key=lambda k: (k.id.isupper(), k.id))
+ def disk_type_to_member_type(disk_type):
+ for m_type, d_type in PoolRAID._MEMBER_TYPE_2_DISK_TYPE.items():
+ if disk_type == d_type:
+ return m_type
+ return PoolRAID.MEMBER_TYPE_UNKNOWN
+
+
+class BackStore(object):
+ VERSION = "3.0"
+ VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
+ JOB_DEFAULT_DURATION = 1
+ JOB_DATA_TYPE_VOL = 1
+ JOB_DATA_TYPE_FS = 2
+ JOB_DATA_TYPE_FS_SNAP = 3
+
+ SYS_ID = "sim-01"
+ SYS_NAME = "LSM simulated storage plug-in"
+ BLK_SIZE = 512
+
+ SYS_KEY_LIST = ['id', 'name', 'status', 'status_info', 'version']
+
+ POOL_KEY_LIST = [
+ 'id', 'name', 'status', 'status_info',
+ 'element_type', 'unsupported_actions', 'raid_type',
+ 'member_type', 'total_space', 'parent_pool_id']
+
+ DISK_KEY_LIST = [
+ 'id', 'disk_prefix', 'total_space', 'disk_type', 'status',
+ 'owner_pool_id']
+
+ VOL_KEY_LIST = [
+ 'id', 'vpd83', 'name', 'total_space', 'consumed_size',
+ 'pool_id', 'admin_state', 'rep_source', 'rep_type', 'thinp']
+
+ TGT_KEY_LIST = [
+ 'id', 'port_type', 'service_address', 'network_address',
+ 'physical_address', 'physical_name']
+
+ AG_KEY_LIST = ['id', 'name']
+
+ INIT_KEY_LIST = ['id', 'init_type', 'owner_ag_id']
+
+ JOB_KEY_LIST = ['id', 'duration', 'timestamp', 'data_type', 'data_id']
+
+ FS_KEY_LIST = [
+ 'id', 'name', 'total_space', 'free_space', 'consumed_size',
+ 'pool_id']
+
+ FS_SNAP_KEY_LIST = [
+ 'id', 'fs_id', 'name', 'timestamp']
+
+ EXP_KEY_LIST = [
+ 'id', 'fs_id', 'exp_path', 'auth_type', 'root_hosts', 'rw_hosts',
+ 'ro_hosts', 'anon_uid', 'anon_gid', 'options']
+
+ def __init__(self, statefile, timeout):
+ self.statefile = statefile
+ self.lastrowid = None
+ self.sql_conn = sqlite3.connect(
+ statefile, timeout=int(timeout/1000), isolation_level="IMMEDIATE")
+ # Create tables whether they already exist or not. No lock required.
+ sql_cmd = (
+ "CREATE TABLE systems ("
+ "id TEXT PRIMARY KEY, "
+ "name TEXT, "
+ "status INTEGER, "
+ "status_info TEXT, "
+ "version TEXT);\n") # version hold the signature of data
+
+ sql_cmd += (
+ "CREATE TABLE disks ("
+ "id INTEGER PRIMARY KEY, "
+ "total_space LONG, "
+ "disk_type INTEGER, "
+ "status INTEGER, "
+ "owner_pool_id INTEGER, " # Indicate this disk is used to
+ # assemble a pool
+ "disk_prefix TEXT);\n")
+
+ sql_cmd += (
+ "CREATE TABLE tgts ("
+ "id INTEGER PRIMARY KEY, "
+ "port_type INTEGER, "
+ "service_address TEXT, "
+ "network_address TEXT, "
+ "physical_address TEXT, "
+ "physical_name TEXT);\n")
+
+ sql_cmd += (
+ "CREATE TABLE pools ("
+ "id INTEGER PRIMARY KEY, "
+ "name TEXT UNIQUE, "
+ "status INTEGER, "
+ "status_info TEXT, "
+ "element_type INTEGER, "
+ "unsupported_actions INTEGER, "
+ "raid_type INTEGER, "
+ "parent_pool_id INTEGER, " # Indicate this pool is allocated from
+ # other pool
+ "member_type INTEGER, "
+ "total_space LONG);\n") # total_space here is only for
+ # sub-pool (pool from pool)
+ sql_cmd += (
+ "CREATE TABLE volumes ("
+ "id INTEGER PRIMARY KEY, "
+ "vpd83 TEXT, "
+ "name TEXT UNIQUE, "
+ "total_space LONG, "
+ "consumed_size LONG, " # Reserved for future thinp support.
+ "pool_id INTEGER, "
+ "admin_state INTEGER, "
+ "thinp INTEGER, "
+ "rep_source INTEGER, " # Only exist on target volume.
+ "rep_type INTEGER);\n") # Only exist on target volume
+
+ sql_cmd += (
+ "CREATE TABLE ags ("
+ "id INTEGER PRIMARY KEY, "
+ "name TEXT UNIQUE);\n")
+
+ sql_cmd += (
+ "CREATE TABLE inits ("
+ "id TEXT UNIQUE, "
+ "init_type INTEGER, "
+ "owner_ag_id INTEGER);\n")
+
+ sql_cmd += (
+ "CREATE TABLE vol_masks ("
+ "id TEXT UNIQUE, " # md5 hash of vol_id and ag_id
+ "vol_id INTEGER, "
+ "ag_id INTEGER);\n")
+
+ sql_cmd += (
+ "CREATE TABLE vol_reps ("
+ "rep_type INTEGER, "
+ "src_vol_id INTEGER, "
+ "dst_vol_id INTEGER);\n")
+
+ sql_cmd += (
+ "CREATE TABLE vol_rep_ranges ("
+ "src_vol_id INTEGER, "
+ "dst_vol_id INTEGER, "
+ "src_start_blk LONG, "
+ "dst_start_blk LONG, "
+ "blk_count LONG);\n")
+
+ sql_cmd += (
+ "CREATE TABLE fss ("
+ "id INTEGER PRIMARY KEY, "
+ "name TEXT UNIQUE, "
+ "total_space LONG, "
+ "consumed_size LONG, "
+ "free_space LONG, "
+ "pool_id INTEGER);\n")
+
+ sql_cmd += (
+ "CREATE TABLE fs_snaps ("
+ "id INTEGER PRIMARY KEY, "
+ "name TEXT UNIQUE, "
+ "fs_id INTEGER, "
+ "timestamp TEXT);\n")
+
+ sql_cmd += (
+ "CREATE TABLE fs_clones ("
+ "id TEXT UNIQUE, "
+ "src_fs_id INTEGER, "
+ "dst_fs_id INTEGER, "
+ "fs_snap_id INTEGER);\n")
+
+ sql_cmd += (
+ "CREATE TABLE exps ("
+ "id INTEGER PRIMARY KEY, "
+ "fs_id INTEGER, "
+ "exp_path TEXT UNIQUE, "
+ "auth_type TEXT, "
+ "root_hosts TEXT, " # divided by ','
+ "rw_hosts TEXT, " # divided by ','
+ "ro_hosts TEXT, " # divided by ','
+ "anon_uid INTEGER, "
+ "anon_gid INTEGER, "
+ "options TEXT);\n")
+
+ sql_cmd += (
+ "CREATE TABLE jobs ("
+ "id INTEGER PRIMARY KEY, "
+ "duration INTEGER, "
+ "timestamp TEXT, "
+ "data_type INTEGER, "
+ "data_id TEXT);\n")
+
+ sql_cur = self.sql_conn.cursor()
+ try:
+ sql_cur.executescript(sql_cmd)
+ except sqlite3.OperationalError as sql_error:
+ if 'already exists' in sql_error.message:
+ pass
+ else:
+ raise sql_error

- def __init__(self, dump_file=None):
- if dump_file is None:
- self.dump_file = SimArray.SIM_DATA_FILE
- else:
- self.dump_file = dump_file
-
- self.state_fd = os.open(self.dump_file, os.O_RDWR | os.O_CREAT)
- fcntl.lockf(self.state_fd, fcntl.LOCK_EX)
- self.state_fo = os.fdopen(self.state_fd, "r+b")
-
- current = self.state_fo.read()
-
- if current:
- try:
- self.data = pickle.loads(current)
-
- # Going forward we could get smarter about handling this for
- # changes that aren't invasive, but we at least need to check
- # to make sure that the data will work and not cause any
- # undo confusion.
- if self.data.version != SimData.SIM_DATA_VERSION or \
- self.data.signature != SimData.state_signature():
- SimArray._version_error(self.dump_file)
- except AttributeError:
- SimArray._version_error(self.dump_file)
+ def _check_version(self):
+ sim_syss = self.sim_syss()
+ if len(sim_syss) == 0 or not sim_syss[0]:
+ return False
else:
- self.data = SimData()
-
- def save_state(self):
- # Make sure we are at the beginning of the stream
- self.state_fo.seek(0)
- pickle.dump(self.data, self.state_fo)
- self.state_fo.flush()
- self.state_fo.close()
- self.state_fo = None
-
- def job_status(self, job_id, flags=0):
- return self.data.job_status(job_id, flags=0)
-
- def job_free(self, job_id, flags=0):
- return self.data.job_free(job_id, flags=0)
-
- def time_out_set(self, ms, flags=0):
- return self.data.set_time_out(ms, flags)
-
- def time_out_get(self, flags=0):
- return self.data.get_time_out(flags)
-
- def systems(self):
- return self.data.systems()
-
- @staticmethod
- def _sim_vol_2_lsm(sim_vol):
- return Volume(sim_vol['vol_id'], sim_vol['name'], sim_vol['vpd83'],
- SimData.SIM_DATA_BLK_SIZE,
- int(sim_vol['total_space'] / SimData.SIM_DATA_BLK_SIZE),
- sim_vol['admin_state'], sim_vol['sys_id'],
- sim_vol['pool_id'])
-
- def volumes(self):
- sim_vols = self.data.volumes()
- return SimArray._sort_by_id(
- [SimArray._sim_vol_2_lsm(v) for v in sim_vols])
-
- def _sim_pool_2_lsm(self, sim_pool, flags=0):
- pool_id = sim_pool['pool_id']
- name = sim_pool['name']
- total_space = self.data.pool_total_space(pool_id)
- free_space = self.data.pool_free_space(pool_id)
- status = sim_pool['status']
- status_info = sim_pool['status_info']
- sys_id = sim_pool['sys_id']
- unsupported_actions = sim_pool['unsupported_actions']
- return Pool(pool_id, name,
- Pool.ELEMENT_TYPE_VOLUME | Pool.ELEMENT_TYPE_FS,
- unsupported_actions, total_space, free_space, status,
- status_info, sys_id)
-
- def pools(self, flags=0):
- rc = []
- sim_pools = self.data.pools()
- for sim_pool in sim_pools:
- rc.extend([self._sim_pool_2_lsm(sim_pool, flags)])
- return SimArray._sort_by_id(rc)
-
- def disks(self):
- rc = []
- sim_disks = self.data.disks()
- for sim_disk in sim_disks:
- disk = Disk(sim_disk['disk_id'], sim_disk['name'],
- sim_disk['disk_type'], SimData.SIM_DATA_BLK_SIZE,
- int(sim_disk['total_space'] /
- SimData.SIM_DATA_BLK_SIZE), Disk.STATUS_OK,
- sim_disk['sys_id'])
- rc.extend([disk])
- return SimArray._sort_by_id(rc)
-
- def volume_create(self, pool_id, vol_name, size_bytes, thinp, flags=0):
- sim_vol = self.data.volume_create(
- pool_id, vol_name, size_bytes, thinp, flags)
- return self.data.job_create(SimArray._sim_vol_2_lsm(sim_vol))
-
- def volume_delete(self, vol_id, flags=0):
- self.data.volume_delete(vol_id, flags=0)
- return self.data.job_create(None)[0]
-
- def volume_resize(self, vol_id, new_size_bytes, flags=0):
- sim_vol = self.data.volume_resize(vol_id, new_size_bytes, flags)
- return self.data.job_create(SimArray._sim_vol_2_lsm(sim_vol))
-
- def volume_replicate(self, dst_pool_id, rep_type, src_vol_id, new_vol_name,
- flags=0):
- sim_vol = self.data.volume_replicate(
- dst_pool_id, rep_type, src_vol_id, new_vol_name, flags)
- return self.data.job_create(SimArray._sim_vol_2_lsm(sim_vol))
-
- def volume_replicate_range_block_size(self, sys_id, flags=0):
- return self.data.volume_replicate_range_block_size(sys_id, flags)
-
- def volume_replicate_range(self, rep_type, src_vol_id, dst_vol_id, ranges,
- flags=0):
- return self.data.job_create(
- self.data.volume_replicate_range(
- rep_type, src_vol_id, dst_vol_id, ranges, flags))[0]
-
- def volume_enable(self, vol_id, flags=0):
- return self.data.volume_enable(vol_id, flags)
-
- def volume_disable(self, vol_id, flags=0):
- return self.data.volume_disable(vol_id, flags)
-
- def volume_child_dependency(self, vol_id, flags=0):
- return self.data.volume_child_dependency(vol_id, flags)
-
- def volume_child_dependency_rm(self, vol_id, flags=0):
- return self.data.job_create(
- self.data.volume_child_dependency_rm(vol_id, flags))[0]
-
- @staticmethod
- def _sim_fs_2_lsm(sim_fs):
- return FileSystem(sim_fs['fs_id'], sim_fs['name'],
- sim_fs['total_space'], sim_fs['free_space'],
- sim_fs['pool_id'], sim_fs['sys_id'])
-
- def fs(self):
- sim_fss = self.data.fs()
- return SimArray._sort_by_id(
- [SimArray._sim_fs_2_lsm(f) for f in sim_fss])
-
- def fs_create(self, pool_id, fs_name, size_bytes, flags=0):
- sim_fs = self.data.fs_create(pool_id, fs_name, size_bytes, flags)
- return self.data.job_create(SimArray._sim_fs_2_lsm(sim_fs))
-
- def fs_delete(self, fs_id, flags=0):
- self.data.fs_delete(fs_id, flags=0)
- return self.data.job_create(None)[0]
-
- def fs_resize(self, fs_id, new_size_bytes, flags=0):
- sim_fs = self.data.fs_resize(fs_id, new_size_bytes, flags)
- return self.data.job_create(SimArray._sim_fs_2_lsm(sim_fs))
-
- def fs_clone(self, src_fs_id, dst_fs_name, snap_id, flags=0):
- sim_fs = self.data.fs_clone(src_fs_id, dst_fs_name, snap_id, flags)
- return self.data.job_create(SimArray._sim_fs_2_lsm(sim_fs))
-
- def fs_file_clone(self, fs_id, src_fs_name, dst_fs_name, snap_id, flags=0):
- return self.data.job_create(
- self.data.fs_file_clone(
- fs_id, src_fs_name, dst_fs_name, snap_id, flags))[0]
-
- @staticmethod
- def _sim_snap_2_lsm(sim_snap):
- return FsSnapshot(sim_snap['snap_id'], sim_snap['name'],
- sim_snap['timestamp'])
-
- def fs_snapshots(self, fs_id, flags=0):
- sim_snaps = self.data.fs_snapshots(fs_id, flags)
- return [SimArray._sim_snap_2_lsm(s) for s in sim_snaps]
-
- def fs_snapshot_create(self, fs_id, snap_name, flags=0):
- sim_snap = self.data.fs_snapshot_create(fs_id, snap_name, flags)
- return self.data.job_create(SimArray._sim_snap_2_lsm(sim_snap))
-
- def fs_snapshot_delete(self, fs_id, snap_id, flags=0):
- return self.data.job_create(
- self.data.fs_snapshot_delete(fs_id, snap_id, flags))[0]
-
- def fs_snapshot_restore(self, fs_id, snap_id, files, restore_files,
- flag_all_files, flags):
- return self.data.job_create(
- self.data.fs_snapshot_restore(
- fs_id, snap_id, files, restore_files,
- flag_all_files, flags))[0]
-
- def fs_child_dependency(self, fs_id, files, flags=0):
- return self.data.fs_child_dependency(fs_id, files, flags)
-
- def fs_child_dependency_rm(self, fs_id, files, flags=0):
- return self.data.job_create(
- self.data.fs_child_dependency_rm(fs_id, files, flags))[0]
-
- @staticmethod
- def _sim_exp_2_lsm(sim_exp):
- return NfsExport(sim_exp['exp_id'], sim_exp['fs_id'],
- sim_exp['exp_path'], sim_exp['auth_type'],
- sim_exp['root_hosts'], sim_exp['rw_hosts'],
- sim_exp['ro_hosts'], sim_exp['anon_uid'],
- sim_exp['anon_gid'], sim_exp['options'])
-
- def exports(self, flags=0):
- sim_exps = self.data.exports(flags)
- return SimArray._sort_by_id(
- [SimArray._sim_exp_2_lsm(e) for e in sim_exps])
+ if 'version' in sim_syss[0] and \
+ sim_syss[0]['version'] == BackStore.VERSION_SIGNATURE:
+ return True

- def fs_export(self, fs_id, exp_path, root_hosts, rw_hosts, ro_hosts,
- anon_uid, anon_gid, auth_type, options, flags=0):
- sim_exp = self.data.fs_export(
- fs_id, exp_path, root_hosts, rw_hosts, ro_hosts,
- anon_uid, anon_gid, auth_type, options, flags)
- return SimArray._sim_exp_2_lsm(sim_exp)
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Stored simulator state incompatible with "
+ "simulator, please move or delete %s" % self.statefile)

- def fs_unexport(self, exp_id, flags=0):
- return self.data.fs_unexport(exp_id, flags)
+ def check_version_and_init(self):
+ """
+ Raise an error if the version does not match.
+ If an empty database is found, initialize it.
+ """
+ # The complex lock workflow is needed because the Python sqlite3
+ # module autocommits "CREATE TABLE" commands.
+ self.trans_begin()
+ if self._check_version():
+ self.trans_commit()
+ return
+ else:
+ self._data_add(
+ 'systems',
+ {
+ 'id': BackStore.SYS_ID,
+ 'name': BackStore.SYS_NAME,
+ 'status': System.STATUS_OK,
+ 'status_info': "",
+ 'version': BackStore.VERSION_SIGNATURE,
+ })
+
+ size_bytes_2t = size_human_2_size_bytes('2TiB')
+ size_bytes_512g = size_human_2_size_bytes('512GiB')
+ # Add 2 SATA disks(2TiB)
+ pool_1_disks = []
+ for _ in range(0, 2):
+ self._data_add(
+ 'disks',
+ {
+ 'disk_prefix': "2TiB SATA Disk",
+ 'total_space': size_bytes_2t,
+ 'disk_type': Disk.TYPE_SATA,
+ 'status': Disk.STATUS_OK,
+ })
+ pool_1_disks.append(self.lastrowid)
+
+ test_pool_disks = []
+ # Add 6 SAS disks(2TiB)
+ for _ in range(0, 6):
+ self._data_add(
+ 'disks',
+ {
+ 'disk_prefix': "2TiB SAS Disk",
+ 'total_space': size_bytes_2t,
+ 'disk_type': Disk.TYPE_SAS,
+ 'status': Disk.STATUS_OK,
+ })
+ if len(test_pool_disks) < 2:
+ test_pool_disks.append(self.lastrowid)
+
+ # Add 5 SSD disks(512GiB)
+ for _ in range(0, 5):
+ self._data_add(
+ 'disks',
+ {
+ 'disk_prefix': "512GiB SSD Disk",
+ 'total_space': size_bytes_512g,
+ 'disk_type': Disk.TYPE_SSD,
+ 'status': Disk.STATUS_OK,
+ })
+ # Add 7 SSD disks(2TiB)
+ for _ in range(0, 7):
+ self._data_add(
+ 'disks',
+ {
+ 'disk_prefix': "2TiB SSD Disk",
+ 'total_space': size_bytes_2t,
+ 'disk_type': Disk.TYPE_SSD,
+ 'status': Disk.STATUS_OK,
+ })
+
+ pool_1_id = self.sim_pool_create_from_disk(
+ name='Pool 1',
+ raid_type=PoolRAID.RAID_TYPE_RAID1,
+ sim_disk_ids=pool_1_disks,
+ element_type=Pool.ELEMENT_TYPE_POOL |
+ Pool.ELEMENT_TYPE_FS |
+ Pool.ELEMENT_TYPE_VOLUME |
+ Pool.ELEMENT_TYPE_DELTA |
+ Pool.ELEMENT_TYPE_SYS_RESERVED,
+ unsupported_actions=Pool.UNSUPPORTED_VOLUME_GROW |
+ Pool.UNSUPPORTED_VOLUME_SHRINK)

- @staticmethod
- def _sim_ag_2_lsm(sim_ag):
- return AccessGroup(sim_ag['ag_id'], sim_ag['name'], sim_ag['init_ids'],
- sim_ag['init_type'], sim_ag['sys_id'])
+ self.sim_pool_create_sub_pool(
+ name='Pool 2(sub pool of Pool 1)',
+ parent_pool_id=pool_1_id,
+ element_type=Pool.ELEMENT_TYPE_FS |
+ Pool.ELEMENT_TYPE_VOLUME |
+ Pool.ELEMENT_TYPE_DELTA,
+ size=size_bytes_512g)

- def ags(self):
- sim_ags = self.data.ags()
- return [SimArray._sim_ag_2_lsm(a) for a in sim_ags]
+ self.sim_pool_create_from_disk(
+ name='lsm_test_aggr',
+ element_type=Pool.ELEMENT_TYPE_FS |
+ Pool.ELEMENT_TYPE_VOLUME |
+ Pool.ELEMENT_TYPE_DELTA,
+ raid_type=PoolRAID.RAID_TYPE_RAID0,
+ sim_disk_ids=test_pool_disks)
+
+ self._data_add(
+ 'tgts',
+ {
+ 'port_type': TargetPort.TYPE_FC,
+ 'service_address': '50:0a:09:86:99:4b:8d:c5',
+ 'network_address': '50:0a:09:86:99:4b:8d:c5',
+ 'physical_address': '50:0a:09:86:99:4b:8d:c5',
+ 'physical_name': 'FC_a_0b',
+ })
+
+ self._data_add(
+ 'tgts',
+ {
+ 'port_type': TargetPort.TYPE_FCOE,
+ 'service_address': '50:0a:09:86:99:4b:8d:c6',
+ 'network_address': '50:0a:09:86:99:4b:8d:c6',
+ 'physical_address': '50:0a:09:86:99:4b:8d:c6',
+ 'physical_name': 'FCoE_b_0c',
+ })
+ self._data_add(
+ 'tgts',
+ {
+ 'port_type': TargetPort.TYPE_ISCSI,
+ 'service_address': 'iqn.1986-05.com.example:sim-tgt-03',
+ 'network_address': 'sim-iscsi-tgt-3.example.com:3260',
+ 'physical_address': 'a4:4e:31:47:f4:e0',
+ 'physical_name': 'iSCSI_c_0d',
+ })
+ self._data_add(
+ 'tgts',
+ {
+ 'port_type': TargetPort.TYPE_ISCSI,
+ 'service_address': 'iqn.1986-05.com.example:sim-tgt-03',
+ 'network_address': '10.0.0.1:3260',
+ 'physical_address': 'a4:4e:31:47:f4:e1',
+ 'physical_name': 'iSCSI_c_0e',
+ })
+ self._data_add(
+ 'tgts',
+ {
+ 'port_type': TargetPort.TYPE_ISCSI,
+ 'service_address': 'iqn.1986-05.com.example:sim-tgt-03',
+ 'network_address': '[2001:470:1f09:efe:a64e:31ff::1]:3260',
+ 'physical_address': 'a4:4e:31:47:f4:e1',
+ 'physical_name': 'iSCSI_c_0e',
+ })
+
+ self.trans_commit()
+ return

- def access_group_create(self, name, init_id, init_type, sys_id, flags=0):
- sim_ag = self.data.access_group_create(
- name, init_id, init_type, sys_id, flags)
- return SimArray._sim_ag_2_lsm(sim_ag)
+ def _sql_exec(self, sql_cmd, key_list=None):
+ """
+ Execute a SQL command and fetch all output.
+ If key_list is not None, convert the returned rows to a list of
+ dictionaries.
+ """
+ sql_cur = self.sql_conn.cursor()
+ sql_cur.execute(sql_cmd)
+ self.lastrowid = sql_cur.lastrowid
+ sql_output = sql_cur.fetchall()
+ if key_list and sql_output:
+ return list(
+ dict(zip(key_list, value_list))
+ for value_list in sql_output
+ if value_list)
+ else:
+ return sql_output

- def access_group_delete(self, ag_id, flags=0):
- return self.data.access_group_delete(ag_id, flags)
+ def _get_table(self, table_name, key_list):
+ sql_cmd = "SELECT %s FROM %s" % (",".join(key_list), table_name)
+ return self._sql_exec(sql_cmd, key_list)

- def access_group_initiator_add(self, ag_id, init_id, init_type, flags=0):
- sim_ag = self.data.access_group_initiator_add(
- ag_id, init_id, init_type, flags)
- return SimArray._sim_ag_2_lsm(sim_ag)
+ def trans_begin(self):
+ self.sql_conn.execute("BEGIN IMMEDIATE TRANSACTION;")

- def access_group_initiator_delete(self, ag_id, init_id, init_type,
- flags=0):
- sim_ag = self.data.access_group_initiator_delete(ag_id, init_id,
- init_type, flags)
- return SimArray._sim_ag_2_lsm(sim_ag)
+ def trans_commit(self):
+ self.sql_conn.commit()

- def volume_mask(self, ag_id, vol_id, flags=0):
- return self.data.volume_mask(ag_id, vol_id, flags)
+ def trans_rollback(self):
+ self.sql_conn.rollback()

- def volume_unmask(self, ag_id, vol_id, flags=0):
- return self.data.volume_unmask(ag_id, vol_id, flags)
+ def _data_add(self, table_name, data_dict):
+ keys = data_dict.keys()
+ values = data_dict.values()

- def volumes_accessible_by_access_group(self, ag_id, flags=0):
- sim_vols = self.data.volumes_accessible_by_access_group(ag_id, flags)
- return [SimArray._sim_vol_2_lsm(v) for v in sim_vols]
+ sql_cmd = "INSERT INTO %s (" % table_name
+ for key in keys:
+ sql_cmd += "'%s', " % key

- def access_groups_granted_to_volume(self, vol_id, flags=0):
- sim_ags = self.data.access_groups_granted_to_volume(vol_id, flags)
- return [SimArray._sim_ag_2_lsm(a) for a in sim_ags]
+ # Remove trailing ', '
+ sql_cmd = sql_cmd[:-2]

- def iscsi_chap_auth(self, init_id, in_user, in_pass, out_user, out_pass,
- flags=0):
- return self.data.iscsi_chap_auth(init_id, in_user, in_pass, out_user,
- out_pass, flags)
+ sql_cmd += ") VALUES ( "
+ for value in values:
+ if value is None:
+ value = ''
+ sql_cmd += "'%s', " % value

- @staticmethod
- def _sim_tgt_2_lsm(sim_tgt):
- return TargetPort(
- sim_tgt['tgt_id'], sim_tgt['port_type'],
- sim_tgt['service_address'], sim_tgt['network_address'],
- sim_tgt['physical_address'], sim_tgt['physical_name'],
- sim_tgt['sys_id'])
+ # Remove trailing ', '
+ sql_cmd = sql_cmd[:-2]

- def target_ports(self):
- sim_tgts = self.data.target_ports()
- return [SimArray._sim_tgt_2_lsm(t) for t in sim_tgts]
+ sql_cmd += ");"
+ self._sql_exec(sql_cmd)

+ def _data_find(self, table, condition, key_list, flag_unique=False):
+ sql_cmd = "SELECT %s FROM %s WHERE %s" % (
+ ",".join(key_list), table, condition)
+ sim_datas = self._sql_exec(sql_cmd, key_list)
+ if flag_unique:
+ if len(sim_datas) == 0:
+ return None
+ elif len(sim_datas) == 1:
+ return sim_datas[0]
+ else:
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "_data_find(): Got non-unique data: %s" % locals())
+ else:
+ return sim_datas

-class SimData(object):
- """
- Rules here are:
- * we don't store one data twice
- * we don't srore data which could be caculated out
-
- self.vol_dict = {
- Volume.id = sim_vol,
- }
-
- sim_vol = {
- 'vol_id': self._next_vol_id()
- 'vpd83': SimData._random_vpd(),
- 'name': vol_name,
- 'total_space': size_bytes,
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- 'pool_id': owner_pool_id,
- 'consume_size': size_bytes,
- 'admin_state': Volume.ADMIN_STATE_ENABLED,
- 'replicate': {
- dst_vol_id = [
- {
- 'src_start_blk': src_start_blk,
- 'dst_start_blk': dst_start_blk,
- 'blk_count': blk_count,
- 'rep_type': Volume.REPLICATE_XXXX,
- },
- ],
- },
- 'mask': {
- ag_id = Volume.ACCESS_READ_WRITE|Volume.ACCESS_READ_ONLY,
- },
- 'mask_init': {
- init_id = Volume.ACCESS_READ_WRITE|Volume.ACCESS_READ_ONLY,
- }
- }
-
- self.ag_dict ={
- AccessGroup.id = sim_ag,
- }
- sim_ag = {
- 'init_ids': [init_id,],
- 'init_type': AccessGroup.init_type,
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- 'name': name,
- 'ag_id': self._next_ag_id()
- }
-
- self.fs_dict = {
- FileSystem.id = sim_fs,
- }
- sim_fs = {
- 'fs_id': self._next_fs_id(),
- 'name': fs_name,
- 'total_space': size_bytes,
- 'free_space': size_bytes,
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- 'pool_id': pool_id,
- 'consume_size': size_bytes,
- 'clone': {
- dst_fs_id: {
- 'snap_id': snap_id, # None if no snapshot
- 'files': [ file_path, ] # [] if all files cloned.
- },
- },
- 'snaps' = [snap_id, ],
- }
- self.snap_dict = {
- Snapshot.id: sim_snap,
- }
- sim_snap = {
- 'snap_id': self._next_snap_id(),
- 'name': snap_name,
- 'fs_id': fs_id,
- 'files': [file_path, ],
- 'timestamp': time.time(),
- }
- self.exp_dict = {
- Export.id: sim_exp,
- }
- sim_exp = {
- 'exp_id': self._next_exp_id(),
- 'fs_id': fs_id,
- 'exp_path': exp_path,
- 'auth_type': auth_type,
- 'root_hosts': [root_host, ],
- 'rw_hosts': [rw_host, ],
- 'ro_hosts': [ro_host, ],
- 'anon_uid': anon_uid,
- 'anon_gid': anon_gid,
- 'options': [option, ],
- }
-
- self.pool_dict = {
- Pool.id: sim_pool,
- }
- sim_pool = {
- 'name': pool_name,
- 'pool_id': Pool.id,
- 'raid_type': PoolRAID.RAID_TYPE_XXXX,
- 'member_ids': [ disk_id or pool_id or volume_id ],
- 'member_type': PoolRAID.MEMBER_TYPE_XXXX,
- 'member_size': size_bytes # space allocated from each member pool.
- # only for MEMBER_TYPE_POOL
- 'status': SIM_DATA_POOL_STATUS,
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- 'element_type': SimData.SIM_DATA_POOL_ELEMENT_TYPE,
- }
- """
- SIM_DATA_BLK_SIZE = 512
- SIM_DATA_VERSION = "2.9"
- SIM_DATA_SYS_ID = 'sim-01'
- SIM_DATA_INIT_NAME = 'NULL'
- SIM_DATA_TMO = 30000 # ms
- SIM_DATA_POOL_STATUS = Pool.STATUS_OK
- SIM_DATA_POOL_STATUS_INFO = ''
- SIM_DATA_POOL_ELEMENT_TYPE = Pool.ELEMENT_TYPE_FS \
- | Pool.ELEMENT_TYPE_POOL \
- | Pool.ELEMENT_TYPE_VOLUME
-
- SIM_DATA_POOL_UNSUPPORTED_ACTIONS = 0
-
- SIM_DATA_SYS_POOL_ELEMENT_TYPE = SIM_DATA_POOL_ELEMENT_TYPE \
- | Pool.ELEMENT_TYPE_SYS_RESERVED
-
- SIM_DATA_CUR_VOL_ID = 0
- SIM_DATA_CUR_POOL_ID = 0
- SIM_DATA_CUR_FS_ID = 0
- SIM_DATA_CUR_AG_ID = 0
- SIM_DATA_CUR_SNAP_ID = 0
- SIM_DATA_CUR_EXP_ID = 0
-
- def _next_pool_id(self):
- self.SIM_DATA_CUR_POOL_ID += 1
- return "POOL_ID_%08d" % self.SIM_DATA_CUR_POOL_ID
-
- def _next_vol_id(self):
- self.SIM_DATA_CUR_VOL_ID += 1
- return "VOL_ID_%08d" % self.SIM_DATA_CUR_VOL_ID
-
- def _next_fs_id(self):
- self.SIM_DATA_CUR_FS_ID += 1
- return "FS_ID_%08d" % self.SIM_DATA_CUR_FS_ID
-
- def _next_ag_id(self):
- self.SIM_DATA_CUR_AG_ID += 1
- return "AG_ID_%08d" % self.SIM_DATA_CUR_AG_ID
-
- def _next_snap_id(self):
- self.SIM_DATA_CUR_SNAP_ID += 1
- return "SNAP_ID_%08d" % self.SIM_DATA_CUR_SNAP_ID
-
- def _next_exp_id(self):
- self.SIM_DATA_CUR_EXP_ID += 1
- return "EXP_ID_%08d" % self.SIM_DATA_CUR_EXP_ID
+ def _data_update(self, table, data_id, column_name, value):
+ if value is None:
+ value = ''

- @staticmethod
- def state_signature():
- return 'LSM_SIMULATOR_DATA_%s' % md5(SimData.SIM_DATA_VERSION)
+ sql_cmd = "UPDATE %s SET %s='%s' WHERE id='%s'" % \
+ (table, column_name, value, data_id)

- @staticmethod
- def _disk_id(num):
- return "DISK_ID_%0*d" % (D_FMT, num)
-
- def __init__(self):
- self.tmo = SimData.SIM_DATA_TMO
- self.version = SimData.SIM_DATA_VERSION
- self.signature = SimData.state_signature()
- self.job_num = 0
- self.job_dict = {
- # id: SimJob
- }
- self.syss = [
- System(SimData.SIM_DATA_SYS_ID, 'LSM simulated storage plug-in',
- System.STATUS_OK, '')]
- pool_size_200g = size_human_2_size_bytes('200GiB')
- self.pool_dict = {
- 'POO1': dict(
- pool_id='POO1', name='Pool 1',
- member_type=PoolRAID.MEMBER_TYPE_DISK_SATA,
- member_ids=[SimData._disk_id(0), SimData._disk_id(1)],
- raid_type=PoolRAID.RAID_TYPE_RAID1,
- status=SimData.SIM_DATA_POOL_STATUS,
- status_info=SimData.SIM_DATA_POOL_STATUS_INFO,
- sys_id=SimData.SIM_DATA_SYS_ID,
- element_type=SimData.SIM_DATA_SYS_POOL_ELEMENT_TYPE,
- unsupported_actions=Pool.UNSUPPORTED_VOLUME_GROW |
- Pool.UNSUPPORTED_VOLUME_SHRINK),
- 'POO2': dict(
- pool_id='POO2', name='Pool 2',
- member_type=PoolRAID.MEMBER_TYPE_POOL,
- member_ids=['POO1'], member_size=pool_size_200g,
- raid_type=PoolRAID.RAID_TYPE_NOT_APPLICABLE,
- status=Pool.STATUS_OK,
- status_info=SimData.SIM_DATA_POOL_STATUS_INFO,
- sys_id=SimData.SIM_DATA_SYS_ID,
- element_type=SimData.SIM_DATA_POOL_ELEMENT_TYPE,
- unsupported_actions=SimData.SIM_DATA_POOL_UNSUPPORTED_ACTIONS),
- # lsm_test_aggr pool is required by test/runtest.sh
- 'lsm_test_aggr': dict(
- pool_id='lsm_test_aggr',
- name='lsm_test_aggr',
- member_type=PoolRAID.MEMBER_TYPE_DISK_SAS,
- member_ids=[SimData._disk_id(2), SimData._disk_id(3)],
- raid_type=PoolRAID.RAID_TYPE_RAID0,
- status=Pool.STATUS_OK,
- status_info=SimData.SIM_DATA_POOL_STATUS_INFO,
- sys_id=SimData.SIM_DATA_SYS_ID,
- element_type=SimData.SIM_DATA_POOL_ELEMENT_TYPE,
- unsupported_actions=SimData.SIM_DATA_POOL_UNSUPPORTED_ACTIONS),
- }
- self.vol_dict = {
- }
- self.fs_dict = {
- }
- self.snap_dict = {
- }
- self.exp_dict = {
- }
- disk_size_2t = size_human_2_size_bytes('2TiB')
- disk_size_512g = size_human_2_size_bytes('512GiB')
-
- # Make disks in a loop so that we can create many disks easily
- # if we wish
- self.disk_dict = {}
-
- for i in range(0, 21):
- d_id = SimData._disk_id(i)
- d_size = disk_size_2t
- d_name = "SATA Disk %0*d" % (D_FMT, i)
- d_type = Disk.TYPE_SATA
-
- if 2 <= i <= 8:
- d_name = "SAS Disk %0*d" % (D_FMT, i)
- d_type = Disk.TYPE_SAS
- elif 9 <= i:
- d_name = "SSD Disk %0*d" % (D_FMT, i)
- d_type = Disk.TYPE_SSD
- if i <= 13:
- d_size = disk_size_512g
-
- self.disk_dict[d_id] = dict(disk_id=d_id, name=d_name,
- total_space=d_size,
- disk_type=d_type,
- sys_id=SimData.SIM_DATA_SYS_ID)
-
- self.ag_dict = {
- }
- # Create some volumes, fs and etc
- self.volume_create(
- 'POO1', 'Volume 000', size_human_2_size_bytes('200GiB'),
- Volume.PROVISION_DEFAULT)
- self.volume_create(
- 'POO1', 'Volume 001', size_human_2_size_bytes('200GiB'),
- Volume.PROVISION_DEFAULT)
-
- self.pool_dict['POO3'] = {
- 'pool_id': 'POO3',
- 'name': 'Pool 3',
- 'member_type': PoolRAID.MEMBER_TYPE_DISK_SSD,
- 'member_ids': [
- self.disk_dict[SimData._disk_id(9)]['disk_id'],
- self.disk_dict[SimData._disk_id(10)]['disk_id'],
- ],
- 'raid_type': PoolRAID.RAID_TYPE_RAID1,
- 'status': Pool.STATUS_OK,
- 'status_info': SimData.SIM_DATA_POOL_STATUS_INFO,
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- 'element_type': SimData.SIM_DATA_POOL_ELEMENT_TYPE,
- 'unsupported_actions': SimData.SIM_DATA_POOL_UNSUPPORTED_ACTIONS
- }
-
- self.tgt_dict = {
- 'TGT_PORT_ID_01': {
- 'tgt_id': 'TGT_PORT_ID_01',
- 'port_type': TargetPort.TYPE_FC,
- 'service_address': '50:0a:09:86:99:4b:8d:c5',
- 'network_address': '50:0a:09:86:99:4b:8d:c5',
- 'physical_address': '50:0a:09:86:99:4b:8d:c5',
- 'physical_name': 'FC_a_0b',
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- },
- 'TGT_PORT_ID_02': {
- 'tgt_id': 'TGT_PORT_ID_02',
- 'port_type': TargetPort.TYPE_FCOE,
- 'service_address': '50:0a:09:86:99:4b:8d:c6',
- 'network_address': '50:0a:09:86:99:4b:8d:c6',
- 'physical_address': '50:0a:09:86:99:4b:8d:c6',
- 'physical_name': 'FCoE_b_0c',
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- },
- 'TGT_PORT_ID_03': {
- 'tgt_id': 'TGT_PORT_ID_03',
- 'port_type': TargetPort.TYPE_ISCSI,
- 'service_address': 'iqn.1986-05.com.example:sim-tgt-03',
- 'network_address': 'sim-iscsi-tgt-3.example.com:3260',
- 'physical_address': 'a4:4e:31:47:f4:e0',
- 'physical_name': 'iSCSI_c_0d',
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- },
- 'TGT_PORT_ID_04': {
- 'tgt_id': 'TGT_PORT_ID_04',
- 'port_type': TargetPort.TYPE_ISCSI,
- 'service_address': 'iqn.1986-05.com.example:sim-tgt-03',
- 'network_address': '10.0.0.1:3260',
- 'physical_address': 'a4:4e:31:47:f4:e1',
- 'physical_name': 'iSCSI_c_0e',
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- },
- 'TGT_PORT_ID_05': {
- 'tgt_id': 'TGT_PORT_ID_05',
- 'port_type': TargetPort.TYPE_ISCSI,
- 'service_address': 'iqn.1986-05.com.example:sim-tgt-03',
- 'network_address': '[2001:470:1f09:efe:a64e:31ff::1]:3260',
- 'physical_address': 'a4:4e:31:47:f4:e1',
- 'physical_name': 'iSCSI_c_0e',
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- },
- }
+ self._sql_exec(sql_cmd)

- return
+ def _data_delete(self, table, condition):
+ sql_cmd = "DELETE FROM %s WHERE %s;" % (table, condition)
+ self._sql_exec(sql_cmd)

- def pool_free_space(self, pool_id):
+ def sim_job_create(self, job_data_type=None, data_id=None):
"""
- Calculate out the free size of certain pool.
+ Return a job id (integer).
"""
- free_space = self.pool_total_space(pool_id)
- for sim_vol in self.vol_dict.values():
- if sim_vol['pool_id'] != pool_id:
- continue
- if free_space <= sim_vol['consume_size']:
- return 0
- free_space -= sim_vol['consume_size']
- for sim_fs in self.fs_dict.values():
- if sim_fs['pool_id'] != pool_id:
- continue
- if free_space <= sim_fs['consume_size']:
- return 0
- free_space -= sim_fs['consume_size']
- for sim_pool in self.pool_dict.values():
- if sim_pool['member_type'] != PoolRAID.MEMBER_TYPE_POOL:
- continue
- if pool_id in sim_pool['member_ids']:
- free_space -= sim_pool['member_size']
- return free_space
+ self._data_add(
+ "jobs",
+ {
+ "duration": os.getenv(
+ "LSM_SIM_TIME", BackStore.JOB_DEFAULT_DURATION),
+ "timestamp": time.time(),
+ "data_type": job_data_type,
+ "data_id": data_id,
+ })
+ return self.lastrowid
+
+ def sim_job_delete(self, sim_job_id):
+ self._data_delete('jobs', 'id="%s"' % sim_job_id)
+
+ def sim_job_status(self, sim_job_id):
+ """
+ Return a (progress, data_type, data) tuple.
+ progress is an integer percentage.
+ """
+ sim_job = self._data_find(
+ 'jobs', 'id=%s' % sim_job_id, BackStore.JOB_KEY_LIST,
+ flag_unique=True)
+ if sim_job is None:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_JOB, "Job not found")
+
+ progress = int(
+ (time.time() - float(sim_job['timestamp'])) /
+ sim_job['duration'] * 100)
+
+ data = None
+ data_type = None
+
+ if progress >= 100:
+ progress = 100
+ if sim_job['data_type'] == BackStore.JOB_DATA_TYPE_VOL:
+ data = self.sim_vol_of_id(sim_job['data_id'])
+ data_type = sim_job['data_type']
+ elif sim_job['data_type'] == BackStore.JOB_DATA_TYPE_FS:
+ data = self.sim_fs_of_id(sim_job['data_id'])
+ data_type = sim_job['data_type']
+ elif sim_job['data_type'] == BackStore.JOB_DATA_TYPE_FS_SNAP:
+ data = self.sim_fs_snap_of_id(sim_job['data_id'])
+ data_type = sim_job['data_type']
+
+ return (progress, data_type, data)
+
+ def sim_syss(self):
+ """
+ Return a list of sim_sys dict.
+ """
+ return self._get_table('systems', BackStore.SYS_KEY_LIST)

- @staticmethod
- def _random_vpd(l=16):
+ def sim_disks(self):
"""
- Generate a random 16 digit number as hex
+ Return a list of sim_disk dict.
"""
- vpd = []
- for i in range(0, l):
- vpd.append(str('%02x' % (random.randint(0, 255))))
- return "".join(vpd)
-
- def _size_of_raid(self, member_type, member_ids, raid_type,
- pool_each_size=0):
- member_sizes = []
- if PoolRAID.member_type_is_disk(member_type):
- for member_id in member_ids:
- member_sizes.extend([self.disk_dict[member_id]['total_space']])
-
- elif member_type == PoolRAID.MEMBER_TYPE_POOL:
- for member_id in member_ids:
- member_sizes.extend([pool_each_size])
+ sim_disks = self._get_table('disks', BackStore.DISK_KEY_LIST)

- else:
- raise LsmError(ErrorNumber.PLUGIN_BUG,
- "Got unsupported member_type in _size_of_raid()" +
- ": %d" % member_type)
+ for sim_disk in sim_disks:
+ sim_disk['name'] = "%s_%s" % (
+ sim_disk['disk_prefix'], sim_disk['id'])
+ del sim_disk['disk_prefix']
+ return sim_disks
+
+ @staticmethod
+ def _size_of_raid(raid_type, member_sizes):
all_size = 0
member_size = 0
- member_count = len(member_ids)
+ member_count = len(member_sizes)
for member_size in member_sizes:
all_size += member_size

@@ -852,681 +696,1373 @@ class SimData(object):
elif raid_type == PoolRAID.RAID_TYPE_RAID61:
if member_count < 8 or member_count % 2 == 1:
return 0
- print "%s" % size_bytes_2_size_human(all_size)
- print "%s" % size_bytes_2_size_human(member_size)
return int(all_size / 2 - member_size * 2)
- raise LsmError(ErrorNumber.PLUGIN_BUG,
- "_size_of_raid() got invalid raid type: " +
- "%s(%d)" % (Pool.raid_type_to_str(raid_type),
- raid_type))
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "_size_of_raid() got invalid raid type: %d" % raid_type)

- def pool_total_space(self, pool_id):
+ def _sim_pool_total_space(self, sim_pool, sim_disks=None):
"""
Find out the correct size of RAID pool
"""
- sim_pool = self.pool_dict[pool_id]
- each_pool_size_bytes = 0
- member_type = sim_pool['member_type']
if sim_pool['member_type'] == PoolRAID.MEMBER_TYPE_POOL:
- each_pool_size_bytes = sim_pool['member_size']
-
- return self._size_of_raid(
- member_type, sim_pool['member_ids'], sim_pool['raid_type'],
- each_pool_size_bytes)
-
- @staticmethod
- def _block_rounding(size_bytes):
- return (size_bytes + SimData.SIM_DATA_BLK_SIZE - 1) / \
- SimData.SIM_DATA_BLK_SIZE * \
- SimData.SIM_DATA_BLK_SIZE
-
- def job_create(self, returned_item):
- if random.randint(0, 3) > 0:
- self.job_num += 1
- job_id = "JOB_%s" % self.job_num
- self.job_dict[job_id] = SimJob(returned_item)
- return job_id, None
+ return sim_pool['total_space']
+
+ if sim_disks is None:
+ member_sizes = list(
+ i['total_space']
+ for i in self._data_find(
+ 'disks', "owner_pool_id=%d;" % sim_pool['id'],
+ ['total_space']))
else:
- return None, returned_item
-
- def job_status(self, job_id, flags=0):
- if job_id in self.job_dict.keys():
- return self.job_dict[job_id].progress()
- raise LsmError(ErrorNumber.NOT_FOUND_JOB,
- 'Non-existent job: %s' % job_id)
+ member_sizes = [d['total_space'] for d in sim_disks]

- def job_free(self, job_id, flags=0):
- if job_id in self.job_dict.keys():
- del(self.job_dict[job_id])
- return
- raise LsmError(ErrorNumber.NOT_FOUND_JOB,
- 'Non-existent job: %s' % job_id)
+ return BackStore._size_of_raid(sim_pool['raid_type'], member_sizes)

- def set_time_out(self, ms, flags=0):
- self.tmo = ms
- return None
+ def _sim_pool_free_space(self, sim_pool, sim_pools=None, sim_vols=None,
+ sim_fss=None):
+ """
+ Calculate the free space of the given pool.
+ """
+ sim_pool_id = sim_pool['id']
+ consumed_sizes = []
+ if sim_pools is None:
+ consumed_sizes.extend(
+ list(
+ p['total_space']
+ for p in self._data_find(
+ 'pools', "parent_pool_id=%d" % sim_pool_id,
+ ['total_space'])))
+ else:
+ consumed_sizes.extend(
+ list(
+ p['total_space'] for p in sim_pools
+ if p['parent_pool_id'] == sim_pool_id))
+
+ if sim_fss is None:
+ consumed_sizes.extend(
+ list(
+ f['consumed_size']
+ for f in self._data_find(
+ 'fss', "pool_id=%d" % sim_pool_id, ['consumed_size'])))
+ else:
+ consumed_sizes.extend(
+ list(
+ f['consumed_size'] for f in sim_fss
+ if f['pool_id'] == sim_pool_id))
+
+ if sim_vols is None:
+ consumed_sizes.extend(
+ list(
+ v['consumed_size']
+ for v in self._data_find(
+ 'volumes', "pool_id=%d" % sim_pool_id,
+ ['consumed_size'])))
+ else:
+ consumed_sizes.extend(
+ list(
+ v['consumed_size']
+ for v in sim_vols
+ if v['pool_id'] == sim_pool_id))
+
+ free_space = sim_pool.get(
+ 'total_space', self._sim_pool_total_space(sim_pool))
+ for consumed_size in consumed_sizes:
+ if free_space < consumed_size:
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Stored simulator state corrupted: free_space of pool "
+ "%d is less than 0, please move or delete %s" %
+ (sim_pool_id, self.statefile))
+ free_space -= consumed_size

- def get_time_out(self, flags=0):
- return self.tmo
+ return free_space

- def systems(self):
- return self.syss
+ def sim_pools(self):
+ """
+ Return a list of sim_pool dict.
+ """
+ sim_pools = self._get_table('pools', BackStore.POOL_KEY_LIST)
+ sim_disks = self.sim_disks()
+ sim_vols = self.sim_vols()
+ sim_fss = self.sim_fss()
+ for sim_pool in sim_pools:
+ # Calculate out total_space
+ sim_pool['total_space'] = self._sim_pool_total_space(
+ sim_pool, sim_disks)
+ sim_pool['free_space'] = self._sim_pool_free_space(
+ sim_pool, sim_pools, sim_vols, sim_fss)
+
+ return sim_pools
+
+ def sim_pool_of_id(self, sim_pool_id):
+ sim_pool = self._sim_data_of_id(
+ "pools", sim_pool_id, BackStore.POOL_KEY_LIST,
+ ErrorNumber.NOT_FOUND_POOL, "Pool")
+ sim_pool['total_space'] = self._sim_pool_total_space(
+ sim_pool)
+ sim_pool['free_space'] = self._sim_pool_free_space(sim_pool)
+ return sim_pool
+
+ def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
+ element_type, unsupported_actions=0):
+ # Detect disk type
+ disk_type = None
+ for sim_disk in self.sim_disks():
+ if disk_type is None:
+ disk_type = sim_disk['disk_type']
+ elif disk_type != sim_disk['disk_type']:
+ disk_type = None
+ break
+ member_type = PoolRAID.MEMBER_TYPE_DISK
+ if disk_type is not None:
+ member_type = PoolRAID.disk_type_to_member_type(disk_type)
+
+ self._data_add(
+ 'pools',
+ {
+ 'name': name,
+ 'status': Pool.STATUS_OK,
+ 'status_info': '',
+ 'element_type': element_type,
+ 'unsupported_actions': unsupported_actions,
+ 'raid_type': raid_type,
+ 'member_type': member_type,
+ })
+
+ # update disk owner
+ sim_pool_id = self.lastrowid
+ for sim_disk_id in sim_disk_ids:
+ self._data_update(
+ 'disks', sim_disk_id, 'owner_pool_id', sim_pool_id)
+ return sim_pool_id
+
+ def sim_pool_create_sub_pool(self, name, parent_pool_id, size,
+ element_type, unsupported_actions=0):
+ self._data_add(
+ 'pools',
+ {
+ 'name': name,
+ 'status': Pool.STATUS_OK,
+ 'status_info': '',
+ 'element_type': element_type,
+ 'unsupported_actions': unsupported_actions,
+ 'raid_type': PoolRAID.RAID_TYPE_NOT_APPLICABLE,
+ 'member_type': PoolRAID.MEMBER_TYPE_POOL,
+ 'parent_pool_id': parent_pool_id,
+ 'total_space': size,
+ })
+ return self.lastrowid
+
+ def sim_vols(self, sim_ag_id=None):
+ """
+ Return a list of sim_vol dict.
+ """
+ all_sim_vols = self._get_table('volumes', BackStore.VOL_KEY_LIST)
+ if sim_ag_id:
+ self.sim_ag_of_id(sim_ag_id)
+ sim_vol_ids = self._sim_vol_ids_of_masked_ag(sim_ag_id)
+ return list(
+ v for v in all_sim_vols
+ if v['id'] in sim_vol_ids)
+ else:
+ return all_sim_vols
+
+ def _sim_data_of_id(self, table_name, data_id, key_list, lsm_error_no,
+ data_name):
+ sim_data = self._data_find(
+ table_name, 'id=%s' % data_id, key_list, flag_unique=True)
+ if sim_data is None:
+ if lsm_error_no:
+ raise LsmError(
+ lsm_error_no, "%s not found" % data_name)
+ else:
+ return None
+ return sim_data

- def pools(self):
- return self.pool_dict.values()
+ def sim_vol_of_id(self, sim_vol_id):
+ """
+ Return sim_vol if found. Raise error if not found.
+ """
+ return self._sim_data_of_id(
+ "volumes", sim_vol_id, BackStore.VOL_KEY_LIST,
+ ErrorNumber.NOT_FOUND_VOLUME,
+ "Volume")

- def volumes(self):
- return self.vol_dict.values()
+ def _check_pool_free_space(self, sim_pool_id, size_bytes):
+ sim_pool = self.sim_pool_of_id(sim_pool_id)

- def disks(self):
- return self.disk_dict.values()
-
- def access_groups(self):
- return self.ag_dict.values()
-
- def volume_create(self, pool_id, vol_name, size_bytes, thinp, flags=0):
- self._check_dup_name(
- self.vol_dict.values(), vol_name, ErrorNumber.NAME_CONFLICT)
- size_bytes = SimData._block_rounding(size_bytes)
- # check free size
- free_space = self.pool_free_space(pool_id)
- if (free_space < size_bytes):
+ if (sim_pool['free_space'] < size_bytes):
raise LsmError(ErrorNumber.NOT_ENOUGH_SPACE,
"Insufficient space in pool")
+
+ @staticmethod
+ def _block_rounding(size_bytes):
+ return (size_bytes + BackStore.BLK_SIZE - 1) / \
+ BackStore.BLK_SIZE * BackStore.BLK_SIZE
+
+ def sim_vol_create(self, name, size_bytes, sim_pool_id, thinp):
+ size_bytes = BackStore._block_rounding(size_bytes)
+ self._check_pool_free_space(sim_pool_id, size_bytes)
sim_vol = dict()
- sim_vol['vol_id'] = self._next_vol_id()
- sim_vol['vpd83'] = SimData._random_vpd()
- sim_vol['name'] = vol_name
- sim_vol['total_space'] = size_bytes
+ sim_vol['vpd83'] = _random_vpd()
+ sim_vol['name'] = name
sim_vol['thinp'] = thinp
- sim_vol['sys_id'] = SimData.SIM_DATA_SYS_ID
- sim_vol['pool_id'] = pool_id
- sim_vol['consume_size'] = size_bytes
+ sim_vol['pool_id'] = sim_pool_id
+ sim_vol['total_space'] = size_bytes
+ sim_vol['consumed_size'] = size_bytes
sim_vol['admin_state'] = Volume.ADMIN_STATE_ENABLED
- self.vol_dict[sim_vol['vol_id']] = sim_vol
- return sim_vol

- def volume_delete(self, vol_id, flags=0):
- if vol_id in self.vol_dict.keys():
- if 'mask' in self.vol_dict[vol_id].keys() and \
- self.vol_dict[vol_id]['mask']:
- raise LsmError(ErrorNumber.IS_MASKED,
- "Volume is masked to access group")
- del(self.vol_dict[vol_id])
- return None
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such volume: %s" % vol_id)
+ try:
+ self._data_add("volumes", sim_vol)
+ except sqlite3.IntegrityError as sql_error:
+ if "column name is not unique" in sql_error.message:
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+ "Name '%s' is already in use by other volume" % name)
+ else:
+ raise

- def volume_resize(self, vol_id, new_size_bytes, flags=0):
- new_size_bytes = SimData._block_rounding(new_size_bytes)
- if vol_id in self.vol_dict.keys():
- pool_id = self.vol_dict[vol_id]['pool_id']
- free_space = self.pool_free_space(pool_id)
- if (free_space < new_size_bytes):
- raise LsmError(ErrorNumber.NOT_ENOUGH_SPACE,
- "Insufficient space in pool")
-
- if self.vol_dict[vol_id]['total_space'] == new_size_bytes:
- raise LsmError(ErrorNumber.NO_STATE_CHANGE, "New size same "
- "as current")
-
- self.vol_dict[vol_id]['total_space'] = new_size_bytes
- self.vol_dict[vol_id]['consume_size'] = new_size_bytes
- return self.vol_dict[vol_id]
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such volume: %s" % vol_id)
+ return self.lastrowid

- def volume_replicate(self, dst_pool_id, rep_type, src_vol_id, new_vol_name,
- flags=0):
- if src_vol_id not in self.vol_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such volume: %s" % src_vol_id)
-
- self._check_dup_name(self.vol_dict.values(), new_vol_name,
- ErrorNumber.NAME_CONFLICT)
-
- size_bytes = self.vol_dict[src_vol_id]['total_space']
- size_bytes = SimData._block_rounding(size_bytes)
- # check free size
- free_space = self.pool_free_space(dst_pool_id)
- if (free_space < size_bytes):
- raise LsmError(ErrorNumber.NOT_ENOUGH_SPACE,
- "Insufficient space in pool")
- sim_vol = dict()
- sim_vol['vol_id'] = self._next_vol_id()
- sim_vol['vpd83'] = SimData._random_vpd()
- sim_vol['name'] = new_vol_name
- sim_vol['total_space'] = size_bytes
- sim_vol['sys_id'] = SimData.SIM_DATA_SYS_ID
- sim_vol['pool_id'] = dst_pool_id
- sim_vol['consume_size'] = size_bytes
- sim_vol['admin_state'] = Volume.ADMIN_STATE_ENABLED
- self.vol_dict[sim_vol['vol_id']] = sim_vol
+ def sim_vol_delete(self, sim_vol_id):
+ """
+ Delete the given volume. Raises if it does not exist, is
+ masked, or is a replication source.
+ """
+ # Check existence.
+ self.sim_vol_of_id(sim_vol_id)
+
+ if self._sim_ag_ids_of_masked_vol(sim_vol_id):
+ raise LsmError(
+ ErrorNumber.IS_MASKED,
+ "Volume is masked to access group")
+
+ dst_sim_vol_ids = self.dst_sim_vol_ids_of_src(sim_vol_id)
+ if len(dst_sim_vol_ids) >= 1:
+ for dst_sim_vol_id in dst_sim_vol_ids:
+ if dst_sim_vol_id != sim_vol_id:
+ # Don't raise error on volume internal replication.
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Requested volume is a replication source")
+
+ self._data_delete("volumes", 'id="%s"' % sim_vol_id)
+ # break replication if requested volume is a target volume
+ self._data_delete('vol_rep_ranges', 'dst_vol_id="%s"' % sim_vol_id)
+ self._data_delete('vol_reps', 'dst_vol_id="%s"' % sim_vol_id)
+
+ def sim_vol_mask(self, sim_vol_id, sim_ag_id):
+ self.sim_vol_of_id(sim_vol_id)
+ self.sim_ag_of_id(sim_ag_id)
+ hash_id = md5("%d%d" % (sim_vol_id, sim_ag_id))
+ try:
+ self._data_add(
+ "vol_masks",
+ {'id': hash_id, 'ag_id': sim_ag_id, 'vol_id': sim_vol_id})
+ except sqlite3.IntegrityError as sql_error:
+ if "column id is not unique" in sql_error.message:
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Volume is already masked to requested access group")
+ else:
+ raise

- dst_vol_id = sim_vol['vol_id']
- if 'replicate' not in self.vol_dict[src_vol_id].keys():
- self.vol_dict[src_vol_id]['replicate'] = dict()
+ return None

- if dst_vol_id not in self.vol_dict[src_vol_id]['replicate'].keys():
- self.vol_dict[src_vol_id]['replicate'][dst_vol_id] = list()
+ def sim_vol_unmask(self, sim_vol_id, sim_ag_id):
+ self.sim_vol_of_id(sim_vol_id)
+ self.sim_ag_of_id(sim_ag_id)
+ hash_id = md5("%d%d" % (sim_vol_id, sim_ag_id))
+ if self._data_find('vol_masks', 'id="%s"' % hash_id, ['id'], True):
+ self._data_delete('vol_masks', 'id="%s"' % hash_id)
+ else:
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Volume is not masked to requested access group")
+ return None

- sim_rep = {
- 'rep_type': rep_type,
- 'src_start_blk': 0,
- 'dst_start_blk': 0,
- 'blk_count': size_bytes,
- }
- self.vol_dict[src_vol_id]['replicate'][dst_vol_id].extend(
- [sim_rep])
+ def _sim_vol_ids_of_masked_ag(self, sim_ag_id):
+ return list(
+ m['vol_id'] for m in self._data_find(
+ 'vol_masks', 'ag_id="%s"' % sim_ag_id, ['vol_id']))
+
+ def _sim_ag_ids_of_masked_vol(self, sim_vol_id):
+ return list(
+ m['ag_id'] for m in self._data_find(
+ 'vol_masks', 'vol_id="%s"' % sim_vol_id, ['ag_id']))
+
+ def sim_vol_resize(self, sim_vol_id, new_size_bytes):
+ org_new_size_bytes = new_size_bytes
+ new_size_bytes = BackStore._block_rounding(new_size_bytes)
+ sim_vol = self.sim_vol_of_id(sim_vol_id)
+ if sim_vol['total_space'] == new_size_bytes:
+ if org_new_size_bytes != new_size_bytes:
+ # The rounded size equals the current volume size, but it is
+ # not exactly what the user requested, hence we silently pass.
+ return
+ else:
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Volume size is identical to requested")

- return sim_vol
+ sim_pool = self.sim_pool_of_id(sim_vol['pool_id'])

- def volume_replicate_range_block_size(self, sys_id, flags=0):
- return SimData.SIM_DATA_BLK_SIZE
+ increment = new_size_bytes - sim_vol['total_space']

- def volume_replicate_range(self, rep_type, src_vol_id, dst_vol_id, ranges,
- flags=0):
- if src_vol_id not in self.vol_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % src_vol_id)
+ if increment > 0:

- if dst_vol_id not in self.vol_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % dst_vol_id)
+ if sim_pool['unsupported_actions'] & Pool.UNSUPPORTED_VOLUME_GROW:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Requested pool does not allow volume size grow")

- sim_reps = []
- for rep_range in ranges:
- sim_rep = dict()
- sim_rep['rep_type'] = rep_type
- sim_rep['src_start_blk'] = rep_range.src_block
- sim_rep['dst_start_blk'] = rep_range.dest_block
- sim_rep['blk_count'] = rep_range.block_count
- sim_reps.extend([sim_rep])
+ if sim_pool['free_space'] < increment:
+ raise LsmError(
+ ErrorNumber.NOT_ENOUGH_SPACE, "Insufficient space in pool")

- if 'replicate' not in self.vol_dict[src_vol_id].keys():
- self.vol_dict[src_vol_id]['replicate'] = dict()
+ elif sim_pool['unsupported_actions'] & Pool.UNSUPPORTED_VOLUME_SHRINK:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Requested pool does not allow volume size grow")

- if dst_vol_id not in self.vol_dict[src_vol_id]['replicate'].keys():
- self.vol_dict[src_vol_id]['replicate'][dst_vol_id] = list()
+ # TODO: If a volume is in a replication relationship, resize should
+ # be handled properly.
+ self._data_update(
+ 'volumes', sim_vol_id, "total_space", new_size_bytes)
+ self._data_update(
+ 'volumes', sim_vol_id, "consumed_size", new_size_bytes)

- self.vol_dict[src_vol_id]['replicate'][dst_vol_id].extend(
- [sim_reps])
+ def _sim_vol_rep_ranges(self, rep_id):
+ """
+ Return a list of dict:
+ [{
+ 'id': md5 of src_sim_vol_id and dst_sim_vol_id,
+ 'src_start_blk': int,
+ 'dst_start_blk': int,
+ 'blk_count': int,
+ }]
+ """
+ return self._data_find(
+ 'vol_rep_ranges', 'id="%s"' % rep_id,
+ ['id', 'src_start_blk', 'dst_start_blk', 'blk_count'])

- return None
+ def dst_sim_vol_ids_of_src(self, src_sim_vol_id):
+ """
+ Return a list of dst_vol_id for provided source volume ID.
+ """
+ self.sim_vol_of_id(src_sim_vol_id)
+ return list(
+ d['dst_vol_id'] for d in self._data_find(
+ 'vol_reps', 'src_vol_id="%s"' % src_sim_vol_id,
+ ['dst_vol_id']))
+
+ def sim_vol_replica(self, src_sim_vol_id, dst_sim_vol_id, rep_type,
+ blk_ranges=None):
+ self.sim_vol_of_id(src_sim_vol_id)
+ self.sim_vol_of_id(dst_sim_vol_id)
+
+ # TODO: Use consumed_size < total_space to reflect the CLONE type.
+ cur_src_sim_vol_ids = list(
+ r['src_vol_id'] for r in self._data_find(
+ 'vol_reps', 'dst_vol_id="%s"' % dst_sim_vol_id,
+ ['src_vol_id']))
+ if len(cur_src_sim_vol_ids) == 1 and \
+ cur_src_sim_vol_ids[0] == src_sim_vol_id:
+ # src and dst match. Maybe the user is overriding an old setting.
+ pass
+ elif len(cur_src_sim_vol_ids) == 0:
+ pass
+ else:
+ # TODO: Need to introduce new API error
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Target volume is already a replication target for other "
+ "source volume")
+
+ self._data_add(
+ 'vol_reps',
+ {
+ 'src_vol_id': src_sim_vol_id,
+ 'dst_vol_id': dst_sim_vol_id,
+ 'rep_type': rep_type,
+ })
+ if blk_ranges:
+ # Remove old replicate ranges before add
+ self._data_delete(
+ 'vol_rep_ranges',
+ 'src_vol_id="%s" AND dst_vol_id="%s"' %
+ (src_sim_vol_id, dst_sim_vol_id))
+
+ for blk_range in blk_ranges:
+ self._data_add(
+ 'vol_rep_ranges',
+ {
+ 'src_vol_id': src_sim_vol_id,
+ 'dst_vol_id': dst_sim_vol_id,
+ 'src_start_blk': blk_range.src_block,
+ 'dst_start_blk': blk_range.dest_block,
+ 'blk_count': blk_range.block_count,
+ })
+
+ def sim_vol_src_replica_break(self, src_sim_vol_id):
+
+ if not self.dst_sim_vol_ids_of_src(src_sim_vol_id):
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Provided volume is not a replication source")
+
+ self._data_delete(
+ 'vol_rep_ranges', 'src_vol_id="%s"' % src_sim_vol_id)
+ self._data_delete(
+ 'vol_reps', 'src_vol_id="%s"' % src_sim_vol_id)
+
+ def sim_vol_state_change(self, sim_vol_id, new_admin_state):
+ sim_vol = self.sim_vol_of_id(sim_vol_id)
+ if sim_vol['admin_state'] == new_admin_state:
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Volume admin state is identical to requested")
+
+ self._data_update(
+ 'volumes', sim_vol_id, "admin_state", new_admin_state)

- def volume_enable(self, vol_id, flags=0):
+ @staticmethod
+ def _sim_ag_fill_init_info(sim_ag, sim_inits):
+ """
+ Update 'init_type' and 'init_ids' of sim_ag
+ """
+ sim_ag['init_ids'] = list(i['id'] for i in sim_inits)
+ init_type_set = set(
+ list(sim_init['init_type'] for sim_init in sim_inits))
+
+ if len(init_type_set) == 0:
+ sim_ag['init_type'] = AccessGroup.INIT_TYPE_UNKNOWN
+ elif len(init_type_set) == 1:
+ sim_ag['init_type'] = init_type_set.pop()
+ else:
+ sim_ag['init_type'] = AccessGroup.INIT_TYPE_ISCSI_WWPN_MIXED
+
+ def sim_ags(self, sim_vol_id=None):
+ sim_ags = self._get_table('ags', BackStore.AG_KEY_LIST)
+ sim_inits = self._get_table('inits', BackStore.INIT_KEY_LIST)
+ if sim_vol_id:
+ self.sim_vol_of_id(sim_vol_id)
+ sim_ag_ids = self._sim_ag_ids_of_masked_vol(sim_vol_id)
+ sim_ags = list(a for a in sim_ags if a['id'] in sim_ag_ids)
+
+ for sim_ag in sim_ags:
+ owned_sim_inits = list(
+ i for i in sim_inits if i['owner_ag_id'] == sim_ag['id'])
+ BackStore._sim_ag_fill_init_info(sim_ag, owned_sim_inits)
+
+ return sim_ags
+
+ def _sim_init_create(self, init_type, init_id, sim_ag_id):
try:
- if self.vol_dict[vol_id]['admin_state'] == \
- Volume.ADMIN_STATE_ENABLED:
- raise LsmError(ErrorNumber.NO_STATE_CHANGE,
- "Volume is already enabled")
- except KeyError:
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % vol_id)
-
- self.vol_dict[vol_id]['admin_state'] = Volume.ADMIN_STATE_ENABLED
+ self._data_add(
+ "inits",
+ {
+ 'id': init_id,
+ 'init_type': init_type,
+ 'owner_ag_id': sim_ag_id
+ })
+ except sqlite3.IntegrityError as sql_error:
+ if "column id is not unique" in sql_error.message:
+ raise LsmError(
+ ErrorNumber.EXISTS_INITIATOR,
+ "Initiator '%s' is already in use by other access group" %
+ init_id)
+ else:
+ raise
+
+ def iscsi_chap_auth_set(self, init_id, in_user, in_pass, out_user,
+ out_pass):
+ # Currently, there is no API method to query status of iscsi CHAP.
return None

- def volume_disable(self, vol_id, flags=0):
+ def sim_ag_create(self, name, init_type, init_id):
try:
- if self.vol_dict[vol_id]['admin_state'] == \
- Volume.ADMIN_STATE_DISABLED:
- raise LsmError(ErrorNumber.NO_STATE_CHANGE,
- "Volume is already disabled")
- except KeyError:
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % vol_id)
-
- self.vol_dict[vol_id]['admin_state'] = Volume.ADMIN_STATE_DISABLED
+ self._data_add("ags", {'name': name})
+ sim_ag_id = self.lastrowid
+ except sqlite3.IntegrityError as sql_error:
+ if "column name is not unique" in sql_error.message:
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+ "Name '%s' is already in use by other access group" %
+ name)
+ else:
+ raise
+
+ self._sim_init_create(init_type, init_id, sim_ag_id)
+
+ return sim_ag_id
+
+ def sim_ag_delete(self, sim_ag_id):
+ self.sim_ag_of_id(sim_ag_id)
+ if self._sim_vol_ids_of_masked_ag(sim_ag_id):
+ raise LsmError(
+ ErrorNumber.IS_MASKED,
+ "Access group has volume masked to")
+
+ self._data_delete('ags', 'id="%s"' % sim_ag_id)
+ self._data_delete('inits', 'owner_ag_id="%s"' % sim_ag_id)
+
+ def sim_ag_init_add(self, sim_ag_id, init_id, init_type):
+ sim_ag = self.sim_ag_of_id(sim_ag_id)
+ if init_id in sim_ag['init_ids']:
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Initiator already in access group")
+
+ if init_type != AccessGroup.INIT_TYPE_ISCSI_IQN and \
+ init_type != AccessGroup.INIT_TYPE_WWPN:
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Only support iSCSI IQN and WWPN initiator type")
+
+ self._sim_init_create(init_type, init_id, sim_ag_id)
return None

- def volume_child_dependency(self, vol_id, flags=0):
+ def sim_ag_init_delete(self, sim_ag_id, init_id):
+ sim_ag = self.sim_ag_of_id(sim_ag_id)
+ if init_id not in sim_ag['init_ids']:
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Initiator is not in defined access group")
+ if len(sim_ag['init_ids']) == 1:
+ raise LsmError(
+ ErrorNumber.LAST_INIT_IN_ACCESS_GROUP,
+ "Refused to remove the last initiator from access group")
+
+ self._data_delete('inits', 'id="%s"' % init_id)
+
+ def sim_ag_of_id(self, sim_ag_id):
+ sim_ag = self._sim_data_of_id(
+ "ags", sim_ag_id, BackStore.AG_KEY_LIST,
+ ErrorNumber.NOT_FOUND_ACCESS_GROUP,
+ "Access Group")
+ sim_inits = self._data_find(
+ 'inits', 'owner_ag_id=%d' % sim_ag_id, BackStore.INIT_KEY_LIST)
+ BackStore._sim_ag_fill_init_info(sim_ag, sim_inits)
+ return sim_ag
+
+ def sim_fss(self):
"""
- If volume is a src or dst of a replication, we return True.
+ Return a list of sim_fs dict.
"""
- if vol_id not in self.vol_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % vol_id)
- if 'replicate' in self.vol_dict[vol_id].keys() and \
- self.vol_dict[vol_id]['replicate']:
- return True
- for sim_vol in self.vol_dict.values():
- if 'replicate' in sim_vol.keys():
- if vol_id in sim_vol['replicate'].keys():
- return True
- return False
+ return self._get_table('fss', BackStore.FS_KEY_LIST)

- def volume_child_dependency_rm(self, vol_id, flags=0):
- if vol_id not in self.vol_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % vol_id)
- if 'replicate' in self.vol_dict[vol_id].keys() and \
- self.vol_dict[vol_id]['replicate']:
- del self.vol_dict[vol_id]['replicate']
-
- for sim_vol in self.vol_dict.values():
- if 'replicate' in sim_vol.keys():
- if vol_id in sim_vol['replicate'].keys():
- del sim_vol['replicate'][vol_id]
- return None
+ def sim_fs_of_id(self, sim_fs_id, raise_error=True):
+ lsm_error_no = ErrorNumber.NOT_FOUND_FS
+ if not raise_error:
+ lsm_error_no = None

- def ags(self, flags=0):
- return self.ag_dict.values()
+ return self._sim_data_of_id(
+ "fss", sim_fs_id, BackStore.FS_KEY_LIST, lsm_error_no,
+ "File System")

- def _sim_ag_of_init(self, init_id):
+ def sim_fs_create(self, name, size_bytes, sim_pool_id):
+ size_bytes = BackStore._block_rounding(size_bytes)
+ self._check_pool_free_space(sim_pool_id, size_bytes)
+ try:
+ self._data_add(
+ "fss",
+ {
+ 'name': name,
+ 'total_space': size_bytes,
+ 'consumed_size': size_bytes,
+ 'free_space': size_bytes,
+ 'pool_id': sim_pool_id,
+ })
+ except sqlite3.IntegrityError as sql_error:
+ if "column name is not unique" in sql_error.message:
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+ "Name '%s' is already in use by other fs" % name)
+ else:
+ raise
+ return self.lastrowid
+
+ def sim_fs_delete(self, sim_fs_id):
+ self.sim_fs_of_id(sim_fs_id)
+ if self.clone_dst_sim_fs_ids_of_src(sim_fs_id):
+            # TODO: API does not have a dedicated error for this scenario.
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Requested file system is a clone source")
+
+ if self.sim_fs_snaps(sim_fs_id):
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Requested file system has snapshot attached")
+
+ if self._data_find('exps', 'fs_id="%s"' % sim_fs_id, ['id']):
+            # TODO: API does not have a dedicated error for this scenario.
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Requested file system is exported via NFS")
+
+ self._data_delete("fss", 'id="%s"' % sim_fs_id)
+ # Delete clone relationship if requested fs is a clone target.
+ self._sql_exec(
+ "DELETE FROM fs_clones WHERE dst_fs_id='%s';" % sim_fs_id)
+
+ def sim_fs_resize(self, sim_fs_id, new_size_bytes):
+ org_new_size_bytes = new_size_bytes
+ new_size_bytes = BackStore._block_rounding(new_size_bytes)
+ sim_fs = self.sim_fs_of_id(sim_fs_id)
+
+ if sim_fs['total_space'] == new_size_bytes:
+ if new_size_bytes != org_new_size_bytes:
+ return
+ else:
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "File System size is identical to requested")
+
+ # TODO: If a fs is in a clone/snapshot relationship, resize should
+ # be handled properly.
+
+ sim_pool = self.sim_pool_of_id(sim_fs['pool_id'])
+
+ if new_size_bytes > sim_fs['total_space'] and \
+ sim_pool['free_space'] < new_size_bytes - sim_fs['total_space']:
+ raise LsmError(
+ ErrorNumber.NOT_ENOUGH_SPACE, "Insufficient space in pool")
+
+ self._data_update(
+ 'fss', sim_fs_id, "total_space", new_size_bytes)
+ self._data_update(
+ 'fss', sim_fs_id, "consumed_size", new_size_bytes)
+ self._data_update(
+ 'fss', sim_fs_id, "free_space", new_size_bytes)
+
+ def sim_fs_snaps(self, sim_fs_id):
+ self.sim_fs_of_id(sim_fs_id)
+ return self._data_find(
+ 'fs_snaps', 'fs_id="%s"' % sim_fs_id, BackStore.FS_SNAP_KEY_LIST)
+
+ def sim_fs_snap_of_id(self, sim_fs_snap_id, sim_fs_id=None):
+ sim_fs_snap = self._sim_data_of_id(
+ 'fs_snaps', sim_fs_snap_id, BackStore.FS_SNAP_KEY_LIST,
+ ErrorNumber.NOT_FOUND_FS_SS, 'File system snapshot')
+ if sim_fs_id and sim_fs_snap['fs_id'] != sim_fs_id:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_FS_SS,
+ "Defined file system snapshot ID is not belong to requested "
+ "file system")
+ return sim_fs_snap
+
+ def sim_fs_snap_create(self, sim_fs_id, name):
+ self.sim_fs_of_id(sim_fs_id)
+ try:
+ self._data_add(
+ 'fs_snaps',
+ {
+ 'name': name,
+ 'fs_id': sim_fs_id,
+ 'timestamp': int(time.time()),
+ })
+ except sqlite3.IntegrityError as sql_error:
+ if "column name is not unique" in sql_error.message:
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+ "The name is already used by other file system snapshot")
+ else:
+ raise
+ return self.lastrowid
+
+ def sim_fs_snap_restore(self, sim_fs_id, sim_fs_snap_id, files,
+ restore_files, flag_all_files):
+        # Currently LSM cannot query the status of this action.
+        # We simply check existence.
+ self.sim_fs_of_id(sim_fs_id)
+ if sim_fs_snap_id:
+ self.sim_fs_snap_of_id(sim_fs_snap_id, sim_fs_id)
+ return
+
+ def sim_fs_snap_delete(self, sim_fs_snap_id, sim_fs_id):
+ self.sim_fs_of_id(sim_fs_id)
+ self.sim_fs_snap_of_id(sim_fs_snap_id, sim_fs_id)
+ self._data_delete('fs_snaps', 'id="%s"' % sim_fs_snap_id)
+
+ def sim_fs_snap_del_by_fs(self, sim_fs_id):
+ sql_cmd = "DELETE FROM fs_snaps WHERE fs_id='%s';" % sim_fs_id
+ self._sql_exec(sql_cmd)
+
+ @staticmethod
+ def _sim_fs_clone_id(src_sim_fs_id, dst_sim_fs_id):
+ return md5("%s%s" % (src_sim_fs_id, dst_sim_fs_id))
+
+ def sim_fs_clone(self, src_sim_fs_id, dst_sim_fs_id, sim_fs_snap_id):
+ self.sim_fs_of_id(src_sim_fs_id)
+ self.sim_fs_of_id(dst_sim_fs_id)
+ sim_fs_clone_id = BackStore._sim_fs_clone_id(
+ src_sim_fs_id, dst_sim_fs_id)
+
+ if sim_fs_snap_id:
+ self.sim_fs_snap_of_id(sim_fs_snap_id, src_sim_fs_id)
+
+ self._data_add(
+ 'fs_clones',
+ {
+ 'id': sim_fs_clone_id,
+ 'src_fs_id': src_sim_fs_id,
+ 'dst_fs_id': dst_sim_fs_id,
+ 'fs_snap_id': sim_fs_snap_id,
+ })
+
+ def sim_fs_file_clone(self, sim_fs_id, src_fs_name, dst_fs_name,
+ sim_fs_snap_id):
+        # We don't have an API to query file-level clones.
+        # Simply check existence.
+ self.sim_fs_of_id(sim_fs_id)
+ if sim_fs_snap_id:
+ self.sim_fs_snap_of_id(sim_fs_snap_id, sim_fs_id)
+ return
+
+ def clone_dst_sim_fs_ids_of_src(self, src_sim_fs_id):
"""
- Return sim_ag which containing this init_id.
- If not found, return None
+ Return a list of dst_fs_id for provided clone source fs ID.
"""
- for sim_ag in self.ag_dict.values():
- if init_id in sim_ag['init_ids']:
- return sim_ag
- return None
+ self.sim_fs_of_id(src_sim_fs_id)
+ return list(
+ d['dst_fs_id'] for d in self._data_find(
+ 'fs_clones', 'src_fs_id="%s"' % src_sim_fs_id,
+ ['dst_fs_id']))

- def _sim_ag_of_name(self, ag_name):
- for sim_ag in self.ag_dict.values():
- if ag_name == sim_ag['name']:
- return sim_ag
- return None
+ def sim_fs_src_clone_break(self, src_sim_fs_id):
+ self._data_delete('fs_clones', 'src_fs_id="%s"' % src_sim_fs_id)

- def _check_dup_name(self, sim_list, name, error_num):
- used_names = [x['name'] for x in sim_list]
- if name in used_names:
- raise LsmError(error_num, "Name '%s' already in use" % name)
+ @staticmethod
+ def _sim_exp_fix_host_list(sim_exp):
+ for key_name in ['root_hosts', 'rw_hosts', 'ro_hosts']:
+ if sim_exp[key_name]:
+ sim_exp[key_name] = sim_exp[key_name].split(',')
+ return sim_exp

- def access_group_create(self, name, init_id, init_type, sys_id, flags=0):
+ def sim_exps(self):
+ return list(
+ BackStore._sim_exp_fix_host_list(e)
+ for e in self._get_table('exps', BackStore.EXP_KEY_LIST))

- # Check to see if we have an access group with this name
- self._check_dup_name(self.ag_dict.values(), name,
- ErrorNumber.NAME_CONFLICT)
+ def sim_exp_of_id(self, sim_exp_id):
+ return BackStore._sim_exp_fix_host_list(
+ self._sim_data_of_id(
+ 'exps', sim_exp_id, BackStore.EXP_KEY_LIST,
+ ErrorNumber.NOT_FOUND_NFS_EXPORT, 'NFS Export'))

- exist_sim_ag = self._sim_ag_of_init(init_id)
- if exist_sim_ag:
- if exist_sim_ag['name'] == name:
- return exist_sim_ag
- else:
- raise LsmError(ErrorNumber.EXISTS_INITIATOR,
- "Initiator %s already exist in other " %
- init_id + "access group %s(%s)" %
- (exist_sim_ag['name'], exist_sim_ag['ag_id']))
-
- exist_sim_ag = self._sim_ag_of_name(name)
- if exist_sim_ag:
- if init_id in exist_sim_ag['init_ids']:
- return exist_sim_ag
+ def sim_exp_create(self, sim_fs_id, exp_path, root_hosts, rw_hosts,
+ ro_hosts, anon_uid, anon_gid, auth_type, options):
+ self.sim_fs_of_id(sim_fs_id)
+
+ try:
+ self._data_add(
+ 'exps',
+ {
+ 'fs_id': sim_fs_id,
+ 'exp_path': exp_path,
+ 'root_hosts': ','.join(root_hosts),
+ 'rw_hosts': ','.join(rw_hosts),
+ 'ro_hosts': ','.join(ro_hosts),
+ 'anon_uid': anon_uid,
+ 'anon_gid': anon_gid,
+ 'auth_type': auth_type,
+ 'options': options,
+ })
+ except sqlite3.IntegrityError as sql_error:
+ if "column exp_path is not unique" in sql_error.message:
+                # TODO: Should we create a new error instead of NAME_CONFLICT?
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+ "Export path is already used by other NFS export")
else:
- raise LsmError(ErrorNumber.NAME_CONFLICT,
- "Another access group %s(%s) is using " %
- (exist_sim_ag['name'], exist_sim_ag['ag_id']) +
- "requested name %s but not contain init_id %s" %
- (exist_sim_ag['name'], init_id))
-
- sim_ag = dict()
- sim_ag['init_ids'] = [init_id]
- sim_ag['init_type'] = init_type
- sim_ag['sys_id'] = SimData.SIM_DATA_SYS_ID
- sim_ag['name'] = name
- sim_ag['ag_id'] = self._next_ag_id()
- self.ag_dict[sim_ag['ag_id']] = sim_ag
- return sim_ag
+ raise
+ return self.lastrowid

- def access_group_delete(self, ag_id, flags=0):
- if ag_id not in self.ag_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_ACCESS_GROUP,
- "Access group not found")
- # Check whether any volume masked to.
- for sim_vol in self.vol_dict.values():
- if 'mask' in sim_vol.keys() and ag_id in sim_vol['mask']:
- raise LsmError(ErrorNumber.IS_MASKED,
- "Access group is masked to volume")
- del(self.ag_dict[ag_id])
- return None
+ def sim_exp_delete(self, sim_exp_id):
+ self.sim_exp_of_id(sim_exp_id)
+ self._data_delete('exps', 'id="%s"' % sim_exp_id)

- def access_group_initiator_add(self, ag_id, init_id, init_type, flags=0):
- if ag_id not in self.ag_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_ACCESS_GROUP,
- "Access group not found")
- if init_id in self.ag_dict[ag_id]['init_ids']:
- raise LsmError(ErrorNumber.NO_STATE_CHANGE, "Initiator already "
- "in access group")
+ def sim_tgts(self):
+ """
+ Return a list of sim_tgt dict.
+ """
+ return self._get_table('tgts', BackStore.TGT_KEY_LIST)

- self._sim_ag_of_init(init_id)

- self.ag_dict[ag_id]['init_ids'].extend([init_id])
- return self.ag_dict[ag_id]
+class SimArray(object):
+ SIM_DATA_FILE = os.getenv("LSM_SIM_DATA",
+ tempfile.gettempdir() + '/lsm_sim_data')

- def access_group_initiator_delete(self, ag_id, init_id, init_type,
- flags=0):
- if ag_id not in self.ag_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_ACCESS_GROUP,
- "Access group not found: %s" % ag_id)
-
- if init_id in self.ag_dict[ag_id]['init_ids']:
- new_init_ids = []
- for cur_init_id in self.ag_dict[ag_id]['init_ids']:
- if cur_init_id != init_id:
- new_init_ids.extend([cur_init_id])
- del(self.ag_dict[ag_id]['init_ids'])
- self.ag_dict[ag_id]['init_ids'] = new_init_ids
- else:
- raise LsmError(ErrorNumber.NO_STATE_CHANGE,
- "Initiator %s type %s not in access group %s"
- % (init_id, str(type(init_id)),
- str(self.ag_dict[ag_id]['init_ids'])))
- return self.ag_dict[ag_id]
+ ID_FMT = 5

- def volume_mask(self, ag_id, vol_id, flags=0):
- if ag_id not in self.ag_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_ACCESS_GROUP,
- "Access group not found: %s" % ag_id)
- if vol_id not in self.vol_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % vol_id)
- if 'mask' not in self.vol_dict[vol_id].keys():
- self.vol_dict[vol_id]['mask'] = dict()
-
- if ag_id in self.vol_dict[vol_id]['mask']:
- raise LsmError(ErrorNumber.NO_STATE_CHANGE, "Volume already "
- "masked to access "
- "group")
-
- self.vol_dict[vol_id]['mask'][ag_id] = 2
+ @staticmethod
+ def _sim_id_to_lsm_id(sim_id, prefix):
+ return "%s_ID_%0*d" % (prefix, SimArray.ID_FMT, sim_id)
+
+ @staticmethod
+ def _lsm_id_to_sim_id(lsm_id):
+ return int(lsm_id[-SimArray.ID_FMT:])
+
+ @staticmethod
+ def _sort_by_id(resources):
+ return resources
+
+ @_handle_errors
+ def __init__(self, statefile, timeout):
+ if statefile is None:
+ statefile = SimArray.SIM_DATA_FILE
+
+ self.bs_obj = BackStore(statefile, timeout)
+ self.bs_obj.check_version_and_init()
+ self.statefile = statefile
+ self.timeout = timeout
+
+ def _job_create(self, data_type=None, sim_data_id=None):
+ sim_job_id = self.bs_obj.sim_job_create(
+ data_type, sim_data_id)
+ return SimArray._sim_id_to_lsm_id(sim_job_id, 'JOB')
+
+ @_handle_errors
+ def job_status(self, job_id, flags=0):
+ (progress, data_type, sim_data) = self.bs_obj.sim_job_status(
+ SimArray._lsm_id_to_sim_id(job_id))
+ status = JobStatus.INPROGRESS
+ if progress == 100:
+ status = JobStatus.COMPLETE
+
+ data = None
+ if data_type == BackStore.JOB_DATA_TYPE_VOL:
+ data = SimArray._sim_vol_2_lsm(sim_data)
+ elif data_type == BackStore.JOB_DATA_TYPE_FS:
+ data = SimArray._sim_fs_2_lsm(sim_data)
+ elif data_type == BackStore.JOB_DATA_TYPE_FS_SNAP:
+ data = SimArray._sim_fs_snap_2_lsm(sim_data)
+
+ return (status, progress, data)
+
+ @_handle_errors
+ def job_free(self, job_id, flags=0):
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_job_delete(SimArray._lsm_id_to_sim_id(job_id))
+ self.bs_obj.trans_commit()
return None

- def volume_unmask(self, ag_id, vol_id, flags=0):
- if ag_id not in self.ag_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_ACCESS_GROUP,
- "Access group not found: %s" % ag_id)
- if vol_id not in self.vol_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % vol_id)
- if 'mask' not in self.vol_dict[vol_id].keys():
- raise LsmError(ErrorNumber.NO_STATE_CHANGE, "Volume not "
- "masked to access "
- "group")
-
- if ag_id not in self.vol_dict[vol_id]['mask'].keys():
- raise LsmError(ErrorNumber.NO_STATE_CHANGE, "Volume not "
- "masked to access "
- "group")
-
- del(self.vol_dict[vol_id]['mask'][ag_id])
+ @_handle_errors
+ def time_out_set(self, ms, flags=0):
+        self.bs_obj = BackStore(self.statefile, ms)
+ self.timeout = ms
return None

- def volumes_accessible_by_access_group(self, ag_id, flags=0):
- # We don't check wether ag_id is valid
- rc = []
- for sim_vol in self.vol_dict.values():
- if 'mask' not in sim_vol:
- continue
- if ag_id in sim_vol['mask'].keys():
- rc.extend([sim_vol])
- return rc
+ @_handle_errors
+ def time_out_get(self, flags=0):
+ return self.timeout

- def access_groups_granted_to_volume(self, vol_id, flags=0):
- # We don't check wether vold_id is valid
- sim_ags = []
- if 'mask' in self.vol_dict[vol_id].keys():
- ag_ids = self.vol_dict[vol_id]['mask'].keys()
- for ag_id in ag_ids:
- sim_ags.extend([self.ag_dict[ag_id]])
- return sim_ags
+ @staticmethod
+ def _sim_sys_2_lsm(sim_sys):
+ return System(
+ sim_sys['id'], sim_sys['name'], sim_sys['status'],
+ sim_sys['status_info'])
+
+ @_handle_errors
+ def systems(self):
+ return list(
+ SimArray._sim_sys_2_lsm(sim_sys)
+ for sim_sys in self.bs_obj.sim_syss())
+
+ @staticmethod
+ def _sim_vol_2_lsm(sim_vol):
+ vol_id = SimArray._sim_id_to_lsm_id(sim_vol['id'], 'VOL')
+ pool_id = SimArray._sim_id_to_lsm_id(sim_vol['pool_id'], 'POOL')
+ return Volume(vol_id, sim_vol['name'], sim_vol['vpd83'],
+ BackStore.BLK_SIZE,
+ int(sim_vol['total_space'] / BackStore.BLK_SIZE),
+ sim_vol['admin_state'], BackStore.SYS_ID,
+ pool_id)
+
+ @_handle_errors
+ def volumes(self):
+ return list(
+ SimArray._sim_vol_2_lsm(v) for v in self.bs_obj.sim_vols())
+
+ @staticmethod
+ def _sim_pool_2_lsm(sim_pool):
+ pool_id = SimArray._sim_id_to_lsm_id(sim_pool['id'], 'POOL')
+ name = sim_pool['name']
+ total_space = sim_pool['total_space']
+ free_space = sim_pool['free_space']
+ status = sim_pool['status']
+ status_info = sim_pool['status_info']
+ sys_id = BackStore.SYS_ID
+ element_type = sim_pool['element_type']
+ unsupported_actions = sim_pool['unsupported_actions']
+ return Pool(
+ pool_id, name, element_type, unsupported_actions, total_space,
+ free_space, status, status_info, sys_id)

- def _ag_ids_of_init(self, init_id):
+ @_handle_errors
+ def pools(self, flags=0):
+ self.bs_obj.trans_begin()
+ sim_pools = self.bs_obj.sim_pools()
+ self.bs_obj.trans_rollback()
+ return list(
+ SimArray._sim_pool_2_lsm(sim_pool) for sim_pool in sim_pools)
+
+ @staticmethod
+ def _sim_disk_2_lsm(sim_disk):
+ return Disk(
+ SimArray._sim_id_to_lsm_id(sim_disk['id'], 'DISK'),
+ sim_disk['name'],
+ sim_disk['disk_type'], BackStore.BLK_SIZE,
+ int(sim_disk['total_space'] / BackStore.BLK_SIZE),
+ Disk.STATUS_OK, BackStore.SYS_ID)
+
+ @_handle_errors
+ def disks(self):
+ return list(
+ SimArray._sim_disk_2_lsm(sim_disk)
+ for sim_disk in self.bs_obj.sim_disks())
+
+ @_handle_errors
+ def volume_create(self, pool_id, vol_name, size_bytes, thinp, flags=0,
+ _internal_use=False):
"""
- Find out the access groups defined initiator belong to.
- Will return a list of access group id or []
+ The '_internal_use' parameter is only for SimArray internal use.
+ This method will return the new sim_vol id instead of job_id when
+        '_internal_use' is marked as True.
"""
- rc = []
- for sim_ag in self.ag_dict.values():
- if init_id in sim_ag['init_ids']:
- rc.extend([sim_ag['ag_id']])
- return rc
+ if _internal_use is False:
+ self.bs_obj.trans_begin()

- def iscsi_chap_auth(self, init_id, in_user, in_pass, out_user, out_pass,
- flags=0):
- # No iscsi chap query API yet, not need to setup anything
+ new_sim_vol_id = self.bs_obj.sim_vol_create(
+ vol_name, size_bytes, SimArray._lsm_id_to_sim_id(pool_id), thinp)
+
+ if _internal_use:
+ return new_sim_vol_id
+
+ job_id = self._job_create(
+ BackStore.JOB_DATA_TYPE_VOL, new_sim_vol_id)
+ self.bs_obj.trans_commit()
+
+ return job_id, None
+
+ @_handle_errors
+ def volume_delete(self, vol_id, flags=0):
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_vol_delete(SimArray._lsm_id_to_sim_id(vol_id))
+ job_id = self._job_create()
+ self.bs_obj.trans_commit()
+ return job_id
+
+ @_handle_errors
+ def volume_resize(self, vol_id, new_size_bytes, flags=0):
+ self.bs_obj.trans_begin()
+
+ sim_vol_id = SimArray._lsm_id_to_sim_id(vol_id)
+ self.bs_obj.sim_vol_resize(sim_vol_id, new_size_bytes)
+ job_id = self._job_create(
+ BackStore.JOB_DATA_TYPE_VOL, sim_vol_id)
+ self.bs_obj.trans_commit()
+
+ return job_id, None
+
+ @_handle_errors
+ def volume_replicate(self, dst_pool_id, rep_type, src_vol_id, new_vol_name,
+ flags=0):
+ self.bs_obj.trans_begin()
+
+ src_sim_vol_id = SimArray._lsm_id_to_sim_id(src_vol_id)
+ # Verify the existence of source volume
+ src_sim_vol = self.bs_obj.sim_vol_of_id(src_sim_vol_id)
+
+ dst_sim_vol_id = self.volume_create(
+ dst_pool_id, new_vol_name, src_sim_vol['total_space'],
+ src_sim_vol['thinp'], _internal_use=True)
+
+ self.bs_obj.sim_vol_replica(src_sim_vol_id, dst_sim_vol_id, rep_type)
+
+ job_id = self._job_create(
+ BackStore.JOB_DATA_TYPE_VOL, dst_sim_vol_id)
+ self.bs_obj.trans_commit()
+
+ return job_id, None
+
+ @_handle_errors
+    def volume_replicate_range_block_size(self, sys_id, flags=0):
+        if sys_id != BackStore.SYS_ID:
+            raise LsmError(
+                ErrorNumber.NOT_FOUND_SYSTEM,
+                "System not found")
+        return BackStore.BLK_SIZE
+
+ @_handle_errors
+ def volume_replicate_range(self, rep_type, src_vol_id, dst_vol_id, ranges,
+ flags=0):
+ self.bs_obj.trans_begin()
+
+    # TODO: check whether start_blk + count is out of the volume boundary
+ # TODO: Should check block overlap.
+
+ self.bs_obj.sim_vol_replica(
+ SimArray._lsm_id_to_sim_id(src_vol_id),
+ SimArray._lsm_id_to_sim_id(dst_vol_id), rep_type, ranges)
+
+ job_id = self._job_create()
+
+ self.bs_obj.trans_commit()
+ return job_id
+
+ @_handle_errors
+ def volume_enable(self, vol_id, flags=0):
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_vol_state_change(
+ SimArray._lsm_id_to_sim_id(vol_id), Volume.ADMIN_STATE_ENABLED)
+ self.bs_obj.trans_commit()
return None

+ @_handle_errors
+ def volume_disable(self, vol_id, flags=0):
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_vol_state_change(
+ SimArray._lsm_id_to_sim_id(vol_id), Volume.ADMIN_STATE_DISABLED)
+ self.bs_obj.trans_commit()
+ return None
+
+ @_handle_errors
+ def volume_child_dependency(self, vol_id, flags=0):
+        # TODO: The API definition is unclear:
+        #       0. Should we break replication if the provided volume is a
+        #          replication target?
+        #          Assuming the answer is no.
+        #       1. The _client.py comment is incorrect:
+        #          "Implies that this volume cannot be deleted or possibly
+        #           modified because it would affect its children"
+        #          The 'modified' here is incorrect. If data on the source
+        #          volume changes, SYNC_MIRROR replication will change all
+        #          target volumes.
+        #       2. Should the 'mask' relationship be included?
+        #          # Assuming only replication counts here.
+        #       3. For volume internal block replication, should we return
+        #          True or False?
+        #          # Assuming False.
+        #       4. For volume_child_dependency_rm() against volume internal
+        #          block replication, remove the replication or raise an
+        #          error?
+        #          # Assuming we remove the replication.
+ src_sim_vol_id = SimArray._lsm_id_to_sim_id(vol_id)
+ dst_sim_vol_ids = self.bs_obj.dst_sim_vol_ids_of_src(src_sim_vol_id)
+        for dst_sim_vol_id in dst_sim_vol_ids:
+            if dst_sim_vol_id != src_sim_vol_id:
+ return True
+ return False
+
+ @_handle_errors
+ def volume_child_dependency_rm(self, vol_id, flags=0):
+ self.bs_obj.trans_begin()
+
+ self.bs_obj.sim_vol_src_replica_break(
+ SimArray._lsm_id_to_sim_id(vol_id))
+
+ job_id = self._job_create()
+ self.bs_obj.trans_commit()
+ return job_id
+
+ @staticmethod
+ def _sim_fs_2_lsm(sim_fs):
+ fs_id = SimArray._sim_id_to_lsm_id(sim_fs['id'], 'FS')
+        pool_id = SimArray._sim_id_to_lsm_id(sim_fs['pool_id'], 'POOL')
+ return FileSystem(fs_id, sim_fs['name'],
+ sim_fs['total_space'], sim_fs['free_space'],
+ pool_id, BackStore.SYS_ID)
+
+ @_handle_errors
def fs(self):
- return self.fs_dict.values()
+ return list(SimArray._sim_fs_2_lsm(f) for f in self.bs_obj.sim_fss())

- def fs_create(self, pool_id, fs_name, size_bytes, flags=0):
+ @_handle_errors
+ def fs_create(self, pool_id, fs_name, size_bytes, flags=0,
+ _internal_use=False):

- self._check_dup_name(self.fs_dict.values(), fs_name,
- ErrorNumber.NAME_CONFLICT)
+ if not _internal_use:
+ self.bs_obj.trans_begin()

- size_bytes = SimData._block_rounding(size_bytes)
- # check free size
- free_space = self.pool_free_space(pool_id)
- if (free_space < size_bytes):
- raise LsmError(ErrorNumber.NOT_ENOUGH_SPACE,
- "Insufficient space in pool")
- sim_fs = dict()
- fs_id = self._next_fs_id()
- sim_fs['fs_id'] = fs_id
- sim_fs['name'] = fs_name
- sim_fs['total_space'] = size_bytes
- sim_fs['free_space'] = size_bytes
- sim_fs['sys_id'] = SimData.SIM_DATA_SYS_ID
- sim_fs['pool_id'] = pool_id
- sim_fs['consume_size'] = size_bytes
- self.fs_dict[fs_id] = sim_fs
- return sim_fs
+ new_sim_fs_id = self.bs_obj.sim_fs_create(
+ fs_name, size_bytes, SimArray._lsm_id_to_sim_id(pool_id))

- def fs_delete(self, fs_id, flags=0):
- if fs_id in self.fs_dict.keys():
- del(self.fs_dict[fs_id])
- return
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "No such File System: %s" % fs_id)
+ if _internal_use:
+ return new_sim_fs_id

- def fs_resize(self, fs_id, new_size_bytes, flags=0):
- new_size_bytes = SimData._block_rounding(new_size_bytes)
- if fs_id in self.fs_dict.keys():
- pool_id = self.fs_dict[fs_id]['pool_id']
- free_space = self.pool_free_space(pool_id)
- if (free_space < new_size_bytes):
- raise LsmError(ErrorNumber.NOT_ENOUGH_SPACE,
- "Insufficient space in pool")
-
- if self.fs_dict[fs_id]['total_space'] == new_size_bytes:
- raise LsmError(ErrorNumber.NO_STATE_CHANGE,
- "New size same as current")
-
- self.fs_dict[fs_id]['total_space'] = new_size_bytes
- self.fs_dict[fs_id]['free_space'] = new_size_bytes
- self.fs_dict[fs_id]['consume_size'] = new_size_bytes
- return self.fs_dict[fs_id]
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "No such File System: %s" % fs_id)
+ job_id = self._job_create(
+ BackStore.JOB_DATA_TYPE_FS, new_sim_fs_id)
+ self.bs_obj.trans_commit()
+
+ return job_id, None

+ @_handle_errors
+ def fs_delete(self, fs_id, flags=0):
+        # TODO: check clone, snapshot, etc.
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_fs_delete(SimArray._lsm_id_to_sim_id(fs_id))
+ job_id = self._job_create()
+ self.bs_obj.trans_commit()
+ return job_id
+
+ @_handle_errors
+ def fs_resize(self, fs_id, new_size_bytes, flags=0):
+ sim_fs_id = SimArray._lsm_id_to_sim_id(fs_id)
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_fs_resize(sim_fs_id, new_size_bytes)
+ job_id = self._job_create(BackStore.JOB_DATA_TYPE_FS, sim_fs_id)
+ self.bs_obj.trans_commit()
+ return job_id, None
+
+ @_handle_errors
def fs_clone(self, src_fs_id, dst_fs_name, snap_id, flags=0):
- if src_fs_id not in self.fs_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % src_fs_id)
- if snap_id and snap_id not in self.snap_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_FS_SS,
- "No such Snapshot: %s" % snap_id)
+ self.bs_obj.trans_begin()
+
+ sim_fs_snap_id = None
+ if snap_id:
+ sim_fs_snap_id = SimArray._lsm_id_to_sim_id(snap_id)

- src_sim_fs = self.fs_dict[src_fs_id]
- if 'clone' not in src_sim_fs.keys():
- src_sim_fs['clone'] = dict()
+ src_sim_fs_id = SimArray._lsm_id_to_sim_id(src_fs_id)
+ src_sim_fs = self.bs_obj.sim_fs_of_id(src_sim_fs_id)
+ pool_id = SimArray._sim_id_to_lsm_id(src_sim_fs['pool_id'], 'POOL')

- # Make sure we don't have a duplicate name
- self._check_dup_name(self.fs_dict.values(), dst_fs_name,
- ErrorNumber.NAME_CONFLICT)
+ dst_sim_fs_id = self.fs_create(
+ pool_id, dst_fs_name, src_sim_fs['total_space'],
+ _internal_use=True)

- dst_sim_fs = self.fs_create(
- src_sim_fs['pool_id'], dst_fs_name, src_sim_fs['total_space'], 0)
+ self.bs_obj.sim_fs_clone(src_sim_fs_id, dst_sim_fs_id, sim_fs_snap_id)

- src_sim_fs['clone'][dst_sim_fs['fs_id']] = {
- 'snap_id': snap_id,
- }
- return dst_sim_fs
+ job_id = self._job_create(
+ BackStore.JOB_DATA_TYPE_FS, dst_sim_fs_id)
+ self.bs_obj.trans_commit()

+ return job_id, None
+
+ @_handle_errors
def fs_file_clone(self, fs_id, src_fs_name, dst_fs_name, snap_id, flags=0):
- if fs_id not in self.fs_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- if snap_id and snap_id not in self.snap_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_FS_SS,
- "No such Snapshot: %s" % snap_id)
- # TODO: No file clone query API yet, no need to do anything internally
- return None
+ self.bs_obj.trans_begin()
+ sim_fs_snap_id = None
+ if snap_id:
+ sim_fs_snap_id = SimArray._lsm_id_to_sim_id(snap_id)

- def fs_snapshots(self, fs_id, flags=0):
- if fs_id not in self.fs_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- rc = []
- if 'snaps' in self.fs_dict[fs_id].keys():
- for snap_id in self.fs_dict[fs_id]['snaps']:
- rc.extend([self.snap_dict[snap_id]])
- return rc
+ self.bs_obj.sim_fs_file_clone(
+ SimArray._lsm_id_to_sim_id(fs_id), src_fs_name, dst_fs_name,
+ sim_fs_snap_id)

- def fs_snapshot_create(self, fs_id, snap_name, flags=0):
- if fs_id not in self.fs_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- if 'snaps' not in self.fs_dict[fs_id].keys():
- self.fs_dict[fs_id]['snaps'] = []
- else:
- self._check_dup_name(self.fs_dict[fs_id]['snaps'], snap_name,
- ErrorNumber.NAME_CONFLICT)
+ job_id = self._job_create()
+ self.bs_obj.trans_commit()
+ return job_id

- snap_id = self._next_snap_id()
- sim_snap = dict()
- sim_snap['snap_id'] = snap_id
- sim_snap['name'] = snap_name
- sim_snap['files'] = []
+ @staticmethod
+ def _sim_fs_snap_2_lsm(sim_fs_snap):
+ snap_id = SimArray._sim_id_to_lsm_id(sim_fs_snap['id'], 'FS_SNAP')
+ return FsSnapshot(
+ snap_id, sim_fs_snap['name'], sim_fs_snap['timestamp'])

- sim_snap['timestamp'] = time.time()
- self.snap_dict[snap_id] = sim_snap
- self.fs_dict[fs_id]['snaps'].extend([snap_id])
- return sim_snap
+ @_handle_errors
+ def fs_snapshots(self, fs_id, flags=0):
+ return list(
+ SimArray._sim_fs_snap_2_lsm(s)
+ for s in self.bs_obj.sim_fs_snaps(
+ SimArray._lsm_id_to_sim_id(fs_id)))

+ @_handle_errors
+ def fs_snapshot_create(self, fs_id, snap_name, flags=0):
+ self.bs_obj.trans_begin()
+ sim_fs_snap_id = self.bs_obj.sim_fs_snap_create(
+ SimArray._lsm_id_to_sim_id(fs_id), snap_name)
+ job_id = self._job_create(
+ BackStore.JOB_DATA_TYPE_FS_SNAP, sim_fs_snap_id)
+ self.bs_obj.trans_commit()
+ return job_id, None
+
+ @_handle_errors
def fs_snapshot_delete(self, fs_id, snap_id, flags=0):
- if fs_id not in self.fs_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- if snap_id not in self.snap_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_FS_SS,
- "No such Snapshot: %s" % snap_id)
- del self.snap_dict[snap_id]
- new_snap_ids = []
- for old_snap_id in self.fs_dict[fs_id]['snaps']:
- if old_snap_id != snap_id:
- new_snap_ids.extend([old_snap_id])
- self.fs_dict[fs_id]['snaps'] = new_snap_ids
- return None
-
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_fs_snap_delete(
+ SimArray._lsm_id_to_sim_id(snap_id),
+ SimArray._lsm_id_to_sim_id(fs_id))
+ job_id = self._job_create()
+ self.bs_obj.trans_commit()
+ return job_id
+
+ @_handle_errors
def fs_snapshot_restore(self, fs_id, snap_id, files, restore_files,
flag_all_files, flags):
- if fs_id not in self.fs_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- if snap_id not in self.snap_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_FS_SS,
- "No such Snapshot: %s" % snap_id)
- # Nothing need to done internally for restore.
- return None
+ self.bs_obj.trans_begin()
+ sim_fs_snap_id = None
+ if snap_id:
+ sim_fs_snap_id = SimArray._lsm_id_to_sim_id(snap_id)
+
+ self.bs_obj.sim_fs_snap_restore(
+ SimArray._lsm_id_to_sim_id(fs_id),
+ sim_fs_snap_id, files, restore_files, flag_all_files)

+ job_id = self._job_create()
+ self.bs_obj.trans_commit()
+ return job_id
+
+ @_handle_errors
def fs_child_dependency(self, fs_id, files, flags=0):
- if fs_id not in self.fs_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- if 'snaps' not in self.fs_dict[fs_id].keys():
+ sim_fs_id = SimArray._lsm_id_to_sim_id(fs_id)
+ self.bs_obj.trans_begin()
+ if self.bs_obj.clone_dst_sim_fs_ids_of_src(sim_fs_id) == [] and \
+ self.bs_obj.sim_fs_snaps(sim_fs_id) == []:
+ self.bs_obj.trans_rollback()
return False
- if files is None or len(files) == 0:
- if len(self.fs_dict[fs_id]['snaps']) >= 0:
- return True
- else:
- for req_file in files:
- for snap_id in self.fs_dict[fs_id]['snaps']:
- if len(self.snap_dict[snap_id]['files']) == 0:
- # We are snapshoting all files
- return True
- if req_file in self.snap_dict[snap_id]['files']:
- return True
- return False
+ self.bs_obj.trans_rollback()
+ return True

+ @_handle_errors
def fs_child_dependency_rm(self, fs_id, files, flags=0):
- if fs_id not in self.fs_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- if 'snaps' not in self.fs_dict[fs_id].keys():
- return None
- if files is None or len(files) == 0:
- if len(self.fs_dict[fs_id]['snaps']) >= 0:
- snap_ids = self.fs_dict[fs_id]['snaps']
- for snap_id in snap_ids:
- del self.snap_dict[snap_id]
- del self.fs_dict[fs_id]['snaps']
- else:
- for req_file in files:
- snap_ids_to_rm = []
- for snap_id in self.fs_dict[fs_id]['snaps']:
- if len(self.snap_dict[snap_id]['files']) == 0:
- # BUG: if certain snapshot is againsting all files,
- # what should we do if user request remove
- # dependency on certain files.
- # Currently, we do nothing
- return None
- if req_file in self.snap_dict[snap_id]['files']:
- new_files = []
- for old_file in self.snap_dict[snap_id]['files']:
- if old_file != req_file:
- new_files.extend([old_file])
- if len(new_files) == 0:
- # all files has been removed from snapshot list.
- snap_ids_to_rm.extend([snap_id])
- else:
- self.snap_dict[snap_id]['files'] = new_files
- for snap_id in snap_ids_to_rm:
- del self.snap_dict[snap_id]
-
- new_snap_ids = []
- for cur_snap_id in self.fs_dict[fs_id]['snaps']:
- if cur_snap_id not in snap_ids_to_rm:
- new_snap_ids.extend([cur_snap_id])
- if len(new_snap_ids) == 0:
- del self.fs_dict[fs_id]['snaps']
- else:
- self.fs_dict[fs_id]['snaps'] = new_snap_ids
- return None
+ """
+        Assuming the API definition is: break all clone relationships and
+        remove all snapshots of this source file system.
+        """
+        if self.fs_child_dependency(fs_id, files) is False:
+            raise LsmError(
+                ErrorNumber.NO_STATE_CHANGE,
+                "No snapshot or fs clone target found for this file system")
+
+        self.bs_obj.trans_begin()
+        src_sim_fs_id = SimArray._lsm_id_to_sim_id(fs_id)
+        self.bs_obj.sim_fs_src_clone_break(src_sim_fs_id)
+        self.bs_obj.sim_fs_snap_del_by_fs(src_sim_fs_id)
+        job_id = self._job_create()
+        self.bs_obj.trans_commit()
+        return job_id

+ @staticmethod
+ def _sim_exp_2_lsm(sim_exp):
+ exp_id = SimArray._sim_id_to_lsm_id(sim_exp['id'], 'EXP')
+ fs_id = SimArray._sim_id_to_lsm_id(sim_exp['fs_id'], 'FS')
+ return NfsExport(exp_id, fs_id, sim_exp['exp_path'],
+ sim_exp['auth_type'], sim_exp['root_hosts'],
+ sim_exp['rw_hosts'], sim_exp['ro_hosts'],
+ sim_exp['anon_uid'], sim_exp['anon_gid'],
+ sim_exp['options'])
+
+ @_handle_errors
def exports(self, flags=0):
- return self.exp_dict.values()
+ return [SimArray._sim_exp_2_lsm(e) for e in self.bs_obj.sim_exps()]

+ @_handle_errors
def fs_export(self, fs_id, exp_path, root_hosts, rw_hosts, ro_hosts,
anon_uid, anon_gid, auth_type, options, flags=0):
- if fs_id not in self.fs_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- sim_exp = dict()
- sim_exp['exp_id'] = self._next_exp_id()
- sim_exp['fs_id'] = fs_id
- if exp_path is None:
- sim_exp['exp_path'] = "/%s" % sim_exp['exp_id']
- else:
- sim_exp['exp_path'] = exp_path
- sim_exp['auth_type'] = auth_type
- sim_exp['root_hosts'] = root_hosts
- sim_exp['rw_hosts'] = rw_hosts
- sim_exp['ro_hosts'] = ro_hosts
- sim_exp['anon_uid'] = anon_uid
- sim_exp['anon_gid'] = anon_gid
- sim_exp['options'] = options
- self.exp_dict[sim_exp['exp_id']] = sim_exp
- return sim_exp
+ self.bs_obj.trans_begin()
+ sim_exp_id = self.bs_obj.sim_exp_create(
+ SimArray._lsm_id_to_sim_id(fs_id), exp_path, root_hosts, rw_hosts,
+ ro_hosts, anon_uid, anon_gid, auth_type, options)
+ sim_exp = self.bs_obj.sim_exp_of_id(sim_exp_id)
+ self.bs_obj.trans_commit()
+ return SimArray._sim_exp_2_lsm(sim_exp)

+ @_handle_errors
def fs_unexport(self, exp_id, flags=0):
- if exp_id not in self.exp_dict.keys():
- raise LsmError(ErrorNumber.NOT_FOUND_NFS_EXPORT,
- "No such NFS Export: %s" % exp_id)
- del self.exp_dict[exp_id]
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_exp_delete(SimArray._lsm_id_to_sim_id(exp_id))
+ self.bs_obj.trans_commit()
+ return None
+
+ @staticmethod
+ def _sim_ag_2_lsm(sim_ag):
+ ag_id = SimArray._sim_id_to_lsm_id(sim_ag['id'], 'AG')
+ return AccessGroup(ag_id, sim_ag['name'], sim_ag['init_ids'],
+ sim_ag['init_type'], BackStore.SYS_ID)
+
+ @_handle_errors
+ def ags(self):
+ return list(SimArray._sim_ag_2_lsm(a) for a in self.bs_obj.sim_ags())
+
+ @_handle_errors
+ def access_group_create(self, name, init_id, init_type, sys_id, flags=0):
+ if sys_id != BackStore.SYS_ID:
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_SYSTEM,
+ "System not found")
+ self.bs_obj.trans_begin()
+ new_sim_ag_id = self.bs_obj.sim_ag_create(name, init_type, init_id)
+ new_sim_ag = self.bs_obj.sim_ag_of_id(new_sim_ag_id)
+ self.bs_obj.trans_commit()
+ return SimArray._sim_ag_2_lsm(new_sim_ag)
+
+ @_handle_errors
+ def access_group_delete(self, ag_id, flags=0):
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_ag_delete(SimArray._lsm_id_to_sim_id(ag_id))
+ self.bs_obj.trans_commit()
+ return None
+
+ @_handle_errors
+ def access_group_initiator_add(self, ag_id, init_id, init_type, flags=0):
+ sim_ag_id = SimArray._lsm_id_to_sim_id(ag_id)
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_ag_init_add(sim_ag_id, init_id, init_type)
+ new_sim_ag = self.bs_obj.sim_ag_of_id(sim_ag_id)
+ self.bs_obj.trans_commit()
+ return SimArray._sim_ag_2_lsm(new_sim_ag)
+
+ @_handle_errors
+ def access_group_initiator_delete(self, ag_id, init_id, init_type,
+ flags=0):
+ sim_ag_id = SimArray._lsm_id_to_sim_id(ag_id)
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_ag_init_delete(sim_ag_id, init_id)
+ sim_ag = self.bs_obj.sim_ag_of_id(sim_ag_id)
+ self.bs_obj.trans_commit()
+ return SimArray._sim_ag_2_lsm(sim_ag)
+
+ @_handle_errors
+ def volume_mask(self, ag_id, vol_id, flags=0):
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_vol_mask(
+ SimArray._lsm_id_to_sim_id(vol_id),
+ SimArray._lsm_id_to_sim_id(ag_id))
+ self.bs_obj.trans_commit()
+ return None
+
+    @_handle_errors
+    def volume_unmask(self, ag_id, vol_id, flags=0):
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_vol_unmask(
+ SimArray._lsm_id_to_sim_id(vol_id),
+ SimArray._lsm_id_to_sim_id(ag_id))
+ self.bs_obj.trans_commit()
+ return None
+
+    @_handle_errors
+    def volumes_accessible_by_access_group(self, ag_id, flags=0):
+ self.bs_obj.trans_begin()
+
+ sim_vols = self.bs_obj.sim_vols(
+ sim_ag_id=SimArray._lsm_id_to_sim_id(ag_id))
+
+ self.bs_obj.trans_rollback()
+ return [SimArray._sim_vol_2_lsm(v) for v in sim_vols]
+
+    @_handle_errors
+    def access_groups_granted_to_volume(self, vol_id, flags=0):
+ self.bs_obj.trans_begin()
+ sim_ags = self.bs_obj.sim_ags(
+ sim_vol_id=SimArray._lsm_id_to_sim_id(vol_id))
+ self.bs_obj.trans_rollback()
+ return [SimArray._sim_ag_2_lsm(a) for a in sim_ags]
+
+    @_handle_errors
+    def iscsi_chap_auth(self, init_id, in_user, in_pass, out_user, out_pass,
+ flags=0):
+ self.bs_obj.trans_begin()
+ self.bs_obj.iscsi_chap_auth_set(
+ init_id, in_user, in_pass, out_user, out_pass)
+ self.bs_obj.trans_commit()
return None

+ @staticmethod
+ def _sim_tgt_2_lsm(sim_tgt):
+ tgt_id = "TGT_PORT_ID_%0*d" % (SimArray.ID_FMT, sim_tgt['id'])
+ return TargetPort(
+ tgt_id, sim_tgt['port_type'], sim_tgt['service_address'],
+ sim_tgt['network_address'], sim_tgt['physical_address'],
+ sim_tgt['physical_name'],
+ BackStore.SYS_ID)
+
def target_ports(self):
- return self.tgt_dict.values()
+ return list(SimArray._sim_tgt_2_lsm(t) for t in self.bs_obj.sim_tgts())
diff --git a/plugin/sim/simulator.py b/plugin/sim/simulator.py
index 79b6df5..8f7adfc 100644
--- a/plugin/sim/simulator.py
+++ b/plugin/sim/simulator.py
@@ -29,7 +29,6 @@ class SimPlugin(INfs, IStorageAreaNetwork):
def __init__(self):
self.uri = None
self.password = None
- self.tmo = 0
self.sim_array = None

def plugin_register(self, uri, password, timeout, flags=0):
@@ -41,14 +40,14 @@ class SimPlugin(INfs, IStorageAreaNetwork):
qp = uri_parse(uri)
if 'parameters' in qp and 'statefile' in qp['parameters'] \
and qp['parameters']['statefile'] is not None:
- self.sim_array = SimArray(qp['parameters']['statefile'])
+ self.sim_array = SimArray(qp['parameters']['statefile'], timeout)
else:
- self.sim_array = SimArray()
+ self.sim_array = SimArray(None, timeout)

return None

def plugin_unregister(self, flags=0):
- self.sim_array.save_state()
+ pass

def job_status(self, job_id, flags=0):
return self.sim_array.job_status(job_id, flags)
--
1.8.3.1
Tony Asleson
2015-01-08 20:31:11 UTC
Permalink
Hi Gris,

Lots of changes here. I haven't had a chance to fully scrutinize it,
but this is what I have come up with so far.

I see that when we create something we start a transaction and then
after the item or job is created we commit the transaction. I don't
know what the concurrency or isolation levels are for sqlite, but one
potential issue is that if we create a job, the transaction shouldn't get
committed until the job completes. Otherwise the record exists in the
db before the object is even returned to the user who initiated its creation.
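
To illustrate (a minimal sketch using the helper names from the patch;
the ordering is the point, not the exact calls):

    # Pattern in the patch today, simplified:
    bs_obj.trans_begin()
    new_vol_id = bs_obj.sim_vol_create(name, size_bytes, pool_id, thinp)
    job_id = bs_obj.sim_job_create(BackStore.JOB_DATA_TYPE_VOL, new_vol_id)
    bs_obj.trans_commit()   # the volume row is visible to every other
                            # client from here on, while the job that
                            # "creates" it is still in progress
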
Post by Gris Ge
* This is just a domo patch. Please don't commit.
* Tested with 100 thread of lsmcli for con-currency action.
* Tested by plugin_test.
* Don't have complex data layout in database. Prepare for sharing same
statefile with 'simc://' plugin.
* Found some blur definition on replication (both volume and fs).
Please check "#TODO" comments of replication methods.
1. 'make check' failed at tester.c.
2. Make statefile global readable and writable. Maybe?
3. Remove public method of BackStore class.
---
plugin/sim/simarray.py | 3066 ++++++++++++++++++++++++++++-------------------
plugin/sim/simulator.py | 7 +-
2 files changed, 1804 insertions(+), 1269 deletions(-)
diff --git a/plugin/sim/simarray.py b/plugin/sim/simarray.py
index d733e50..9f7f6c5 100644
--- a/plugin/sim/simarray.py
+++ b/plugin/sim/simarray.py
@@ -1,10 +1,10 @@
-# Copyright (C) 2011-2014 Red Hat, Inc.
+# Copyright (C) 2011-2015 Red Hat, Inc.
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or any later version.
#
-# This library is distributed in the hope that it will be useful,
+# This library is distributed in the hope that it is useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
@@ -16,23 +16,64 @@
# Author: tasleson
-# TODO: 1. Introduce constant check by using state_to_str() converting.
-# 2. Snapshot should consume space in pool.
+# TODO: 1. Review public method of BackStore
+# 2. Make sure statefile is created with global write.
Can sqlite handle multiple clients concurrently accessing the same
on-disk data? I'm not sure what it can/cannot do.
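
A quick experiment suggests sqlite serializes writers through file
locking: a second connection trying to write blocks for the busy
timeout passed to connect() and then fails with 'database is locked',
which is what the _handle_errors wrapper maps to ErrorNumber.TIMEOUT.
A self-contained demo (the scratch file name is arbitrary):

    import sqlite3
    import tempfile

    path = tempfile.gettempdir() + '/lsm_lock_demo.db'
    c1 = sqlite3.connect(path, timeout=1, isolation_level=None)
    c1.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER);")
    c1.execute("BEGIN IMMEDIATE;")        # c1 now holds the write lock
    c2 = sqlite3.connect(path, timeout=1, isolation_level=None)
    try:
        c2.execute("BEGIN IMMEDIATE;")    # waits ~1s, then gives up
    except sqlite3.OperationalError as e:
        print(e)                          # "database is locked"
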
Post by Gris Ge
import random
-import pickle
import tempfile
import os
import time
-import fcntl
+import sqlite3
-from lsm import (size_human_2_size_bytes, size_bytes_2_size_human)
+
+from lsm import (size_human_2_size_bytes)
from lsm import (System, Volume, Disk, Pool, FileSystem, AccessGroup,
FsSnapshot, NfsExport, md5, LsmError, TargetPort,
ErrorNumber, JobStatus)
-# Used for format width for disks
-D_FMT = 5
+
+ return method(*args, **kargs)
+ args[0].bs_obj.trans_rollback()
+ raise LsmError(
+ ErrorNumber.TIMEOUT,
+ "Timeout to require lock on state file")
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Got unexpected error from sqlite3: %s" % sql_error.message)
+ args[0].bs_obj.trans_rollback()
+ raise
+ args[0].bs_obj.trans_rollback()
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Got unexpected error: %s" % str(base_error))
+ return md_wrapper
+
+
+ rc_dict = {}
+ rc_dict[sim_data['id']] = sim_data
+ return rc_dict
+
+
+ """
+ Generate a random VPD83 NAA_Type3 ID
+ """
+ vpd = ['60']
+ vpd.append(str('%02x' % (random.randint(0, 255))))
+ return "".join(vpd)
"""
return member_type in PoolRAID._MEMBER_TYPE_2_DISK_TYPE
-
- """
- Simulates a longer running job, uses actual wall time. If test cases
- take too long we can reduce time by shortening time duration.
- """
-
- end = self.start + self.duration
- now = time.time()
- self.percent = 100
- self.status = JobStatus.COMPLETE
- diff = now - self.start
- self.percent = int(100 * (diff / self.duration))
-
- duration = os.getenv("LSM_SIM_TIME", 1)
- self.status = JobStatus.INPROGRESS
- self.percent = 0
- self.__item = item_to_return
- self.start = time.time()
- self.duration = float(random.randint(0, int(duration)))
-
- """
- Returns a tuple (status, percent, data)
- """
- self._calc_progress()
- return self.status, self.percent, self.item
-
- return self.__item
- return None
-
- self.__item = value
-
-
- SIM_DATA_FILE = os.getenv("LSM_SIM_DATA",
- tempfile.gettempdir() + '/lsm_sim_data')
-
- raise LsmError(ErrorNumber.INVALID_ARGUMENT,
- "Stored simulator state incompatible with "
- "simulator, please move or delete %s" %
- dump_file)
-
@staticmethod
- # isupper() just make sure the 'lsm_test_aggr' come as first one.
- return sorted(resources, key=lambda k: (k.id.isupper(), k.id))
+ return m_type
+ return PoolRAID.MEMBER_TYPE_UNKNOWN
+
+
+ VERSION = "3.0"
+ VERSION_SIGNATURE = 'LSM_SIMULATOR_DATA_%s_%s' % (VERSION, md5(VERSION))
+ JOB_DEFAULT_DURATION = 1
+ JOB_DATA_TYPE_VOL = 1
+ JOB_DATA_TYPE_FS = 2
+ JOB_DATA_TYPE_FS_SNAP = 3
+
+ SYS_ID = "sim-01"
+ SYS_NAME = "LSM simulated storage plug-in"
+ BLK_SIZE = 512
+
+ SYS_KEY_LIST = ['id', 'name', 'status', 'status_info', 'version']
+
+ POOL_KEY_LIST = [
+ 'id', 'name', 'status', 'status_info',
+ 'element_type', 'unsupported_actions', 'raid_type',
+ 'member_type', 'total_space', 'parent_pool_id']
+
+ DISK_KEY_LIST = [
+ 'id', 'disk_prefix', 'total_space', 'disk_type', 'status',
+ 'owner_pool_id']
+
+ VOL_KEY_LIST = [
+ 'id', 'vpd83', 'name', 'total_space', 'consumed_size',
+ 'pool_id', 'admin_state', 'rep_source', 'rep_type', 'thinp']
+
+ TGT_KEY_LIST = [
+ 'id', 'port_type', 'service_address', 'network_address',
+ 'physical_address', 'physical_name']
+
+ AG_KEY_LIST = ['id', 'name']
+
+ INIT_KEY_LIST = ['id', 'init_type', 'owner_ag_id']
+
+ JOB_KEY_LIST = ['id', 'duration', 'timestamp', 'data_type', 'data_id']
+
+ FS_KEY_LIST = [
+ 'id', 'name', 'total_space', 'free_space', 'consumed_size',
+ 'pool_id']
+
+ FS_SNAP_KEY_LIST = [
+ 'id', 'fs_id', 'name', 'timestamp']
+
+ EXP_KEY_LIST = [
+ 'id', 'fs_id', 'exp_path', 'auth_type', 'root_hosts', 'rw_hosts',
+ 'ro_hosts', 'anon_uid', 'anon_gid', 'options']
+
+ self.statefile = statefile
+ self.lastrowid = None
+ self.sql_conn = sqlite3.connect(
+ statefile, timeout=int(timeout/1000), isolation_level="IMMEDIATE")
+        # Create tables whether they already exist or not. No lock required.
+ sql_cmd = (
+ "CREATE TABLE systems ("
+ "id TEXT PRIMARY KEY, "
+ "name TEXT, "
+ "status INTEGER, "
+ "status_info TEXT, "
+ "version TEXT);\n") # version hold the signature of data
+
+ sql_cmd += (
+ "CREATE TABLE disks ("
+ "id INTEGER PRIMARY KEY, "
+ "total_space LONG, "
+ "disk_type INTEGER, "
+ "status INTEGER, "
+ "owner_pool_id INTEGER, " # Indicate this disk is used to
+ # assemble a pool
+ "disk_prefix TEXT);\n")
Add:
FOREIGN KEY(system_id) REFERENCES system(id)

A disk can exist when a pool does not; in addition, a disk could
participate in multiple pools. A pool could exist without any disks.
Instead of having the field owner_pool_id present, I believe we should
have a separate table that relates which disk(s) are in which pool(s).

CREATE TABLE disk_allocation (
    pool_id INTEGER,
    disk_id INTEGER,
    percent_utilized INTEGER,
    FOREIGN KEY (pool_id) REFERENCES pools(id),
    FOREIGN KEY (disk_id) REFERENCES disks(id)
)
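
That way a disk split between two pools simply gets two rows (e.g.
percent_utilized = 50 in each), and a disk that isn't part of any pool
has no row at all.
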
Post by Gris Ge
+
+ sql_cmd += (
+ "CREATE TABLE tgts ("
+ "id INTEGER PRIMARY KEY, "
+ "port_type INTEGER, "
+ "service_address TEXT, "
+ "network_address TEXT, "
+ "physical_address TEXT, "
+ "physical_name TEXT);\n")
This should be related to a system, yes?
FOREIGN KEY(system_id) REFERENCES system(id)
Post by Gris Ge
+
+ sql_cmd += (
+ "CREATE TABLE pools ("
+ "id INTEGER PRIMARY KEY, "
+ "name TEXT UNIQUE, "
+ "status INTEGER, "
+ "status_info TEXT, "
+ "element_type INTEGER, "
+ "unsupported_actions INTEGER, "
+ "raid_type INTEGER, "
+ "parent_pool_id INTEGER, " # Indicate this pool is allocated from
+ # other pool
+ "member_type INTEGER, "
+ "total_space LONG);\n") # total_space here is only for
+ # sub-pool (pool from pool)
Add:
FOREIGN KEY(system_id) REFERENCES system(id)


I suggest we ditch pools allocated from other pools for the simulator.
Calculate total_space & free_space from the disks that comprise the
pool and the volumes that have been created from it.
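
A sketch of that derivation (assuming the hypothetical disk_allocation
table above plus the patch's volumes table; fs capacity would need the
same treatment):

    def pool_total_space(sql_conn, pool_id):
        # Capacity contributed by each disk, scaled by its allocation.
        row = sql_conn.execute(
            "SELECT SUM(d.total_space * da.percent_utilized / 100) "
            "FROM disks d, disk_allocation da "
            "WHERE da.disk_id = d.id AND da.pool_id = ?;",
            (pool_id,)).fetchone()
        return row[0] or 0

    def pool_free_space(sql_conn, pool_id):
        # Total minus what the pool's volumes have consumed.
        consumed = sql_conn.execute(
            "SELECT SUM(consumed_size) FROM volumes WHERE pool_id = ?;",
            (pool_id,)).fetchone()[0] or 0
        return pool_total_space(sql_conn, pool_id) - consumed
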
Post by Gris Ge
+ sql_cmd += (
+ "CREATE TABLE volumes ("
+ "id INTEGER PRIMARY KEY, "
+ "vpd83 TEXT, "
+ "name TEXT UNIQUE, "
+ "total_space LONG, "
+ "consumed_size LONG, " # Reserved for future thinp support.
+ "pool_id INTEGER, "
FOREIGN KEY(pool_id) REFERENCES pool(id),
Post by Gris Ge
+ "admin_state INTEGER, "
+ "thinp INTEGER, "
+ "rep_source INTEGER, " # Only exist on target volume.
+ "rep_type INTEGER);\n") # Only exist on target volume
The relationship between volumes should be stored in a separate table,
one which I think you have already created below in vol_reps?
Post by Gris Ge
+
+ sql_cmd += (
+ "CREATE TABLE ags ("
+ "id INTEGER PRIMARY KEY, "
+ "name TEXT UNIQUE);\n")
+
+ sql_cmd += (
+ "CREATE TABLE inits ("
+ "id TEXT UNIQUE, "
+ "init_type INTEGER, "
+ "owner_ag_id INTEGER);\n")
FOREIGN KEY(owner_ag_id) REFERENCES ag(id),
Post by Gris Ge
+
+ sql_cmd += (
+ "CREATE TABLE vol_masks ("
+ "id TEXT UNIQUE, " # md5 hash of vol_id and ag_id
+ "vol_id INTEGER, "
FOREIGN KEY(vol_id) REFERENCES volumes(id),
Post by Gris Ge
+ "ag_id INTEGER);\n")
FOREIGN KEY(ag_id) REFERENCES ags(id),
Post by Gris Ge
+ sql_cmd += (
+ "CREATE TABLE vol_reps ("
+ "rep_type INTEGER, "
+ "src_vol_id INTEGER, "
FOREIGN KEY(src_vol_id) REFERENCES volumes(id),
Post by Gris Ge
+ "dst_vol_id INTEGER);\n")
FOREIGN KEY(dst_vol_id) REFERENCES volumes(id),
Post by Gris Ge
+ sql_cmd += (
+ "CREATE TABLE vol_rep_ranges ("
+ "src_vol_id INTEGER, "
+ "dst_vol_id INTEGER, "
+ "src_start_blk LONG, "
+ "dst_start_blk LONG, "
+ "blk_count LONG);\n")
Do we really need to keep track of replication ranges? How will we
leverage this?
Post by Gris Ge
+
+ sql_cmd += (
+ "CREATE TABLE fss ("
+ "id INTEGER PRIMARY KEY, "
+ "name TEXT UNIQUE, "
+ "total_space LONG, "
+ "consumed_size LONG, "
+ "free_space LONG, "
+ "pool_id INTEGER);\n")
FOREIGN KEY(pool_id) REFERENCES pools(id),
Post by Gris Ge
+
+ sql_cmd += (
+ "CREATE TABLE fs_snaps ("
+ "id INTEGER PRIMARY KEY, "
+ "name TEXT UNIQUE, "
+ "fs_id INTEGER, "
FOREIGN KEY(fs_id) REFERENCES fs(id),
Post by Gris Ge
+ "timestamp TEXT);\n")
+
+ sql_cmd += (
+ "CREATE TABLE fs_clones ("
+ "id TEXT UNIQUE, "
+ "src_fs_id INTEGER, "
FOREIGN KEY(src_fs_id) REFERENCES fs(id),
Post by Gris Ge
+ "dst_fs_id INTEGER, "
FOREIGN KEY(dst_fs_id) REFERENCES fs(id),
Post by Gris Ge
+ "fs_snap_id INTEGER);\n")
Why does a fs snapshot need to exist for a fs_clone to exist?
Post by Gris Ge
+
+ sql_cmd += (
+ "CREATE TABLE exps ("
+ "id INTEGER PRIMARY KEY, "
+ "fs_id INTEGER, "
FOREIGN KEY(fs_id) REFERENCES fs(id),
Post by Gris Ge
+ "exp_path TEXT UNIQUE, "
+ "auth_type TEXT, "
+ "root_hosts TEXT, " # divided by ','
+ "rw_hosts TEXT, " # divided by ','
+ "ro_hosts TEXT, " # divided by ','
This is a violation of first normal form. These should be broken out
into a separate table or tables.
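
One way to break these out, in the same style the patch uses to build
its schema (table and column names here are just a suggestion):

    sql_cmd += (
        "CREATE TABLE exp_hosts ("
        "exp_id INTEGER NOT NULL, "
        "host TEXT NOT NULL, "
        "access TEXT NOT NULL, "    # 'root', 'rw' or 'ro'
        "FOREIGN KEY(exp_id) REFERENCES exps(id));\n")

Host lookups then become joins, and _sim_exp_fix_host_list() with its
string split/join goes away.
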
Post by Gris Ge
+ "anon_uid INTEGER, "
+ "anon_gid INTEGER, "
+ "options TEXT);\n")
+
+ sql_cmd += (
+ "CREATE TABLE jobs ("
+ "id INTEGER PRIMARY KEY, "
+ "duration INTEGER, "
+ "timestamp TEXT, "
+ "data_type INTEGER, "
+ "data_id TEXT);\n")
In addition, please add NOT NULL to anything that isn't a
primary/foreign key.
Post by Gris Ge
+
+ sql_cur = self.sql_conn.cursor()
+ sql_cur.executescript(sql_cmd)
+ pass
+ raise sql_error
- self.dump_file = SimArray.SIM_DATA_FILE
- self.dump_file = dump_file
-
- self.state_fd = os.open(self.dump_file, os.O_RDWR | os.O_CREAT)
- fcntl.lockf(self.state_fd, fcntl.LOCK_EX)
- self.state_fo = os.fdopen(self.state_fd, "r+b")
-
- current = self.state_fo.read()
-
- self.data = pickle.loads(current)
-
- # Going forward we could get smarter about handling this for
- # changes that aren't invasive, but we at least need to check
- # to make sure that the data will work and not cause any
- # undo confusion.
- if self.data.version != SimData.SIM_DATA_VERSION or \
- SimArray._version_error(self.dump_file)
- SimArray._version_error(self.dump_file)
+ sim_syss = self.sim_syss()
+ return False
- self.data = SimData()
-
- # Make sure we are at the beginning of the stream
- self.state_fo.seek(0)
- pickle.dump(self.data, self.state_fo)
- self.state_fo.flush()
- self.state_fo.close()
- self.state_fo = None
-
- return self.data.job_status(job_id, flags=0)
-
- return self.data.job_free(job_id, flags=0)
-
- return self.data.set_time_out(ms, flags)
-
- return self.data.get_time_out(flags)
-
- return self.data.systems()
-
- return Volume(sim_vol['vol_id'], sim_vol['name'], sim_vol['vpd83'],
- SimData.SIM_DATA_BLK_SIZE,
- int(sim_vol['total_space'] / SimData.SIM_DATA_BLK_SIZE),
- sim_vol['admin_state'], sim_vol['sys_id'],
- sim_vol['pool_id'])
-
- sim_vols = self.data.volumes()
- return SimArray._sort_by_id(
- [SimArray._sim_vol_2_lsm(v) for v in sim_vols])
-
- pool_id = sim_pool['pool_id']
- name = sim_pool['name']
- total_space = self.data.pool_total_space(pool_id)
- free_space = self.data.pool_free_space(pool_id)
- status = sim_pool['status']
- status_info = sim_pool['status_info']
- sys_id = sim_pool['sys_id']
- unsupported_actions = sim_pool['unsupported_actions']
- return Pool(pool_id, name,
- Pool.ELEMENT_TYPE_VOLUME | Pool.ELEMENT_TYPE_FS,
- unsupported_actions, total_space, free_space, status,
- status_info, sys_id)
-
- rc = []
- sim_pools = self.data.pools()
- rc.extend([self._sim_pool_2_lsm(sim_pool, flags)])
- return SimArray._sort_by_id(rc)
-
- rc = []
- sim_disks = self.data.disks()
- disk = Disk(sim_disk['disk_id'], sim_disk['name'],
- sim_disk['disk_type'], SimData.SIM_DATA_BLK_SIZE,
- int(sim_disk['total_space'] /
- SimData.SIM_DATA_BLK_SIZE), Disk.STATUS_OK,
- sim_disk['sys_id'])
- rc.extend([disk])
- return SimArray._sort_by_id(rc)
-
- sim_vol = self.data.volume_create(
- pool_id, vol_name, size_bytes, thinp, flags)
- return self.data.job_create(SimArray._sim_vol_2_lsm(sim_vol))
-
- self.data.volume_delete(vol_id, flags=0)
- return self.data.job_create(None)[0]
-
- sim_vol = self.data.volume_resize(vol_id, new_size_bytes, flags)
- return self.data.job_create(SimArray._sim_vol_2_lsm(sim_vol))
-
- def volume_replicate(self, dst_pool_id, rep_type, src_vol_id, new_vol_name,
- sim_vol = self.data.volume_replicate(
- dst_pool_id, rep_type, src_vol_id, new_vol_name, flags)
- return self.data.job_create(SimArray._sim_vol_2_lsm(sim_vol))
-
- return self.data.volume_replicate_range_block_size(sys_id, flags)
-
- def volume_replicate_range(self, rep_type, src_vol_id, dst_vol_id, ranges,
- return self.data.job_create(
- self.data.volume_replicate_range(
- rep_type, src_vol_id, dst_vol_id, ranges, flags))[0]
-
- return self.data.volume_enable(vol_id, flags)
-
- return self.data.volume_disable(vol_id, flags)
-
- return self.data.volume_child_dependency(vol_id, flags)
-
- return self.data.job_create(
- self.data.volume_child_dependency_rm(vol_id, flags))[0]
-
- return FileSystem(sim_fs['fs_id'], sim_fs['name'],
- sim_fs['total_space'], sim_fs['free_space'],
- sim_fs['pool_id'], sim_fs['sys_id'])
-
- sim_fss = self.data.fs()
- return SimArray._sort_by_id(
- [SimArray._sim_fs_2_lsm(f) for f in sim_fss])
-
- sim_fs = self.data.fs_create(pool_id, fs_name, size_bytes, flags)
- return self.data.job_create(SimArray._sim_fs_2_lsm(sim_fs))
-
- self.data.fs_delete(fs_id, flags=0)
- return self.data.job_create(None)[0]
-
- sim_fs = self.data.fs_resize(fs_id, new_size_bytes, flags)
- return self.data.job_create(SimArray._sim_fs_2_lsm(sim_fs))
-
- sim_fs = self.data.fs_clone(src_fs_id, dst_fs_name, snap_id, flags)
- return self.data.job_create(SimArray._sim_fs_2_lsm(sim_fs))
-
- return self.data.job_create(
- self.data.fs_file_clone(
- fs_id, src_fs_name, dst_fs_name, snap_id, flags))[0]
-
- return FsSnapshot(sim_snap['snap_id'], sim_snap['name'],
- sim_snap['timestamp'])
-
- sim_snaps = self.data.fs_snapshots(fs_id, flags)
- return [SimArray._sim_snap_2_lsm(s) for s in sim_snaps]
-
- sim_snap = self.data.fs_snapshot_create(fs_id, snap_name, flags)
- return self.data.job_create(SimArray._sim_snap_2_lsm(sim_snap))
-
- return self.data.job_create(
- self.data.fs_snapshot_delete(fs_id, snap_id, flags))[0]
-
- def fs_snapshot_restore(self, fs_id, snap_id, files, restore_files,
- return self.data.job_create(
- self.data.fs_snapshot_restore(
- fs_id, snap_id, files, restore_files,
- flag_all_files, flags))[0]
-
- return self.data.fs_child_dependency(fs_id, files, flags)
-
- return self.data.job_create(
- self.data.fs_child_dependency_rm(fs_id, files, flags))[0]
-
- return NfsExport(sim_exp['exp_id'], sim_exp['fs_id'],
- sim_exp['exp_path'], sim_exp['auth_type'],
- sim_exp['root_hosts'], sim_exp['rw_hosts'],
- sim_exp['ro_hosts'], sim_exp['anon_uid'],
- sim_exp['anon_gid'], sim_exp['options'])
-
- sim_exps = self.data.exports(flags)
- return SimArray._sort_by_id(
- [SimArray._sim_exp_2_lsm(e) for e in sim_exps])
+ if 'version' in sim_syss[0] and \
+ return True
- def fs_export(self, fs_id, exp_path, root_hosts, rw_hosts, ro_hosts,
- sim_exp = self.data.fs_export(
- fs_id, exp_path, root_hosts, rw_hosts, ro_hosts,
- anon_uid, anon_gid, auth_type, options, flags)
- return SimArray._sim_exp_2_lsm(sim_exp)
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Stored simulator state incompatible with "
+ "simulator, please move or delete %s" % self.statefile)
- return self.data.fs_unexport(exp_id, flags)
+ """
+ Raise error if the version does not match.
+ If an empty database is found, initialize it.
+ """
+ # The complex lock workflow is all caused by python sqlite3 doing
+ # an autocommit for the "CREATE TABLE" command.
+ self.trans_begin()
+ self.trans_commit()
+ return
+ self._data_add(
+ 'systems',
+ {
+ 'id': BackStore.SYS_ID,
+ 'name': BackStore.SYS_NAME,
+ 'status': System.STATUS_OK,
+ 'status_info': "",
+ 'version': BackStore.VERSION_SIGNATURE,
+ })
+
+ size_bytes_2t = size_human_2_size_bytes('2TiB')
+ size_bytes_512g = size_human_2_size_bytes('512GiB')
+ # Add 2 SATA disks(2TiB)
+ pool_1_disks = []
+ self._data_add(
+ 'disks',
+ {
+ 'disk_prefix': "2TiB SATA Disk",
+ 'total_space': size_bytes_2t,
+ 'disk_type': Disk.TYPE_SATA,
+ 'status': Disk.STATUS_OK,
+ })
+ pool_1_disks.append(self.lastrowid)
+
+ test_pool_disks = []
+ # Add 6 SAS disks(2TiB)
+ self._data_add(
+ 'disks',
+ {
+ 'disk_prefix': "2TiB SAS Disk",
+ 'total_space': size_bytes_2t,
+ 'disk_type': Disk.TYPE_SAS,
+ 'status': Disk.STATUS_OK,
+ })
+ test_pool_disks.append(self.lastrowid)
+
+ # Add 5 SSD disks(512GiB)
+ self._data_add(
+ 'disks',
+ {
+ 'disk_prefix': "512GiB SSD Disk",
+ 'total_space': size_bytes_512g,
+ 'disk_type': Disk.TYPE_SSD,
+ 'status': Disk.STATUS_OK,
+ })
+ # Add 7 SSD disks(2TiB)
+ self._data_add(
+ 'disks',
+ {
+ 'disk_prefix': "2TiB SSD Disk",
+ 'total_space': size_bytes_2t,
+ 'disk_type': Disk.TYPE_SSD,
+ 'status': Disk.STATUS_OK,
+ })
+
+ pool_1_id = self.sim_pool_create_from_disk(
+ name='Pool 1',
+ raid_type=PoolRAID.RAID_TYPE_RAID1,
+ sim_disk_ids=pool_1_disks,
+ element_type=Pool.ELEMENT_TYPE_POOL |
+ Pool.ELEMENT_TYPE_FS |
+ Pool.ELEMENT_TYPE_VOLUME |
+ Pool.ELEMENT_TYPE_DELTA |
+ Pool.ELEMENT_TYPE_SYS_RESERVED,
+ unsupported_actions=Pool.UNSUPPORTED_VOLUME_GROW |
+ Pool.UNSUPPORTED_VOLUME_SHRINK)
- return AccessGroup(sim_ag['ag_id'], sim_ag['name'], sim_ag['init_ids'],
- sim_ag['init_type'], sim_ag['sys_id'])
+ self.sim_pool_create_sub_pool(
+ name='Pool 2(sub pool of Pool 1)',
+ parent_pool_id=pool_1_id,
+ element_type=Pool.ELEMENT_TYPE_FS |
+ Pool.ELEMENT_TYPE_VOLUME |
+ Pool.ELEMENT_TYPE_DELTA,
+ size=size_bytes_512g)
- sim_ags = self.data.ags()
- return [SimArray._sim_ag_2_lsm(a) for a in sim_ags]
+ self.sim_pool_create_from_disk(
+ name='lsm_test_aggr',
+ element_type=Pool.ELEMENT_TYPE_FS |
+ Pool.ELEMENT_TYPE_VOLUME |
+ Pool.ELEMENT_TYPE_DELTA,
+ raid_type=PoolRAID.RAID_TYPE_RAID0,
+ sim_disk_ids=test_pool_disks)
+
+ self._data_add(
+ 'tgts',
+ {
+ 'port_type': TargetPort.TYPE_FC,
+ 'service_address': '50:0a:09:86:99:4b:8d:c5',
+ 'network_address': '50:0a:09:86:99:4b:8d:c5',
+ 'physical_address': '50:0a:09:86:99:4b:8d:c5',
+ 'physical_name': 'FC_a_0b',
+ })
+
+ self._data_add(
+ 'tgts',
+ {
+ 'port_type': TargetPort.TYPE_FCOE,
+ 'service_address': '50:0a:09:86:99:4b:8d:c6',
+ 'network_address': '50:0a:09:86:99:4b:8d:c6',
+ 'physical_address': '50:0a:09:86:99:4b:8d:c6',
+ 'physical_name': 'FCoE_b_0c',
+ })
+ self._data_add(
+ 'tgts',
+ {
+ 'port_type': TargetPort.TYPE_ISCSI,
+ 'service_address': 'iqn.1986-05.com.example:sim-tgt-03',
+ 'network_address': 'sim-iscsi-tgt-3.example.com:3260',
+ 'physical_address': 'a4:4e:31:47:f4:e0',
+ 'physical_name': 'iSCSI_c_0d',
+ })
+ self._data_add(
+ 'tgts',
+ {
+ 'port_type': TargetPort.TYPE_ISCSI,
+ 'service_address': 'iqn.1986-05.com.example:sim-tgt-03',
+ 'network_address': '10.0.0.1:3260',
+ 'physical_address': 'a4:4e:31:47:f4:e1',
+ 'physical_name': 'iSCSI_c_0e',
+ })
+ self._data_add(
+ 'tgts',
+ {
+ 'port_type': TargetPort.TYPE_ISCSI,
+ 'service_address': 'iqn.1986-05.com.example:sim-tgt-03',
+ 'network_address': '[2001:470:1f09:efe:a64e:31ff::1]:3260',
+ 'physical_address': 'a4:4e:31:47:f4:e1',
+ 'physical_name': 'iSCSI_c_0e',
+ })
+
+ self.trans_commit()
+ return
- sim_ag = self.data.access_group_create(
- name, init_id, init_type, sys_id, flags)
- return SimArray._sim_ag_2_lsm(sim_ag)
+ """
+ Execute a sql command and fetch all output.
+ If key_list is not None, the returned sql data is converted to a
+ list of dictionaries.
+ """
+ sql_cur = self.sql_conn.cursor()
+ sql_cur.execute(sql_cmd)
+ self.lastrowid = sql_cur.lastrowid
+ sql_output = sql_cur.fetchall()
+ return list(
+ dict(zip(key_list, value_list))
+ for value_list in sql_output
+ if value_list)
+ return sql_output
- return self.data.access_group_delete(ag_id, flags)
+ sql_cmd = "SELECT %s FROM %s" % (",".join(key_list), table_name)
+ return self._sql_exec(sql_cmd, key_list)
- sim_ag = self.data.access_group_initiator_add(
- ag_id, init_id, init_type, flags)
- return SimArray._sim_ag_2_lsm(sim_ag)
+ self.sql_conn.execute("BEGIN IMMEDIATE TRANSACTION;")
- def access_group_initiator_delete(self, ag_id, init_id, init_type,
- sim_ag = self.data.access_group_initiator_delete(ag_id, init_id,
- init_type, flags)
- return SimArray._sim_ag_2_lsm(sim_ag)
+ self.sql_conn.commit()
- return self.data.volume_mask(ag_id, vol_id, flags)
+ self.sql_conn.rollback()
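For readers following the locking scheme: python's sqlite3 module (as
of python 2.x) implicitly commits before DDL statements such as
CREATE TABLE unless the connection is put into manual transaction
mode, which is why the explicit BEGIN IMMEDIATE above is needed. A
minimal stand-alone sketch of the same pattern (file name made up):

    import sqlite3

    conn = sqlite3.connect('/tmp/lsm_sim_data', timeout=5)
    conn.isolation_level = None    # manual transaction control
    cur = conn.cursor()
    cur.execute("BEGIN IMMEDIATE TRANSACTION;")  # take write lock now
    try:
        cur.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER);")
        cur.execute("COMMIT;")
    except sqlite3.OperationalError:   # e.g. 'database is locked'
        cur.execute("ROLLBACK;")
        raise

BEGIN IMMEDIATE (rather than the default DEFERRED) makes a concurrent
writer block right away and honor the connect timeout, which matches
the 100-thread lsmcli concurrency test mentioned in the cover letter.
Post by Gris Ge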
- return self.data.volume_unmask(ag_id, vol_id, flags)
+ keys = data_dict.keys()
+ values = data_dict.values()
- sim_vols = self.data.volumes_accessible_by_access_group(ag_id, flags)
- return [SimArray._sim_vol_2_lsm(v) for v in sim_vols]
+ sql_cmd = "INSERT INTO %s (" % table_name
+ sql_cmd += "'%s', " % key
- sim_ags = self.data.access_groups_granted_to_volume(vol_id, flags)
- return [SimArray._sim_ag_2_lsm(a) for a in sim_ags]
+ # Remove trailing ', '
+ sql_cmd = sql_cmd[:-2]
- def iscsi_chap_auth(self, init_id, in_user, in_pass, out_user, out_pass,
- return self.data.iscsi_chap_auth(init_id, in_user, in_pass, out_user,
- out_pass, flags)
+ sql_cmd += ") VALUES ( "
+ value = ''
+ sql_cmd += "'%s', " % value
- return TargetPort(
- sim_tgt['tgt_id'], sim_tgt['port_type'],
- sim_tgt['service_address'], sim_tgt['network_address'],
- sim_tgt['physical_address'], sim_tgt['physical_name'],
- sim_tgt['sys_id'])
+ # Remove trailing ', '
+ sql_cmd = sql_cmd[:-2]
- sim_tgts = self.data.target_ports()
- return [SimArray._sim_tgt_2_lsm(t) for t in sim_tgts]
+ sql_cmd += ");"
+ self._sql_exec(sql_cmd)
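Side note: the hand-built quoting above works for the simulator's own
data, but sqlite3 can do the escaping itself via qmark placeholders.
A hedged sketch of the same insert helper (not part of this patch):

    def _data_add(self, table_name, data_dict):
        keys = list(data_dict.keys())
        sql_cmd = "INSERT INTO %s (%s) VALUES (%s);" % (
            table_name,
            ", ".join(keys),
            ", ".join(['?'] * len(keys)))  # one placeholder per column
        sql_cur = self.sql_conn.execute(
            sql_cmd, [data_dict[k] for k in keys])
        self.lastrowid = sql_cur.lastrowid

Values passed as the second argument to execute() never need manual
quoting, which also avoids breakage on values containing a single
quote.
Post by Gris Ge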
+ sql_cmd = "SELECT %s FROM %s WHERE %s" % (
+ ",".join(key_list), table, condition)
+ sim_datas = self._sql_exec(sql_cmd, key_list)
+ return None
+ return sim_datas[0]
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "_data_find(): Got non-unique data: %s" % locals())
+ return sim_datas
- """
- * we don't store the same data twice
- * we don't store data which could be calculated out
-
- self.vol_dict = {
- Volume.id = sim_vol,
- }
-
- sim_vol = {
- 'vol_id': self._next_vol_id()
- 'vpd83': SimData._random_vpd(),
- 'name': vol_name,
- 'total_space': size_bytes,
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- 'pool_id': owner_pool_id,
- 'consume_size': size_bytes,
- 'admin_state': Volume.ADMIN_STATE_ENABLED,
- 'replicate': {
- dst_vol_id = [
- {
- 'src_start_blk': src_start_blk,
- 'dst_start_blk': dst_start_blk,
- 'blk_count': blk_count,
- 'rep_type': Volume.REPLICATE_XXXX,
- },
- ],
- },
- 'mask': {
- ag_id = Volume.ACCESS_READ_WRITE|Volume.ACCESS_READ_ONLY,
- },
- 'mask_init': {
- init_id = Volume.ACCESS_READ_WRITE|Volume.ACCESS_READ_ONLY,
- }
- }
-
- self.ag_dict ={
- AccessGroup.id = sim_ag,
- }
- sim_ag = {
- 'init_ids': [init_id,],
- 'init_type': AccessGroup.init_type,
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- 'name': name,
- 'ag_id': self._next_ag_id()
- }
-
- self.fs_dict = {
- FileSystem.id = sim_fs,
- }
- sim_fs = {
- 'fs_id': self._next_fs_id(),
- 'name': fs_name,
- 'total_space': size_bytes,
- 'free_space': size_bytes,
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- 'pool_id': pool_id,
- 'consume_size': size_bytes,
- 'clone': {
- dst_fs_id: {
- 'snap_id': snap_id, # None if no snapshot
- 'files': [ file_path, ] # [] if all files cloned.
- },
- },
- 'snaps' = [snap_id, ],
- }
- self.snap_dict = {
- Snapshot.id: sim_snap,
- }
- sim_snap = {
- 'snap_id': self._next_snap_id(),
- 'name': snap_name,
- 'fs_id': fs_id,
- 'files': [file_path, ],
- 'timestamp': time.time(),
- }
- self.exp_dict = {
- Export.id: sim_exp,
- }
- sim_exp = {
- 'exp_id': self._next_exp_id(),
- 'fs_id': fs_id,
- 'exp_path': exp_path,
- 'auth_type': auth_type,
- 'root_hosts': [root_host, ],
- 'rw_hosts': [rw_host, ],
- 'ro_hosts': [ro_host, ],
- 'anon_uid': anon_uid,
- 'anon_gid': anon_gid,
- 'options': [option, ],
- }
-
- self.pool_dict = {
- Pool.id: sim_pool,
- }
- sim_pool = {
- 'name': pool_name,
- 'pool_id': Pool.id,
- 'raid_type': PoolRAID.RAID_TYPE_XXXX,
- 'member_ids': [ disk_id or pool_id or volume_id ],
- 'member_type': PoolRAID.MEMBER_TYPE_XXXX,
- 'member_size': size_bytes # space allocated from each member pool.
- # only for MEMBER_TYPE_POOL
- 'status': SIM_DATA_POOL_STATUS,
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- 'element_type': SimData.SIM_DATA_POOL_ELEMENT_TYPE,
- }
- """
- SIM_DATA_BLK_SIZE = 512
- SIM_DATA_VERSION = "2.9"
- SIM_DATA_SYS_ID = 'sim-01'
- SIM_DATA_INIT_NAME = 'NULL'
- SIM_DATA_TMO = 30000 # ms
- SIM_DATA_POOL_STATUS = Pool.STATUS_OK
- SIM_DATA_POOL_STATUS_INFO = ''
- SIM_DATA_POOL_ELEMENT_TYPE = Pool.ELEMENT_TYPE_FS \
- | Pool.ELEMENT_TYPE_POOL \
- | Pool.ELEMENT_TYPE_VOLUME
-
- SIM_DATA_POOL_UNSUPPORTED_ACTIONS = 0
-
- SIM_DATA_SYS_POOL_ELEMENT_TYPE = SIM_DATA_POOL_ELEMENT_TYPE \
- | Pool.ELEMENT_TYPE_SYS_RESERVED
-
- SIM_DATA_CUR_VOL_ID = 0
- SIM_DATA_CUR_POOL_ID = 0
- SIM_DATA_CUR_FS_ID = 0
- SIM_DATA_CUR_AG_ID = 0
- SIM_DATA_CUR_SNAP_ID = 0
- SIM_DATA_CUR_EXP_ID = 0
-
- self.SIM_DATA_CUR_POOL_ID += 1
- return "POOL_ID_%08d" % self.SIM_DATA_CUR_POOL_ID
-
- self.SIM_DATA_CUR_VOL_ID += 1
- return "VOL_ID_%08d" % self.SIM_DATA_CUR_VOL_ID
-
- self.SIM_DATA_CUR_FS_ID += 1
- return "FS_ID_%08d" % self.SIM_DATA_CUR_FS_ID
-
- self.SIM_DATA_CUR_AG_ID += 1
- return "AG_ID_%08d" % self.SIM_DATA_CUR_AG_ID
-
- self.SIM_DATA_CUR_SNAP_ID += 1
- return "SNAP_ID_%08d" % self.SIM_DATA_CUR_SNAP_ID
-
- self.SIM_DATA_CUR_EXP_ID += 1
- return "EXP_ID_%08d" % self.SIM_DATA_CUR_EXP_ID
+ value = ''
- return 'LSM_SIMULATOR_DATA_%s' % md5(SimData.SIM_DATA_VERSION)
+ sql_cmd = "UPDATE %s SET %s='%s' WHERE id='%s'" % \
+ (table, column_name, value, data_id)
- return "DISK_ID_%0*d" % (D_FMT, num)
-
- self.tmo = SimData.SIM_DATA_TMO
- self.version = SimData.SIM_DATA_VERSION
- self.signature = SimData.state_signature()
- self.job_num = 0
- self.job_dict = {
- # id: SimJob
- }
- self.syss = [
- System(SimData.SIM_DATA_SYS_ID, 'LSM simulated storage plug-in',
- System.STATUS_OK, '')]
- pool_size_200g = size_human_2_size_bytes('200GiB')
- self.pool_dict = {
- 'POO1': dict(
- pool_id='POO1', name='Pool 1',
- member_type=PoolRAID.MEMBER_TYPE_DISK_SATA,
- member_ids=[SimData._disk_id(0), SimData._disk_id(1)],
- raid_type=PoolRAID.RAID_TYPE_RAID1,
- status=SimData.SIM_DATA_POOL_STATUS,
- status_info=SimData.SIM_DATA_POOL_STATUS_INFO,
- sys_id=SimData.SIM_DATA_SYS_ID,
- element_type=SimData.SIM_DATA_SYS_POOL_ELEMENT_TYPE,
- unsupported_actions=Pool.UNSUPPORTED_VOLUME_GROW |
- Pool.UNSUPPORTED_VOLUME_SHRINK),
- 'POO2': dict(
- pool_id='POO2', name='Pool 2',
- member_type=PoolRAID.MEMBER_TYPE_POOL,
- member_ids=['POO1'], member_size=pool_size_200g,
- raid_type=PoolRAID.RAID_TYPE_NOT_APPLICABLE,
- status=Pool.STATUS_OK,
- status_info=SimData.SIM_DATA_POOL_STATUS_INFO,
- sys_id=SimData.SIM_DATA_SYS_ID,
- element_type=SimData.SIM_DATA_POOL_ELEMENT_TYPE,
- unsupported_actions=SimData.SIM_DATA_POOL_UNSUPPORTED_ACTIONS),
- # lsm_test_aggr pool is required by test/runtest.sh
- 'lsm_test_aggr': dict(
- pool_id='lsm_test_aggr',
- name='lsm_test_aggr',
- member_type=PoolRAID.MEMBER_TYPE_DISK_SAS,
- member_ids=[SimData._disk_id(2), SimData._disk_id(3)],
- raid_type=PoolRAID.RAID_TYPE_RAID0,
- status=Pool.STATUS_OK,
- status_info=SimData.SIM_DATA_POOL_STATUS_INFO,
- sys_id=SimData.SIM_DATA_SYS_ID,
- element_type=SimData.SIM_DATA_POOL_ELEMENT_TYPE,
- unsupported_actions=SimData.SIM_DATA_POOL_UNSUPPORTED_ACTIONS),
- }
- self.vol_dict = {
- }
- self.fs_dict = {
- }
- self.snap_dict = {
- }
- self.exp_dict = {
- }
- disk_size_2t = size_human_2_size_bytes('2TiB')
- disk_size_512g = size_human_2_size_bytes('512GiB')
-
- # Make disks in a loop so that we can create many disks easily
- # if we wish
- self.disk_dict = {}
-
- d_id = SimData._disk_id(i)
- d_size = disk_size_2t
- d_name = "SATA Disk %0*d" % (D_FMT, i)
- d_type = Disk.TYPE_SATA
-
- d_name = "SAS Disk %0*d" % (D_FMT, i)
- d_type = Disk.TYPE_SAS
- d_name = "SSD Disk %0*d" % (D_FMT, i)
- d_type = Disk.TYPE_SSD
- d_size = disk_size_512g
-
- self.disk_dict[d_id] = dict(disk_id=d_id, name=d_name,
- total_space=d_size,
- disk_type=d_type,
- sys_id=SimData.SIM_DATA_SYS_ID)
-
- self.ag_dict = {
- }
- # Create some volumes, fs and etc
- self.volume_create(
- 'POO1', 'Volume 000', size_human_2_size_bytes('200GiB'),
- Volume.PROVISION_DEFAULT)
- self.volume_create(
- 'POO1', 'Volume 001', size_human_2_size_bytes('200GiB'),
- Volume.PROVISION_DEFAULT)
-
- self.pool_dict['POO3'] = {
- 'pool_id': 'POO3',
- 'name': 'Pool 3',
- 'member_type': PoolRAID.MEMBER_TYPE_DISK_SSD,
- 'member_ids': [
- self.disk_dict[SimData._disk_id(9)]['disk_id'],
- self.disk_dict[SimData._disk_id(10)]['disk_id'],
- ],
- 'raid_type': PoolRAID.RAID_TYPE_RAID1,
- 'status': Pool.STATUS_OK,
- 'status_info': SimData.SIM_DATA_POOL_STATUS_INFO,
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- 'element_type': SimData.SIM_DATA_POOL_ELEMENT_TYPE,
- 'unsupported_actions': SimData.SIM_DATA_POOL_UNSUPPORTED_ACTIONS
- }
-
- self.tgt_dict = {
- 'TGT_PORT_ID_01': {
- 'tgt_id': 'TGT_PORT_ID_01',
- 'port_type': TargetPort.TYPE_FC,
- 'service_address': '50:0a:09:86:99:4b:8d:c5',
- 'network_address': '50:0a:09:86:99:4b:8d:c5',
- 'physical_address': '50:0a:09:86:99:4b:8d:c5',
- 'physical_name': 'FC_a_0b',
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- },
- 'TGT_PORT_ID_02': {
- 'tgt_id': 'TGT_PORT_ID_02',
- 'port_type': TargetPort.TYPE_FCOE,
- 'service_address': '50:0a:09:86:99:4b:8d:c6',
- 'network_address': '50:0a:09:86:99:4b:8d:c6',
- 'physical_address': '50:0a:09:86:99:4b:8d:c6',
- 'physical_name': 'FCoE_b_0c',
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- },
- 'TGT_PORT_ID_03': {
- 'tgt_id': 'TGT_PORT_ID_03',
- 'port_type': TargetPort.TYPE_ISCSI,
- 'service_address': 'iqn.1986-05.com.example:sim-tgt-03',
- 'network_address': 'sim-iscsi-tgt-3.example.com:3260',
- 'physical_address': 'a4:4e:31:47:f4:e0',
- 'physical_name': 'iSCSI_c_0d',
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- },
- 'TGT_PORT_ID_04': {
- 'tgt_id': 'TGT_PORT_ID_04',
- 'port_type': TargetPort.TYPE_ISCSI,
- 'service_address': 'iqn.1986-05.com.example:sim-tgt-03',
- 'network_address': '10.0.0.1:3260',
- 'physical_address': 'a4:4e:31:47:f4:e1',
- 'physical_name': 'iSCSI_c_0e',
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- },
- 'TGT_PORT_ID_05': {
- 'tgt_id': 'TGT_PORT_ID_05',
- 'port_type': TargetPort.TYPE_ISCSI,
- 'service_address': 'iqn.1986-05.com.example:sim-tgt-03',
- 'network_address': '[2001:470:1f09:efe:a64e:31ff::1]:3260',
- 'physical_address': 'a4:4e:31:47:f4:e1',
- 'physical_name': 'iSCSI_c_0e',
- 'sys_id': SimData.SIM_DATA_SYS_ID,
- },
- }
+ self._sql_exec(sql_cmd)
- return
+ sql_cmd = "DELETE FROM %s WHERE %s;" % (table, condition)
+ self._sql_exec(sql_cmd)
"""
- Calculate out the free size of certain pool.
+ Return a job id (integer)
"""
- free_space = self.pool_total_space(pool_id)
- continue
- return 0
- free_space -= sim_vol['consume_size']
- continue
- return 0
- free_space -= sim_fs['consume_size']
- continue
- free_space -= sim_pool['member_size']
- return free_space
+ self._data_add(
+ "jobs",
+ {
+ "duration": os.getenv(
+ "LSM_SIM_TIME", BackStore.JOB_DEFAULT_DURATION),
+ "timestamp": time.time(),
+ "data_type": job_data_type,
+ "data_id": data_id,
+ })
+ return self.lastrowid
+
+ self._data_delete('jobs', 'id="%s"' % sim_job_id)
+
+ """
+ Return (progress, data_type, data) tuple.
+ progress is an integer percentage.
+ """
+ sim_job = self._data_find(
+ 'jobs', 'id=%s' % sim_job_id, BackStore.JOB_KEY_LIST,
+ flag_unique=True)
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_JOB, "Job not found")
+
+ progress = int(
+ (time.time() - float(sim_job['timestamp'])) /
+ sim_job['duration'] * 100)
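A quick worked example of the formula above: for a job with
duration=30 (seconds) created 15 seconds ago,

    progress = int(15.0 / 30 * 100)   # == 50

and once the elapsed time reaches the duration, the branch below
clamps progress to 100.
Post by Gris Ge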
+
+ data = None
+ data_type = None
+
+ progress = 100
+ data = self.sim_vol_of_id(sim_job['data_id'])
+ data_type = sim_job['data_type']
+ data = self.sim_fs_of_id(sim_job['data_id'])
+ data_type = sim_job['data_type']
+ data = self.sim_fs_snap_of_id(sim_job['data_id'])
+ data_type = sim_job['data_type']
+
+ return (progress, data_type, data)
+
+ """
+ Return a list of sim_sys dict.
+ """
+ return self._get_table('systems', BackStore.SYS_KEY_LIST)
"""
- Generate a random 16 digit number as hex
+ Return a list of sim_disk dict.
"""
- vpd = []
- vpd.append(str('%02x' % (random.randint(0, 255))))
- return "".join(vpd)
-
- def _size_of_raid(self, member_type, member_ids, raid_type,
- member_sizes = []
- member_sizes.extend([self.disk_dict[member_id]['total_space']])
-
- member_sizes.extend([pool_each_size])
+ sim_disks = self._get_table('disks', BackStore.DISK_KEY_LIST)
- raise LsmError(ErrorNumber.PLUGIN_BUG,
- "Got unsupported member_type in _size_of_raid()" +
- ": %d" % member_type)
+ sim_disk['name'] = "%s_%s" % (
+ sim_disk['disk_prefix'], sim_disk['id'])
+ del sim_disk['disk_prefix']
+ return sim_disks
+
all_size = 0
member_size = 0
- member_count = len(member_ids)
+ member_count = len(member_sizes)
all_size += member_size
return 0
- print "%s" % size_bytes_2_size_human(all_size)
- print "%s" % size_bytes_2_size_human(member_size)
return int(all_size / 2 - member_size * 2)
- raise LsmError(ErrorNumber.PLUGIN_BUG,
- "_size_of_raid() got invalid raid type: " +
- "%s(%d)" % (Pool.raid_type_to_str(raid_type),
- raid_type))
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "_size_of_raid() got invalid raid type: %d" % raid_type)
"""
Find out the correct size of RAID pool
"""
- sim_pool = self.pool_dict[pool_id]
- each_pool_size_bytes = 0
- member_type = sim_pool['member_type']
- each_pool_size_bytes = sim_pool['member_size']
-
- return self._size_of_raid(
- member_type, sim_pool['member_ids'], sim_pool['raid_type'],
- each_pool_size_bytes)
-
- return (size_bytes + SimData.SIM_DATA_BLK_SIZE - 1) / \
- SimData.SIM_DATA_BLK_SIZE * \
- SimData.SIM_DATA_BLK_SIZE
-
- self.job_num += 1
- job_id = "JOB_%s" % self.job_num
- self.job_dict[job_id] = SimJob(returned_item)
- return job_id, None
+ return sim_pool['total_space']
+
+ member_sizes = list(
+ i['total_space']
+ for i in self._data_find(
+ 'disks', "owner_pool_id=%d;" % sim_pool['id'],
+ ['total_space']))
- return None, returned_item
-
- return self.job_dict[job_id].progress()
- raise LsmError(ErrorNumber.NOT_FOUND_JOB,
- 'Non-existent job: %s' % job_id)
+ member_sizes = [d['total_space'] for d in sim_disks]
- del(self.job_dict[job_id])
- return
- raise LsmError(ErrorNumber.NOT_FOUND_JOB,
- 'Non-existent job: %s' % job_id)
+ return BackStore._size_of_raid(sim_pool['raid_type'], member_sizes)
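Most of _size_of_raid() is elided in this hunk, so for reference, a
hedged sketch of the usual capacity math it would implement (based on
the smallest member, since mixed-size members are possible):

    member_size = min(member_sizes)
    member_count = len(member_sizes)
    all_size = member_size * member_count
    # RAID 0:    all_size
    # RAID 1/10: all_size / 2
    # RAID 5:    member_size * (member_count - 1)
    # RAID 6:    member_size * (member_count - 2)
Post by Gris Ge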
- self.tmo = ms
- return None
+ def _sim_pool_free_space(self, sim_pool, sim_pools=None, sim_vols=None,
+ """
+ Calculate the free space of the given pool.
+ """
+ sim_pool_id = sim_pool['id']
+ consumed_sizes = []
+ consumed_sizes.extend(
+ list(
+ p['total_space']
+ for p in self._data_find(
+ 'pools', "parent_pool_id=%d" % sim_pool_id,
+ ['total_space'])))
+ consumed_sizes.extend(
+ list(
+ p['total_space'] for p in sim_pools
+ if p['parent_pool_id'] == sim_pool_id))
+
+ consumed_sizes.extend(
+ list(
+ f['consumed_size']
+ for f in self._data_find(
+ 'fss', "pool_id=%d" % sim_pool_id, ['consumed_size'])))
+ consumed_sizes.extend(
+ list(
+ f['consumed_size'] for f in sim_fss
+ if f['pool_id'] == sim_pool_id))
+
+ consumed_sizes.extend(
+ list(
+ v['consumed_size']
+ for v in self._data_find(
+ 'volumes', "pool_id=%d" % sim_pool_id,
+ ['consumed_size'])))
+ consumed_sizes.extend(
+ list(
+ v['consumed_size']
+ for v in sim_vols
+ if v['pool_id'] == sim_pool_id))
+
+ free_space = sim_pool.get(
+ 'total_space', self._sim_pool_total_space(sim_pool))
+ raise LsmError(
+ ErrorNumber.INVALID_ARGUMENT,
+ "Stored simulator state corrupted: free_space of pool "
+ "%d is less than 0, please move or delete %s" %
+ (sim_pool_id, self.statefile))
+ free_space -= consumed_size
- return self.tmo
+ return free_space
- return self.syss
+ """
+ Return a list of sim_pool dict.
+ """
+ sim_pools = self._get_table('pools', BackStore.POOL_KEY_LIST)
+ sim_disks = self.sim_disks()
+ sim_vols = self.sim_vols()
+ sim_fss = self.sim_fss()
+ # Calculate total_space
+ sim_pool['total_space'] = self._sim_pool_total_space(
+ sim_pool, sim_disks)
+ sim_pool['free_space'] = self._sim_pool_free_space(
+ sim_pool, sim_pools, sim_vols, sim_fss)
+
+ return sim_pools
+
+ sim_pool = self._sim_data_of_id(
+ "pools", sim_pool_id, BackStore.POOL_KEY_LIST,
+ ErrorNumber.NOT_FOUND_POOL, "Pool")
+ sim_pool['total_space'] = self._sim_pool_total_space(
+ sim_pool)
+ sim_pool['free_space'] = self._sim_pool_free_space(sim_pool)
+ return sim_pool
+
+ def sim_pool_create_from_disk(self, name, sim_disk_ids, raid_type,
+ # Detect disk type
+ disk_type = None
+ disk_type = sim_disk['disk_type']
+ disk_type = None
+ break
+ member_type = PoolRAID.MEMBER_TYPE_DISK
+ member_type = PoolRAID.disk_type_to_member_type(disk_type)
+
+ self._data_add(
+ 'pools',
+ {
+ 'name': name,
+ 'status': Pool.STATUS_OK,
+ 'status_info': '',
+ 'element_type': element_type,
+ 'unsupported_actions': unsupported_actions,
+ 'raid_type': raid_type,
+ 'member_type': member_type,
+ })
+
+ # update disk owner
+ sim_pool_id = self.lastrowid
+ self._data_update(
+ 'disks', sim_disk_id, 'owner_pool_id', sim_pool_id)
+ return sim_pool_id
+
+ def sim_pool_create_sub_pool(self, name, parent_pool_id, size,
+ self._data_add(
+ 'pools',
+ {
+ 'name': name,
+ 'status': Pool.STATUS_OK,
+ 'status_info': '',
+ 'element_type': element_type,
+ 'unsupported_actions': unsupported_actions,
+ 'raid_type': PoolRAID.RAID_TYPE_NOT_APPLICABLE,
+ 'member_type': PoolRAID.MEMBER_TYPE_POOL,
+ 'parent_pool_id': parent_pool_id,
+ 'total_space': size,
+ })
+ # update disk owner
+ return self.lastrowid
+
+ """
+ Return a list of sim_vol dict.
+ """
+ all_sim_vols = self._get_table('volumes', BackStore.VOL_KEY_LIST)
+ self.sim_ag_of_id(sim_ag_id)
+ sim_vol_ids = self._sim_vol_ids_of_masked_ag(sim_ag_id)
+ return list(
+ v for v in all_sim_vols
+ if v['id'] in sim_vol_ids)
+ return all_sim_vols
+
+ def _sim_data_of_id(self, table_name, data_id, key_list, lsm_error_no,
+ sim_data = self._data_find(
+ table_name, 'id=%s' % data_id, key_list, flag_unique=True)
+ raise LsmError(
+ lsm_error_no, "%s not found" % data_name)
+ return None
+ return sim_data
- return self.pool_dict.values()
+ """
+ Return sim_vol if found. Raise error if not found.
+ """
+ return self._sim_data_of_id(
+ "volumes", sim_vol_id, BackStore.VOL_KEY_LIST,
+ ErrorNumber.NOT_FOUND_VOLUME,
+ "Volume")
- return self.vol_dict.values()
+ sim_pool = self.sim_pool_of_id(sim_pool_id)
- return self.disk_dict.values()
-
- return self.ag_dict.values()
-
- self._check_dup_name(
- self.vol_dict.values(), vol_name, ErrorNumber.NAME_CONFLICT)
- size_bytes = SimData._block_rounding(size_bytes)
- # check free size
- free_space = self.pool_free_space(pool_id)
raise LsmError(ErrorNumber.NOT_ENOUGH_SPACE,
"Insufficient space in pool")
+
+ return (size_bytes + BackStore.BLK_SIZE - 1) / \
+ BackStore.BLK_SIZE * BackStore.BLK_SIZE
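That rounds a requested size up to the next block boundary, e.g. with
BLK_SIZE = 512 a request of 1000 bytes becomes
(1000 + 511) / 512 * 512 = 1024 under integer division.
Post by Gris Ge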
+
+ size_bytes = BackStore._block_rounding(size_bytes)
+ self._check_pool_free_space(sim_pool_id, size_bytes)
sim_vol = dict()
- sim_vol['vol_id'] = self._next_vol_id()
- sim_vol['vpd83'] = SimData._random_vpd()
- sim_vol['name'] = vol_name
- sim_vol['total_space'] = size_bytes
+ sim_vol['vpd83'] = _random_vpd()
+ sim_vol['name'] = name
sim_vol['thinp'] = thinp
- sim_vol['sys_id'] = SimData.SIM_DATA_SYS_ID
- sim_vol['pool_id'] = pool_id
- sim_vol['consume_size'] = size_bytes
+ sim_vol['pool_id'] = sim_pool_id
+ sim_vol['total_space'] = size_bytes
+ sim_vol['consumed_size'] = size_bytes
sim_vol['admin_state'] = Volume.ADMIN_STATE_ENABLED
- self.vol_dict[sim_vol['vol_id']] = sim_vol
- return sim_vol
- if 'mask' in self.vol_dict[vol_id].keys() and \
- raise LsmError(ErrorNumber.IS_MASKED,
- "Volume is masked to access group")
- del(self.vol_dict[vol_id])
- return None
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such volume: %s" % vol_id)
+ self._data_add("volumes", sim_vol)
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+ "Name '%s' is already in use by other volume" % name)
+ raise
- new_size_bytes = SimData._block_rounding(new_size_bytes)
- pool_id = self.vol_dict[vol_id]['pool_id']
- free_space = self.pool_free_space(pool_id)
- raise LsmError(ErrorNumber.NOT_ENOUGH_SPACE,
- "Insufficient space in pool")
-
- raise LsmError(ErrorNumber.NO_STATE_CHANGE, "New size same "
- "as current")
-
- self.vol_dict[vol_id]['total_space'] = new_size_bytes
- self.vol_dict[vol_id]['consume_size'] = new_size_bytes
- return self.vol_dict[vol_id]
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such volume: %s" % vol_id)
+ return self.lastrowid
- def volume_replicate(self, dst_pool_id, rep_type, src_vol_id, new_vol_name,
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such volume: %s" % src_vol_id)
-
- self._check_dup_name(self.vol_dict.values(), new_vol_name,
- ErrorNumber.NAME_CONFLICT)
-
- size_bytes = self.vol_dict[src_vol_id]['total_space']
- size_bytes = SimData._block_rounding(size_bytes)
- # check free size
- free_space = self.pool_free_space(dst_pool_id)
- raise LsmError(ErrorNumber.NOT_ENOUGH_SPACE,
- "Insufficient space in pool")
- sim_vol = dict()
- sim_vol['vol_id'] = self._next_vol_id()
- sim_vol['vpd83'] = SimData._random_vpd()
- sim_vol['name'] = new_vol_name
- sim_vol['total_space'] = size_bytes
- sim_vol['sys_id'] = SimData.SIM_DATA_SYS_ID
- sim_vol['pool_id'] = dst_pool_id
- sim_vol['consume_size'] = size_bytes
- sim_vol['admin_state'] = Volume.ADMIN_STATE_ENABLED
- self.vol_dict[sim_vol['vol_id']] = sim_vol
+ """
+ This does not check whether the volume exists.
+ """
+ # Check existence.
+ self.sim_vol_of_id(sim_vol_id)
+
+ raise LsmError(
+ ErrorNumber.IS_MASKED,
+ "Volume is masked to access group")
+
+ dst_sim_vol_ids = self.dst_sim_vol_ids_of_src(sim_vol_id)
+ # Don't raise error on volume internal replication.
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Requested volume is a replication source")
+
+ self._data_delete("volumes", 'id="%s"' % sim_vol_id)
+ # break replication if requested volume is a target volume
+ self._data_delete('vol_rep_ranges', 'dst_vol_id="%s"' % sim_vol_id)
+ self._data_delete('vol_reps', 'dst_vol_id="%s"' % sim_vol_id)
+
+ self.sim_vol_of_id(sim_vol_id)
+ self.sim_ag_of_id(sim_ag_id)
+ hash_id = md5("%d%d" % (sim_vol_id, sim_ag_id))
+ self._data_add(
+ "vol_masks",
+ {'id': hash_id, 'ag_id': sim_ag_id, 'vol_id': sim_vol_id})
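Minor nit: md5("%d%d") is ambiguous, e.g. (1, 23) and (12, 3) both
hash the string "123"; a separator such as "%d_%d" would make the
mask id unique.
Post by Gris Ge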
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Volume is already masked to requested access group")
+ raise
- dst_vol_id = sim_vol['vol_id']
- self.vol_dict[src_vol_id]['replicate'] = dict()
+ return None
- self.vol_dict[src_vol_id]['replicate'][dst_vol_id] = list()
+ self.sim_vol_of_id(sim_vol_id)
+ self.sim_ag_of_id(sim_ag_id)
+ hash_id = md5("%d%d" % (sim_vol_id, sim_ag_id))
+ self._data_delete('vol_masks', 'id="%s"' % hash_id)
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Volume is not masked to requested access group")
+ return None
- sim_rep = {
- 'rep_type': rep_type,
- 'src_start_blk': 0,
- 'dst_start_blk': 0,
- 'blk_count': size_bytes,
- }
- self.vol_dict[src_vol_id]['replicate'][dst_vol_id].extend(
- [sim_rep])
+ return list(
+ m['vol_id'] for m in self._data_find(
+ 'vol_masks', 'ag_id="%s"' % sim_ag_id, ['vol_id']))
+
+ return list(
+ m['ag_id'] for m in self._data_find(
+ 'vol_masks', 'vol_id="%s"' % sim_vol_id, ['ag_id']))
+
+ org_new_size_bytes = new_size_bytes
+ new_size_bytes = BackStore._block_rounding(new_size_bytes)
+ sim_vol = self.sim_vol_of_id(sim_vol_id)
+ # Even if the rounded size is identical to the current volume
+ # size, it is not what the user requested, hence we silently pass.
+ return
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Volume size is identical to requested")
- return sim_vol
+ sim_pool = self.sim_pool_of_id(sim_vol['pool_id'])
- return SimData.SIM_DATA_BLK_SIZE
+ increment = new_size_bytes - sim_vol['total_space']
- def volume_replicate_range(self, rep_type, src_vol_id, dst_vol_id, ranges,
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % src_vol_id)
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % dst_vol_id)
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Requested pool does not allow volume size grow")
- sim_reps = []
- sim_rep = dict()
- sim_rep['rep_type'] = rep_type
- sim_rep['src_start_blk'] = rep_range.src_block
- sim_rep['dst_start_blk'] = rep_range.dest_block
- sim_rep['blk_count'] = rep_range.block_count
- sim_reps.extend([sim_rep])
+ raise LsmError(
+ ErrorNumber.NOT_ENOUGH_SPACE, "Insufficient space in pool")
- self.vol_dict[src_vol_id]['replicate'] = dict()
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Requested pool does not allow volume size grow")
- self.vol_dict[src_vol_id]['replicate'][dst_vol_id] = list()
+ # TODO: If a volume is in a replication relationship, resize should
+ # be handled properly.
Should this even be allowed?
Post by Gris Ge
+ self._data_update(
+ 'volumes', sim_vol_id, "total_space", new_size_bytes)
+ self._data_update(
+ 'volumes', sim_vol_id, "consumed_size", new_size_bytes)
- self.vol_dict[src_vol_id]['replicate'][dst_vol_id].extend(
- [sim_reps])
+ """
+ [{
+ 'id': md5 of src_sim_vol_id and dst_sim_vol_id,
+ 'src_start_blk': int,
+ 'dst_start_blk': int,
+ 'blk_count': int,
+ }]
+ """
+ return self._data_find(
+ 'vol_rep_ranges', 'id="%s"' % rep_id,
+ ['id', 'src_start_blk', 'dst_start_blk', 'blk_count'])
- return None
+ """
+ Return a list of dst_vol_id for provided source volume ID.
+ """
+ self.sim_vol_of_id(src_sim_vol_id)
+ return list(
+ d['dst_vol_id'] for d in self._data_find(
+ 'vol_reps', 'src_vol_id="%s"' % src_sim_vol_id,
+ ['dst_vol_id']))
+
+ def sim_vol_replica(self, src_sim_vol_id, dst_sim_vol_id, rep_type,
+ self.sim_vol_of_id(src_sim_vol_id)
+ self.sim_vol_of_id(dst_sim_vol_id)
+
+ # TODO: Use consumed_size < total_space to reflect the CLONE type.
+ cur_src_sim_vol_ids = list(
+ r['src_vol_id'] for r in self._data_find(
+ 'vol_reps', 'dst_vol_id="%s"' % dst_sim_vol_id,
+ ['src_vol_id']))
+ if len(cur_src_sim_vol_ids) == 1 and \
+ # src and dst match. Maybe the user is overriding an old setting.
+ pass
+ pass
+ # TODO: Need to introduce new API error
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Target volume is already a replication target for other "
+ "source volume")
+
+ self._data_add(
+ 'vol_reps',
+ {
+ 'src_vol_id': src_sim_vol_id,
+ 'dst_vol_id': dst_sim_vol_id,
+ 'rep_type': rep_type,
+ })
+ # Remove old replicate ranges before adding new ones
+ self._data_delete(
+ 'vol_rep_ranges',
+ 'src_vol_id="%s" AND dst_vol_id="%s"' %
+ (src_sim_vol_id, dst_sim_vol_id))
+
+ self._data_add(
+ 'vol_rep_ranges',
+ {
+ 'src_vol_id': src_sim_vol_id,
+ 'dst_vol_id': dst_sim_vol_id,
+ 'src_start_blk': blk_range.src_block,
+ 'dst_start_blk': blk_range.dest_block,
+ 'blk_count': blk_range.block_count,
+ })
+
+
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Provided volume is not a replication source")
+
+ self._data_delete(
+ 'vol_rep_ranges', 'src_vol_id="%s"' % src_sim_vol_id)
+ self._data_delete(
+ 'vol_reps', 'src_vol_id="%s"' % src_sim_vol_id)
+
+ sim_vol = self.sim_vol_of_id(sim_vol_id)
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Volume admin state is identical to requested")
+
+ self._data_update(
+ 'volumes', sim_vol_id, "admin_state", new_admin_state)
+ """
+ Update 'init_type' and 'init_ids' of sim_ag
+ """
+ sim_ag['init_ids'] = list(i['id'] for i in sim_inits)
+ init_type_set = set(
+ list(sim_init['init_type'] for sim_init in sim_inits))
+
+ sim_ag['init_type'] = AccessGroup.INIT_TYPE_UNKNOWN
+ sim_ag['init_type'] = init_type_set.pop()
+ sim_ag['init_type'] = AccessGroup.INIT_TYPE_ISCSI_WWPN_MIXED
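(Reading through the elided branches: an empty init_type_set
presumably maps to INIT_TYPE_UNKNOWN, a single entry to that type,
and more than one entry to INIT_TYPE_ISCSI_WWPN_MIXED.)
Post by Gris Ge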
+
+ sim_ags = self._get_table('ags', BackStore.AG_KEY_LIST)
+ sim_inits = self._get_table('inits', BackStore.INIT_KEY_LIST)
+ self.sim_vol_of_id(sim_vol_id)
+ sim_ag_ids = self._sim_ag_ids_of_masked_vol(sim_vol_id)
+ sim_ags = list(a for a in sim_ags if a['id'] in sim_ag_ids)
+
+ owned_sim_inits = list(
+ i for i in sim_inits if i['owner_ag_id'] == sim_ag['id'])
+ BackStore._sim_ag_fill_init_info(sim_ag, owned_sim_inits)
+
+ return sim_ags
+
- if self.vol_dict[vol_id]['admin_state'] == \
- raise LsmError(ErrorNumber.NO_STATE_CHANGE,
- "Volume is already enabled")
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % vol_id)
-
- self.vol_dict[vol_id]['admin_state'] = Volume.ADMIN_STATE_ENABLED
+ self._data_add(
+ "inits",
+ {
+ 'id': init_id,
+ 'init_type': init_type,
+ 'owner_ag_id': sim_ag_id
+ })
+ raise LsmError(
+ ErrorNumber.EXISTS_INITIATOR,
+ "Initiator '%s' is already in use by other access group" %
+ init_id)
+ raise
+
+ def iscsi_chap_auth_set(self, init_id, in_user, in_pass, out_user,
+ # Currently, there is no API method to query status of iscsi CHAP.
return None
- if self.vol_dict[vol_id]['admin_state'] == \
- raise LsmError(ErrorNumber.NO_STATE_CHANGE,
- "Volume is already disabled")
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % vol_id)
-
- self.vol_dict[vol_id]['admin_state'] = Volume.ADMIN_STATE_DISABLED
+ self._data_add("ags", {'name': name})
+ sim_ag_id = self.lastrowid
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+ "Name '%s' is already in use by other access group" %
+ name)
+ raise
+
+ self._sim_init_create(init_type, init_id, sim_ag_id)
+
+ return sim_ag_id
+
+ self.sim_ag_of_id(sim_ag_id)
+ raise LsmError(
+ ErrorNumber.IS_MASKED,
+ "Access group has volume masked to")
+
+ self._data_delete('ags', 'id="%s"' % sim_ag_id)
+ self._data_delete('inits', 'owner_ag_id="%s"' % sim_ag_id)
+
+ sim_ag = self.sim_ag_of_id(sim_ag_id)
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Initiator already in access group")
+
+ if init_type != AccessGroup.INIT_TYPE_ISCSI_IQN and \
+ raise LsmError(
+ ErrorNumber.NO_SUPPORT,
+ "Only support iSCSI IQN and WWPN initiator type")
+
+ self._sim_init_create(init_type, init_id, sim_ag_id)
return None
+ sim_ag = self.sim_ag_of_id(sim_ag_id)
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "Initiator is not in defined access group")
+ raise LsmError(
+ ErrorNumber.LAST_INIT_IN_ACCESS_GROUP,
+ "Refused to remove the last initiator from access group")
+
+ self._data_delete('inits', 'id="%s"' % init_id)
+
+ sim_ag = self._sim_data_of_id(
+ "ags", sim_ag_id, BackStore.AG_KEY_LIST,
+ ErrorNumber.NOT_FOUND_ACCESS_GROUP,
+ "Access Group")
+ sim_inits = self._data_find(
+ 'inits', 'owner_ag_id=%d' % sim_ag_id, BackStore.INIT_KEY_LIST)
+ BackStore._sim_ag_fill_init_info(sim_ag, sim_inits)
+ return sim_ag
+
"""
- If volume is a src or dst of a replication, we return True.
+ Return a list of sim_fs dict.
"""
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % vol_id)
- if 'replicate' in self.vol_dict[vol_id].keys() and \
- return True
- return True
- return False
+ return self._get_table('fss', BackStore.FS_KEY_LIST)
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % vol_id)
- if 'replicate' in self.vol_dict[vol_id].keys() and \
- del self.vol_dict[vol_id]['replicate']
-
- del sim_vol['replicate'][vol_id]
- return None
+ lsm_error_no = ErrorNumber.NOT_FOUND_FS
+ lsm_error_no = None
- return self.ag_dict.values()
+ return self._sim_data_of_id(
+ "fss", sim_fs_id, BackStore.FS_KEY_LIST, lsm_error_no,
+ "File System")
+ size_bytes = BackStore._block_rounding(size_bytes)
+ self._check_pool_free_space(sim_pool_id, size_bytes)
+ self._data_add(
+ "fss",
+ {
+ 'name': name,
+ 'total_space': size_bytes,
+ 'consumed_size': size_bytes,
+ 'free_space': size_bytes,
+ 'pool_id': sim_pool_id,
+ })
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+ "Name '%s' is already in use by other fs" % name)
+ raise
+ return self.lastrowid
+
+ self.sim_fs_of_id(sim_fs_id)
+ # TODO: API does not have a dedicated error for this scenario.
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Requested file system is a clone source")
+
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Requested file system has snapshot attached")
+
+ # TODO: API does not have a dedicated error for this scenario
+ raise LsmError(
+ ErrorNumber.PLUGIN_BUG,
+ "Requested file system is exported via NFS")
+
+ self._data_delete("fss", 'id="%s"' % sim_fs_id)
+ # Delete clone relationship if requested fs is a clone target.
+ self._sql_exec(
+ "DELETE FROM fs_clones WHERE dst_fs_id='%s';" % sim_fs_id)
+
+ org_new_size_bytes = new_size_bytes
+ new_size_bytes = BackStore._block_rounding(new_size_bytes)
+ sim_fs = self.sim_fs_of_id(sim_fs_id)
+
+ return
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "File System size is identical to requested")
+
+ # TODO: If a fs is in a clone/snapshot relationship, resize should
+ # be handled properly.
+
+ sim_pool = self.sim_pool_of_id(sim_fs['pool_id'])
+
+ if new_size_bytes > sim_fs['total_space'] and \
+ raise LsmError(
+ ErrorNumber.NOT_ENOUGH_SPACE, "Insufficient space in pool")
+
+ self._data_update(
+ 'fss', sim_fs_id, "total_space", new_size_bytes)
+ self._data_update(
+ 'fss', sim_fs_id, "consumed_size", new_size_bytes)
+ self._data_update(
+ 'fss', sim_fs_id, "free_space", new_size_bytes)
+
+ self.sim_fs_of_id(sim_fs_id)
+ return self._data_find(
+ 'fs_snaps', 'fs_id="%s"' % sim_fs_id, BackStore.FS_SNAP_KEY_LIST)
+
+ sim_fs_snap = self._sim_data_of_id(
+ 'fs_snaps', sim_fs_snap_id, BackStore.FS_SNAP_KEY_LIST,
+ ErrorNumber.NOT_FOUND_FS_SS, 'File system snapshot')
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_FS_SS,
+ "Defined file system snapshot ID is not belong to requested "
+ "file system")
+ return sim_fs_snap
+
+ self.sim_fs_of_id(sim_fs_id)
+ self._data_add(
+ 'fs_snaps',
+ {
+ 'name': name,
+ 'fs_id': sim_fs_id,
+ 'timestamp': int(time.time()),
+ })
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+ "The name is already used by other file system snapshot")
+ raise
+ return self.lastrowid
+
+ def sim_fs_snap_restore(self, sim_fs_id, sim_fs_snap_id, files,
+ # Currently LSM cannot query the status of this action,
+ # so we simply check existence.
+ self.sim_fs_of_id(sim_fs_id)
+ self.sim_fs_snap_of_id(sim_fs_snap_id, sim_fs_id)
+ return
+
+ self.sim_fs_of_id(sim_fs_id)
+ self.sim_fs_snap_of_id(sim_fs_snap_id, sim_fs_id)
+ self._data_delete('fs_snaps', 'id="%s"' % sim_fs_snap_id)
+
+ sql_cmd = "DELETE FROM fs_snaps WHERE fs_id='%s';" % sim_fs_id
+ self._sql_exec(sql_cmd)
+
+ return md5("%s%s" % (src_sim_fs_id, dst_sim_fs_id))
+
+ self.sim_fs_of_id(src_sim_fs_id)
+ self.sim_fs_of_id(dst_sim_fs_id)
+ sim_fs_clone_id = BackStore._sim_fs_clone_id(
+ src_sim_fs_id, dst_sim_fs_id)
+
+ self.sim_fs_snap_of_id(sim_fs_snap_id, src_sim_fs_id)
+
+ self._data_add(
+ 'fs_clones',
+ {
+ 'id': sim_fs_clone_id,
+ 'src_fs_id': src_sim_fs_id,
+ 'dst_fs_id': dst_sim_fs_id,
+ 'fs_snap_id': sim_fs_snap_id,
+ })
+
+ def sim_fs_file_clone(self, sim_fs_id, src_fs_name, dst_fs_name,
+ # We don't have an API to query file-level clones.
+ # Simply check existence.
+ self.sim_fs_of_id(sim_fs_id)
+ self.sim_fs_snap_of_id(sim_fs_snap_id, sim_fs_id)
+ return
+
"""
- Return sim_ag which containing this init_id.
- If not found, return None
+ Return a list of dst_fs_id for provided clone source fs ID.
"""
- return sim_ag
- return None
+ self.sim_fs_of_id(src_sim_fs_id)
+ return list(
+ d['dst_fs_id'] for d in self._data_find(
+ 'fs_clones', 'src_fs_id="%s"' % src_sim_fs_id,
+ ['dst_fs_id']))
- return sim_ag
- return None
+ self._data_delete('fs_clones', 'src_fs_id="%s"' % src_sim_fs_id)
- used_names = [x['name'] for x in sim_list]
- raise LsmError(error_num, "Name '%s' already in use" % name)
+ sim_exp[key_name] = sim_exp[key_name].split(',')
+ return sim_exp
+ return list(
+ BackStore._sim_exp_fix_host_list(e)
+ for e in self._get_table('exps', BackStore.EXP_KEY_LIST))
- # Check to see if we have an access group with this name
- self._check_dup_name(self.ag_dict.values(), name,
- ErrorNumber.NAME_CONFLICT)
+ return BackStore._sim_exp_fix_host_list(
+ self._sim_data_of_id(
+ 'exps', sim_exp_id, BackStore.EXP_KEY_LIST,
+ ErrorNumber.NOT_FOUND_NFS_EXPORT, 'NFS Export'))
- exist_sim_ag = self._sim_ag_of_init(init_id)
- return exist_sim_ag
- raise LsmError(ErrorNumber.EXISTS_INITIATOR,
- "Initiator %s already exist in other " %
- init_id + "access group %s(%s)" %
- (exist_sim_ag['name'], exist_sim_ag['ag_id']))
-
- exist_sim_ag = self._sim_ag_of_name(name)
- return exist_sim_ag
+ def sim_exp_create(self, sim_fs_id, exp_path, root_hosts, rw_hosts,
+ self.sim_fs_of_id(sim_fs_id)
+
+ self._data_add(
+ 'exps',
+ {
+ 'fs_id': sim_fs_id,
+ 'exp_path': exp_path,
+ 'root_hosts': ','.join(root_hosts),
+ 'rw_hosts': ','.join(rw_hosts),
+ 'ro_hosts': ','.join(ro_hosts),
+ 'anon_uid': anon_uid,
+ 'anon_gid': anon_gid,
+ 'auth_type': auth_type,
+ 'options': options,
+ })
+ # TODO: Should we create a new error instead of NAME_CONFLICT?
+ raise LsmError(
+ ErrorNumber.NAME_CONFLICT,
+ "Export path is already used by other NFS export")
- raise LsmError(ErrorNumber.NAME_CONFLICT,
- "Another access group %s(%s) is using " %
- (exist_sim_ag['name'], exist_sim_ag['ag_id']) +
- "requested name %s but not contain init_id %s" %
- (exist_sim_ag['name'], init_id))
-
- sim_ag = dict()
- sim_ag['init_ids'] = [init_id]
- sim_ag['init_type'] = init_type
- sim_ag['sys_id'] = SimData.SIM_DATA_SYS_ID
- sim_ag['name'] = name
- sim_ag['ag_id'] = self._next_ag_id()
- self.ag_dict[sim_ag['ag_id']] = sim_ag
- return sim_ag
+ raise
+ return self.lastrowid
- raise LsmError(ErrorNumber.NOT_FOUND_ACCESS_GROUP,
- "Access group not found")
- # Check whether any volume masked to.
- raise LsmError(ErrorNumber.IS_MASKED,
- "Access group is masked to volume")
- del(self.ag_dict[ag_id])
- return None
+ self.sim_exp_of_id(sim_exp_id)
+ self._data_delete('exps', 'id="%s"' % sim_exp_id)
- raise LsmError(ErrorNumber.NOT_FOUND_ACCESS_GROUP,
- "Access group not found")
- raise LsmError(ErrorNumber.NO_STATE_CHANGE, "Initiator already "
- "in access group")
+ """
+ Return a list of sim_tgt dict.
+ """
+ return self._get_table('tgts', BackStore.TGT_KEY_LIST)
- self._sim_ag_of_init(init_id)
- self.ag_dict[ag_id]['init_ids'].extend([init_id])
- return self.ag_dict[ag_id]
+ SIM_DATA_FILE = os.getenv("LSM_SIM_DATA",
+ tempfile.gettempdir() + '/lsm_sim_data')
- def access_group_initiator_delete(self, ag_id, init_id, init_type,
- raise LsmError(ErrorNumber.NOT_FOUND_ACCESS_GROUP,
- "Access group not found: %s" % ag_id)
-
- new_init_ids = []
- new_init_ids.extend([cur_init_id])
- del(self.ag_dict[ag_id]['init_ids'])
- self.ag_dict[ag_id]['init_ids'] = new_init_ids
- raise LsmError(ErrorNumber.NO_STATE_CHANGE,
- "Initiator %s type %s not in access group %s"
- % (init_id, str(type(init_id)),
- str(self.ag_dict[ag_id]['init_ids'])))
- return self.ag_dict[ag_id]
+ ID_FMT = 5
- raise LsmError(ErrorNumber.NOT_FOUND_ACCESS_GROUP,
- "Access group not found: %s" % ag_id)
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % vol_id)
- self.vol_dict[vol_id]['mask'] = dict()
-
- raise LsmError(ErrorNumber.NO_STATE_CHANGE, "Volume already "
- "masked to access "
- "group")
-
- self.vol_dict[vol_id]['mask'][ag_id] = 2
+ return "%s_ID_%0*d" % (prefix, SimArray.ID_FMT, sim_id)
+
+ return int(lsm_id[-SimArray.ID_FMT:])
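In other words, with ID_FMT = 5 the integer row id 42 round-trips as
_sim_id_to_lsm_id(42, 'VOL') -> 'VOL_ID_00042' and
_lsm_id_to_sim_id('VOL_ID_00042') -> 42. Worth noting: the reverse
direction silently truncates once a row id needs more than five
digits.
Post by Gris Ge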
+
+ return resources
+
+ statefile = SimArray.SIM_DATA_FILE
+
+ self.bs_obj = BackStore(statefile, timeout)
+ self.bs_obj.check_version_and_init()
+ self.statefile = statefile
+ self.timeout = timeout
+
+ sim_job_id = self.bs_obj.sim_job_create(
+ data_type, sim_data_id)
+ return SimArray._sim_id_to_lsm_id(sim_job_id, 'JOB')
+
+ (progress, data_type, sim_data) = self.bs_obj.sim_job_status(
+ SimArray._lsm_id_to_sim_id(job_id))
+ status = JobStatus.INPROGRESS
+ status = JobStatus.COMPLETE
+
+ data = None
+ data = SimArray._sim_vol_2_lsm(sim_data)
+ data = SimArray._sim_fs_2_lsm(sim_data)
+ data = SimArray._sim_fs_snap_2_lsm(sim_data)
+
+ return (status, progress, data)
+
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_job_delete(SimArray._lsm_id_to_sim_id(job_id))
+ self.bs_obj.trans_commit()
return None
- raise LsmError(ErrorNumber.NOT_FOUND_ACCESS_GROUP,
- "Access group not found: %s" % ag_id)
- raise LsmError(ErrorNumber.NOT_FOUND_VOLUME,
- "No such Volume: %s" % vol_id)
- raise LsmError(ErrorNumber.NO_STATE_CHANGE, "Volume not "
- "masked to access "
- "group")
-
- raise LsmError(ErrorNumber.NO_STATE_CHANGE, "Volume not "
- "masked to access "
- "group")
-
- del(self.vol_dict[vol_id]['mask'][ag_id])
+ self.bs_obj = BackStore(self.statefile, int(ms/1000))
+ self.timeout = ms
return None
- # We don't check whether ag_id is valid
- rc = []
- continue
- rc.extend([sim_vol])
- return rc
+ return self.timeout
- # We don't check whether vol_id is valid
- sim_ags = []
- ag_ids = self.vol_dict[vol_id]['mask'].keys()
- sim_ags.extend([self.ag_dict[ag_id]])
- return sim_ags
+ return System(
+ sim_sys['id'], sim_sys['name'], sim_sys['status'],
+ sim_sys['status_info'])
+
+ return list(
+ SimArray._sim_sys_2_lsm(sim_sys)
+ for sim_sys in self.bs_obj.sim_syss())
+
+ vol_id = SimArray._sim_id_to_lsm_id(sim_vol['id'], 'VOL')
+ pool_id = SimArray._sim_id_to_lsm_id(sim_vol['pool_id'], 'POOL')
+ return Volume(vol_id, sim_vol['name'], sim_vol['vpd83'],
+ BackStore.BLK_SIZE,
+ int(sim_vol['total_space'] / BackStore.BLK_SIZE),
+ sim_vol['admin_state'], BackStore.SYS_ID,
+ pool_id)
+
+ return list(
+ SimArray._sim_vol_2_lsm(v) for v in self.bs_obj.sim_vols())
+
+ pool_id = SimArray._sim_id_to_lsm_id(sim_pool['id'], 'POOL')
+ name = sim_pool['name']
+ total_space = sim_pool['total_space']
+ free_space = sim_pool['free_space']
+ status = sim_pool['status']
+ status_info = sim_pool['status_info']
+ sys_id = BackStore.SYS_ID
+ element_type = sim_pool['element_type']
+ unsupported_actions = sim_pool['unsupported_actions']
+ return Pool(
+ pool_id, name, element_type, unsupported_actions, total_space,
+ free_space, status, status_info, sys_id)
+ self.bs_obj.trans_begin()
+ sim_pools = self.bs_obj.sim_pools()
+ self.bs_obj.trans_rollback()
+ return list(
+ SimArray._sim_pool_2_lsm(sim_pool) for sim_pool in sim_pools)
+
+ return Disk(
+ SimArray._sim_id_to_lsm_id(sim_disk['id'], 'DISK'),
+ sim_disk['name'],
+ sim_disk['disk_type'], BackStore.BLK_SIZE,
+ int(sim_disk['total_space'] / BackStore.BLK_SIZE),
+ Disk.STATUS_OK, BackStore.SYS_ID)
+
+ return list(
+ SimArray._sim_disk_2_lsm(sim_disk)
+ for sim_disk in self.bs_obj.sim_disks())
+
+ def volume_create(self, pool_id, vol_name, size_bytes, thinp, flags=0,
"""
- Find out the access groups defined initiator belong to.
- Will return a list of access group id or []
+ The '_internal_use' parameter is only for SimArray internal use.
+ This method will return the new sim_vol id instead of a job_id when
+ '_internal_use' is marked as True.
"""
- rc = []
- rc.extend([sim_ag['ag_id']])
- return rc
+ self.bs_obj.trans_begin()
- def iscsi_chap_auth(self, init_id, in_user, in_pass, out_user, out_pass,
- # No iscsi chap query API yet, not need to setup anything
+ new_sim_vol_id = self.bs_obj.sim_vol_create(
+ vol_name, size_bytes, SimArray._lsm_id_to_sim_id(pool_id), thinp)
+
+ return new_sim_vol_id
+
+ job_id = self._job_create(
+ BackStore.JOB_DATA_TYPE_VOL, new_sim_vol_id)
+ self.bs_obj.trans_commit()
+
+ return job_id, None
+
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_vol_delete(SimArray._lsm_id_to_sim_id(vol_id))
+ job_id = self._job_create()
+ self.bs_obj.trans_commit()
+ return job_id
+
+ self.bs_obj.trans_begin()
+
+ sim_vol_id = SimArray._lsm_id_to_sim_id(vol_id)
+ self.bs_obj.sim_vol_resize(sim_vol_id, new_size_bytes)
+ job_id = self._job_create(
+ BackStore.JOB_DATA_TYPE_VOL, sim_vol_id)
+ self.bs_obj.trans_commit()
+
+ return job_id, None
+
+ def volume_replicate(self, dst_pool_id, rep_type, src_vol_id, new_vol_name,
+ self.bs_obj.trans_begin()
+
+ src_sim_vol_id = SimArray._lsm_id_to_sim_id(src_vol_id)
+ # Verify the existence of source volume
+ src_sim_vol = self.bs_obj.sim_vol_of_id(src_sim_vol_id)
+
+ dst_sim_vol_id = self.volume_create(
+ dst_pool_id, new_vol_name, src_sim_vol['total_space'],
+ src_sim_vol['thinp'], _internal_use=True)
+
+ self.bs_obj.sim_vol_replica(src_sim_vol_id, dst_sim_vol_id, rep_type)
+
+ job_id = self._job_create(
+ BackStore.JOB_DATA_TYPE_VOL, dst_sim_vol_id)
+ self.bs_obj.trans_commit()
+
+ return job_id, None
+
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_SYSTEM,
+ "System not found")
+
+ def volume_replicate_range(self, rep_type, src_vol_id, dst_vol_id, ranges,
+ self.bs_obj.trans_begin()
+
+ # TODO: check whether start_blk + count is out of the volume boundary
+ # TODO: Should check block overlap.
+
+ self.bs_obj.sim_vol_replica(
+ SimArray._lsm_id_to_sim_id(src_vol_id),
+ SimArray._lsm_id_to_sim_id(dst_vol_id), rep_type, ranges)
+
+ job_id = self._job_create()
+
+ self.bs_obj.trans_commit()
+ return job_id
+
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_vol_state_change(
+ SimArray._lsm_id_to_sim_id(vol_id), Volume.ADMIN_STATE_ENABLED)
+ self.bs_obj.trans_commit()
return None
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_vol_state_change(
+ SimArray._lsm_id_to_sim_id(vol_id), Volume.ADMIN_STATE_DISABLED)
+ self.bs_obj.trans_commit()
+ return None
+
+ # 0. Should we break replication if provided volume is a
+ # replication target?
+ # Assuming answer is no.
This method should never change the state of the array. It's purely a
reporting method. The remove dependency operation would be the one to
change the state of the system.
Post by Gris Ge
+ # "Implies that this volume cannot be deleted or possibly
+ # modified because it would affect its children"
+ # The 'modify' here is incorrect. If data on source volume
+ # changes, SYNC_MIRROR replication will change all target
+ # volumes.
In this case the modify was in regard to the metadata about the
volume, not the volume data contents, e.g. you resize the parent.
Post by Gris Ge
+ # 2. Should the 'mask' relationship be included?
+ # # Assuming only replication counts here.
Correct
Post by Gris Ge
+ # 3. For volume internal block replication, should we return
+ # True or False?
+ # # Assuming False
Can you elaborate on what you mean here?
Post by Gris Ge
+ # 4. volume_child_dependency_rm() against volume internal
+ # block replication, remove replication or raise error?
+ # # Assuming remove replication
+ src_sim_vol_id = SimArray._lsm_id_to_sim_id(vol_id)
+ dst_sim_vol_ids = self.bs_obj.dst_sim_vol_ids_of_src(src_sim_vol_id)
+ return True
+ return False
+
+ self.bs_obj.trans_begin()
+
+ self.bs_obj.sim_vol_src_replica_break(
+ SimArray._lsm_id_to_sim_id(vol_id))
+
+ job_id = self._job_create()
+ self.bs_obj.trans_commit()
+ return job_id
+
+ fs_id = SimArray._sim_id_to_lsm_id(sim_fs['id'], 'FS')
+ pool_id = SimArray._sim_id_to_lsm_id(sim_fs['pool_id'], 'POOL')
+ return FileSystem(fs_id, sim_fs['name'],
+ sim_fs['total_space'], sim_fs['free_space'],
+ pool_id, BackStore.SYS_ID)
+
- return self.fs_dict.values()
+ return list(SimArray._sim_fs_2_lsm(f) for f in self.bs_obj.sim_fss())
+ def fs_create(self, pool_id, fs_name, size_bytes, flags=0,
- self._check_dup_name(self.fs_dict.values(), fs_name,
- ErrorNumber.NAME_CONFLICT)
+ self.bs_obj.trans_begin()
- size_bytes = SimData._block_rounding(size_bytes)
- # check free size
- free_space = self.pool_free_space(pool_id)
- raise LsmError(ErrorNumber.NOT_ENOUGH_SPACE,
- "Insufficient space in pool")
- sim_fs = dict()
- fs_id = self._next_fs_id()
- sim_fs['fs_id'] = fs_id
- sim_fs['name'] = fs_name
- sim_fs['total_space'] = size_bytes
- sim_fs['free_space'] = size_bytes
- sim_fs['sys_id'] = SimData.SIM_DATA_SYS_ID
- sim_fs['pool_id'] = pool_id
- sim_fs['consume_size'] = size_bytes
- self.fs_dict[fs_id] = sim_fs
- return sim_fs
+ new_sim_fs_id = self.bs_obj.sim_fs_create(
+ fs_name, size_bytes, SimArray._lsm_id_to_sim_id(pool_id))
- del(self.fs_dict[fs_id])
- return
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "No such File System: %s" % fs_id)
+ return new_sim_fs_id
- new_size_bytes = SimData._block_rounding(new_size_bytes)
- pool_id = self.fs_dict[fs_id]['pool_id']
- free_space = self.pool_free_space(pool_id)
- raise LsmError(ErrorNumber.NOT_ENOUGH_SPACE,
- "Insufficient space in pool")
-
- raise LsmError(ErrorNumber.NO_STATE_CHANGE,
- "New size same as current")
-
- self.fs_dict[fs_id]['total_space'] = new_size_bytes
- self.fs_dict[fs_id]['free_space'] = new_size_bytes
- self.fs_dict[fs_id]['consume_size'] = new_size_bytes
- return self.fs_dict[fs_id]
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "No such File System: %s" % fs_id)
+ job_id = self._job_create(
+ BackStore.JOB_DATA_TYPE_FS, new_sim_fs_id)
+ self.bs_obj.trans_commit()
+
+ return job_id, None
+ # TODO: check clone, snapshot, etc.
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_fs_delete(SimArray._lsm_id_to_sim_id(fs_id))
+ job_id = self._job_create()
+ self.bs_obj.trans_commit()
+ return job_id
+
+ sim_fs_id = SimArray._lsm_id_to_sim_id(fs_id)
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_fs_resize(sim_fs_id, new_size_bytes)
+ job_id = self._job_create(BackStore.JOB_DATA_TYPE_FS, sim_fs_id)
+ self.bs_obj.trans_commit()
+ return job_id, None
+
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % src_fs_id)
- raise LsmError(ErrorNumber.NOT_FOUND_FS_SS,
- "No such Snapshot: %s" % snap_id)
+ self.bs_obj.trans_begin()
+
+ sim_fs_snap_id = None
+ sim_fs_snap_id = SimArray._lsm_id_to_sim_id(snap_id)
- src_sim_fs = self.fs_dict[src_fs_id]
- src_sim_fs['clone'] = dict()
+ src_sim_fs_id = SimArray._lsm_id_to_sim_id(src_fs_id)
+ src_sim_fs = self.bs_obj.sim_fs_of_id(src_sim_fs_id)
+ pool_id = SimArray._sim_id_to_lsm_id(src_sim_fs['pool_id'], 'POOL')
- # Make sure we don't have a duplicate name
- self._check_dup_name(self.fs_dict.values(), dst_fs_name,
- ErrorNumber.NAME_CONFLICT)
+ dst_sim_fs_id = self.fs_create(
+ pool_id, dst_fs_name, src_sim_fs['total_space'],
+ _internal_use=True)
- dst_sim_fs = self.fs_create(
- src_sim_fs['pool_id'], dst_fs_name, src_sim_fs['total_space'], 0)
+ self.bs_obj.sim_fs_clone(src_sim_fs_id, dst_sim_fs_id, sim_fs_snap_id)
- src_sim_fs['clone'][dst_sim_fs['fs_id']] = {
- 'snap_id': snap_id,
- }
- return dst_sim_fs
+ job_id = self._job_create(
+ BackStore.JOB_DATA_TYPE_FS, dst_sim_fs_id)
+ self.bs_obj.trans_commit()
+ return job_id, None
+
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- raise LsmError(ErrorNumber.NOT_FOUND_FS_SS,
- "No such Snapshot: %s" % snap_id)
- # TODO: No file clone query API yet, no need to do anything internally
- return None
+ self.bs_obj.trans_begin()
+ sim_fs_snap_id = None
+ sim_fs_snap_id = SimArray._lsm_id_to_sim_id(snap_id)
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- rc = []
- rc.extend([self.snap_dict[snap_id]])
- return rc
+ self.bs_obj.sim_fs_file_clone(
+ SimArray._lsm_id_to_sim_id(fs_id), src_fs_name, dst_fs_name,
+ sim_fs_snap_id)
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- self.fs_dict[fs_id]['snaps'] = []
- self._check_dup_name(self.fs_dict[fs_id]['snaps'], snap_name,
- ErrorNumber.NAME_CONFLICT)
+ job_id = self._job_create()
+ self.bs_obj.trans_commit()
+ return job_id
- snap_id = self._next_snap_id()
- sim_snap = dict()
- sim_snap['snap_id'] = snap_id
- sim_snap['name'] = snap_name
- sim_snap['files'] = []
+ snap_id = SimArray._sim_id_to_lsm_id(sim_fs_snap['id'], 'FS_SNAP')
+ return FsSnapshot(
+ snap_id, sim_fs_snap['name'], sim_fs_snap['timestamp'])
- sim_snap['timestamp'] = time.time()
- self.snap_dict[snap_id] = sim_snap
- self.fs_dict[fs_id]['snaps'].extend([snap_id])
- return sim_snap
+ return list(
+ SimArray._sim_fs_snap_2_lsm(s)
+ for s in self.bs_obj.sim_fs_snaps(
+ SimArray._lsm_id_to_sim_id(fs_id)))
+ self.bs_obj.trans_begin()
+ sim_fs_snap_id = self.bs_obj.sim_fs_snap_create(
+ SimArray._lsm_id_to_sim_id(fs_id), snap_name)
+ job_id = self._job_create(
+ BackStore.JOB_DATA_TYPE_FS_SNAP, sim_fs_snap_id)
+ self.bs_obj.trans_commit()
+ return job_id, None
+
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- raise LsmError(ErrorNumber.NOT_FOUND_FS_SS,
- "No such Snapshot: %s" % snap_id)
- del self.snap_dict[snap_id]
- new_snap_ids = []
- new_snap_ids.extend([old_snap_id])
- self.fs_dict[fs_id]['snaps'] = new_snap_ids
- return None
-
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_fs_snap_delete(
+ SimArray._lsm_id_to_sim_id(snap_id),
+ SimArray._lsm_id_to_sim_id(fs_id))
+ job_id = self._job_create()
+ self.bs_obj.trans_commit()
+ return job_id
+
def fs_snapshot_restore(self, fs_id, snap_id, files, restore_files,
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- raise LsmError(ErrorNumber.NOT_FOUND_FS_SS,
- "No such Snapshot: %s" % snap_id)
- # Nothing need to done internally for restore.
- return None
+ self.bs_obj.trans_begin()
+ sim_fs_snap_id = None
+ sim_fs_snap_id = SimArray._lsm_id_to_sim_id(snap_id)
+
+ self.bs_obj.sim_fs_snap_restore(
+ SimArray._lsm_id_to_sim_id(fs_id),
+ sim_fs_snap_id, files, restore_files, flag_all_files)
+ job_id = self._job_create()
+ self.bs_obj.trans_commit()
+ return job_id
+
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
+ sim_fs_id = SimArray._lsm_id_to_sim_id(fs_id)
+ self.bs_obj.trans_begin()
+ if self.bs_obj.clone_dst_sim_fs_ids_of_src(sim_fs_id) == [] and \
+ self.bs_obj.trans_rollback()
return False
- return True
- # We are snapshoting all files
- return True
- return True
- return False
+ self.bs_obj.trans_rollback()
+ return True
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- return None
- snap_ids = self.fs_dict[fs_id]['snaps']
- del self.snap_dict[snap_id]
- del self.fs_dict[fs_id]['snaps']
- snap_ids_to_rm = []
- # BUG: if certain snapshot is againsting all files,
- # what should we do if user request remove
- # dependency on certain files.
- # Currently, we do nothing
- return None
- new_files = []
- new_files.extend([old_file])
- # all files has been removed from snapshot list.
- snap_ids_to_rm.extend([snap_id])
- self.snap_dict[snap_id]['files'] = new_files
- del self.snap_dict[snap_id]
-
- new_snap_ids = []
- new_snap_ids.extend([cur_snap_id])
- del self.fs_dict[fs_id]['snaps']
- self.fs_dict[fs_id]['snaps'] = new_snap_ids
- return None
+ """
+ Assuming the API definition is: break all clone relationships and
+ remove all snapshots of this source file system.
+ """
+ self.bs_obj.trans_begin()
+ raise LsmError(
+ ErrorNumber.NO_STATE_CHANGE,
+ "No snapshot or fs clone target found for this file system")
+
+ src_sim_fs_id = SimArray._lsm_id_to_sim_id(fs_id)
+ self.bs_obj.sim_fs_src_clone_break(src_sim_fs_id)
+ self.bs_obj.sim_fs_snap_del_by_fs(src_sim_fs_id)
+ job_id = self._job_create()
+ self.bs_obj.trans_commit()
+ return job_id
+ exp_id = SimArray._sim_id_to_lsm_id(sim_exp['id'], 'EXP')
+ fs_id = SimArray._sim_id_to_lsm_id(sim_exp['fs_id'], 'FS')
+ return NfsExport(exp_id, fs_id, sim_exp['exp_path'],
+ sim_exp['auth_type'], sim_exp['root_hosts'],
+ sim_exp['rw_hosts'], sim_exp['ro_hosts'],
+ sim_exp['anon_uid'], sim_exp['anon_gid'],
+ sim_exp['options'])
+
- return self.exp_dict.values()
+ return [SimArray._sim_exp_2_lsm(e) for e in self.bs_obj.sim_exps()]
def fs_export(self, fs_id, exp_path, root_hosts, rw_hosts, ro_hosts,
- raise LsmError(ErrorNumber.NOT_FOUND_FS,
- "File System: %s not found" % fs_id)
- sim_exp = dict()
- sim_exp['exp_id'] = self._next_exp_id()
- sim_exp['fs_id'] = fs_id
- sim_exp['exp_path'] = "/%s" % sim_exp['exp_id']
- sim_exp['exp_path'] = exp_path
- sim_exp['auth_type'] = auth_type
- sim_exp['root_hosts'] = root_hosts
- sim_exp['rw_hosts'] = rw_hosts
- sim_exp['ro_hosts'] = ro_hosts
- sim_exp['anon_uid'] = anon_uid
- sim_exp['anon_gid'] = anon_gid
- sim_exp['options'] = options
- self.exp_dict[sim_exp['exp_id']] = sim_exp
- return sim_exp
+ self.bs_obj.trans_begin()
+ sim_exp_id = self.bs_obj.sim_exp_create(
+ SimArray._lsm_id_to_sim_id(fs_id), exp_path, root_hosts, rw_hosts,
+ ro_hosts, anon_uid, anon_gid, auth_type, options)
+ sim_exp = self.bs_obj.sim_exp_of_id(sim_exp_id)
+ self.bs_obj.trans_commit()
+ return SimArray._sim_exp_2_lsm(sim_exp)
- raise LsmError(ErrorNumber.NOT_FOUND_NFS_EXPORT,
- "No such NFS Export: %s" % exp_id)
- del self.exp_dict[exp_id]
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_exp_delete(SimArray._lsm_id_to_sim_id(exp_id))
+ self.bs_obj.trans_commit()
+ return None
+
+ ag_id = SimArray._sim_id_to_lsm_id(sim_ag['id'], 'AG')
+ return AccessGroup(ag_id, sim_ag['name'], sim_ag['init_ids'],
+ sim_ag['init_type'], BackStore.SYS_ID)
+
+ return list(SimArray._sim_ag_2_lsm(a) for a in self.bs_obj.sim_ags())
+
+ raise LsmError(
+ ErrorNumber.NOT_FOUND_SYSTEM,
+ "System not found")
+ self.bs_obj.trans_begin()
+ new_sim_ag_id = self.bs_obj.sim_ag_create(name, init_type, init_id)
+ new_sim_ag = self.bs_obj.sim_ag_of_id(new_sim_ag_id)
+ self.bs_obj.trans_commit()
+ return SimArray._sim_ag_2_lsm(new_sim_ag)
+
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_ag_delete(SimArray._lsm_id_to_sim_id(ag_id))
+ self.bs_obj.trans_commit()
+ return None
+
+ sim_ag_id = SimArray._lsm_id_to_sim_id(ag_id)
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_ag_init_add(sim_ag_id, init_id, init_type)
+ new_sim_ag = self.bs_obj.sim_ag_of_id(sim_ag_id)
+ self.bs_obj.trans_commit()
+ return SimArray._sim_ag_2_lsm(new_sim_ag)
+
+ def access_group_initiator_delete(self, ag_id, init_id, init_type,
+ sim_ag_id = SimArray._lsm_id_to_sim_id(ag_id)
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_ag_init_delete(sim_ag_id, init_id)
+ sim_ag = self.bs_obj.sim_ag_of_id(sim_ag_id)
+ self.bs_obj.trans_commit()
+ return SimArray._sim_ag_2_lsm(sim_ag)
+
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_vol_mask(
+ SimArray._lsm_id_to_sim_id(vol_id),
+ SimArray._lsm_id_to_sim_id(ag_id))
+ self.bs_obj.trans_commit()
+ return None
+
+ self.bs_obj.trans_begin()
+ self.bs_obj.sim_vol_unmask(
+ SimArray._lsm_id_to_sim_id(vol_id),
+ SimArray._lsm_id_to_sim_id(ag_id))
+ self.bs_obj.trans_commit()
+ return None
+
+ self.bs_obj.trans_begin()
+
+ sim_vols = self.bs_obj.sim_vols(
+ sim_ag_id=SimArray._lsm_id_to_sim_id(ag_id))
+
+ self.bs_obj.trans_rollback()
+ return [SimArray._sim_vol_2_lsm(v) for v in sim_vols]
+
+ self.bs_obj.trans_begin()
+ sim_ags = self.bs_obj.sim_ags(
+ sim_vol_id=SimArray._lsm_id_to_sim_id(vol_id))
+ self.bs_obj.trans_rollback()
+ return [SimArray._sim_ag_2_lsm(a) for a in sim_ags]
+
+ def iscsi_chap_auth(self, init_id, in_user, in_pass, out_user, out_pass,
+ self.bs_obj.trans_begin()
+ self.bs_obj.iscsi_chap_auth_set(
+ init_id, in_user, in_pass, out_user, out_pass)
+ self.bs_obj.trans_commit()
return None
+ tgt_id = "TGT_PORT_ID_%0*d" % (SimArray.ID_FMT, sim_tgt['id'])
+ return TargetPort(
+ tgt_id, sim_tgt['port_type'], sim_tgt['service_address'],
+ sim_tgt['network_address'], sim_tgt['physical_address'],
+ sim_tgt['physical_name'],
+ BackStore.SYS_ID)
+
- return self.tgt_dict.values()
+ return list(SimArray._sim_tgt_2_lsm(t) for t in self.bs_obj.sim_tgts())
diff --git a/plugin/sim/simulator.py b/plugin/sim/simulator.py
index 79b6df5..8f7adfc 100644
--- a/plugin/sim/simulator.py
+++ b/plugin/sim/simulator.py
self.uri = None
self.password = None
- self.tmo = 0
self.sim_array = None
qp = uri_parse(uri)
if 'parameters' in qp and 'statefile' in qp['parameters'] \
- self.sim_array = SimArray(qp['parameters']['statefile'])
+ self.sim_array = SimArray(qp['parameters']['statefile'], timeout)
- self.sim_array = SimArray()
+ self.sim_array = SimArray(None, timeout)
return None
- self.sim_array.save_state()
+ pass
return self.sim_array.job_status(job_id, flags)
Gris Ge
2015-01-09 08:15:29 UTC
Permalink
Post by Tony Asleson
Hi Gris,
Lots of changes here. I haven't had a chance to fully scrutinize it,
but this is what I have come up with so far.
I see that when we create something we start a transaction and then
after the item or job is created we commit the transaction. I don't
know what the concurrency or isolation levels are for sqlite, but one
potential issue is that if we create a job the transaction shouldn't get
committed until the job completes. Otherwise the record exists in the
db before the object is even returned to the user that initiated its
creation.
Hi Tony,

Makes sense.
I will filter out the volume being created in the 'volumes()' call and
raise a NAME_CONFLICT error for another creation attempt with the same
name.

When Python quits, all uncommitted transactions will be rolled back.
The transaction should be committed once the job is created.
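(For reference, a cleaned-up sketch of that ordering, pieced together
from the fs_create() hunk quoted above; the signature is simplified.)

    def fs_create(self, pool_id, fs_name, size_bytes, flags=0):
        self.bs_obj.trans_begin()
        # The new fs row is invisible to other connections until commit.
        new_sim_fs_id = self.bs_obj.sim_fs_create(
            fs_name, size_bytes, SimArray._lsm_id_to_sim_id(pool_id))
        job_id = self._job_create(
            BackStore.JOB_DATA_TYPE_FS, new_sim_fs_id)
        # Commit as soon as the job record exists; if Python exits
        # before this point, sqlite3 rolls the transaction back.
        self.bs_obj.trans_commit()
        return job_id, None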
Post by Tony Asleson
Post by Gris Ge
-# TODO: 1. Introduce constant check by using state_to_str() converting.
-# 2. Snapshot should consume space in pool.
+# TODO: 1. Review public method of BackStore
+# 2. Make sure statefile is created with global write.
Can sqlite handle multiple concurrent client access to the same on disk
data? I'm not sure what it can/cannot do.
Yes. It can.
http://www.sqlite.org/faq.html#q6
http://www.sqlite.org/lockingv3.html

In this patch, I use an 'immediate' transaction to ensure there is only
one writer at a time.
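(A minimal standalone sketch of this behaviour with Python's sqlite3
module; the statefile path and table here are made up for illustration.)

    import sqlite3

    # isolation_level='IMMEDIATE' makes the module issue BEGIN IMMEDIATE,
    # which takes the write lock up front, so only one writer can hold a
    # transaction at a time. A second writer blocks for up to 'timeout'
    # seconds, then fails with 'database is locked'.
    conn = sqlite3.connect('/tmp/lsm_sim_statefile', timeout=5,
                           isolation_level='IMMEDIATE')
    cur = conn.cursor()
    cur.execute('CREATE TABLE IF NOT EXISTS ags '
                '(id INTEGER PRIMARY KEY, name TEXT UNIQUE)')
    conn.commit()
    cur.execute("INSERT INTO ags (id, name) VALUES (NULL, 'gris_ag_1')")
    conn.commit()  # releases the write lock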

Tested with these commands:
# Concurrent creating
for x in `seq 1 100`;
do
lsmenv sim lsmcli ac --name gris_ag_1$x \
--init iqn.1986-05.com.example:gris-9$x --sys sim-01 -s &
done
# Duplicate creating
for x in `seq 1 100`;
do
lsmenv sim lsmcli ac --name gris_ag_10 \
--init iqn.1986-05.com.example:gris-10 --sys sim-01 -s &
done
Post by Tony Asleson
Post by Gris Ge
+ sql_cmd += (
+ "CREATE TABLE disks ("
+ "id INTEGER PRIMARY KEY, "
+ "total_space LONG, "
+ "disk_type INTEGER, "
+ "status INTEGER, "
+ "owner_pool_id INTEGER, " # Indicate this disk is used to
+ # assemble a pool
+ "disk_prefix TEXT);\n")
FOREIGN KEY(system_id) REFERENCES system(id)
A disk can exist when a pool does not, in addition a disk could
participate in multiple pools. A pool could exist without any disks.
Instead of having field owner_pool_id present I believe we should have a
separate table that relates which disk(s) are in which pool(s).
CREATE TABLE disk_allocation (
    pool_id INTEGER,
    disk_id INTEGER,
    percent_utilized INTEGER,
    FOREIGN KEY (pool_id) REFERENCES pools(id),
    FOREIGN KEY (disk_id) REFERENCES disks(id)
)
For a free disk, 'owner_pool_id' is simply 0.
Currently, we don't support building a RAID/pool from disk slices yet.
Let's focus on what we have now in this patch first.
Post by Tony Asleson
Post by Gris Ge
+
+ sql_cmd += (
+ "CREATE TABLE tgts ("
+ "id INTEGER PRIMARY KEY, "
+ "port_type INTEGER, "
+ "service_address TEXT, "
+ "network_address TEXT, "
+ "physical_address TEXT, "
+ "physical_name TEXT);\n")
This should be related to a system yes?
FOREIGN KEY(system_id) REFERENCES system(id)
Since the whole simulator is one system, I skipped the system id in
all tables except the 'systems' table.
Post by Tony Asleson
Post by Gris Ge
+ "parent_pool_id INTEGER, " # Indicate this pool is allocated from
+ # other pool
+ "member_type INTEGER, "
+ "total_space LONG);\n") # total_space here is only for
+ # sub-pool (pool from pool)
FOREIGN KEY(system_id) REFERENCES system(id)
I suggest we ditch a pool allocated from another pool for the simulator.
Calculate total_space & free space from the disks that comprise the
pool and the volumes that have been created from it.
ONTAP uses pool over pool.
I'd like to keep pool over pool in the simulator, in case someday we
support manipulating pools.
It already exists in the old pickle simulator code; I am trying to
provide behaviour identical to the old pickle one.
Post by Tony Asleson
Post by Gris Ge
+ sql_cmd += (
+ "CREATE TABLE volumes ("
+ "id INTEGER PRIMARY KEY, "
+ "vpd83 TEXT, "
+ "name TEXT UNIQUE, "
+ "total_space LONG, "
+ "consumed_size LONG, " # Reserved for future thinp support.
+ "pool_id INTEGER, "
FOREIGN KEY(pool_id) REFERENCES pool(id),
Thanks for the tips. I will use that keyword.
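(A minimal sketch of the suggested syntax, using the volumes/pools
tables from the patch; note sqlite3 only enforces foreign keys when
'PRAGMA foreign_keys = ON' is set on the connection.)

    sql_cmd = (
        "CREATE TABLE volumes ("
        "id INTEGER PRIMARY KEY, "
        "name TEXT UNIQUE NOT NULL, "
        "pool_id INTEGER NOT NULL, "
        "FOREIGN KEY(pool_id) REFERENCES pools(id));\n")
    # Run cursor.execute("PRAGMA foreign_keys = ON;") first, otherwise
    # the constraint is parsed but not enforced.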
Post by Tony Asleson
Post by Gris Ge
+ "admin_state INTEGER, "
+ "thinp INTEGER, "
+ "rep_source INTEGER, " # Only exist on target volume.
+ "rep_type INTEGER);\n") # Only exist on target volume
The relationship between volumes should be stored in a separate table,
one which I think you have already created below in vol_reps?
Oh. I forgot to remove that.
Post by Tony Asleson
Post by Gris Ge
+ sql_cmd += (
+ "CREATE TABLE vol_rep_ranges ("
+ "src_vol_id INTEGER, "
+ "dst_vol_id INTEGER, "
+ "src_start_blk LONG, "
+ "dst_start_blk LONG, "
+ "blk_count LONG);\n")
Do we really need to keep track of replication ranges? How will we
leverage this?
No. Since we don't have a query method for this, there is no need to
track it.
Post by Tony Asleson
Post by Gris Ge
+ sql_cmd += (
+ "CREATE TABLE fs_clones ("
+ "id TEXT UNIQUE, "
+ "src_fs_id INTEGER, "
FOREIGN KEY(src_fs_id) REFERENCES fs(id),
Post by Gris Ge
+ "dst_fs_id INTEGER, "
FOREIGN KEY(dst_fs_id) REFERENCES fs(id),
Post by Gris Ge
+ "fs_snap_id INTEGER);\n")
Why does a fs snapshot need to exist for a fs_clone to exist?
Since fs_clone() has a 'snapshot' argument.
I will remove it, as we don't have a query method for this either.
Post by Tony Asleson
Post by Gris Ge
+ "exp_path TEXT UNIQUE, "
+ "auth_type TEXT, "
+ "root_hosts TEXT, " # divided by ','
+ "rw_hosts TEXT, " # divided by ','
+ "ro_hosts TEXT, " # divided by ','
This is a violation of first normal form. These should be broken out
into a separate table or tables.
Will do. More code for a simple list.
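(A rough sketch of one way to break the lists out; the table and
column names here are made up.)

    sql_cmd = (
        "CREATE TABLE exp_hosts ("
        "id INTEGER PRIMARY KEY, "
        "exp_id INTEGER NOT NULL, "
        "host TEXT NOT NULL, "
        "access TEXT NOT NULL, "  # 'root', 'rw' or 'ro'
        "FOREIGN KEY(exp_id) REFERENCES exps(id));\n")
    # One row per (export, host, access) pair instead of a ','-joined
    # string column.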
Post by Tony Asleson
Post by Gris Ge
+ "anon_uid INTEGER, "
+ "anon_gid INTEGER, "
+ "options TEXT);\n")
+
+ sql_cmd += (
+ "CREATE TABLE jobs ("
+ "id INTEGER PRIMARY KEY, "
+ "duration INTEGER, "
+ "timestamp TEXT, "
+ "data_type INTEGER, "
+ "data_id TEXT);\n")
In addition, please add NOT NULL to anything that isn't a
primary/foreign key.
Will do that for the foreign keys.
The primary key will be automatically generated if NULL is provided.
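(That is standard sqlite3 behaviour: an INTEGER PRIMARY KEY column
assigns the next rowid when NULL is inserted, e.g.:)

    cur.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, "
                "duration INTEGER NOT NULL)")
    cur.execute("INSERT INTO jobs (id, duration) VALUES (NULL, ?)", (10,))
    print(cur.lastrowid)  # the auto-generated id, 1 for the first row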
Post by Tony Asleson
Post by Gris Ge
- self.vol_dict[src_vol_id]['replicate'][dst_vol_id] = list()
+ # TODO: If a volume is in a replication relationship, resize should
+ # be handled properly.
Should this even be allowed?
Not sure. We don't have clear documentation about
replication/clone/snapshot yet.
Post by Tony Asleson
Post by Gris Ge
+
+ # 0. Should we break replication if provided volume is a
+ # replication target?
+ # Assuming answer is no.
This method should never change the state of the array. It's a pure
reporting method only. The remove dependency operation would be the one
to change the state of the system.
I moved it from volume_child_dependency_rm() to discuss
'child_dependency' here.
Post by Tony Asleson
Post by Gris Ge
+ # "Implies that this volume cannot be deleted or possibly
+ # modified because it would affect its children"
+ # The 'modify' here is incorrect. If data on source volume
+ # changes, SYNC_MIRROR replication will change all target
+ # volumes.
In this case the modify was in regard to the metadata about the volume,
not the volume data contents, e.g. you resize the parent.
Post by Gris Ge
+ # 2. Should 'mask' relationship included?
+ # # Assuming only replication counts here.
Correct
Post by Gris Ge
+ # 3. For volume internal block replication, should we return
+ # True or False.
+ # # Assuming False
Can you elaborate on what you mean here?
In plugin_test's test_replication_range(), the source volume and target
volume are the same for volume_replicate_range().
I am not sure whether this counts as a child dependency.
If not,
volume_child_dependency_rm() should raise NO_STATE_CHANGE,
and then we have no method for removing this kind of replication.
If yes,
the volume_child_dependency() definition needs a rework.
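(A sketch of the 'assume False' reading, reusing the
dst_sim_vol_ids_of_src() helper from the patch; treating
self-replication as no dependency is exactly the assumption in
question.)

    def volume_child_dependency(self, vol_id):
        src_sim_vol_id = SimArray._lsm_id_to_sim_id(vol_id)
        dst_sim_vol_ids = self.bs_obj.dst_sim_vol_ids_of_src(
            src_sim_vol_id)
        # Internal block replication replicates a volume onto itself,
        # so it is assumed not to count as a child dependency.
        return any(dst_id != src_sim_vol_id
                   for dst_id in dst_sim_vol_ids)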

Given that we don't handle replication in the old code, let's focus
this patch on keeping the old behaviour working, and leave the API
discussion for another thread with use cases and code.

Thank you very much for the review.
--
Gris Ge