Gris Ge
2014-06-27 10:50:03 UTC
Hi All,
I would like to share some ideas about lsm tests.
Current issues:
* We have these tests:
tester.c # Testing LSM C binding.
plugin_test.c # Testing plugin
cmdtest.py # Testing lsmcli
* It's unclear whether they should be capability driven or simulator only.
The tester.c and cmdtest.py check information specific to the
simulator plugin, such as how many volumes a query should return,
or using lsm_test_pool for volume or fs creation, etc.
* We are putting all test cases into a single file as subroutines,
which is hard to maintain.
Needs:
1. Test with these interfaces:
* Python Binding
* C Binding
* lsmcli
2. Cover these types of test:
* Plugin test
# A plugin test focuses on validating how much a plugin supports.
# We use it to generate the hardware support page.
* Library test
# This kind of test can also run against a real plugin, but it is
# suggested to run it against the simulator plugin only.
1. Scalability test
# Example, querying 1000+ volumes.
2. Constants test.
# Comparing C binding constants with the Python binding
# (see the sketch after this section).
3. The output format of lsmcli.
4. Validate lsmcli command options.
As a plugin test also tests the library, there are only two types of tests:
A. Library test
B. Plugin + Library test
3. Performance check.
# Comparing performance against provided baseline performance data,
# in case any patch introduces a performance regression.
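For the constants test, one possible approach is sketched below. This is
only a sketch: the header path and the LSM_CAP_ prefix are my assumptions
about the C binding layout.

    # Sketch: compare C binding capability constants with the Python binding.
    import re
    import lsm

    # Assumed location of the C capability enum; adjust as needed.
    C_HEADER = '/usr/include/libstoragemgmt/libstoragemgmt_capabilities.h'

    def c_capabilities(header=C_HEADER):
        """Parse lines like 'LSM_CAP_VOLUMES = 20,' into {'VOLUMES': 20}."""
        caps = {}
        pattern = re.compile(r'LSM_CAP_([A-Z0-9_]+)\s*=\s*(\d+)')
        with open(header) as f:
            for match in pattern.finditer(f.read()):
                caps[match.group(1)] = int(match.group(2))
        return caps

    def compare_constants():
        """Return (name, c_value, py_value) tuples that do not match."""
        mismatches = []
        for name, c_value in c_capabilities().items():
            py_value = getattr(lsm.Capabilities, name, None)
            if py_value != c_value:
                mismatches.append((name, c_value, py_value))
        return mismatches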
Possible solution:
Basic rule:
1. All test cases can be run against any URI; a test case may be
skipped if its capability check fails.
2. Simulate what a library user would do.
3. Use-case driven: test case code can also serve as sample code.
4. Test cases can run directly without lsm_test.py assistance.
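To illustrate basic rules 2-4, here is a rough sketch of what a standalone
Python test case could look like. The lsm.Client() and systems() calls
follow the current Python binding; the option names are only an assumption,
and the no-argument metadata output described later in this mail is left out.

    # Sketch: py_tests/001_query_sys.py -- runnable on its own, exit 0 == PASS.
    import argparse
    import sys
    import lsm

    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument('-u', dest='uri')
        parser.add_argument('-p', dest='password', default=None)
        parser.add_argument('-s', dest='test_set', default='library')
        args = parser.parse_args()

        # Connect the same way a library user would.
        client = lsm.Client(args.uri, args.password)
        systems = client.systems()
        if not systems:
            sys.stderr.write('No system found\n')
            return 1    # FAIL
        for system in systems:
            print(system.id, system.name)
        return 0        # PASS

    if __name__ == '__main__':
        sys.exit(main())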
Code layout:
test/lsm_test.py
test/c_tests/001_query_sys.c
# Explain the test procedure
test/py_tests/001_query_sys.py
test/lsmcli_tests/001_query_sys.py
lsm_test.py \
-u uri [-p pass] [-p performance_file] [-t test_case] [-s test_set]
[-g performance_file] [-l log_folder]
# If 'test_set' == 'plugin', only plugin tests will be called.
# With '-p performance_file', performance will be checked and
# ranked.
# The '-g performance_file' option generates performance data for
# the next comparison.
* Invoking test cases
* Based on test_set, choose which test cases to run.
# Every executable file in c_tests, py_tests and lsmcli_tests
# whose name matches ^[0-9]{3}_ will be considered a test case.
# With that we don't need to maintain a config file listing the
# test cases (see the discovery sketch after this list).
* Each test case will be invoked with:
-u uri [-p pass] [-s test_set]
# A test case can skip complex library validation if '-s plugin'
# is defined, requesting a plugin test only.
* Generate a fancy report.
* Gather debug logs from lsmd and the plugin.
# Currently, plugins do not have debug logs yet.
* Basically, for the same test number, c_tests, py_tests and
lsmcli_tests follow the same test procedure. With that we can
ensure users can use any interface we support to meet their
needs.
* If possible, let lsm_test.py start lsmd and the plugin, which
could allow memory leak checks and crash dumps.
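Here is a minimal sketch of the discovery and invocation logic above. The
directory names and the ^[0-9]{3}_ rule come from this proposal; everything
else is an assumption.

    # Sketch: discover executables named ^[0-9]{3}_ and invoke them.
    import os
    import re
    import subprocess

    TEST_DIRS = ('c_tests', 'py_tests', 'lsmcli_tests')
    NAME_RE = re.compile(r'^[0-9]{3}_')

    def find_test_cases(base='test'):
        cases = []
        for sub in TEST_DIRS:
            folder = os.path.join(base, sub)
            if not os.path.isdir(folder):
                continue
            for name in sorted(os.listdir(folder)):
                path = os.path.join(folder, name)
                if NAME_RE.match(name) and os.access(path, os.X_OK):
                    cases.append(path)
        return cases

    def run_test_case(path, uri, password=None, test_set=None):
        cmd = [path, '-u', uri]
        if password:
            cmd += ['-p', password]
        if test_set:
            cmd += ['-s', test_set]
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        return proc.returncode, out, err    # 0 == PASS, 1 == FAIL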
Performance file:
Format:
URI=sim:// # First line for URI
#[test_name] [time_in_seconds]
c_tests/001_query 25
Rank:
lsm_test.py will calculate a performance percentage.
Let's assume c_tests/001_query takes 30 seconds:
+20%(30/25) # Bad performance
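A possible sketch for reading the performance file and ranking a result;
the parsing and output format below are my assumptions based on the format
above.

    # Sketch: load baseline timings and rank a new measurement against them.
    def load_performance_file(path):
        uri = None
        baseline = {}
        with open(path) as f:
            for raw in f:
                line = raw.split('#', 1)[0].strip()    # drop trailing comments
                if not line:
                    continue
                if uri is None and line.startswith('URI='):
                    uri = line[len('URI='):]
                    continue
                name, seconds = line.split()
                baseline[name] = float(seconds)
        return uri, baseline

    def rank(test_name, seconds, baseline):
        """Return e.g. '+20%(30/25)' for 30 seconds against a 25 second baseline."""
        old = baseline.get(test_name)
        if old is None:
            return 'N/A'
        pct = (seconds / old - 1) * 100
        return '%+.0f%%(%g/%g)' % (pct, seconds, old)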
How does lsm_test.py check whether a certain test case can be run:
* Invoking the test case file without parameters will produce this
kind of output text:
# Line 1: Type # Library or Plugin
# Line 2: Capabilities needed for this test.
Example (volume creating test):
1 2
# 1 for lsm.Test.TEST_TYPE_LIBRARY
# 2 for lsm.Test.TEST_TYPE_PLUGIN
20 21 33
# 20 is lsm.Capabilities.VOLUMES
# 21 is lsm.Capabilities.VOLUME_CREATE
# 33 is lsm.Capabilities.VOLUME_DELETE
* If the requested capabilities are not supported, the test case will be skipped.
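A sketch of the capability check on the lsm_test.py side. The two-line
metadata format is the one proposed above; I am assuming that
lsm.Client.capabilities() and Capabilities.supported() behave as in the
current Python binding.

    # Sketch: decide whether a test case can run against the given URI.
    import subprocess
    import lsm

    def test_case_metadata(path):
        """Run the test case with no arguments and parse its two metadata lines."""
        out = subprocess.check_output([path]).decode()
        lines = out.splitlines()
        test_types = [int(x) for x in lines[0].split()]
        needed_caps = [int(x) for x in lines[1].split()]
        return test_types, needed_caps

    def can_run(path, uri, password=None):
        _, needed_caps = test_case_metadata(path)
        client = lsm.Client(uri, password)
        for system in client.systems():
            cap = client.capabilities(system)
            if all(cap.supported(c) for c in needed_caps):
                return True
        return False    # capability check failed -> skip, not FAIL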
How does lsm_test.py represent a test result:
* Based on the return code of the test case:
0 PASS
1 FAIL # Bug spotted.
# As lsm_test.py has already checked the capabilities before
# invoking a test case, a NO_SUPPORT error found during the
# test should be treated as FAIL.
Report sample:
URI: sim://
LSM Version: 0.1.0
# test case name # Result # Performance (if provided)
c_tests/001_query PASS +20%(30/25)
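A sketch of mapping return codes to results and formatting one report line;
the column widths are an assumption.

    # Sketch: map return codes to results and format one report line.
    RESULT_BY_RC = {0: 'PASS', 1: 'FAIL'}

    def report_line(test_name, return_code, performance=None):
        result = RESULT_BY_RC.get(return_code, 'FAIL')
        line = '%-30s %-6s' % (test_name, result)
        if performance is not None:
            line += ' %s' % performance
        return line

    # Example: report_line('c_tests/001_query', 0, '+20%(30/25)')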
How to handle logs:
All test cases use STDOUT and STDERR; lsm_test.py captures them
by saving them into the PWD folder (if -l log_folder is not defined) as:
lsm_test_log/test_result.log
lsm_test_log/<test_name>_<time_stamp>.log
lsm_test_log/<test_name>_<time_stamp>.err
lsm_test_log/syslog.log
# Need a way to parse or capture syslog from lsmd and
# plugins.
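A sketch of the log saving described above; the file naming follows the
proposal and the timestamp format is an assumption.

    # Sketch: save a test case's STDOUT/STDERR under the log folder.
    import os
    import time

    def save_logs(log_folder, test_name, stdout_text, stderr_text):
        folder = os.path.join(log_folder or os.getcwd(), 'lsm_test_log')
        if not os.path.isdir(folder):
            os.makedirs(folder)
        stamp = time.strftime('%Y%m%d_%H%M%S')
        base = '%s_%s' % (test_name.replace('/', '_'), stamp)
        with open(os.path.join(folder, base + '.log'), 'w') as f:
            f.write(stdout_text)
        with open(os.path.join(folder, base + '.err'), 'w') as f:
            f.write(stderr_text)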
Any comments or suggestions would be appreciated.
Thank you in advance.
Best regards.
--
Gris Ge