2. Guides¶
2.1. Configuring the TCF client¶
The client looks for configuration files in multiple places.
2.1.1. Configuring access to ttbd servers¶
Each server you have access to is described by its URL
https://HOSTNAME.DOMAIN:PORT, which can be passed to tcf --url URL
<COMMAND>
or more conveniently, set in a configuration file with
tcfl.config.url_add()
:
tcfl.config.url_add('https://HOSTNAME.DOMAIN:PORT', ssl_ignore = True)
Optionally, an aka = NAME argument can be added to give the server a nickname (which defaults to HOSTNAME).
In multiple-instance deployments (infrastructure/production/staging), most users only need access to the production server, so the following AKAs are recommended:
- HOSTNAME: production
- HOSTNAMEi: infrastructure
- HOSTNAMEs: staging
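A complete client configuration file for such a deployment could look like the sketch below; the file path, hostnames and port are placeholders, and only tcfl.config.url_add() and its aka argument come from the text above:

```python
# ~/.tcf/conf_servers.py (hypothetical path and hostnames)
import tcfl.config

# production instance: the AKA defaults to the hostname
tcfl.config.url_add('https://HOSTNAME.DOMAIN:5000', ssl_ignore = True)
# infrastructure and staging instances, with the recommended AKAs
tcfl.config.url_add('https://infra.DOMAIN:5000', ssl_ignore = True,
                    aka = 'HOSTNAMEi')
tcfl.config.url_add('https://staging.DOMAIN:5000', ssl_ignore = True,
                    aka = 'HOSTNAMEs')
```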
2.1.2. Configuring for Zephyr OS development¶
Note
the tcf-zephyr RPM already provides these settings for ZEPHYR_SDK_INSTALL_DIR, ZEPHYR_TOOLCHAIN_VARIANT (and ZEPHYR_GCC_VARIANT for < v1.11 versions of Zephyr) in /etc/tcf/config_zephyr.py, as well as an RPM installation of the Zephyr SDK.
To work with Zephyr OS applications without having to set the environment, a TCF configuration file conf_zephyr.py can be created with these settings:
# Set Zephyr's build environment (use .setdefault() to inherit
# existing values if present)
import os
os.environ.setdefault('ZEPHYR_TOOLCHAIN_VARIANT', 'zephyr')
os.environ.setdefault('ZEPHYR_SDK_INSTALL_DIR',
os.path.expanduser('/opt/zephyr-sdk-0.9.5'))
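Because the configuration uses .setdefault(), a value the user has already exported wins over the configured default; a quick stdlib-only illustration:

```python
import os

# setdefault() only writes the value when the variable is not already set
os.environ.pop('ZEPHYR_TOOLCHAIN_VARIANT', None)
os.environ.setdefault('ZEPHYR_TOOLCHAIN_VARIANT', 'zephyr')
assert os.environ['ZEPHYR_TOOLCHAIN_VARIANT'] == 'zephyr'

# an already-exported value is left untouched
os.environ['ZEPHYR_SDK_INSTALL_DIR'] = '/opt/my-sdk'
os.environ.setdefault('ZEPHYR_SDK_INSTALL_DIR', '/opt/zephyr-sdk-0.9.5')
assert os.environ['ZEPHYR_SDK_INSTALL_DIR'] == '/opt/my-sdk'
```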
2.1.3. Configuring for Arduino Sketch development¶
Note
installing the tcf-sketch RPM package will bring in dependencies to build Arduino sketches that can be deployed in MCUs (such as Arduino Builder v1.6.13).
The corresponding board support packages need to be manually set up on the system using the Arduino IDE, in a location accessible to all users who will need them:
As your user, start the Arduino IDE and install the support packages for the boards you will build for; in this case we only do the Arduino Due and the Arduino 101:
- In the menu, select Tools > Board (ANY) > Boards Manager
- Search for Arduino Due, Intel Curie Boards (for Arduino 101) or any other boards you need support for
- Install
Packages appear in ~/.arduino15/packages
Any other user that needs access to those board definitions has to repeat those steps or copy those files. For example, for an autobuilder such as Jenkins, those files would have to be copied to the build slaves.
Ensure the targets are configured to expose Sketch information by declaring a tag:
- sketch_fqbn: sam:1.6.9:arduino_due_x_dbg for Arduino Due
2.1.4. Other configuration settings¶
Ignoring directory names when scanning for test cases: the scanner can be configured to ignore directories by name; for example, it can be told to skip any directory called docANYTHING.
2.2. Running test cases with tcf run¶
tcf run builds and runs one or more testcases in one or more targets (or in none if the testcase does not require any):
$ tcf run
will recursively look for testcases from the current working directory and try to run them in as many targets as possible. The scanner will look for files that describe testcases:
- test*.py testcases written in Python
- testcase.ini Zephyr Sanity Check test cases
It can also be pointed to one or more files or directories:
$ tcf run ../test1.py sub/dir/1 bat/file/testcase.ini
For each testcase, if it needs targets, it evaluates which ones are available (from the configured *ttbd* servers, filtered with -t command line options and any further requirements the testcase might impose); then it decides on how many it has to run based on:
- are we asking to run on any target, one of each type, or all
- multiple random permutations of targets doing different roles that satisfy the testcase’s specifications
Each testcase and unique target (or group of targets) where it is going to be run is assigned a unique 4 letter identifier (called HASH) which is used to prefix all the messages regarding to it. This is useful to grep in long logs of multiple testcases and targets.
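As a hypothetical sketch of how such a stable short identifier could be derived (the exact scheme TCF uses is not specified here), hash the testcase name plus the target group and truncate:

```python
import hashlib
import base64

def make_hash(testcase_name, target_group, length = 4):
    # hash the testcase/target-group combination and keep a short,
    # grep-friendly lowercase prefix
    digest = hashlib.sha256(
        ("%s|%s" % (testcase_name, target_group)).encode('utf-8')).digest()
    return base64.b32encode(digest)[:length].decode('ascii').lower()

h = make_hash("examples/test_file_exists.py#_test", "local")
assert len(h) == 4
# same inputs always yield the same identifier, across runs
assert h == make_hash("examples/test_file_exists.py#_test", "local")
```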
Consider a simple testcase that checks if there is a file called testfile in the current working directory (and thus requires no targets):
#! /usr/bin/python2
#
# Copyright (c) 2017 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# pylint: disable = missing-docstring
import os
import tcfl.tc
@tcfl.tc.tags(ignore_example = True)
class _test(tcfl.tc.tc_c):
def eval(self):
filename = "testfile"
if os.path.exists(filename):
self.report_info("file '%s': exists! test passes" % filename)
else:
raise tcfl.tc.failed_e("file '%s': does not exist" % filename)
we run it:
$ tcf run /usr/share/tcf/examples/test_file_exists.py
FAIL0/2kdb /usr/share/tcf/examples/test_file_exists.py#_test @local: evaluation failed
FAIL0/ toplevel @local: 1 tests (0 passed, 1 failed, 0 blocked, 0 skipped) - failed
/tmp/tcf-Dw7AGw.mk:2: recipe for target 'tcf-jobserver-run' failed
make: *** [tcf-jobserver-run] Error 1
You can ignore the messages from make; they just indicate tcf returned with an error code because a testcase failed, as the file testfile doesn't exist. If you add more -v:
$ tcf run -vv /usr/share/tcf/examples/test_file_exists.py
INFO2/ toplevel @local: scanning for test cases
INFO2/2kdb /usr/share/tcf/examples/test_file_exists.py#_test @local: will run on target group 'local'
FAIL2/2kdbE#1 /usr/share/tcf/examples/test_file_exists.py#_test @local: eval failed: file 'testfile': does not exist
FAIL0/2kdb /usr/share/tcf/examples/test_file_exists.py#_test @local: evaluation failed
FAIL0/ toplevel @local: 1 tests (0 passed, 1 failed, 0 blocked, 0 skipped) - failed
tcf is shy by default; it will only print that something has failed. See below for a more detailed description of the output.
Note the *HASH*, in this case 2kdb, which uniquely identifies the testcase/local-target combination. If the testcase fails, a report-HASH.txt file is created with failure information and instructions for reproduction.
Let’s create testfile, so the test passes:
$ touch testfile
$ tcf run /usr/share/tcf/examples/test_file_exists.py
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
If some error happens while running the testcase (network connection failure, or a bug in the testcase code), the testcase will be blocked; to diagnose why, add -v's or look at the logfile (if --log-file was given) for all the details. Let's copy the example and introduce a Python error:
$ cp /usr/share/tcf/examples/test_file_exists.py .
$ echo error >> test_file_exists.py
$ tcf run test_file_exists.py
BLCK0/ test_file_exists.py @local: blocked: Cannot import: name 'error' is not defined (NameError)
E tc.testcases_discover():4484: WARNING! No testcases found
BLCK0/ toplevel @local: 0 tests (0 passed, 0 failed, 0 blocked, 0 skipped) - / nothing ran
2.2.1. Running Zephyr OS testcases and samples¶
Because we installed the tcf-zephyr package, the dependencies needed to run Zephyr OS testcases and samples are already in place; let's run Zephyr OS's Hello World sample:
$ git clone http://github.com/zephyrproject-rtos/zephyr
$ cd zephyr
$ export ZEPHYR_BASE=$PWD
$ tcf run /usr/share/tcf/examples/test_zephyr_hello_world.py
PASS0/ toplevel @local: 7 tests (7 passed, 0 failed, 0 blocked, 0 skipped) - passed
This will now build the Zephyr OS Hello World sample app for as many different Zephyr OS capable targets as it can find, try to run it there and verify it got the "Hello World!" string back. Note this might involve a lot of compilation depending on how many targets you can access, and it will take more or less time depending on your machine's power.
2.2.2. Options to tcf run¶
There are many options to tcf run, which you can find with tcf run --help; here is a summary of the most frequent ones:
- -v Increases the verbosity of the console output, can be repeated for more information
- -i RUNID adds a RunID, which will be prefixed to most messages and reports; it will also generate a logfile called RUNID.log with lots of low-level details about the process. Failure reports will be created in files called report-RUNID:HASH.txt, where HASH is the code that uniquely identifies the test case name and the target where it ran. This is very useful when running tcf from a continuous integration engine, such as Jenkins, to identify the reports and runs.
- --log-file LOGFILE file to which to dump the log
- --logdir LOGDIR directory where to place the log and failure report files.
- -y just run once on any target that is available and suitable; there is also -U to run only in one target of each type and -u to run on every target, plus a more detailed explanation here.
- -t allows tcf run to filter which targets are available to select when determining where to run; see specifications.
- -s allows tcf run to filter which testcases are to be run; see specifications:
$ tcf run -s slow
This selects all testcases that have a slow tag; to select testcases that don’t sport the slow tag:
$ tcf run -s "not slow"
you can also select a tag by value:
$ tcf run -s 'slow == "very"'
2.2.3. Target and tag filter specifications¶
TCF incorporates a simple expression language parser that allows expressing boolean filters to select targets and tags programmatically, such as:
tag == 'value' and bsp in [ 'x86', 'arm' ]
and is used by tcf run’s -t and -s options, to select targets
and testcases, respectively. As well, testcases can use in the
tcfl.tc.target()
and tcfl.tc.interconnect()
decorators to
select tags.
The grammar is formally defined in commonl.expr_parser, but in general, valid expressions are:
- symbol
- symbol operator CONSTANT
- symbol in [ CONSTANT1, CONSTANT2 … ]
- [not] ( [not] expression1 and|or [not] expression2 )
- operators are and, or, not, ==, !=, <, >, >= and <=
- Python regex matching can be done with the : operator
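For simple expressions (excluding the : regex operator, which has no Python equivalent), the behavior of such filters can be approximated with Python's own expression evaluation over a dict of tags. This is an illustration of the concept only; the real implementation is commonl.expr_parser and its grammar is not Python:

```python
class _tags(dict):
    def __missing__(self, key):
        # an unknown symbol evaluates as False, like an absent tag
        return False

def matches(expr, tags):
    # evaluate the expression with the tags as the only visible names
    return bool(eval(expr, {"__builtins__": {}}, _tags(tags)))

tags = {"bsp": "arm", "zephyr_board": "qemu_cortex_m3", "z3": True}
assert matches("bsp == 'arm'", tags)
assert matches("zephyr_board in ['qemu_cortex_m3', 'frdm_k64f']", tags)
assert matches("z3 or z1", tags)
assert not matches("bsp == 'x86'", tags)
```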
For targets, the target's name and full ID are made symbols that evaluate as True; the active BSP model and (if any) the active BSP are also made symbols. So, given a target named z3 with the following tags:
$ tcf list -vvv z3
https://SERVER:5000/ttb-v0/targets/z3
...
fullid: SERVER/z3
id: z3
consoles: [u'arm']
owner: None
bsps: {
u'arm': {
u'console': u'arm',
u'zephyr_board': u'qemu_cortex_m3',
u'zephyr_kernelname': u'zephyr.elf',
}
}
....
The following expressions could be used to match it:
- ``bsp == 'arm'``
- ``z3 or z1``
- ``zephyr_board in [ 'qemu_cortex_m3', 'frdm_k64f' ]``
The same system applies for tags: the tag itself is a symbol that evaluates to true if set, and its contents are available for matching.
2.2.3.1. Examples of specifications¶
These examples can be passed to tcf run -t, or to tcfl.tc.target() and tcfl.tc.interconnect() in their spec parameter:
'type == "arduino101"'
filters to any target that declares its type is Arduino 101:
'bsp == "x86"'
any target (and BSP model of said target) that sports an x86 BSP; if the target supports multiple BSPs and BSP models, this will select all the BSP models that expose at least an x86 BSP:
'zephyr_board : "quark_.*"'
this selects any target that contains a BSP exposing a zephyr_board tag whose content matches the Python regex quark_.*; either of these:
'bsp == "x86" or bsp == "arm"'
'bsp in [ "x86", "arm" ]'
would run on any target that contains a BSP that declares itself as x86 or ARM:
'TARGETNAME'
would match a target called TARGETNAME (in any server):
'server/TARGETNAME'
would match TARGETNAME on server server:
'nwa or qz31a or ql06a'
This will allow the testcase to run only in network nwa and on targets qz31a and ql06a; this effectively limits the testcase to run only in permutations of targets that fit those constraints.
2.2.4. Interpreting the output of tcf run¶
In an output such as:
$ tcf run -vv -t local/qz39c-arm test_zephyr_hello_world.py
INFO2/ toplevel @local: scanning for test cases
INFO2/n9gc test_zephyr_hello_world.py#_test @local/qz39c-arm:arm: will run on target group 'xqkw (target=local/qz39c-arm:arm)'
PASS1/n9gc test_zephyr_hello_world.py#_test @local/qz39c-arm:arm: configure passed
PASS1/n9gc test_zephyr_hello_world.py#_test @local/qz39c-arm:arm: build passed
PASS2/n9gc test_zephyr_hello_world.py#_test @local/qz39c-arm:arm: deploy passed
INFO2/n9gcE#1 test_zephyr_hello_world.py#_test @local/qz39c-arm:arm: Reset
PASS2/n9gcE#1 test_zephyr_hello_world.py#_test @local/qz39c-arm:arm: found expected `Hello World! arm` in console `default` at 0.03s
PASS2/n9gcE#1 test_zephyr_hello_world.py#_test @local/qz39c-arm:arm: eval pass: found expected `Hello World! arm` in console `default` at 0.03s
PASS1/n9gc test_zephyr_hello_world.py#_test @local/qz39c-arm:arm: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
FIXME: verify
Note the columns and the messages:
- INFO, PASS, FAIL, BLCK: information, something passed, something failed, or an error/condition keeps the system from telling whether there is a PASS or a FAIL; each is followed by a number that indicates the depth of the message (0 most general, 1 more detailed, 2 more verbose, etc.)
- /XYZW[CBDEL]#NN, the message ID: this four character code uniquely identifies the test case name and, when it applies, the name of the target where it is being run along with its BSP model. It stays stable across different runs. The letters CBDEL describe which phase is running (Configure, Build, Deploy, Evaluation, cLean), followed by the step number as they are executed; each @build command in the markup description is a different step, as is each @eval markup for evaluation.
  What is this useful for? You can ask the system to generate a log file (using --log-file FILE.log) and let it print only the most high-level information to the console. The log has far more information than you would ever care for, but when something fails, grep for the message ID in the logfile; for example, if the build had failed, grep 33afB FILE.log would give you the full build output to determine what is wrong. It is also passed to the server, so we can identify what the target was doing when.
  Note: TCF also generates reports when something fails (look for report-MSGID.txt) with all that detailed information.
- Testcase name, target name and BSP model
- A message indicating what happened
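To illustrate the column layout, a line of tcf run output can be split apart with a regular expression (a sketch; the exact format may vary between TCF versions):

```python
import re

# a line taken from the tcf run example output above
line = ("FAIL2/2kdbE#1 /usr/share/tcf/examples/test_file_exists.py#_test "
        "@local: eval failed: file 'testfile': does not exist")

# result+depth / message-ID, testcase, @target, message
m = re.match(r"(?P<result>[A-Z]+)(?P<depth>\d+)/(?P<msgid>\S*)\s+"
             r"(?P<testcase>\S+)\s+@(?P<target>[^:]+): (?P<message>.*)", line)
assert m.group("result") == "FAIL"
assert m.group("depth") == "2"
assert m.group("msgid") == "2kdbE#1"
assert m.group("target") == "local"
```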
2.3. Creating testcases¶
Most of the testcases use the APIs provided with tcfl.tc.tc_c
and tcfl.tc.target_c
, plus any other Python module library.
Going back to the very simple testcase used here:
#! /usr/bin/python2
#
# Copyright (c) 2017 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# pylint: disable = missing-docstring
import os
import tcfl.tc
@tcfl.tc.tags(ignore_example = True)
class _test(tcfl.tc.tc_c):
def eval(self):
filename = "testfile"
if os.path.exists(filename):
self.report_info("file '%s': exists! test passes" % filename)
else:
raise tcfl.tc.failed_e("file '%s': does not exist" % filename)
this is a testcase that just checks for a file existing in the local
directory. It inherits from tcfl.tc.tc_c
to create a
testcase:
>>> class _test(tcfl.tc.tc_c):
this provides the basic glue to the meta test runner. The class can be named whatever suits your needs.
Note this testcase declares no targets; it is a static testcase, which evaluates by running on the local system with an eval() function:
>>> def eval(self):
>>> filename = "testfile"
>>> if os.path.exists(filename):
>>> self.report_info("file '%s': exists! test passes" % filename)
>>> else:
>>> raise tcfl.tc.failed_e("file '%s': does not exist" % filename)
Multiple evaluation functions may be specified, as long as their names start with eval (for example, eval_2). You could add:
>>> def eval_2(self):
>>> self.shcmd_local("cat /etc/passwd")
this would use tcfl.tc.tc_c.shcmd_local()
to run a system
command. If it fails, it will raise a tcfl.tc.failed_e
exception that will fail the testcase. When running again, it will run
both functions in alphabetical order. For the testcase to pass, both
functions have to pass.
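The alphabetical ordering is worth keeping in mind because it is string ordering, not numeric: eval_10 would run before eval_2. A stdlib-only sketch of how such methods could be collected (hypothetical; the real tcfl discovery logic is more involved):

```python
class _test:
    # evaluation methods, gathered by name prefix
    def eval(self):
        pass
    def eval_2(self):
        pass
    def eval_10(self):
        pass

# run order is alphabetical on the method name
names = sorted(name for name in dir(_test) if name.startswith("eval"))
# string ordering puts eval_10 before eval_2
assert names == ["eval", "eval_10", "eval_2"]
```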
Running the modified version:
$ cp /usr/share/tcf/examples/test_file_exists.py .
# Edit test_file_exists.py to add eval_2
$ tcf run -vv test_file_exists.py
INFO2/ toplevel @local: scanning for test cases
INFO2/ryvi test_file_exists.py#_test @local: will run on target group 'local'
FAIL2/ryviE#1 test_file_exists.py#_test @local: eval failed: file 'testfile': does not exist
FAIL0/ryvi test_file_exists.py#_test @local: evaluation failed
FAIL0/ toplevel @local: 1 tests (0 passed, 1 failed, 0 blocked, 0 skipped) - failed
Note
ignore git errors/warnings; they are harmless and a known issue.
It fails because the file testfile does not exist; eval() comes before eval_2() in alphabetical order, so it is run first. As soon as it fails, testcase execution is terminated, so eval_2() never gets to run.
Create testfile in the local directory and re-run it, so eval() passes and eval_2() also runs:
$ touch testfile
INFO2/ toplevel @local: scanning for test cases
INFO2/ryvi test_file_exists.py#_test @local: will run on target group 'local'
INFO2/ryviE#1 test_file_exists.py#_test @local: file 'testfile': exists! test passes
PASS2/ryviE#1 test_file_exists.py#_test @local: eval passed: 'cat /etc/passwd' @test_file_exists.py:14
PASS1/ryvi test_file_exists.py#_test @local: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
That E#1? Those are messages relative to the Evaluation phase; the number is the index of the evaluation. We can ask tcf run to repeat the evaluation two times by adding -r 2.
2.3.1. Zephyr OS’s Hello World!¶
Let's move on now to testcases that use targets, using the Zephyr OS as a test subject. Ensure the tcf-zephyr package was installed (with dnf install -y tcf-zephyr) and clone the Zephyr OS (or use an existing cloned tree):
# dnf install -y tcf-zephyr # If not yet installed
$ git clone http://github.com/zephyrproject-rtos/zephyr
$ cd zephyr
$ export ZEPHYR_BASE=$PWD
This is a very simple test case that checks that the target where it runs prints Hello World!:
#! /usr/bin/python2
#
# Copyright (c) 2017 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
#
# We don't care for documenting all the interfaces, names should be
# self-descriptive:
#
# - pylint: disable = missing-docstring
import os
import tcfl.tc
import tcfl.tl
@tcfl.tc.tags(**tcfl.tl.zephyr_tags())
# Ask for a target that defines an zephyr_board field, which indicates
# it can run the Zephyr OS
@tcfl.tc.target("zephyr_board",
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
"samples", "hello_world"))
class _test(tcfl.tc.tc_c):
@staticmethod
def eval(target):
target.expect("Hello World! %s" % target.kws['zephyr_board'])
Running it on whichever target is suitable (-y):
$ cp /usr/share/tcf/examples/test_zephyr_hello_world.py .
$ tcf run -vvy test_zephyr_hello_world.py
INFO2/ toplevel @local: scanning for test cases
INFO2/9orv test_zephyr_hello_world.py#_test @server2/arduino2-01:arm: will run on target group 'ixgq (target=server2/arduino2-01:arm)'
PASS2/9orv test_zephyr_hello_world.py#_test @server2/arduino2-01:arm: configure passed
PASS1/9orv test_zephyr_hello_world.py#_test @server2/arduino2-01:arm: build passed
PASS2/9orv test_zephyr_hello_world.py#_test @server2/arduino2-01:arm: deploy passed
INFO2/9orvE#1 test_zephyr_hello_world.py#_test @server2/arduino2-01:arm: Reset
PASS2/9orvE#1 test_zephyr_hello_world.py#_test @server2/arduino2-01:arm: found expected `Hello World! arm` in console `default` at 1.23s
PASS2/9orvE#1 test_zephyr_hello_world.py#_test @server2/arduino2-01:arm: eval pass: found expected `Hello World! arm` in console `default` at 1.23s
PASS1/9orv test_zephyr_hello_world.py#_test @server2/arduino2-01:arm: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
This testcase:
declares it needs a target on which to run with the tcfl.tc.target() class decorator, which by default will be called target:
>>> @tcfl.tc.target("zephyr_board",
>>>                 app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
>>>                                           "samples/hello_world"))
the target will need to satisfy the specification given in the first parameter (spec) which requires it exposes a tag zephyr_board (which indicates it can run Zephyr Apps and gives the name of board for the Zephyr’s build system with the BOARD parameter).
the target will be loaded with a Zephyr app which is available in $ZEPHYR_BASE/samples/hello_world. Note how tcfl.tl.ZEPHYR_BASE is used instead of pulling the value straight from the environment with os.environ['ZEPHYR_BASE']; this makes it easier to tell when the testcase is being ignored because ZEPHYR_BASE is not defined.
app_zephyr is a plugin that tells tcf how to build applications for different environments, how to load them into targets and how to start the targets.
creates an evaluation function to be run during the evaluation phase:
>>> def eval(self, target):
>>>     target.expect("Hello World! %s" % target.bsp_model)
This function is passed arguments that represent the targets it has to operate on. Because we only declared a single target with tcfl.tc.target() and did not specify a name with the name argument, it defaults to target (if you pass an argument name that cannot be recognized, such as the name of a target that was not declared, it will error out).
For evaluating, we use tcfl.tc.target_c.expect(), which expects to receive in the target's console the string Hello World! BSPMODEL; the BSP model describes in which mode we are running the target when it has multiple cores/BSPs incorporated (such as the Arduino 101, which includes x86 and arc BSPs).
So for a target with an ARM BSP declared in its tags, it will expect to receive Hello World! arm; if the string is received, the function returns and the testcase is considered to pass. Otherwise, it raises the failure exception tcfl.tc.failed_e, which the test runner uses to mark the test case as a failure. If tcfl.tc.blocked_e (or any other exception) is raised, the meta test runner will consider the test blocked.
2.3.2. A test case with multiple targets¶
Consider the following made-up test case, where we have two Zephyr applications in two subdirectories of where the testcase file is located. They are simple (and fake) apps that allow one board to connect to another (for simplicity of argument, we'll omit how they connect).
Declare the need for two Zephyr OS capable targets that are interconnected and indicate where to go to build the sources for them:
>>> @tcfl.tc.interconnect()
>>> @tcfl.tc.target("zephyr_board", app_zephyr = "node1")
>>> @tcfl.tc.target("zephyr_board", app_zephyr = "node0")
>>> class _test(tcfl.tc.tc_c):
Set up some hooks: if, while receiving from the console on any target, a fatal fault is printed, fail the test (this will be evaluated when calling tcfl.tc.target_c.expect() or when the full testcase expect loop calls run() on tcfl.tc.tc_c.tls.expecter):
>>> def setup(self, target, target1):
>>> target.on_console_rx("FAILURE", result = 'fail')
>>> target1.on_console_rx("FAILURE", result = 'fail')
When the evaluation phase is about to start, power cycle the interconnect (to ensure it is fresh). Note we don't do that with the Zephyr targets, as the app_zephyr plugin has inserted start() functions to do it for us. Why? Because it knows how to start them better (some boards might lose the flashed image if we power cycle them). It is possible to override the default actions that app_zephyr (and other Application Builders) introduce.
>>> def start(self, ic):
>>>     ic.power.cycle()
Now we are going to wait for both targets to boot and report readiness
>>> def eval_0(self, target):
>>> target.expect("I am ready")
>>> target.report_pass("target ready")
>>>
>>> def eval_1(self, target1):
>>> target1.expect("I am ready")
>>> target1.report_pass("target1 ready")
Now we are going to do the actual connection test by requesting target to connect to target1; then target1 is told to accept the connection request. Note each target exposes its network address in a tag called address, so we can use tcfl.tc.target_c.kws to format messages:
>>> def eval_3(self, target, target1):
>>> target.send("connect to %(address)s" % target1.kws)
>>> target.expect("connecting to %(address)s" % target1.kws)
>>>
>>> target1.expect("connect request from %(address)s" % target.kws)
>>> target1.send("connect accept from %(address)s" % target.kws)
>>>
>>> target1.expect("accepted connect from %(address)s" % target.kws)
>>> target.expect("connected to %(address)s" % target1.kws)
Now we wait for both targets to print a heartbeat that they emit every five seconds–if we have to wait more than ten seconds for both heartbeats, it will consider it a failure:
>>> def eval_4(self, target, target1):
>>> target.on_console_rx(re.compile("heartbeat #[0-9]+ ok"))
>>> target1.on_console_rx(re.compile("heartbeat #[0-9]+ ok"))
>>> self.expecter.run(10)
The tcfl.tc.tc_c.tls.expecter is a generic expectation loop to which anything can be attached to poll and check while the loop runs. It will run until everything it has been asked to expect has occurred, or fail with a timeout.
It can also be used with context managers:
>>> def eval_5(self, target, target1):
>>>     with target.on_console_rx_cm(re.compile("heartbeat #[0-9]+ failure"),
>>>                                  result = "fail"):
>>>         target.send("do_some_lengthy_operation")
>>>         target1.send("do_some_lengthy_operation")
>>>         target.on_console_rx(re.compile("completed"))
>>>         target1.on_console_rx(re.compile("completed"))
>>>         self.expecter.run()
APIs tcfl.tc.target_c.expect()
and tcfl.tc.target_c.wait()
are an example of doing something similar to this.
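The general shape of such an expectation loop can be sketched with the standard library (an illustration of the concept only, not tcfl's implementation):

```python
import time

class expecter_c:
    """Poll registered check functions until all have succeeded
    or a timeout expires (conceptual sketch)."""
    def __init__(self, poll_period = 0.01):
        self.poll_period = poll_period
        self.checks = []

    def add(self, check_fn):
        # check_fn returns True once its expectation has occurred
        self.checks.append(check_fn)

    def run(self, timeout = 10):
        deadline = time.monotonic() + timeout
        pending = list(self.checks)
        while pending:
            # drop every expectation that has now occurred
            pending = [fn for fn in pending if not fn()]
            if not pending:
                break
            if time.monotonic() >= deadline:
                raise TimeoutError("%d expectation(s) not met"
                                   % len(pending))
            time.sleep(self.poll_period)

expecter = expecter_c(poll_period = 0.001)
hits = iter([False, True])          # succeeds on the second poll
expecter.add(lambda: next(hits))
expecter.add(lambda: True)
expecter.run(timeout = 1)           # returns once both have occurred
```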
With the test concluded, we power down all the targets in reverse order:
>>> def teardown(self):
>>>     for _n, target in reversed(list(self.targets.items())):
>>>         target.power.off()
Or not, it can also be left to ttbd to decide if they have to be powered down and when.
2.3.3. Connecting to network ports¶
A well designed target setup will have the test targets in an isolated network which the client cannot access remotely. However, using the IP tunnel extension, the client can access the target's network ports. This also makes it possible to establish SSH connections.
Consider this example:
>>> r = target.ssh.check_output("echo -n %s > somefile" % self.ts)
>>> if r == 0:
>>> self.report_pass("created a file with SSH command")
This is an excerpt of a longer example
that shows how to do different SSH
and SCP operations.
Tunnels are only valid while the target is acquired.
2.3.4. Network test cases¶
FIXME: this needs a good intro
2.3.5. Capturing tcpdumps of network traffic¶
When using the network setups controlled by the TCF server, ttbd, it is possible to capture the network traffic that the server sees for further analysis.
A well designed test network will interconnect one or more targets and also will include one server interface, which usually is associated to the interconnect target that defines said test network.
When the server uses conf_00_lib.vlan_pci to bring up the network (as done by configuration functions like conf_00_lib.nw_default_targets_add() to create networks), those network targets will have the tcpdump capability.
To use it, declare a test that uses an interconnect and, before powering up the interconnect (in any start() method), set the tcpdump property in the interconnect to a unique file name:
>>> @tcfl.tc.interconnect(spec = "ipv4_addr")
>>> @tcfl.tc.target()
>>> @tcfl.tc.target()
>>> class something(tcfl.tc.tc_c):
>>> ...
>>> def start(self, ic, target, target1):
>>> # Tell the interconnect we want to capture network data to a
>>> # file named after self.kws['tc_hash'] (to avoid conflicts)
>>> ic.property_set('tcpdump', self.kws['tc_hash'] + ".cap")
>>> ic.power.cycle()
>>> ...
Later on, in the teardown() methods, bring the data back from the server to a file in the current working directory called tcpdump-RUNID:HASHID.cap:
>>> def teardown(self, ic):
>>> ic.power.off() # ensure tcpdump flushes
>>> # Get the TCP dump from the interconnect to a file in the CWD
>>> # called tcpdump-HASH.cap
>>> ic.broker_files.dnload(self.kws['tc_hash'] + ".cap",
>>> "tcpdump-%(runid)s:%(tc_hash)s.cap" % self.kws)
From the command line, this would be:
$ tcf acquire NETWORKTARGET
$ tcf property-set NETWORKTARGET tcpdump CAPTURENAME
$ tcf power-cycle NETWORKTARGET
... do network operations ...
$ tcf power-off NETWORKTARGET
$ tcf broker-file-dnload NETWORKTARGET CAPTURENAME myfile.cap
$ tcf release NETWORKTARGET
now myfile.cap can be opened with Wireshark or processed with any other tool for further analysis.
2.3.6. Saving data and files to a location¶
Sometimes there is a need to keep files around for post-analysis; there are different ways to do this:
- provide --no-remove-tmpdir to tcf run; it will report the name of the temporary directory where all the temporary files are kept and will not delete it upon exit (as it does by default)
- provide --tmpdir=DIR, where DIR is an existing, empty directory where tcf run will place all the temporary files.
Whichever is the temporary directory (autogenerated or specified), the files are placed in subdirectories named after each test case run’s *HASH*.
If you need to create files (or copy files, etc.) from the testcase, use the tmpdir attribute to generate or copy files into the directory assigned to the testcase, for example:
with open(os.path.join(self.tmpdir, "somefile.orig"), "w") as f:
    f.write("original file")
target.ssh.copy_to(os.path.join(self.tmpdir, "somefile.orig"))
# ... do something on the target
target.ssh.copy_from("somefile", self.tmpdir)
# compare files
2.3.7. Connecting things¶
Some targets support things (other targets) that can be connected or disconnected. If they do, their tags will show:
$ tcf list -vv qlf04a
...
things: [u'a101-05', u'usb-key0']
...
How this is done is specific to the driver given in the configuration,
but it might be a target that is physically connected to another via a
USB cutter. The USB cutter is an object implementing a plugger
interface
which is given to the
configuration method ttbl.test_target.thing_add()
in the config
files.
- to find available things to connect, use tcf thing-list or, within a test script, tcfl.tc.target_c.thing_list()
- to connect, use tcf thing-plug or, within a test script, tcfl.tc.target_c.thing_plug()
- to disconnect, use tcf thing-unplug or, within a test script, tcfl.tc.target_c.thing_unplug()
Note you must own both the target and the thing to be able to plug one into another.
2.4. Testcase training¶
Before you get started, this training guide assumes:
- you have installed the client software
- you have access to a server (local or remote)
- you know your way around Linux, Git and building the Zephyr kernel
- you know some basics of Python
2.4.1. A basic testcase¶
#! /usr/bin/python2
import tcfl.tc
class _test(tcfl.tc.tc_c):
def eval(self):
self.report_info("Hello")
Inherit the basic class and create an evaluation method (where we run the test) that just reports a message:
$ tcf run test_01.py
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
- Quite succinct? Append one or more -v to tcf run.
- To increase a message's verbosity level, add dlevel = -1 or -2 to self.report_info().
- Why not use Python's print() instead of self.report_info()? print() sidesteps TCF's logging mechanism; if you want things logged and reported with proper verbosity control, you need to use TCF's reporting system.
As well, when running many testcases and targets at the same time, it helps to keep the information organized; more on that later.
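The idea behind dlevel can be sketched in a few lines (illustrative only; tcfl's reporting API has much more machinery):

```python
class reporter_c:
    def __init__(self, verbosity):
        # verbosity is how many -v were given on the command line
        self.verbosity = verbosity
        self.shown = []

    def report_info(self, message, dlevel = 0):
        # a message is shown only when its depth level is within the
        # requested verbosity; negative dlevel makes it more prominent
        if dlevel <= self.verbosity:
            self.shown.append(message)

r = reporter_c(verbosity = 1)       # as if run with a single -v
r.report_info("always shown", dlevel = 0)
r.report_info("shown with -v", dlevel = 1)
r.report_info("needs -vv", dlevel = 2)
r.report_info("promoted", dlevel = -1)
assert r.shown == ["always shown", "shown with -v", "promoted"]
```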
2.4.2. What’s in a testcase¶
- A testcase has six phases (which can be individually inhibited)
- configuration
- build
- assignment of targets
- deployment
- evaluation (subphases setup/start/teardown)
- cleanup
- Can be written in any language, as long as there is a driver to plug it into the framework (Python scripting driver by default; a driver for Zephyr Sanitycheck's testcase.ini is also available)
- Can need zero or more targets
- example of no targets: static checkers, checkpatch, things that can run locally
- one target: Zephyr Sanity Checks, some network testcases
- multiple targets: device driver I/O testcases, most network testcases
- Testcase is just a name: it can be used during development for fast flashing and whatever else suits you
2.4.3. Breaking it up¶
#! /usr/bin/python2
import tcfl.tc
class _test(tcfl.tc.tc_c):
def eval_00(self):
self.report_info("Hello 1")
def eval_01(self):
self.report_info("Hello 2")
- You can have multiple evaluation functions, as long as they are named eval_*()
- They are executed sequentially in alphabetical order
$ tcf run -vv test_02.py
INFO2/ toplevel @local: scanning for test cases
INFO1/gixj test_02.py#_test @local: will run on target group 'localic-localtg'
INFO2/gixjE#1 test_02.py#_test @local: Hello 1
INFO2/gixjE#1 test_02.py#_test @local: Hello 2
PASS1/gixj test_02.py#_test @local: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
2.4.4. Give me a target¶
#! /usr/bin/python2
import tcfl.tc
@tcfl.tc.target(mode = "any")
class _test(tcfl.tc.tc_c):
@staticmethod
def eval_00(target):
target.report_info("Hello 1")
@staticmethod
def eval_01(target):
target.report_info("Hello 2")
@tcfl.tc.target()
allows the testcase to request a target... or two, or seven hundred. By default they are called target, target1... but you can use name = "NAME" to give them your own names.
- The only arguments you can pass to the eval_*() methods are target names. Note how we pass target to eval_00() and eval_01() and use it to report; this will have an impact when using multiple targets.
- Targets on which to run a testcase can be selected in any of three ways (set with the mode = "MODE" parameter):
- any: pick one, any one will do
- all: pick all
- one-per-type: pick one of each type
$ tcf run -vv test_03.py
INFO2/ toplevel @local: scanning for test cases
INFO1/taqq test_03.py#_test @local/qz46a-riscv32:riscv32: will run on target group 'target=local/qz46a-riscv32:riscv32'
INFO2/taqqE#1 test_03.py#_test @local/qz46a-riscv32:riscv32: Hello 1
INFO2/taqqE#1 test_03.py#_test @local/qz46a-riscv32:riscv32: Hello 2
PASS1/taqq test_03.py#_test @local/qz46a-riscv32:riscv32: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
2.4.5. Give me a target that can run Zephyr¶
#! /usr/bin/python2
import tcfl.tc
@tcfl.tc.target("zephyr_board", mode = "any")
class _test(tcfl.tc.tc_c):
@staticmethod
def eval_00(target):
target.report_info("Hello 1")
@staticmethod
def eval_01(target):
target.report_info("Hello 2")
@tcfl.tc.target()
's first argument is a logical expression that can use the tags a target exports (which you can see with tcf list -vv TARGETNAME).
- zephyr_board means any target that exports a tag called zephyr_board with a value; that maps to the BOARD argument of the Zephyr build.
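Conceptually, selection amounts to evaluating the expression against each target's tag dictionary. A rough pure-Python approximation (TCF has its own expression parser; this sketch just uses eval() on trusted filter strings, with made-up inventory data):

```python
def target_matches(tags, expr):
    # Evaluate a trusted filter expression with the target's tags as
    # variables; a tag the target does not export makes the name
    # undefined, which we count as "does not match".
    try:
        return bool(eval(expr, {"__builtins__": {}}, dict(tags)))
    except NameError:
        return False

# Hypothetical targets and their exported tags
targets = {
    "qz39a-arm": {"zephyr_board": "qemu_cortex_m3", "bsp": "arm"},
    "qz32b-x86": {"zephyr_board": "qemu_x86", "bsp": "x86"},
    "nuc-01":    {"bsp": "x86"},   # no zephyr_board tag
}
expr = 'zephyr_board and bsp == "x86"'
print([name for name, tags in targets.items()
       if target_matches(tags, expr)])   # only qz32b-x86 matches
```

Note how the bare name zephyr_board is truthy only for targets that export that tag with a value, matching the behavior described above.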
$ tcf run -vv test_04.py
INFO2/ toplevel @local: scanning for test cases
INFO1/vzqp test_04.py#_test @local/qz39a-arm:arm: will run on target group 'target=local/qz39a-arm:arm'
INFO2/vzqpE#1 test_04.py#_test @local/qz39a-arm:arm: Hello 1
INFO2/vzqpE#1 test_04.py#_test @local/qz39a-arm:arm: Hello 2
PASS1/vzqp test_04.py#_test @local/qz39a-arm:arm: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
2.4.6. Give me a target that can run Zephyr and is x86¶
#! /usr/bin/python2
import tcfl.tc
@tcfl.tc.target('zephyr_board and bsp == "x86"')
class _test(tcfl.tc.tc_c):
@staticmethod
def eval_00(target):
target.report_info("Hello 1")
def eval_01(self):
self.report_info("Hello 2")
Many combinations are possible with logical expressions. They can get tricky, though; -vvvv will give you plenty of detail on the selection process, and the same expression can be passed to tcf list to figure out how it works.
Removing mode = "any" defaults to running on one target of each type, which might result in a lot of execution if you have many different types of targets.
$ tcf run -v test_05.py
INFO1/1qvi test_05.py#_test @srrsotc03/ti-01:x86+arc+arm/x86: will run on target group 'target=srrsotc03/ti-01:x86+arc+arm'
INFO1/lwq8 test_05.py#_test @srrsotc03/qc1000-01:x86+arc/x86: will run on target group 'target=srrsotc03/qc1000-01:x86+arc'
INFO1/dvj6 test_05.py#_test @jfsotc03/a101-16:x86: will run on target group 'target=jfsotc03/a101-16:x86'
INFO1/gjfa test_05.py#_test @jfsotc02/qz34l-x86:x86: will run on target group 'target=jfsotc02/qz34l-x86:x86'
INFO1/luzj test_05.py#_test @jfsotc03/qc1000-24:x86: will run on target group 'target=jfsotc03/qc1000-24:x86'
INFO1/nycw test_05.py#_test @jfsotc03/a101-16:x86+arc/x86: will run on target group 'target=jfsotc03/a101-16:x86+arc'
INFO1/ra1f test_05.py#_test @jfsotc02/mv-09:x86: will run on target group 'target=jfsotc02/mv-09:x86'
...
PASS0/ toplevel @local: 9 tests (9 passed, 0 failed, 0 blocked, 0 skipped) - passed
2.4.7. But put some Zephyr into it?¶
#! /usr/bin/python2
import os
import tcfl.tc
import tcfl.tl
@tcfl.tc.target('zephyr_board', mode = 'any',
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
'samples', 'hello_world'))
class _test(tcfl.tc.tc_c):
@staticmethod
def eval_00(target):
target.report_info('Hello 1')
def eval_01(self):
self.report_info('Hello 2')
- Feed @target() a path to a Zephyr app.
- app_zephyr enables a plugin that monkey-patches methods into your test class telling it how to build, flash and start a target running Zephyr (functions called (configure|build|deploy|start)_50_for_target()).
- The app is built, flashed and the target reset before the eval_*() methods are called.
$ tcf run -vv test_06.py
INFO1/bxot test_06.py#_test @jfsotc02/a101-01:x86: will run on target group 'target=jfsotc02/a101-01:x86'
PASS2/bxot test_06.py#_test @jfsotc02/a101-01:x86: configure passed
PASS1/bxot test_06.py#_test @jfsotc02/a101-01:x86: build passed
PASS2/bxot test_06.py#_test @jfsotc02/a101-01:x86: deploy passed
INFO2/bxotE#1 test_06.py#_test @jfsotc02/a101-01:x86: Reset
INFO2/bxotE#1 test_06.py#_test @jfsotc02/a101-01:x86: Hello 1
INFO2/bxotE#1 test_06.py#_test @jfsotc02/a101-01:x86: Hello 2
PASS1/bxot test_06.py#_test @jfsotc02/a101-01:x86: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
2.4.8. Now we are building all the time, some time savers¶
create a temporary directory to save the build products:
$ mkdir tmp
Add --tmpdir tmp to tcf run; this way the builder will be able to reuse those build products. By default, a temporary directory is created and removed when done, which helps when you have a lot of code being run in many targets and you just care about the results.
As a bonus, in tmp/run.log you will get a log file with all the step-by-step details (--log-file also gets you that).
Work with just one target; this means we don't have to recompile constantly for new targets (as they are assigned randomly) and can reuse what is already built in the temporary directory:
Let a target be assigned running normally:
$ tcf run --tmpdir tmp test_06.py
...
INFO1/bxot test_06.py#_test @jfsotc02/a101-01:x86: will run on target group 'target=jfsotc02/a101-01:x86'
Save that jfsotc02/a101-01; that is the target ID you feed to tcf run with -t:
$ tcf run --tmpdir tmp -t jfsotc02/a101-01 test_06.py
INFO1/bxot test_06.py#_test @jfsotc02/a101-01:x86: will run on target group 'target=jfsotc02/a101-01:x86'
...
2.4.9. So, is Zephyr booting?¶
#! /usr/bin/python2
import os
import tcfl.tc
import tcfl.tl
@tcfl.tc.target('zephyr_board', mode = 'any',
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
'samples', 'hello_world'))
class _test(tcfl.tc.tc_c):
@staticmethod
@tcfl.tc.serially()
def build(target):
target.zephyr.config_file_write('banner_config',
'CONFIG_BOOT_BANNER=y')
@staticmethod
def eval_00(target):
target.expect('***** BOOTING ZEPHYR OS')
def eval_01(self):
self.report_info('Hello 2')
- build*() are build methods, things that have to happen when we build (akin to eval*())
- Use the target's zephyr API extension to enable the boot banner with a config fragment introduced with config_file_write().
- Use expect() to wait for something on the default console; in this case, Zephyr's boot banner. The argument can also be a compiled Python regex, re.compile(REGEX). If it times out, a failure exception is raised and no more eval*() methods will execute.
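The string-vs-regex distinction follows Python's re module semantics: a plain string is a literal substring expectation, while a compiled regex can match variable output. A standalone illustration (the console text and version number here are made up):

```python
import re

# Simulated console capture from a booting target
console = "***** BOOTING ZEPHYR OS v1.14.0 *****\nHello World!\n"

# A plain-string expectation is a literal substring search
assert "***** BOOTING ZEPHYR OS" in console

# A compiled regex can capture the variable part, e.g. the version
m = re.search(re.compile(r"BOOTING ZEPHYR OS v([0-9.]+)"), console)
print(m.group(1))   # 1.14.0
```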
$ tcf run --tmpdir tmp -vv test_08.py
2.4.10. And is the code doing what I want? Hello World?¶
#! /usr/bin/python2
import os
import tcfl.tc
import tcfl.tl
@tcfl.tc.target('zephyr_board', mode = 'any',
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
'samples', 'hello_world'))
class _test(tcfl.tc.tc_c):
@staticmethod
@tcfl.tc.serially()
def build(target):
target.zephyr.config_file_write('banner_config',
'CONFIG_BOOT_BANNER=y')
@staticmethod
def eval_00(target):
target.expect('***** BOOTING ZEPHYR OS')
@staticmethod
def eval_01(target):
target.expect('Hello World!')
- Same as before, use expect() to require the console to print Hello World! to determine if the testcase is passing.
$ tcf run --tmpdir tmp -vv test_08.py
INFO2/ toplevel @local: scanning for test cases
INFO1/lyse test_08.py#_test @local/qz36b-arm:arm: will run on target group 'target=local/qz36b-arm:arm'
PASS2/lyse test_08.py#_test @local/qz36b-arm:arm: configure passed
PASS1/lyse test_08.py#_test @local/qz36b-arm:arm: build passed
PASS2/lyse test_08.py#_test @local/qz36b-arm:arm: deploy passed
INFO2/lyseE#1 test_08.py#_test @local/qz36b-arm:arm: Reset
PASS2/lyseE#1 test_08.py#_test @local/qz36b-arm:arm: found expected `***** BOOTING ZEPHYR OS` in console `local/qz36b-arm:default` at 0.05s
PASS2/lyseE#1 test_08.py#_test @local/qz36b-arm:arm: eval pass: found expected `***** BOOTING ZEPHYR OS` in console `local/qz36b-arm:default` at 0.05s
PASS2/lyseE#1 test_08.py#_test @local/qz36b-arm:arm: found expected `Hello World!` in console `local/qz36b-arm:default` at 0.04s
PASS2/lyseE#1 test_08.py#_test @local/qz36b-arm:arm: eval pass: found expected `Hello World!` in console `local/qz36b-arm:default` at 0.04s
PASS1/lyse test_08.py#_test @local/qz36b-arm:arm: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
2.4.11. What happens when it fails? Let’s make it fail¶
#! /usr/bin/python2
import os
import tcfl.tc
import tcfl.tl
@tcfl.tc.target('zephyr_board', mode = 'any',
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
'samples', 'hello_world'))
class _test(tcfl.tc.tc_c):
@staticmethod
@tcfl.tc.serially()
def build(target):
target.zephyr.config_file_write(
'banner_config', 'CONFIG_BOOT_BANNER=y')
@staticmethod
def eval_00(target):
target.expect('***** BOOTING ZEPHYR OS')
@staticmethod
def eval_01(target):
target.expect('Hello Kitty!')
- Instead of Hello World, look for Hello Kitty
- After waiting sixty seconds to receive Hello Kitty, it raises an exception and fails.
- This will generate a file called report-HASHID.txt in the current directory with detailed information. HASHID is a UUID derived from the testcase name and the target(s) where it ran; azus in the example below.
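The exact encoding is internal to TCF (azus in the run below comes from TCF's own scheme, not this code), but the idea can be sketched: hash the testcase name plus the target list and keep a few characters as a stable short identifier.

```python
import hashlib

def short_id(testcase, targets, length=4):
    # Derive a stable short identifier from the testcase name and the
    # targets it ran on; the same inputs always yield the same ID, so
    # repeated runs on the same targets produce the same HASHID.
    h = hashlib.sha256()
    h.update(testcase.encode("utf-8"))
    for target in sorted(targets):
        h.update(target.encode("utf-8"))
    return h.hexdigest()[:length]

print(short_id("test_09.py#_test", ["local/qz36a-arm"]))
```

The stability matters: it lets you correlate report files and log lines from different runs of the same testcase/target combination.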
$ tcf run --tmpdir tmp -vv test_09.py
INFO2/ toplevel @local: scanning for test cases
INFO1/azus test_09.py#_test @local/qz36a-arm:arm: will run on target group 'target=local/qz36a-arm:arm'
PASS2/azus test_09.py#_test @local/qz36a-arm:arm: configure passed
PASS1/azus test_09.py#_test @local/qz36a-arm:arm: build passed
PASS2/azus test_09.py#_test @local/qz36a-arm:arm: deploy passed
INFO2/azusE#1 test_09.py#_test @local/qz36a-arm:arm: Reset
PASS2/azusE#1 test_09.py#_test @local/qz36a-arm:arm: found expected `***** BOOTING ZEPHYR OS` in console `local/qz36a-arm:default` at 0.05s
PASS2/azusE#1 test_09.py#_test @local/qz36a-arm:arm: eval pass: found expected `***** BOOTING ZEPHYR OS` in console `local/qz36a-arm:default` at 0.05s
FAIL2/azusE#1 test_09.py#_test @local/qz36a-arm:arm: eval failed: expected console output 'Hello Kitty!' from console 'qz36a-arm:default' NOT FOUND after 60.1 s
FAIL0/azus test_09.py#_test @local/qz36a-arm:arm: evaluation failed
FAIL0/ toplevel @local: 1 tests (0 passed, 1 failed, 0 blocked, 0 skipped) - failed
/tmp/tcf-k9WBkM.mk:2: recipe for target 'tcf-jobserver-run' failed
make: *** [tcf-jobserver-run] Error 1
Note
ignore the make messages at the bottom; they are a byproduct of using make's jobserver.
2.4.12. Running a Zephyr sanity check testcase¶
- A builtin driver understands Zephyr's Sanity Check testcase.ini files and by default runs them on all available targets (one of each type).
- You can use -u to override that and force running on all targets (stress test) or -y to run on any one.
$ cd $ZEPHYR_BASE
$ mkdir tmp
$ tcf run --tmpdir tmp -v tests/kernel/common
INFO1/ihuu tests/kernel/common/testcase.ini#test @local/qz35a-arm:arm: will run on target group 'target=local/qz35a-arm:arm'
INFO1/7jbr tests/kernel/common/testcase.ini#test @local/qz42a-nios2:nios2: will run on target group 'target=local/qz42a-nios2:nios2'
INFO1/hegm tests/kernel/common/testcase.ini#test @local/qz32b-x86:x86: will run on target group 'target=local/qz32b-x86:x86'
INFO1/c8y1 tests/kernel/common/testcase.ini#test @local/qz45b-riscv32:riscv32: will run on target group 'target=local/qz45b-riscv32:riscv32'
PASS1/c8y1 tests/kernel/common/testcase.ini#test @local/qz45b-riscv32:riscv32: build passed
PASS1/hegm tests/kernel/common/testcase.ini#test @local/qz32b-x86:x86: build passed
PASS1/7jbr tests/kernel/common/testcase.ini#test @local/qz42a-nios2:nios2: build passed
PASS1/ihuu tests/kernel/common/testcase.ini#test @local/qz35a-arm:arm: build passed
PASS1/c8y1 tests/kernel/common/testcase.ini#test @local/qz45b-riscv32:riscv32: evaluation passed
PASS1/hegm tests/kernel/common/testcase.ini#test @local/qz32b-x86:x86: evaluation passed
PASS1/7jbr tests/kernel/common/testcase.ini#test @local/qz42a-nios2:nios2: evaluation passed
PASS1/ihuu tests/kernel/common/testcase.ini#test @local/qz35a-arm:arm: evaluation passed
PASS0/ toplevel @local: 4 tests (4 passed, 0 failed, 0 blocked, 0 skipped) - passed
2.4.13. Double down¶
#! /usr/bin/python2
import os
import tcfl.tc
import tcfl.tl
@tcfl.tc.target('zephyr_board', mode = 'any',
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
'samples', 'hello_world'))
@tcfl.tc.target('zephyr_board', mode = 'any',
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
'samples', 'hello_world'))
class _test(tcfl.tc.tc_c):
@staticmethod
def eval_00(target, target1):
target.expect('Hello World!')
target1.expect('Hello World!')
- Two targets, flashed with Hello World
- The second one is called, by default, target1; you can also use name = "TARGETNAME".
- Note how the messages update vs running a single target
- Wait first for target and then target1 to both print Hello World!
$ tcf run --tmpdir tmp -vv test_10.py
INFO1/zdad test_10.py#_test @zc7o: will run on target group 'target=local/qz40a-nios2:nios2 target1=local/qz48b-riscv32:riscv32'
PASS2/zdad test_10.py#_test @zc7o: configure passed
PASS1/zdad test_10.py#_test @zc7o: build passed
PASS2/zdad test_10.py#_test @zc7o: deploy passed
INFO2/zdadE#1 test_10.py#_test @zc7o|local/qz40a-nios2: Reset
INFO2/zdadE#1 test_10.py#_test @zc7o|local/qz48b-riscv32: Reset
PASS2/zdadE#1 test_10.py#_test @zc7o|local/qz40a-nios2: found expected `Hello World!` in console `local/qz40a-nios2:default` at 0.11s
PASS2/zdadE#1 test_10.py#_test @zc7o|local/qz40a-nios2: eval pass: found expected `Hello World!` in console `local/qz40a-nios2:default` at 0.11s
PASS2/zdadE#1 test_10.py#_test @zc7o|local/qz48b-riscv32: found expected `Hello World!` in console `local/qz48b-riscv32:default` at 0.06s
PASS2/zdadE#1 test_10.py#_test @zc7o|local/qz48b-riscv32: eval pass: found expected `Hello World!` in console `local/qz48b-riscv32:default` at 0.06s
PASS1/zdad test_10.py#_test @zc7o: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
Note how the test now reports running on @zc7o, a unique identifier for the set of targets. Log messages specific to a single target get the target name appended to that identifier (as in @zc7o|local/qz48b-riscv32).
2.4.14. Double down more efficiently¶
We were waiting for one to complete, then the other, but they were running at the same time. We can set expectations and then wait for them to happen in parallel.
Expectation: poll and check
#! /usr/bin/python2
import os
import tcfl.tc
import tcfl.tl
@tcfl.tc.target('zephyr_board', mode = 'any',
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
'samples', 'hello_world'))
@tcfl.tc.target('zephyr_board', mode = 'any',
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
'samples', 'hello_world'))
class _test(tcfl.tc.tc_c):
def eval_00(self, target, target1):
with target.on_console_rx_cm('Hello World!'), \
target1.on_console_rx_cm('Hello World!'):
self.expecter.run()
- This asks to expect receiving the same string from each target, but all at the same time, running the expectation loop until all are received or it times out (meaning failure).
- That said, you will only notice speed-ups on things that take longer to execute and thus parallelize well :)
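The mechanics can be approximated in plain Python (hypothetical classes, not the tcfl API): register one check per target, then service them all from a single polling loop until every expectation is met or a deadline passes.

```python
import time

class Expectation(object):
    """Poll a console-reading callable for a string until found."""
    def __init__(self, console, text):
        self.console, self.text, self.found = console, text, False
    def poll(self):
        if not self.found and self.text in self.console():
            self.found = True
        return self.found

def run_expectations(expectations, timeout=60, interval=0.01):
    # One loop services every expectation "in parallel"; fail if the
    # deadline passes before all of them are satisfied.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if all(e.poll() for e in expectations):
            return True
        time.sleep(interval)
    raise RuntimeError("timed out waiting for expectations")

# Fake console buffers standing in for two targets' serial output
buf0, buf1 = ["Hello World!"], ["Hello World!"]
exps = [Expectation(lambda: buf0[0], "Hello World!"),
        Expectation(lambda: buf1[0], "Hello World!")]
print(run_expectations(exps, timeout=1))   # True
```

Compared to waiting on one target and then the other, the total wait is bounded by the slowest target rather than the sum of both.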
$ tcf run --tmpdir tmp -vv test_11.py
INFO2/ toplevel @local: scanning for test cases
INFO1/0d6z test_11.py#_test @6hj6: will run on target group 'target=local/qz44b-nios2:nios2 target1=local/qz41b-nios2:nios2'
PASS2/0d6z test_11.py#_test @6hj6: configure passed
PASS1/0d6z test_11.py#_test @6hj6: build passed
PASS2/0d6z test_11.py#_test @6hj6: deploy passed
INFO2/0d6zE#1 test_11.py#_test @6hj6|local/qz44b-nios2: Reset
INFO2/0d6zE#1 test_11.py#_test @6hj6|local/qz41b-nios2: Reset
PASS2/0d6zE#1 test_11.py#_test @6hj6|local/qz44b-nios2: found expected `Hello World!` in console `local/qz44b-nios2:default` at 0.09s
PASS2/0d6zE#1 test_11.py#_test @6hj6|local/qz44b-nios2: eval pass: found expected `Hello World!` in console `local/qz44b-nios2:default` at 0.09s
PASS2/0d6zE#1 test_11.py#_test @6hj6|local/qz41b-nios2: found expected `Hello World!` in console `local/qz41b-nios2:default` at 0.13s
PASS2/0d6zE#1 test_11.py#_test @6hj6|local/qz41b-nios2: eval pass: found expected `Hello World!` in console `local/qz41b-nios2:default` at 0.13s
PASS2/0d6zE#1 test_11.py#_test @6hj6: eval pass: all expectations found
PASS1/0d6z test_11.py#_test @6hj6: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
2.4.15. Double down gotchas¶
- If your testcase takes one target and it shall run on one of each
type and you have 6 different target types, it will:
- choose one target per type to run the testcase
- build and flash 6 times, evaluate 6 times (one on each type of target)
- If your testcase needs two different targets and there are six
available:
- it will choose 6^2 = 36 permutations of targets
- build and flash 36 times twice (once per target), eval 36 times
- If your testcase needs three different targets and there are six
available
- it will choose 6^3 = 216 permutations of targets:
- build and flash 216 times thrice (once per target), eval 216 times
tcf run limits this to 10 permutations by default, but you can tweak that with -PNUMBER
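The arithmetic above can be checked with itertools (the target type names below are made up for illustration):

```python
import itertools

types = ["x86", "arm", "arc", "nios2", "riscv32", "xtensa"]  # 6 types

# Every ordered (target, target1) assignment for a two-target testcase
pairs = list(itertools.product(types, repeat=2))
print(len(pairs))        # 36, i.e. 6^2

# Three targets: 6^3 ordered assignments
triples = list(itertools.product(types, repeat=3))
print(len(triples))      # 216

# tcf run's default -P10 cap would keep only the first 10
print(len(pairs[:10]))   # 10
```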
2.4.16. Two interconnected targets?¶
Use @tcfl.tc.interconnect(), a target (maybe a conceptual one) that connects other targets together:
#! /usr/bin/python2
import tcfl.tc
@tcfl.tc.interconnect()
@tcfl.tc.target()
@tcfl.tc.target()
class _test(tcfl.tc.tc_c):
def eval(self):
self.report_info("got two interconnected targets")
- Looking at the targets' interconnects tags, tcf run can determine which targets are connected to which interconnects.
- Interconnects also use tags to describe what they are or how they operate (maybe it is an IP interconnect, or just a group describing targets that are in the same room, etc).
- By requesting an interconnect and two targets that belong to it, we will get a lot of permutations of two interconnected targets.
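The grouping can be pictured in plain Python (illustrative inventory, not the tcfl API): collect the targets whose interconnects tags name a given interconnect, then enumerate ordered pairs among them.

```python
import itertools

# Each target's tags declare which interconnects it is wired to
targets = {
    "qz34a-x86":     {"interconnects": {"nwa": {}}},
    "qz39a-arm":     {"interconnects": {"nwa": {}}},
    "qz43a-nios2":   {"interconnects": {"nwa": {}}},
    "standalone-01": {"interconnects": {}},   # in no network
}

def members(ic_name):
    # Targets whose tags declare membership in interconnect ic_name
    return [name for name, tags in targets.items()
            if ic_name in tags["interconnects"]]

# Candidate (target, target1) pairs sharing interconnect nwa
pairs = list(itertools.permutations(members("nwa"), 2))
print(len(pairs))   # 3 members -> 3*2 = 6 ordered pairs
```

Targets that belong to no interconnect never appear in the candidate groups, which is why the run log only shows combinations under ic=local/nwa.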
$ tcf run --tmpdir tmp -v test_12.py
INFO1/0hgf test_12.py#_test @qlan-tfdh: will run on target group 'ic=local/nwa target=local/qz34a-x86:x86 target1=local/qz45a-riscv32:riscv32'
INFO1/20i5 test_12.py#_test @qlan-xk2u: will run on target group 'ic=local/nwa target=local/qz43a-nios2:nios2 target1=local/qz39a-arm:arm'
INFO1/9rdg test_12.py#_test @qlan-xo7w: will run on target group 'ic=local/nwa target=local/qz34a-x86:x86 target1=local/qz40a-nios2:nios2'
...
PASS1/soip test_12.py#_test @qlan-xeb3: evaluation passed
PASS1/4t2r test_12.py#_test @qlan-xqnu: evaluation passed
PASS1/ku9e test_12.py#_test @qlan-kdrw: evaluation passed
PASS1/20i5 test_12.py#_test @qlan-xk2u: evaluation passed
PASS1/9rdg test_12.py#_test @qlan-xo7w: evaluation passed
PASS0/ toplevel @local: 10 tests (10 passed, 0 failed, 0 blocked, 0 skipped) - passed
Note that because 4 target types were available (QEMU Zephyr for x86,
NIOS2, ARM and RISC-V 32), it has generated 16 different permutations
and only taken the first 10 (because of the default -P10 setting).
2.4.17. Networking and the Zephyr echo server¶
Let’s get a Zephyr network application, the Echo Server, built and deployed in a target along with a network. Being part of a network assigns IP addresses to targets, which we can query for building:
#! /usr/bin/python2
import os
import tcfl.tc
import tcfl.tl
@tcfl.tc.interconnect("ipv4_addr")
@tcfl.tc.target(name = "zephyr_server",
spec = """zephyr_board in [
'frdm_k64f', 'qemu_x86',
'arduino_101', 'sam_e70_xplained'
]""",
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
"samples", "net", "echo_server"))
class _test(tcfl.tc.tc_c):
@staticmethod
@tcfl.tc.serially()
def build_00_server_config(zephyr_server):
if 'mac_addr' in zephyr_server.kws:
zephyr_server.zephyr.config_file_write(
"mac_addr",
"CONFIG_SLIP_MAC_ADDR=\"%s\"\n"
% zephyr_server.kws['mac_addr'])
else:
zephyr_server.zephyr.config_file_write("mac_addr", "")
zephyr_server.zephyr.config_file_write(
"ip_addr",
"CONFIG_NET_APP_SETTINGS=y\n"
"CONFIG_NET_APP_MY_IPV4_ADDR=\"%s\"\n"
% zephyr_server.kws['ipv4_addr'])
@staticmethod
def start_00(ic):
ic.power.cycle()
@staticmethod
def eval_00_server(zephyr_server):
zephyr_server.expect("init_app: Run echo server")
zephyr_server.expect("receive: Starting to wait")
- Use @tcfl.tc.interconnect("ipv4_addr") to request an interconnect that declares having an IPv4 address.
- Request a Zephyr-capable target; use name to name it and spec to filter the targets where the echo server can run (based on the configuration files available). Use app_zephyr to point to the right source.
- Use build_*() to set configuration values, as these applications need to know their IP addresses at build time. These are available in the target's kws dictionary, which exports the target's tags.
- Use start_*() to power cycle the network before starting the test, otherwise it will not work. ic is the default name assigned by @tcfl.tc.interconnect().
- Before evaluating, the setup*() and start*() functions are executed serially in alphabetical order; that is why we call ours _00_, to make sure it runs before App Zephyr's default start function (start_50_*()).
$ tcf run --tmpdir tmp -vv test_13.py
INFO2/ toplevel @local: scanning for test cases
INFO1/pz15 test_13.py#_test @qlan-4coc: will run on target group 'ic=local/nwa zephyr_server=local/qz33a-x86:x86'
PASS2/pz15 test_13.py#_test @qlan-4coc: configure passed
PASS1/pz15 test_13.py#_test @qlan-4coc: build passed
PASS2/pz15 test_13.py#_test @qlan-4coc: deploy passed
INFO2/pz15E#1 test_13.py#_test @qlan-4coc|local/nwa: Power cycled
INFO2/pz15E#1 test_13.py#_test @qlan-4coc|local/qz33a-x86: Reset
PASS2/pz15E#1 test_13.py#_test @qlan-4coc|local/qz33a-x86: found expected `init_app: Run echo server` in console `local/qz33a-x86:default` at 0.06s
PASS2/pz15E#1 test_13.py#_test @qlan-4coc|local/qz33a-x86: eval pass: found expected `init_app: Run echo server` in console `local/qz33a-x86:default` at 0.06s
PASS2/pz15E#1 test_13.py#_test @qlan-4coc|local/qz33a-x86: found expected `receive: Starting to wait` in console `local/qz33a-x86:default` at 0.05s
PASS2/pz15E#1 test_13.py#_test @qlan-4coc|local/qz33a-x86: eval pass: found expected `receive: Starting to wait` in console `local/qz33a-x86:default` at 0.05s
PASS1/pz15 test_13.py#_test @qlan-4coc: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
2.4.18. Add on the Zephyr echo client¶
A full Zephyr network echo client/server needs the client code too, so we add it:
#! /usr/bin/python2
import os
import re
import tcfl.tc
import tcfl.tl
@tcfl.tc.interconnect("ipv4_addr")
@tcfl.tc.target(name = "zephyr_server",
spec = """zephyr_board in [
'frdm_k64f', 'qemu_x86',
'arduino_101', 'sam_e70_xplained'
]""",
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
"samples", "net", "echo_server"))
@tcfl.tc.target(name = "zephyr_client",
spec = """zephyr_board in [
'frdm_k64f', 'qemu_x86',
'arduino_101', 'sam_e70_xplained'
]""",
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
"samples", "net", "echo_client"))
class _test(tcfl.tc.tc_c):
@staticmethod
@tcfl.tc.serially()
def build_00_server_config(zephyr_server):
if 'mac_addr' in zephyr_server.kws:
zephyr_server.zephyr.config_file_write(
"mac_addr",
"CONFIG_SLIP_MAC_ADDR=\"%s\"\n"
% zephyr_server.kws['mac_addr'])
else:
zephyr_server.zephyr.config_file_write("mac_addr", "")
zephyr_server.zephyr.config_file_write(
"ip_addr",
"CONFIG_NET_APP_SETTINGS=y\n"
"CONFIG_NET_APP_MY_IPV4_ADDR=\"%s\"\n"
% zephyr_server.kws['ipv4_addr'])
@staticmethod
@tcfl.tc.serially()
def build_00_client_config(zephyr_client, zephyr_server):
if 'mac_addr' in zephyr_client.kws:
zephyr_client.zephyr.config_file_write(
"mac_addr",
"CONFIG_SLIP_MAC_ADDR=\"%s\"\n"
% zephyr_client.kws['mac_addr'])
else:
zephyr_client.zephyr.config_file_write("mac_addr", "")
zephyr_client.zephyr.config_file_write(
"ip_addr",
"CONFIG_NET_APP_SETTINGS=y\n"
"CONFIG_NET_APP_MY_IPV4_ADDR=\"%s\"\n"
"CONFIG_NET_APP_PEER_IPV4_ADDR=\"%s\"\n"
% (zephyr_client.kws['ipv4_addr'],
zephyr_server.kws['ipv4_addr'],))
@staticmethod
def start_00(ic):
ic.power.cycle()
@staticmethod
def eval_00_server(zephyr_server):
zephyr_server.expect("init_app: Run echo server")
zephyr_server.expect("receive: Starting to wait")
@staticmethod
def eval_10_client(zephyr_client):
zephyr_client.expect("init_app: Run echo client")
zephyr_client.expect(re.compile("Compared [0-9]+ bytes, all ok"))
Note how build_00_client_config takes as argument both the zephyr_client and zephyr_server targets, because it needs to know the server’s IP address to configure the client build.
This is the reason for @tcfl.tc.serially: build*() functions that take target arguments would otherwise be executed in parallel, causing an error if the targets overlap. The decorator forces them to execute serially, avoiding race conditions in the use of resources (e.g. temporary directories) associated with each target.
In the same fashion, eval_10_client() is named _10_ to make sure it runs after the server's evaluation function.
$ tcf run --tmpdir tmp -vv test_14.py
INFO2/ toplevel @local: scanning for test cases
INFO1/51yw test_14.py#_test @v2xd-d4h6: will run on target group 'ic=local/nwb zephyr_client=local/qz32b-x86:x86 zephyr_server=local/qz31b-x86:x86'
PASS2/51yw test_14.py#_test @v2xd-d4h6: configure passed
PASS1/51yw test_14.py#_test @v2xd-d4h6: build passed
PASS2/51yw test_14.py#_test @v2xd-d4h6: deploy passed
INFO2/51ywE#1 test_14.py#_test @v2xd-d4h6|local/nwb: Power cycled
INFO2/51ywE#1 test_14.py#_test @v2xd-d4h6|local/qz32b-x86: Reset
INFO2/51ywE#1 test_14.py#_test @v2xd-d4h6|local/qz31b-x86: Reset
PASS2/51ywE#1 test_14.py#_test @v2xd-d4h6|local/qz31b-x86: found expected `init_app: Run echo server` in console `local/qz31b-x86:default` at 0.41s
PASS2/51ywE#1 test_14.py#_test @v2xd-d4h6|local/qz31b-x86: eval pass: found expected `init_app: Run echo server` in console `local/qz31b-x86:default` at 0.41s
PASS2/51ywE#1 test_14.py#_test @v2xd-d4h6|local/qz31b-x86: found expected `receive: Starting to wait` in console `local/qz31b-x86:default` at 0.07s
PASS2/51ywE#1 test_14.py#_test @v2xd-d4h6|local/qz31b-x86: eval pass: found expected `receive: Starting to wait` in console `local/qz31b-x86:default` at 0.07s
PASS2/51ywE#1 test_14.py#_test @v2xd-d4h6|local/qz32b-x86: found expected `init_app: Run echo client` in console `local/qz32b-x86:default` at 0.08s
PASS2/51ywE#1 test_14.py#_test @v2xd-d4h6|local/qz32b-x86: eval pass: found expected `init_app: Run echo client` in console `local/qz32b-x86:default` at 0.08s
PASS2/51ywE#1 test_14.py#_test @v2xd-d4h6|local/qz32b-x86: found expected `Compared [0-9]+ bytes, all ok` in console `local/qz32b-x86:default` at 1.33s
PASS2/51ywE#1 test_14.py#_test @v2xd-d4h6|local/qz32b-x86: eval pass: found expected `Compared [0-9]+ bytes, all ok` in console `local/qz32b-x86:default` at 1.33s
PASS1/51yw test_14.py#_test @v2xd-d4h6: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
2.4.19. Cover more bases on the Zephyr echo server/client¶
In this case, we want to make sure that the order at which the targets are starting is more under our control, because we need to make sure the network (interconnect) is powered on first, then the server and then finally the client.
#! /usr/bin/python2
import os
import re
import tcfl.tc
import tcfl.tl
@tcfl.tc.interconnect("ipv4_addr")
@tcfl.tc.target(name = "zephyr_server",
spec = """zephyr_board in [
'frdm_k64f', 'qemu_x86',
'arduino_101', 'sam_e70_xplained'
]""",
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
"samples", "net", "echo_server"))
@tcfl.tc.target(name = "zephyr_client",
spec = """zephyr_board in [
'frdm_k64f', 'qemu_x86',
'arduino_101', 'sam_e70_xplained'
]""",
app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
"samples", "net", "echo_client"))
class _test(tcfl.tc.tc_c):
@staticmethod
@tcfl.tc.serially()
def build_00_server_config(zephyr_server):
if 'mac_addr' in zephyr_server.kws:
zephyr_server.zephyr.config_file_write(
"mac_addr",
"CONFIG_SLIP_MAC_ADDR=\"%s\"\n"
% zephyr_server.kws['mac_addr'])
else:
zephyr_server.zephyr.config_file_write("mac_addr", "")
zephyr_server.zephyr.config_file_write(
"ip_addr",
"CONFIG_NET_APP_SETTINGS=y\n"
"CONFIG_NET_APP_MY_IPV4_ADDR=\"%s\"\n"
% zephyr_server.kws['ipv4_addr'])
@staticmethod
@tcfl.tc.serially()
def build_00_client_config(zephyr_client, zephyr_server):
if 'mac_addr' in zephyr_client.kws:
zephyr_client.zephyr.config_file_write(
"mac_addr",
"CONFIG_SLIP_MAC_ADDR=\"%s\"\n"
% zephyr_client.kws['mac_addr'])
else:
zephyr_client.zephyr.config_file_write("mac_addr", "")
zephyr_client.zephyr.config_file_write(
"ip_addr",
"CONFIG_NET_APP_SETTINGS=y\n"
"CONFIG_NET_APP_MY_IPV4_ADDR=\"%s\"\n"
"CONFIG_NET_APP_PEER_IPV4_ADDR=\"%s\"\n"
% (zephyr_client.kws['ipv4_addr'],
zephyr_server.kws['ipv4_addr'],))
def start_50_zephyr_server(self, zephyr_server):
pass
def start_50_zephyr_client(self, zephyr_client):
pass
def start_00(self, ic, zephyr_server, zephyr_client):
ic.power.cycle()
self.overriden_start_50_zephyr_server(zephyr_server)
zephyr_server.expect("init_app: Run echo server")
zephyr_server.expect("receive: Starting to wait")
self.overriden_start_50_zephyr_client(zephyr_client)
@staticmethod
def eval_10_client(zephyr_client):
zephyr_client.expect("init_app: Run echo client")
zephyr_client.expect(re.compile("Compared [0-9]+ bytes, all ok"))
- First we override the default App Zephyr start methods (start_50_zephyr_server() and start_50_zephyr_client()) to do nothing. This will actually have them renamed as overriden_start_50_zephyr_server() and overriden_start_50_zephyr_client().
- Then add to the existing start_00() a call to start the server target, wait for the banner indicating it has started to serve and then call to start the client.
- This renders eval_00_server() unnecessary, as we did that check in start_00() to ensure it had started properly.
$ tcf run --tmpdir tmp -vv test_14.py
...
INFO2/jgszE#1 test_14.py#_test @v2xd-xwv4|local/nwb: Power cycled
INFO2/jgszE#1 test_14.py#_test @v2xd-xwv4|local/qz30b-x86: Reset
INFO2/jgszE#1 test_14.py#_test @v2xd-xwv4|local/qz31b-x86: Reset
PASS2/jgszE#1 test_14.py#_test @v2xd-xwv4|local/qz31b-x86: found expected `init_app: Run echo server` in console `local/qz31b-x86:default` at 0.10s
PASS2/jgszE#1 test_14.py#_test @v2xd-xwv4|local/qz31b-x86: eval pass: found expected `init_app: Run echo server` in console `local/qz31b-x86:default` at 0.10s
PASS2/jgszE#1 test_14.py#_test @v2xd-xwv4|local/qz31b-x86: found expected `receive: Starting to wait` in console `local/qz31b-x86:default` at 0.10s
PASS2/jgszE#1 test_14.py#_test @v2xd-xwv4|local/qz31b-x86: eval pass: found expected `receive: Starting to wait` in console `local/qz31b-x86:default` at 0.10s
PASS2/jgszE#1 test_14.py#_test @v2xd-xwv4|local/qz30b-x86: found expected `init_app: Run echo client` in console `local/qz30b-x86:default` at 0.08s
PASS2/jgszE#1 test_14.py#_test @v2xd-xwv4|local/qz30b-x86: eval pass: found expected `init_app: Run echo client` in console `local/qz30b-x86:default` at 0.08s
PASS2/jgszE#1 test_14.py#_test @v2xd-xwv4|local/qz30b-x86: found expected `Compared [0-9]+ bytes, all ok` in console `local/qz30b-x86:default` at 1.61s
PASS2/jgszE#1 test_14.py#_test @v2xd-xwv4|local/qz30b-x86: eval pass: found expected `Compared [0-9]+ bytes, all ok` in console `local/qz30b-x86:default` at 1.61s
PASS1/jgsz test_14.py#_test @v2xd-xwv4: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
2.4.20. But let’s test Zephyr’s echo server/client better¶
We should be looking for more than just one occurrence of the all ok message:
#! /usr/bin/python2
import os
import re
import time
import tcfl.tc
import tcfl.tl
@tcfl.tc.interconnect("ipv4_addr")
@tcfl.tc.target(name = "zephyr_server",
                spec = """zephyr_board in [
                    'frdm_k64f', 'qemu_x86',
                    'arduino_101', 'sam_e70_xplained'
                ]""",
                app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
                                          "samples", "net", "echo_server"))
@tcfl.tc.target(name = "zephyr_client",
                spec = """zephyr_board in [
                    'frdm_k64f', 'qemu_x86',
                    'arduino_101', 'sam_e70_xplained'
                ]""",
                app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
                                          "samples", "net", "echo_client"))
class _test(tcfl.tc.tc_c):

    @staticmethod
    @tcfl.tc.serially()
    def build_00_server_config(zephyr_server):
        if 'mac_addr' in zephyr_server.kws:
            zephyr_server.zephyr.config_file_write(
                "mac_addr",
                "CONFIG_SLIP_MAC_ADDR=\"%s\"\n"
                % zephyr_server.kws['mac_addr'])
        else:
            zephyr_server.zephyr.config_file_write("mac_addr", "")
        zephyr_server.zephyr.config_file_write(
            "ip_addr",
            "CONFIG_NET_APP_SETTINGS=y\n"
            "CONFIG_NET_APP_MY_IPV4_ADDR=\"%s\"\n"
            % zephyr_server.kws['ipv4_addr'])

    @staticmethod
    @tcfl.tc.serially()
    def build_00_client_config(zephyr_client, zephyr_server):
        if 'mac_addr' in zephyr_client.kws:
            zephyr_client.zephyr.config_file_write(
                "mac_addr",
                "CONFIG_SLIP_MAC_ADDR=\"%s\"\n"
                % zephyr_client.kws['mac_addr'])
        else:
            zephyr_client.zephyr.config_file_write("mac_addr", "")
        zephyr_client.zephyr.config_file_write(
            "ip_addr",
            "CONFIG_NET_APP_SETTINGS=y\n"
            "CONFIG_NET_APP_MY_IPV4_ADDR=\"%s\"\n"
            "CONFIG_NET_APP_PEER_IPV4_ADDR=\"%s\"\n"
            % (zephyr_client.kws['ipv4_addr'],
               zephyr_server.kws['ipv4_addr'],))

    def start_50_zephyr_server(self, zephyr_server):
        pass

    def start_50_zephyr_client(self, zephyr_client):
        pass

    def start_00(self, ic, zephyr_server, zephyr_client):
        ic.power.cycle()
        self.overriden_start_50_zephyr_server(zephyr_server)
        zephyr_server.expect("init_app: Run echo server")
        zephyr_server.expect("receive: Starting to wait")
        self.overriden_start_50_zephyr_client(zephyr_client)

    def eval_10_client(self, zephyr_client):
        zephyr_client.expect("init_app: Run echo client")
        for count in range(1, 11):
            time.sleep(30)
            zephyr_client.report_info("Running for 30s (%d/10)" % count)
            # Mark all targets as active so the server doesn't power
            # them off due to inactivity
            [ target.active() for target in self.targets.values() ]
        # Ensure we have at least one "all ok" message or fail
        r = re.compile("Compared [0-9]+ bytes, all ok")
        zephyr_client.expect(r)
        s = zephyr_client.console.read()    # Read all the output we got
        self.report_info("DEBUG: read %s" % s)
        matches = re.findall(r, s)
        need = 10
        if len(matches) < need:
            raise tcfl.tc.failed_e("Didn't get at least %d 'all ok' "
                                   "messages (but %d)" % (need, len(matches)))
        zephyr_client.report_pass("Got at least %d 'all ok' messages"
                                  % len(matches))
- note how after we detect the client has started running, we let the client run, waiting several times for thirty seconds...
- ... so we can mark all the targets as active using tcfl.tc.target_c.active() to avoid the server powering them off due to inactivity.
- once the target has had enough time to run, we read all the console output and count how many all ok messages have been received
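The counting step at the end of eval_10_client() is plain re.findall() over the captured console output; here is a standalone sketch of just that step, with a made-up console capture standing in for what zephyr_client.console.read() returns:

```python
import re

# Hypothetical console capture; in the testcase this comes from
# zephyr_client.console.read()
console_output = "\n".join(
    "Compared %d bytes, all ok" % n for n in (192, 192, 1453))

# Same pattern the testcase uses to spot successful echo exchanges
r = re.compile("Compared [0-9]+ bytes, all ok")
matches = re.findall(r, console_output)

need = 3    # the testcase requires 10; we only faked 3 lines here
if len(matches) < need:
    raise RuntimeError("Didn't get at least %d 'all ok' messages (but %d)"
                       % (need, len(matches)))
```

Because the pattern has no capture groups, re.findall() returns one string per full match, so its length is exactly the number of all ok lines seen.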
$ tcf run --tmpdir tmp -vv test_16.py
...
PASS2/4odfE#1 test_16.py#_test @qlan-pbuz|local/qz34a-x86: found expected `init_app: Run echo client` in console `local/qz34a-x86:default` at 0.08s
PASS2/4odfE#1 test_16.py#_test @qlan-pbuz|local/qz34a-x86: eval pass: found expected `init_app: Run echo client` in console `local/qz34a-x86:default` at 0.08s
INFO2/4odfE#1 test_16.py#_test @qlan-pbuz|local/qz34a-x86: Running for 30s (1/1)
INFO2/4odfE#1 test_16.py#_test @qlan-pbuz|local/qz34a-x86: Running for 30s (1/2)
PASS2/4odfE#1 test_16.py#_test @qlan-pbuz|local/qz34a-x86: found expected `Compared [0-9]+ bytes, all ok` in console `local/qz34a-x86:default` at 0.40s
PASS2/4odfE#1 test_16.py#_test @qlan-pbuz|local/qz34a-x86: eval pass: found expected `Compared [0-9]+ bytes, all ok` in console `local/qz34a-x86:default` at 0.40s
INFO2/4odfE#1 test_16.py#_test @qlan-pbuz|local/qz34a-x86: read console 'local/qz34a-x86:<default>' @0 2374B
PASS2/4odfE#1 test_16.py#_test @qlan-pbuz|local/qz34a-x86: Got at least 10 'all ok' messages
PASS1/4odf test_16.py#_test @qlan-pbuz: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
2.4.21. Developing Zephyr apps with the TCF’s help¶
Let's write our own Zephyr app, test_18/src/main.c:
/*
 * Copyright (c) 2017 Intel Corp
 *
 * SPDX-License-Identifier: Apache-2.0
 */
#include <zephyr.h>
#include <misc/printk.h>
#include <drivers/rand32.h>

int run_some_test(void)
{
	uint32_t r;
	r = sys_rand32_get();
	return r & 0x1;
}

void main(void)
{
	if (run_some_test())
		printk("PASS\n");
	else
		printk("FAIL\n");
}
This is a random test case: sometimes it passes, sometimes it fails.
Our app needs some assistance:
test_18
: a directory where to place all the information for it
test_18/Makefile
: Makefile to integrate into Zephyr:
BOARD ?= qemu_x86
CONF_FILE = prj.conf
include ${ZEPHYR_BASE}/Makefile.test
test_18/src
: directory where to place the source
test_18/src/Makefile
: Makefile Zephyr will call to build the app:
include $(ZEPHYR_BASE)/tests/Makefile.test
obj-y = main$(SUBSAMPLE).o
We use the $(SUBSAMPLE) appended to the file name so we can control from the environment which file we compile, as we evolve the testcase.
test_18/test.py
: TCF test script integration:
import os, time, tcfl.tc

@tcfl.tc.target('zephyr_board', app_zephyr = os.path.join("."))
class _test(tcfl.tc.tc_c):

    def setup_catch_failures(self, target):
        target.on_console_rx("FAIL", result = 'fail', timeout = False)

    def eval(self, target):
        target.expect("PASS")
The setup*() functions are called before starting the targets; in this case we set up a hook on the console to fail the testcase if we receive a FAIL string.
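The hook effectively turns the console stream into a race between the failure string and the pass string. A standalone sketch of that logic (the helper name and console text are made up for illustration; the real hook works incrementally on received data):

```python
def scan_console(text, pass_str="PASS", fail_str="FAIL"):
    """Return 'fail' if the failure marker shows up before the pass
    marker, 'pass' if the pass marker is found, None if neither is."""
    fail_at = text.find(fail_str)
    pass_at = text.find(pass_str)
    if fail_at != -1 and (pass_at == -1 or fail_at < pass_at):
        return 'fail'
    if pass_at != -1:
        return 'pass'
    return None

good_run = scan_console("***** BOOTING ZEPHYR OS *****\nPASS\n")
bad_run = scan_console("***** BOOTING ZEPHYR OS *****\nFAIL\n")
```

If neither marker ever appears, the real framework times out and blocks the testcase instead of passing or failing it.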
When you run it and it passes:
$ tcf run --tmpdir tmp -vvy test_18/
INFO2/ toplevel @local: scanning for test cases
INFO1/vdac test_18/test.py#_test @srrsotc03/qz37g-riscv32:riscv32: will run on target group 'target=srrsotc03/qz37g-riscv32:riscv32'
PASS2/vdac test_18/test.py#_test @srrsotc03/qz37g-riscv32:riscv32: configure passed
PASS1/vdac test_18/test.py#_test @srrsotc03/qz37g-riscv32:riscv32: build passed
PASS2/vdac test_18/test.py#_test @srrsotc03/qz37g-riscv32:riscv32: deploy passed
INFO2/vdacE#1 test_18/test.py#_test @srrsotc03/qz37g-riscv32:riscv32: Reset
PASS2/vdacE#1 test_18/test.py#_test @srrsotc03/qz37g-riscv32:riscv32: found expected `PASS` in console `srrsotc03/qz37g-riscv32:default` at 0.62s
PASS2/vdacE#1 test_18/test.py#_test @srrsotc03/qz37g-riscv32:riscv32: eval pass: found expected `PASS` in console `srrsotc03/qz37g-riscv32:default` at 0.62s
PASS1/vdac test_18/test.py#_test @srrsotc03/qz37g-riscv32:riscv32: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
and when it fails:
$ tcf run --tmpdir tmp -vvy test_18/
INFO2/ toplevel @local: scanning for test cases
INFO1/f95t test_18/test.py#_test @srrsotc03/qz35h-nios2:nios2: will run on target group 'target=srrsotc03/qz35h-nios2:nios2'
PASS2/f95t test_18/test.py#_test @srrsotc03/qz35h-nios2:nios2: configure passed
PASS1/f95t test_18/test.py#_test @srrsotc03/qz35h-nios2:nios2: build passed
PASS2/f95t test_18/test.py#_test @srrsotc03/qz35h-nios2:nios2: deploy passed
INFO2/f95tE#1 test_18/test.py#_test @srrsotc03/qz35h-nios2:nios2: Reset
FAIL2/f95tE#1 test_18/test.py#_test @srrsotc03/qz35h-nios2:nios2: eval failed: found expected (for failure) `FAIL` in console `srrsotc03/qz35h-nios2:default` at 0.10s
FAIL0/f95t test_18/test.py#_test @srrsotc03/qz35h-nios2:nios2: evaluation failed
FAIL0/ toplevel @local: 1 tests (0 passed, 1 failed, 0 blocked, 0 skipped) - failed
We are going to evolve this app to see what is in a Zephyr testcase.
2.4.21.1. Evolving into a TC test case¶
Anything can be used to communicate via the console whether it passes or fails; however, to be consistent and make it easy, Zephyr has standardized on the TC macros and the ztest framework; copy main.c to main-b.c and edit it, adding:
/*
 * Copyright (c) 2017 Intel Corp
 *
 * SPDX-License-Identifier: Apache-2.0
 */
#include <zephyr.h>
#include <misc/printk.h>
#include <drivers/rand32.h>
#include <tc_util.h>

int run_some_test(void)
{
	uint32_t r;
	r = sys_rand32_get();
	return r & 0x1;
}

void main(void)
{
	int r;
	TC_START("random test");
	if (run_some_test())
		r = TC_PASS;
	else
		r = TC_FAIL;
	TC_END_RESULT(r);
	TC_END_REPORT(r);
}
Our testing functions are slightly modified (no arguments or return values) and they just have to call ztest functions to indicate a failure. A suite is declared to tie them all together and launch it. Error messages will be printed.
import os, time, tcfl.tc

@tcfl.tc.target('zephyr_board', app_zephyr = os.path.join("."))
class _test(tcfl.tc.tc_c):

    def setup_catch_failures(self, target):
        target.on_console_rx("PROJECT EXECUTION FAILED",
                             result = 'fail', timeout = False)

    def eval(self, target):
        target.expect("RunID: %(runid)s:%(tghash)s" % target.kws)
        target.expect("PROJECT EXECUTION SUCCESSFUL")
Now run:
$ (export SUBSAMPLE=-b; tcf run --tmpdir tmp -vvvy test_18/test$SUBSAMPLE.py)
(remember that for the sake of brevity in the training, we use the SUBSAMPLE environment variable to select which Python test file script and which C source files we want to use)
produces:
...
PASS2/rmk0E#1 test_18/test.py#_test @srrsotc03/qc1000-01:x86: found expected `PASS` in console `srrsotc03/qc1000-01:default` at 1.17s
PASS3/rmk0E#1 test_18/test.py#_test @srrsotc03/qc1000-01:x86: console output: ***** BOOTING ZEPHYR OS v1.7.99 - BUILD: Jun 4 2017 12:31:00 *****
PASS3/rmk0E#1 test_18/test.py#_test @srrsotc03/qc1000-01:x86: console output: tc_start() - random test
PASS3/rmk0E#1 test_18/test.py#_test @srrsotc03/qc1000-01:x86: console output: ===================================================================
PASS3/rmk0E#1 test_18/test.py#_test @srrsotc03/qc1000-01:x86: console output: PASS - main.
PASS3/rmk0E#1 test_18/test.py#_test @srrsotc03/qc1000-01:x86: console output: ===================================================================
PASS3/rmk0E#1 test_18/test.py#_test @srrsotc03/qc1000-01:x86: console output: RunID: :wpgl
PASS3/rmk0E#1 test_18/test.py#_test @srrsotc03/qc1000-01:x86: console output: PROJECT EXECUTION SUCCESSFUL
PASS2/rmk0E#1 test_18/test.py#_test @srrsotc03/qc1000-01:x86: eval pass: found expected `PASS` in console `srrsotc03/qc1000-01:default` at 1.17s
...
The message RunID: :wpgl
from this line:
PASS3/rmk0E#1 test_18/test.py#_test @srrsotc03/qc1000-01:x86: console output: RunID: :wpgl
will be unique for each combination of testcase name, target group where it runs and the app itself (in our case test_18/src) and it is always good to verify it was printed to ensure the right image was found. For that, we can use target.kws’s tghash and runid keys:
target.expect("RunID: %(runid)s:%(tghash)s" % target.kws)
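This works because target.kws is a plain dictionary of keywords, so Python's dictionary-based %-formatting expands each %(NAME)s placeholder by key; the values below are made up to mirror the console log above:

```python
# Hypothetical keyword values; tcf run fills these in for real
kws = {"runid": "", "tghash": "wpgl"}

# Each placeholder needs the trailing 's' conversion character
msg = "RunID: %(runid)s:%(tghash)s" % kws
# with an empty runid this yields "RunID: :wpgl", as in the log above
```

Note the trailing s in every placeholder; leaving it out (%(tghash)) raises a ValueError at format time instead of expanding the key.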
2.4.21.2. Evolving into a ztest test case¶
ztest is a unit test library, whose API can be found in tests/ztest/include.
Copy src/main-b.c to src/main-c.c and introduce the highlighted modifications:
/*
 * Copyright (c) 2017 Intel Corp
 *
 * SPDX-License-Identifier: Apache-2.0
 */
#include <zephyr.h>
#include <misc/printk.h>
#include <drivers/rand32.h>
#include <ztest.h>

void run_some_test1(void)
{
	uint32_t r = sys_rand32_get();
	zassert_true(r & 0x1, "random1");
}

void run_some_test2(void)
{
	uint32_t r = sys_rand32_get();
	zassert_true(r & 0x1, "random2");
}

void test_main(void)			/* note test_main() */
{
	ztest_test_suite(		/* declare the test suite */
		test_18,
		ztest_unit_test(run_some_test1),
		ztest_unit_test(run_some_test2));
	ztest_run_test_suite(test_18);	/* run it */
}
Thus when a testcase runs, it will print PROJECT EXECUTION SUCCESSFUL or PROJECT EXECUTION FAILED and a few other messages; copy test-b.py to test-c.py and add:
import os, time, tcfl.tc

@tcfl.tc.target('zephyr_board', app_zephyr = os.path.join("."))
class _test(tcfl.tc.tc_c):

    @tcfl.tc.serially()
    def build_00(self, target):
        target.zephyr.config_file_write('ztest', 'CONFIG_ZTEST=y')

    def setup_catch_failures(self, target):
        target.on_console_rx("PROJECT EXECUTION FAILED",
                             result = 'fail', timeout = False)

    def eval(self, target):
        target.expect("RunID: %(runid)s:%(tghash)s" % target.kws)
        target.expect("PROJECT EXECUTION SUCCESSFUL")
A new configuration setting, CONFIG_ZTEST, is needed, which we can set using a build method (as done above) or by modifying prj.conf.
Running it:
$ (export SUBSAMPLE=-c; ~/z/v0.11-tcf.git/tcf run --tmpdir tmp -vyvvvv test_18/test$SUBSAMPLE.py)
INFO1/tgdh test_18/test-c.py#_test @local/qz30a-x86:x86: will run on target group 'target=local/qz30a-x86:x86'
PASS2/tgdh test_18/test-c.py#_test @local/qz30a-x86:x86: configure passed
PASS1/tgdh test_18/test-c.py#_test @local/qz30a-x86:x86: build passed
PASS2/tgdh test_18/test-c.py#_test @local/qz30a-x86:x86: deploy passed
INFO2/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: Reset
PASS2/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: found expected `RunID: :5ohy` in console `local/qz30a-x86:default` at 0.05s
PASS3/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: console output: ***** BOOTING ZEPHYR OS v1.7.99 - BUILD: Jun 4 2017 17:36:02 *****
PASS3/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: console output: Running test suite test_18
PASS3/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: console output: tc_start() - run_some_test1
PASS3/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: console output: ===================================================================
PASS3/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: console output: PASS - run_some_test1.
PASS3/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: console output: tc_start() - run_some_test2
PASS3/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: console output: ===================================================================
PASS3/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: console output: PASS - run_some_test2.
PASS3/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: console output: ===================================================================
PASS3/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: console output: RunID: :5ohy
PASS3/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: console output: PROJECT EXECUTION SUCCESSFUL
PASS2/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: eval pass: found expected `RunID: :5ohy` in console `local/qz30a-x86:default` at 0.05s
...
PASS2/tgdhE#1 test_18/test-c.py#_test @local/qz30a-x86:x86: found expected `PROJECT EXECUTION SUCCESSFUL` in console `local/qz30a-x86:default` at 0.05s
...
PASS1/tgdh test_18/test-c.py#_test @local/qz30a-x86:x86: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
2.4.22. Change of pace, input to a Zephyr test case¶
Let’s play with a Zephyr shell example:
import os
import tcfl.tc
import tcfl.tl

@tcfl.tc.target(
    'zephyr_board '
    # Shell app can't run on NIOS2/RISCv32 due to no
    # IRQ-based UART support
    'and not zephyr_board in [ "qemu_nios2", "qemu_riscv32" ]',
    app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
                              "samples", "subsys", "shell", "shell"))
class _test(tcfl.tc.tc_c):
    zephyr_filter = "UART_CONSOLE"
    zephyr_filter_origin = os.path.abspath(__file__)

    def eval(self, target):
        self.expecter.timeout = 20
        target.crlf = "\r"
        target.expect("shell>")
        target.send("select sample_module")
        target.expect("sample_module>")
- you can use shell based apps to implement multiple test cases on a single Zephyr app using the TC framework.
- Use target.send to send data to the target's console, as if you were typing it.
$ tcf run --tmpdir tmp -yvv test_17.py
INFO2/ toplevel @local: scanning for test cases
INFO1/3daq test_17.py#_test @local/qz36a-arm:arm: will run on target group 'target=local/qz36a-arm:arm'
PASS2/3daq test_17.py#_test @local/qz36a-arm:arm: configure passed
PASS1/3daq test_17.py#_test @local/qz36a-arm:arm: build passed
PASS2/3daq test_17.py#_test @local/qz36a-arm:arm: deploy passed
INFO2/3daqE#1 test_17.py#_test @local/qz36a-arm:arm: Reset
PASS2/3daqE#1 test_17.py#_test @local/qz36a-arm:arm: found expected `shell>` in console `local/qz36a-arm:default` at 0.05s
PASS2/3daqE#1 test_17.py#_test @local/qz36a-arm:arm: eval pass: found expected `shell>` in console `local/qz36a-arm:default` at 0.05s
INFO2/3daqE#1 test_17.py#_test @local/qz36a-arm:arm: wrote 'select sample_module' to console 'local/qz36a-arm:<default>'
PASS2/3daqE#1 test_17.py#_test @local/qz36a-arm:arm: found expected `sample_module>` in console `local/qz36a-arm:default` at 0.05s
PASS2/3daqE#1 test_17.py#_test @local/qz36a-arm:arm: eval pass: found expected `sample_module>` in console `local/qz36a-arm:default` at 0.05s
PASS1/3daq test_17.py#_test @local/qz36a-arm:arm: evaluation passed
PASS0/ toplevel @local: 1 tests (1 passed, 0 failed, 0 blocked, 0 skipped) - passed
Note how you can now acquire the target and interact with it:
$ tcf acquire qz36a-arm
$ tcf console-write -i qz36a-arm
WARNING: This is a very limited interactive console
Escape character twice ^[^[ to exit
shell> shell> select sample_module # This was typed by the testcase
sample_module> help # I typed this 'help'
help
ping
params
sample_module> ping # Same with 'ping'
pong
sample_module>
Warning
Make sure the target is on and is not taken away from you in the middle due to inactivity; a trick for that is to run:
$ while true; do tcf acquire qz36a-arm || true; sleep 30s; done &
this is a loop, running in the background, that refreshes your acquisition of the target twice a minute while you interact with it.
Kill it when you are done, otherwise others won't be able to use the target.
Note
the interactive console is quite limited; some targets (QEMU) have a tendency to drop characters or not echo input, and some stop working half way (SAMe70).
2.4.23. FIXME: Missing¶
- USB - console - mount
- Power
- Network
- Network + tunnel
- Network + linux
2.5. Report Drivers¶
A report driver is what gets called to report information by the different components of the TCF client test runner.
The APIs that are the entry points for reporting are:
- target object: report_info, report_pass, report_fail, report_blck, report_skip
- testcase object: report_info, report_pass, report_fail, report_blck, report_skip
The system provides default drivers that report to the console and a log file, as well as create report files with results of failures.
To create a new driver, one can subclass tcfl.report.report_c:
#! /usr/bin/env python
import tcfl.report

class report_ex_c(tcfl.report.report_c):
    def _report(self, level, alevel, ulevel, _tc, tag, message, attachments):
        print "REPORTING ", level, alevel, _tc, tag, message, attachments

tcfl.report.report_c.driver_add(report_ex_c("results.log"))
with the following being:
- level is the verbosity level of the message; note report levels greater or equal to 1000 are used to pass control messages, so they shall not be subject to normal verbosity control.
- alevel is the verbosity level at which the attachments are reported
- ulevel is deprecated
- _tc is the tcfl.tc.tc_c or tcfl.tc.target_c object that is reporting
- tag is a string PASS, FAIL, BLCK, SKIP or INFO, indicating what condition is being reported.
- message is the message the caller is reporting; if it starts with "COMPLETION ", this is the final message issued to recap the result of executing a single testcase.
- attachments is a dictionary, keyed by strings, of objects that the reporter decided to pass as extra information
Warning
This function will be called for every single report that the internals of the test runner and the testcases issue, from multiple threads at the same time, so it makes sense to protect against concurrent access to shared resources and to ignore high log levels.
From these functions basically anything can be done; however, as they are called frequently, they have to be efficient or they will slow testcase execution considerably. Actions done in this function can be:
- filtering (to only run for certain testcases, log levels, tags or messages)
- dump data to a database
- record to separate files based on whatever criteria
- etc
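The filtering idea can be sketched with a stub base class standing in for tcfl.report.report_c (so this is only an illustration of the pattern, not the real API): return early on anything the driver does not care about, keeping the common path cheap:

```python
class report_base(object):
    # Stand-in for tcfl.report.report_c, just to make the sketch
    # self-contained
    def _report(self, level, alevel, ulevel, _tc, tag, message,
                attachments):
        raise NotImplementedError

class report_failures_only(report_base):
    """Collect only FAIL/BLCK reports below the control-message level."""
    def __init__(self):
        self.collected = []

    def _report(self, level, alevel, ulevel, _tc, tag, message,
                attachments):
        if level >= 1000:                # control messages: skip
            return
        if tag not in ('FAIL', 'BLCK'):  # only interested in failures
            return
        self.collected.append((tag, message))

driver = report_failures_only()
driver._report(2, 3, None, None, 'PASS', 'eval passed', {})
driver._report(1000, 3, None, None, 'FAIL', 'control message', {})
driver._report(0, 3, None, None, 'FAIL', 'eval failed', {})
# only the third call makes it past both filters
```

Putting the cheapest checks first means the vast majority of reports cost only a couple of comparisons.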
2.5.1. Example 1: reporting completion of testcase execution¶
For example, to report all the testcases that finalize to a file called results.log, consider this example:
#! /usr/bin/env python
"""
Example report driver
"""
import threading
import time

import tcfl.report
import tcfl.tc

class report_ex_c(tcfl.report.report_c):
    """
    Example report driver
    """
    def __init__(self, log_file_name):
        tcfl.report.report_c.__init__(self)
        self.log_file_name = log_file_name
        with open(log_file_name, "w") as f:
            f.write("%f started\n" % time.time())
        self.lock = threading.Lock()

    def _report(self, level, alevel, ulevel, _tc, tag, message, attachments):
        """
        Report data

        Note this can be called concurrently, so the file could be
        overwritten; measures to avoid that involve a lock, like what
        is used here.
        """
        # We don't operate on the global reporter
        if getattr(_tc, "skip_reports", False) == True:
            return
        # The top level completion message starts with COMPLETION
        if not message.startswith("COMPLETION"):
            return
        # _tc can be a target_c, but not for COMPLETION
        assert isinstance(_tc, tcfl.tc.tc_c)
        # Write it down! Append, so we don't truncate what previous
        # reports already wrote.
        with self.lock, open(self.log_file_name, "a") as f:
            f.write("%s DEBUG HASHID %s TC %s RESULT %s\n" %
                    (time.time(), _tc.ticket, _tc.name, tag))
            for twn, target in _tc.targets.iteritems():
                f.write("DEBUG TARGET %s = %s:%s\n" % (twn, target.fullid,
                                                       target.bsp_model))

tcfl.report.report_c.driver_add(report_ex_c("results.log"))
note how this example:
- creates the file with a timestamp when the driver is initialized in the __init__ method
- skips any reporter that has a skip_reports attribute
- acts only on testcase completion by looking for the COMPLETION string at the beginning of the message
- correctly assumes the testcase might be assigned none, one or more targets, depending on the testcase – it merely walks the list of targets assigned to the testcase to print information about them as needed.
- accesses a shared resource (the file) by taking a lock, making sure only one thread is accessing it at the same time, to avoid corruption.
- registers the driver instantiating the class
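The locking point generalizes: _report() may run on several threads at once, so any shared resource it touches needs serializing. A minimal standalone sketch of the same pattern, with a list standing in for the log file:

```python
import threading

lock = threading.Lock()
lines = []      # stand-in for the shared log file

def report(tag, message):
    # Serialize access to the shared resource, as _report() does
    # with its log file
    with lock:
        lines.append("%s %s" % (tag, message))

threads = [threading.Thread(target=report, args=('PASS', 'tc %d' % i))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# all ten reports are recorded; none lost to concurrent access
```

Using the lock as a context manager (with lock:) guarantees it is released even if the write raises, which matters in a long-running test runner.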
2.5.2. Example 2: reporting failures¶
The builtin report failure driver works similarly, but collects information as it goes into different files for each testcase instantiation. When the testcase completes, it writes a single report with all the information, using the same method described in the first example.
See tcfl.report.file_c.
2.6. App Builders¶
FIXME
2.6.1. Overriding actions¶
Any defined application builder will insert in the testcase, for each named target (targetX) declared with tcfl.tc.target() (or tcfl.tc.interconnect()), the following methods:
configure_50_targetX(self, targetX)
build_50_targetX(self, targetX)
deploy_50_targetX(self, targetX)
setup_50_targetX(self, targetX)
start_50_targetX(self, targetX)
teardown_50_targetX(self, targetX)
clean_50_targetX(self, targetX)
however, you can override any by defining it yourself in your test class:
def build_50_targetX(self, targetX):
    targetX.report_info("Doing something else for building")
    ...
while you can still call the overriden function by its new name, overriden_build_50_targetX():
def build_50_targetX(self, targetX):
    targetX.report_info("Doing something else before building")
    ...
    self.overriden_build_50_targetX(targetX)
so the functionality is quite quick to reuse.
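The renaming mechanism can be pictured in plain Python; this is only a sketch of the idea (inject_method and the class below are made up, not TCF's actual implementation): when the app builder wants to inject a default method into a testcase class that already defines its own, it keeps the user's version and stashes the default under the overriden_ prefix:

```python
def inject_method(cls, name, default_fn):
    # If the testcase class already defines `name`, keep its version
    # and save the default as overriden_<name> so it stays callable
    if name in vars(cls):
        setattr(cls, "overriden_" + name, default_fn)
    else:
        setattr(cls, name, default_fn)

def default_build(self):            # the app builder's default action
    return "default build"

class my_tc(object):                # a testcase overriding the action
    def build_50_target(self):
        return "custom + " + self.overriden_build_50_target()

inject_method(my_tc, "build_50_target", default_build)
result = my_tc().build_50_target()
# result == "custom + default build"
```

The user's override runs first and decides whether, and when, to chain to the default behavior.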
See test_zephyr_override.py, where that is done to build Zephyr's Hello World!:
#! /usr/bin/python2
#
# Copyright (c) 2017 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
import os
import tcfl.tc
import tcfl.tl

@tcfl.tc.tags(**tcfl.tl.zephyr_tags())
# Ask for a target that defines a zephyr_board field, which indicates
# it can run the Zephyr OS
@tcfl.tc.target("zephyr_board",
                app_zephyr = os.path.join(tcfl.tl.ZEPHYR_BASE,
                                          "samples", "hello_world"))
@tcfl.tc.tags(ignore_example = True)
class _test(tcfl.tc.tc_c):
    """
    Show an example on how to override pre-defined actions created by
    app_zephyr while running Hello World!
    """
    def build_50_target(self, target):
        target.report_info("building our own way, we are "
                           "going to hack the source first")
        self.overriden_build_50_target(target)

    @staticmethod
    def eval(target):
        target.expect("Hello World! %s" % target.kws['zephyr_board'])
2.7. Contributing¶
Check out your code in (e.g.) ~/tcf.git:
$ git clone https://github.com/intel/tcf.git tcf.git
Note
we are on the v0.11 stabilization branch (otherwise, you’d clone master).
2.7.1. Support & reporting issues¶
Please report any issues found via the GitHub project page.
2.7.2. Running the TCF client from the source tree¶
If you are developing TCF client code, it is helpful to be able to run the local, checked out copy of the code rather than having to install it system wide.
For that, you can set the configuration:
$ mkdir ~/.tcf
$ cd ~/.tcf
$ ln -s ~/tcf.git/zephyr/conf_zephyr.py
If you have installed TCF systemwide, you might have to remove /etc/tcf/conf_zephyr.py or, alternatively, pass --config-path :~/tcf.git/zephyr (the initial : removes the predefined path /etc/tcf), but that can get repetitive.
And now you can run:
$ cd anywhere
$ ~/tcf.git/tcf command...
Add servers as needed in your toplevel ~/.tcf or /etc/tcf/:
$ echo "tcfl.config.url_add('https://SERVER:5000', ssl_ignore = True)" >> conf_servers.py
A useful trick to be able to quickly switch servers (when only wanting to work on a set of servers S1 versus a set of servers S2):
Create a directory ~/s/S1, add a conf_servers.py there pointing to the servers in said set; when running tcf, use:
$ ~/tcf.git/tcf --config-path ~/s/S1 COMMAND
Maybe easier is to call the directory ~/s/S1/.tcf, cd into ~/s/S1 and run tcf from there:
$ cd ~/s/S1
$ ~/tcf.git/tcf command...
I have different directories, one called production/.tcf with all the production servers, another staging/.tcf with all the test servers, local/.tcf for my local server, etc...
2.7.3. Running the TCF server (ttbd) from the source tree¶
If you are developing TCF server code, running said code without installing it system wide (and potentially conflicting with installed versions) requires some setup. This is usually called the staging server, running locally on your machine:
Disable SELinux:
# setenforce 0
Build what’s needed (ttblc.so):
$ cd ~/z/tcf.git/ttbd
$ python setup.py build
$ ln -s build/lib.linux-x86_64-2.7/ttblc.so
Ensure your home directory and such are readable by users members of your group:
$ chmod g+rX ~
$ chmod -R g+rX ~/tcf.git
Create a staging configuration directory /etc/ttbd-staging, make it owned by your user, so you don’t have to work as root:
$ sudo install -d -o $LOGNAME -g $LOGNAME /etc/ttbd-staging
Link the following config files from your source tree:
$ cd /etc/ttbd-staging
$ ln -s ~/tcf.git/ttbd/conf_00_lib.py
$ ln -s ~/tcf.git/ttbd/conf_06_default.py
$ ln -s ~/tcf.git/ttbd/zephyr/conf_06_zephyr.py
Create a local configuration, so you can log in without a password from the local machine to port 5001 (port 5000 we leave for production instances):
$ cat > /etc/ttbd-staging/conf_local.py <<EOF
local_auth.append("127.0.0.1")
host = "0.0.0.0"
port = 5001
EOF
To have TCF use this daemon, add a configuration line:
tcfl.config.url_add('https://SERVER:5001', ssl_ignore = True)
to any TCF config file which your client will read.
Create a local configuration file conf_10_local.py with local configuration statements to enable hardware as needed. The default configuration has only virtual machines.
If you will use local Linux VMs (qlf*), set up the images by following this FIXME: procedure.
Create a configuration for systemd to start the daemon:
# cp ~user/tcf.git/ttbd/ttbd@.service /etc/systemd/system/ttbd@staging.service
Edit said file and:
- In SupplementaryGroups, append your login name, so the process can access your home directory
- In ExecStart, replace /usr/bin/ttbd with /home/USERNAME/tcf.git/ttbd/ttbd so it starts the copy of the daemon you are working on
(note if you ever need to run strace on the daemon, you can prefix /usr/bin/strace -f -o /tmp/ttbd.strace.log to record every single system call... for those hard debug cases :)
- Reload the systemd configuration:
# systemctl daemon-reload
Start the daemon with:
# systemctl restart ttbd@staging
Make it always start automatically with:
# systemctl enable ttbd@staging
2.7.4. Workflow for contributions¶
Adapted from http://docs.zephyrproject.org/contribute/contribute_guidelines.html#contribution-workflow
Make small, logically self-contained, controlled changes to simplify review. This makes merging and rebasing easier, and keeps the change history clear and clean.
For example, cleaning up code would be a set of commits:
- only whitespace changes to adapt to convention
- fix one type of warnings
- fix one type of errors
- etc…
Provide as much information as you can about your change, update appropriate documentation, and testing changes thoroughly before submitting.
We accept contributions as GitHub pull requests, to save everyone’s time and provide a consistent review platform for all.
A github-based workflow can be:
Create a fork to your personal account on GitHub (click on the fork button in the top right corner of the project repo page in GitHub)
On your development computer, clone the fork you just made:
$ git clone https://github.com/<your github id>/tcf.git
Configure git to know about the upstream repo:
$ git remote add upstream https://github.com/intel/tcf
$ git remote -v
Create a topic branch (off of master or any other branch) for your work (if you're addressing an issue, we suggest including the issue number in the branch name):
$ git checkout master
$ git checkout -b fix_comment_typo
Make changes, test locally, change, test, test again; some base testcases we will run are at least:
$ cd ~/tcf.git
$ ./lint-all.py
$ ./tcf run ~/tcf.git/tests
Start the pull request process by adding your changed files:
$ git add [file(s) that changed, add -p if you want to be more specific]
You can see files that are not yet staged using:
$ git status
Verify changes to be committed look as you expected:
$ git diff --cached
Commit your changes to your local repo:
$ git commit -vs
The -s option automatically adds your Signed-off-by: to your commit message. Your commit will be rejected without this line, which indicates your agreement with the DCO (Developer Certificate of Origin).
Commit messages shall be explanatory and concise, properly spelled, and in the form:
AREA: SHORT SUMMARY

Longer description that can be obviated if the commit is quite obvious
and/or the summary already says it all. Note implementation details
shall be detailed in the code; it is ok for the commit message to
point to those, as we don't want information duplicated unnecessarily.

Signed-off-by: Random Developer <random.developer@somewhere.org>
Push your topic branch with your changes to your fork in your personal GitHub account:
$ git push origin fix_comment_typo
In your web browser, go to your forked repo and click on the Compare & pull request button for the branch you just worked on and you want to open a pull request with.
Review the pull request changes, and verify that you are opening a pull request for the appropriate branch. The title and message from your commit message should appear as well.
GitHub will assign one or more suggested reviewers (based on the CODEOWNERS file in the repo). If you are a project member, you can select additional reviewers now too.
Click on the submit button and your pull request is sent and awaits review. Email will be sent as review comments are made, or you can check on your pull request at https://github.com/intel/tcf/pulls.
While you’re waiting for your pull request to be accepted and merged, you can create another branch to work on another issue (be sure to make your new branch off of master and not the previous branch):
$ git checkout master
$ git checkout -b fix_another_issue
and use the same process described above to work on this new topic branch.