8. APIs¶
8.1. TCF run: testcase API and target manipulation during testcases¶
8.1.1. TCF’s backbone test case finder and runner¶
This implements a high-level test case finder and runner that locates and executes test cases.
The execution of each test case consists of the following phases:
- configure
- build
- target acquisition
- deploy
- one or more evaluation sequences (each consisting of setup, start, evaluation per se, teardown)
- clean [not executed by default]
The configuration, build and deployment phases happen on the local host; the evaluation phases can happen on the local host (for static tests) or on remote targets (for dynamic tests).
The backbone is designed so multiple drivers (test case drivers) can
be implemented that find test cases and specify to the backbone how to
build, run and test for success or failure. This is done by
subclassing tcfl.tc.tc_c
and extending or redefining
tc_c.is_testcase()
.
Testcases are defined also by subclassing tcfl.tc.tc_c
and
implementing the different methods the meta runner will call to
evaluate. Testcases can manipulate targets (if they need any) by using
the APIs defined by tcfl.tc.target_c
.
The runner will collect the list of available targets and determine which testcases have to be run on which targets, and then create an instance of each testcase for each group of targets where it has to run. All the instances will then be run in parallel through a multiprocessing pool.
Testcases report results via the report API; these reports are handled by
drivers defined following tcfl.report.report_c
, which can be
subclassed and extended to report to different destinations according
to need. Default drivers report to console and logfiles.
8.1.1.1. Testcase run identification¶
A message identification mechanism prefixes all messages with a code:
[RUNID:]HASH{CBDEL}[XX][.N]
- RUNID is a constant string to identify a run of a set of test cases (defaults to nothing, can be autogenerated with -i or a specific string given with -i RUNID).
- HASH is a base32 encoded hash of the testcase name, targets where it is to be run and their BSP model.
- CBDEL is one capital letter representing the phase being run (Configure, Build, Deploy, Evaluation, cLean)
- [XX]: base32 encoded hash of the BSP name, applies only to dynamic test case builds per BSP.
This helps locate anything specific to a testcase by grepping the logfile for a given string; adding more components restricts the output.
This means that message IDs are stable across runs, except for the RUNID if one is specified.
We also use the RUNID:TC combination for a ticket when requesting a target lock; note this does not conflict with other users, as the tickets are namespaced per user. This allows the server log to be used to cross-reference what was being run, to sort out issues.
-
tcfl.tc.
import_mp_pathos
()¶
-
tcfl.tc.
import_mp_std
()¶
-
exception
tcfl.tc.
exception
(description, attachments=None)¶ General base exception for reporting results of any phase of test cases
Parameters: - msg (str) – a message to report
- attachments (dict) –
a dictionary of items to report, with a few special fields:
- target: this is a tcfl.tc.target_c which shall be used for reporting
- dlevel: this is an integer that indicates the relative level of verbosity (FIXME: link to detailed explanation)
- alevel: this is an integer that indicates the relative level of verbosity for attachments (FIXME: link to detailed explanation)
- any other fields will be passed verbatim and reported
Have to use a dictionary (vs using kwargs) so the name of the keys can contain spaces, for better reporting.
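For example, a hedged sketch of raising one of these exceptions with attachments from an evaluation method (output is a hypothetical string previously read from the target):
>>> # 'output' is a hypothetical string captured earlier from the target
>>> if "ERROR" in output:
>>>     raise tcfl.tc.failed_e(
>>>         "command reported an error",
>>>         { "target": target, "dlevel": 1, "command output": output })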
-
attachments_get
()¶
-
exception
tcfl.tc.
pass_e
(description, attachments=None)¶ The test case passed
-
exception
tcfl.tc.
blocked_e
(description, attachments=None)¶ The test case could not be completed because something failed and disallowed testing if it would pass or fail
-
exception
tcfl.tc.
error_e
(description, attachments=None)¶ Executing the test case found an error
-
exception
tcfl.tc.
failed_e
(description, attachments=None)¶ The test case failed
-
exception
tcfl.tc.
skip_e
(description, attachments=None)¶ A decision was made to skip executing the test case
-
class
tcfl.tc.
target_extension_c
(_target)¶ Implement API extensions to targets
An API extension allows you to extend the API for the
tcfl.tc.target_c
class so that more functionality can be added to the target objects passed to testcase methods (like build*(), eval*(), etc) and used as:
>>> class extension_a(target_extension_c):
>>>     def function(self):
>>>         self.target.report_info("Hello world from extension_a")
>>>     variable = 34
>>> ...
>>> target_c.extension_register(extension_a)
>>> ...
Now, in an (e.g) evaluation function in a testcase:
>>> @tcfl.tc.target()
>>> @tcfl.tc.target()
>>> class _test(tcfl.tc.tc_c):
>>>
>>>     def eval_something(self, target, target1):
>>>         target1.extension_a.function()
>>>         if target1.extension_a.variable > 3:
>>>             do_something()
>>> ...
Extensions have to be registered with
tcfl.tc.target_c.extension_register()
. Unregister with tcfl.tc.target_c.extension_unregister()
The extension can be anything, but extensions are commonly used to provide code to access APIs that are not distributed as part of the core TCF distribution (for example, an API to access a special sensor).
A package might add support on the server side for an interface to access the target and on the client side to access said interfaces.
The __init__() method will typically first check if the target meets the criteria needed for the extension to work or be used. If not, it can raise
target_extension_c.unneeded
to avoid the extension being created. Then it proceeds to create an instance that will be attached to the target for later use.
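A minimal sketch of such an extension, assuming a hypothetical some_sensor tag in the target's remote tags marks targets that provide the capability:
>>> class sensor_extension(tcfl.tc.target_extension_c):
>>>     def __init__(self, target):
>>>         # skip targets that do not declare the (hypothetical) tag
>>>         if not target.rt.get('some_sensor', False):
>>>             raise self.unneeded
>>>         tcfl.tc.target_extension_c.__init__(self, target)
>>>
>>>     def hello(self):
>>>         self.target.report_info("some_sensor present on this target")
>>>
>>> # usually done from a configuration file
>>> tcfl.tc.target_c.extension_register(sensor_extension)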
-
exception
unneeded
¶ Raise this from __init__() if this extension is not needed for this target.
-
target
= None¶ Target this extension applies to
-
exception
-
class
tcfl.tc.
target_c
(rt, testcase, bsp_model, target_want_name)¶ A remote target that can be manipulated
Parameters: - rt (dict) – remote target descriptor (dictionary) as returned
by
tcfl.ttb_client.rest_target_find_all()
and others. - testcase (tc_c) – testcase descriptor to which this target instance will be uniquely assigned.
A target always operates in a given BSP model, as decided by the testcase runner. If a remote target A has two BSP models (1 and 2) and a testcase T shall be run on both, it will create two testcase instances, T1 and T2. Each will be assigned an instance of
target_c
, A1 and A2 respectively, representing the same target A, but each set to a different BSP model. Note these objects expose the basic target API; different extensions provide APIs to access other interfaces, depending on whether the target exposes them. This is the current list of implemented interfaces:
console
capture
for stream and snapshot captures of audio, video, network traffic, etc.
debug
fastboot
images
ioc_flash_server_app
power
shell
ssh
tunnel
zephyr
-
want_name
= None¶ Name this target is known to by the testcase (as it was claimed with the
tcfl.tc.target()
decorator)
-
rt
= None¶ Remote tags of this target
-
id
= None¶ (short) id of this target
-
fullid
= None¶ Full id of this target
-
type
= None¶ Type name of this target
-
ticket
= None¶ ticket used to acquire this target
-
testcase
= None¶ Testcase that this target is currently executing
-
keep_active
= None¶ Make sure the testcase indicates to the daemon that this target is to be marked as active during the execution of expectation loops.
-
bsps_stub
= None¶ Dict of BSPs that have to be stubbed for the board to work correctly in the current BSP model (e.g. if the board has two BSPs and BSP1 needs to have an image of something so BSP2 can start). The App builder can manipulate this to remove BSPs that can be ignored. The value is a tuple (app, srcinfo) that indicates which App builder will build the stub and with which source information (path to the source).
-
tmpdir
= None¶ Temporary directory where to store files – this is the same as the testcase’s – it is needed for the report driver to find where to put stuff.
-
kws
= None¶ Keywords for
%(KEY)[sd]
substitution specific to this target and the currently active BSP model and BSP as set with bsp_set().
These are obtained from the remote target descriptor (self.rt) as obtained from the remote ttbd server.
These can be used to generate strings based on information, as:
>>> print "Something %(id)s" % target.kws >>> target.shcmd_local("cp %(id)s.config final.config")
To find which fields are available for a target:
$ tcf list -vv TARGETNAME
The testcase will provide also other fields, in
tcfl.tc.tc_c.kws
, which are rolled into this variable too. See here for how to find more available keywords. Note that testcases might set more keywords in the target or the testcase with:
>>> target.kw_set("keywordname", "somevalue")
>>> self.kw_set("keywordname", "somevalue")
As well, any of the target's properties set with
TARGET.property_set
(or its command line equivalent tcf property-set TARGET PROPERTY VALUE
) will show up as keywords.
-
kws_origin
= None¶ Origin of keys defined in self.kws
-
classmethod
extension_register
(ext_cls, name=None)¶ Register an extension to the
tcfl.tc.target_c
class.This is usually called from a config file to register an extension provided by a package.
See
target_extension_c
for detailsParameters: - ext_cls (target_extension_c) – a class that provides an extension
- name (str) – (optional) name of the extension (defaults to the class name)
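For instance, a hedged sketch of registering an extension from a client configuration file (the module and class names are hypothetical):
>>> # e.g. in a TCF client configuration file
>>> import tcfl.tc
>>> import my_extension_module       # hypothetical module providing my_extension
>>> tcfl.tc.target_c.extension_register(my_extension_module.my_extension)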
-
classmethod
extension_unregister
(ext_cls, name=None)¶ Unregister an extension to the
tcfl.tc.target_c
class.This is usually used by unit tests. There usually is no need to unregister extensions.
See
target_extension_c
for detailsParameters: - ext_cls (target_extension_c) – a class that provides an extension
- name (str) – (optional) name of the extension (defaults to the class name)
-
bsps_all
¶ Return a list of all BSPs in the target (note this might be more than the ones available in the currently selected BSP model).
-
bsp_set
(bsp=None)¶ Set the active BSP
If the BSP is omitted, this will select the first BSP in the current BSP model. This means that if there is a preference in multiple BSPs, they have to be listed as such in the target’s configuration.
If there are no BSPs, this will raise an exception
Parameters: bsp (str) – (optional) The name of any BSP supported by the board (not necessarily in the BSP model's list of active BSPs); these are always in bsps_all
. If this argument is False, then the active BSP is reset to none.
-
kws_set
(d, bsp=None)¶ Set a bunch of target’s keywords and values
Parameters:
-
kw_set
(kw, val, bsp=None)¶ Set a target’s keyword and value
Parameters:
-
kw_unset
(kw, bsp=None)¶ Unset a target’s string keyword
Parameters:
-
kws_required_verify
(kws)¶ Verify if a target exports required keywords, raise blocked exception if any is missing.
-
ic_field_get
(ic, field, field_description='')¶ Obtain the value of a field for a target in an interconnect
A target might be a member of one or more interconnects, as described by its tags (interconnects section).
Parameters: - ic (tcfl.tc.target_c) – target describing the interconnect
of which this target is a member (as defined in a
@
tcfl.tc.interconnect()
decorator to the testcase class) - field (str) – name of the field whose value we want.
>>> def eval_somestep(self, ic, target1, target2):
>>>     target1.shell.run("ifconfig eth0 %s/%s"
>>>                       % (target2.addr_get(ic, 'ipv4'),
>>>                          target2.ic_field_get(ic, 'ipv4_addr_len')))
-
addr_get
(ic, tech, instance=None)¶ Obtain the address for a target in an interconnect
A target might be a member of one or more interconnects, as described by its tags (interconnects section).
Parameters: - ic (tcfl.tc.target_c) – target describing the interconnect
of which this target is a member (as defined in a
@
tcfl.tc.interconnect()
decorator to the testcase class) - tech (str) –
name of the technology on which address we are interested.
As part of said membership, one or more key/value pairs can be specified. Assigned addresses are always called TECHNOLOGY_addr, where TECHNOLOGY can be things like ipv4, ipv6, bt, mac, etc…
If tech fits a whole key name, it will be used instead.
- instance (str) –
(optional) when this target has multiple connections to the same interconnect (via multiple physical or virtual network interfaces), you can select which of those instances is wanted.
By default this will return the default instance (eg, the one corresponding to the interconnect
ICNAME
), but if an instance is added, it will return the IP address for ICNAME#INSTANCE
as declared in the target's configuration with functions such as ttbl.test_target.add_to_interconnect()
.
When the target, for the current testcase, is a member of a single interconnect, any TECHNOLOGY_addr key/value for the interconnect will be available in the
kws
member, for example:
>>> target.kws['ipv4_addr']
However, when the target is a member of multiple interconnects, which members are promoted to top level is undetermined if both interconnects provide address information for the same technology. Use this function to obtain the interconnect-specific information.
>>> def eval_somestep(self, ic, target1, target2):
>>>     target1.shell.run("scp /etc/passwd %s:/etc/passwd"
>>>                       % target2.addr_get(ic, 'ipv4'))
-
app_get
(bsp=None, noraise=True)¶ Return the App builder that is assigned to a particular BSP in the target.
Parameters:
-
report_pass
(message, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5)¶
-
report_fail
(message, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5)¶
-
report_error
(message, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5)¶
-
report_blck
(message, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5)¶
-
report_skip
(message, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5)¶
-
report_info
(message, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5)¶
-
report_data
(domain, name, value, expand=True, level=None, dlevel=0)¶ Report measurable data
When running a testcase, if data is collected that has to be reported for later analysis, use this function to report it. This will be reported by the report driver in a way that makes it easy to collect later on.
Measured data is identified by a domain and a name, plus then the actual value.
A way to picture how this data can look once aggregated is as a table per domain, on which each invocation is a row and each column will be the values for each name.
Parameters: - domain (str) – to which domain this measurement applies (eg: “Latency Benchmark %(type)s”);
- name (str) – name of the value (eg: “context switch (microseconds)”); it is recommended to always add the unit the measurement represents.
- value – value to report for the given domain and name; any type can be reported.
- expand (bool) –
(optional) by default, the domain and name fields will be %(FIELD)s expanded with the keywords of the testcase or target. If False, it will not be expanded.
This enables you to, for example, specify a domain of "Latency measurements for target %(type)s", which will automatically create a different domain for each type of target.
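As an illustration, a hedged sketch of reporting a measurement from an evaluation method (the domain, name and value are made up):
>>> def eval_measure(self, target):
>>>     # latency_us would be measured on the target somehow; made-up value here
>>>     latency_us = 123.4
>>>     target.report_data("Latency Benchmark %(type)s",
>>>                        "context switch (microseconds)", latency_us)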
-
report_tweet
(what, result, extra_report='', ignore_nothing=False, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5)¶
-
shcmd_local
(cmd, origin=None, reporter=None, logfile=None)¶ Run a shell command in the local machine, substituting %(KEYWORD)[sd] with keywords defined by the target and testcase.
-
acquire
()¶ Shall we acquire this target? By default the testcases get the targets they request acquired for exclusive use, but in some cases, it might not be needed (default: True)
-
release
()¶ Release a target
-
active
()¶ Mark an owned target as active
For long running tests, indicate to the server that this target is still active.
-
property_get
(property_name, default=None)¶ Read a property from the target
Parameters: property_name (str) – Name of the property to read Returns str: value of the property (if set) or None
-
property_set
(property_name, value=None)¶ Set a property on the target
Parameters:
-
thing_plug
(thing)¶ Connect a thing described in the target’s
tags
things dictionary to the target. Parameters: thing (str) – thing to connect
-
thing_unplug
(thing)¶ Disconnect a thing described in the target’s
tags
things dictionary from the target. Parameters: thing (str) – thing to disconnect
-
thing_list
()¶ Return a list of connected things
-
console_tx
(data, console=None)¶ Transmits the data over the given console
Parameters: - data – data to be sent; data can be anything that can be transformed into a sequence of bytes
- console (str) – (optional) name of console over which to send the data (otherwise use the default one).
Note this function is equivalent to
tcfl.target_ext_console.console.write()
, which is the raw version of this function. However, this function works with the send/expect engine and will flush the expect buffer so that next time we call
expect()
, it will look for the expected data only in data received after calling this function.
-
crlf
¶ What will target_c.send() use for CR/LF when sending data to the target's consoles. Defaults to \r\n, but it can be set to any string, even "" for an empty string.
-
send
(data, console=None, crlf=None)¶ Like
console_tx()
, transmits the string of data over the given console. This function, however, differs in that it takes only strings and that it will append a CRLF sequence at the end of the given string. As well, it will flush the receive pipe so that next time we
expect()
something, it will only match data received after we called this function. Parameters: - data (str) – string of data to send
- console (str) – (optional) name of console over which to send the data (otherwise use the default one).
- crlf (str) –
(optional) CRLF technique to use, or what to append to the string as a CRLF:
- None: use whatever is in target_c.crlf
- \r: use carriage return
- \r\n: use carriage return and line feed
- \n: use line feed
- ANYSTRING: append ANYSTRING
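For example, a hedged sketch of sending a command and waiting for output (the command and expected strings are made up):
>>> target.send("uname -a")             # appends target.crlf
>>> target.expect("Linux")              # wait for the (made-up) expected output
>>> target.send("reboot", crlf="\n")    # override the CRLF convention for one call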
-
console_rx_read
(console=None, offset=0)¶ Return the data that has been read for a console until now.
Parameters:
-
console_rx_size
(console=None)¶ Return how many bytes have been received for a console
Parameters: console (str) – (optional) name of console on which the data was received (otherwise use the default one).
-
on_console_rx
(regex_or_str, timeout=None, console=None, result='pass')¶ Set up an action to perform (pass, fail, block or skip) when a string or regular expression is received on a given console in this target.
Note this does not wait for said string; you need to run the testcase’s expecter loop with:
>>> self.tls.expecter.run()
As well, those actions will be performed when running
expect()
or wait()
for blocking versions. This allows you to specify many different things you are waiting for from one or more targets, wait for all of them at the same time, and block until all of them are received (or a timeout occurs).
Parameters: - regex_or_str – string or regular expression (compiled
with
re.compile()
. - timeout (int) – Seconds to wait for regex_or_str to be
received, raise
tcfl.tc.failed_e
otherwise. If False, no timeout check is done; if None, it is taken from the default timeout set by the testcase. - console (str) – (optional) name of console from which to receive the data
- result –
what to do when that regex_or_str is found on the given console:
- pass (default): raise tcfl.tc.pass_e
- block: raise tcfl.tc.blocked_e
- error: raise tcfl.tc.error_e
- failed: raise tcfl.tc.failed_e
- blocked: raise tcfl.tc.blocked_e
Note that when running an expecter loop, if seven different actions are added indicating they are expected to pass, the seven of them must have raised a pass exception (or indicated passage somehow) before the loop will consider it a full pass. See
tcfl.expecter.expecter_c.run()
.
Raises: tcfl.tc.pass_e
,tcfl.tc.blocked_e
,tcfl.tc.failed_e
,tcfl.tc.error_e
,tcfl.tc.skip_e
, any other exception from runtimes.Returns: True if a poller for the console was added to the testcase’s expecter loop, False otherwise.
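A hedged sketch of how this might be combined with the expecter loop (the console strings are made up):
>>> def eval_boot(self, target):
>>>     # fail fast if a (made-up) error string shows up on the console
>>>     target.on_console_rx("Kernel panic", timeout=False, result='failed')
>>>     # pass once the (made-up) prompt is seen, within 60 seconds
>>>     target.on_console_rx("login:", timeout=60, result='pass')
>>>     self.tls.expecter.run()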
-
wait
(regex_or_str, timeout=None, console=None)¶ Wait for a particular regex/string to be received on a given console of this target before a given timeout.
See
expect()
for a version that just raises exceptions when the output is not received.Parameters: - timeout (int) – Seconds to wait for regex_or_str to be
received, raise
tcfl.tc.error_e
otherwise. If False, no timeout check is done; if None, it is taken from the default timeout set by the testcase. - console (str) – (optional) name of console from which to receive the data
Returns: True if the output was received before the timeout, False otherwise.
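For instance, a hedged sketch using the boolean return value (the prompt string is made up):
>>> if not target.wait("login:", timeout=30):
>>>     target.report_info("prompt not seen, power cycling")
>>>     target.power.cycle()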
-
expect
(regex_or_str, timeout=None, console=None)¶ Wait for a particular regex/string to be received on a given console of this target before a given timeout.
Similar to
wait()
, it will raise an exception if @regex_or_str is not received before @timeout on @console.Parameters: - timeout (int) – Seconds to wait for regex_or_str to be
received, raise
tcfl.tc.error_e
otherwise. If False, no timeout check is done; if None, it is taken from the default timeout set by the testcase. - console (str) – (optional) name of console from which to receive the data
Returns: Nothing, if the output is received.
Raises: tcfl.tc.blocked_e
on error, tcfl.tc.error_e
if not received, any other runtime exception.
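A hedged sketch of catching the timeout condition (the expected string is made up):
>>> try:
>>>     target.expect("Hello World", timeout=20)
>>> except tcfl.tc.error_e:
>>>     target.report_info("expected output not received in time")
>>>     raise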
-
on_console_rx_cm
(**kwds)¶ When regex_or_str is received on the given console (default console), execute the action given by result.
Context Manager version of
on_console_rx()
.Parameters: - regex_or_str – string or regular expression (compiled
with
re.compile()
. - result –
what to do when that regex_or_str is found on the given console:
- pass: raise tcfl.tc.pass_e
- block: raise tcfl.tc.blocked_e
- error: raise tcfl.tc.error_e
- failed: raise tcfl.tc.failed_e
- blocked: raise tcfl.tc.blocked_e
Raises: tcfl.tc.pass_e
,tcfl.tc.blocked_e
,tcfl.tc.error_e
,tcfl.tc.failed_e
,tcfl.tc.skip_e
. Returns: Nothing
-
stub_app_add
(bsp, _app, app_src, app_src_options='')¶ Add App builder information for a BSP that has to be stubbed.
When running on a target that has multiple BSPs but some of them will not be used by the current BSP model, stubs might have to be added to those BSPs to make sure their CPUs are not going wild. Use this function to specify which app builder is going to be used, the path to the stub source and build options. The App building mechanism will take it from there.
An app builder might determine that a given BSP needs no stub; in said case it can remove it from the dict
bsps_stub()
with:>>> del target.bsps_stub[BSPNAME]
This is like the app information added by _target_app_add(), but it is stored in the target_c instance, not in the testcase class.
This is because the stubbing that has to be done is specific to each target (as the BSPs to stub might be different depending on the target and BSP model).
Note this information is only added if there is nothing existing about said BSP. To override, you need to delete and add:
>>> del target.bsps_stub[BSPNAME] >>> target.stub_app_add(BSPNAME, etc etc)
-
bsp_model_suffix
()¶
-
bsp_suffix
()¶
-
report_mk_prefix
()¶
- rt (dict) – remote target descriptor (dictionary) as returned
by
-
class
tcfl.tc.
target_group_c
(descr)¶ A unique group of targets (each set to a specific BSP model) assigned to a testcase for execution.
A testcase can query a
tcfl.tc.target_c
instance of the remote target to manipulate it by declaring it as an argument to a testcase method, querying the targets
dictionary or calling target():
>>> @tcfl.tc.target(name = "mytarget")
>>> class mytest(tcfl.tc.tc_c):
>>>     ...
>>>
>>>     def eval_1(self, mytarget):
>>>         mytarget.power.cycle()
>>>
>>>     def eval_2(self):
>>>         mytarget = self.target_group.target("mytarget")
>>>         mytarget.power.cycle()
>>>
>>>     def eval_3(self):
>>>         mytarget = self.targets["mytarget"]
>>>         mytarget.power.cycle()
-
name
¶
-
name_set
(tgid)¶
-
len
()¶ Return number of targets in the group
-
target
(target_name)¶ Return the instance of
tcfl.tc.target_c
that represents a remote target that met the specification requested with the tcfl.tc.target()
decorator with name target_name
-
target_add
(target_name, _target)¶
-
targets
¶ Dictionary of
tcfl.tc.target_c
descriptors for remote targets, keyed by the name they were requested with in the tcfl.tc.target()
decorator.
-
-
class
tcfl.tc.
result_c
(passed=0, errors=0, failed=0, blocked=0, skipped=0)¶ -
total
()¶
-
summary
()¶
-
normalized
()¶
-
static
from_retval
(retval)¶
-
report
(tc, message, attachments=None, level=None, dlevel=0, alevel=2)¶
-
static
report_from_exception
(_tc, e, attachments=None, force_result=None)¶ Given an exception, report it (with its traceback and any attachments it came with) using the testcase or target report infrastructure and return a valid
result_c
code. By default, this is the mapping:
- tc_c.report_pass() is used for pass_e
- tc_c.report_error() is used for error_e
- tc_c.report_fail() is used for failed_e
- tc_c.report_blck() is used for blocked_e and any other exception
- tc_c.report_skip() is used for skip_e
However, it can be forced by passing force_result, or each testcase can be told to consider specific exceptions as other results for reporting purposes using the
tcfl.tc.tc_c.exception_to_result
. Parameters: force_result (bool) – force the exception to be interpreted as tcfl.tc.pass_e, error_e, failed_e, tcfl.tc.blocked_e, or skip_e; note there is also translation that can be done from tcfl.tc.tc_c.exception_to_result.
-
static
from_exception
(fn)¶ Call a phase function to translate exceptions into
tcfl.tc.result_c
return codes.Passes through the return code, unless it is None, in which case we just return result_c(1, 0, 0, 0, 0)
Note this function prints some extra detail in case of fail/block/skip.
-
-
class
tcfl.tc.
tc_logadapter_c
(logger, extra)¶ Logging adapter to prefix test case’s current BSP model, bsp and target name.
Initialize the adapter with a logger and a dict-like object which provides contextual information. This constructor signature allows easy stacking of LoggerAdapters, if so desired.
You can effectively pass keyword arguments as shown in the following example:
adapter = LoggerAdapter(someLogger, dict(p1=v1, p2="v2"))
-
id
= None¶
-
prefix
= None¶
-
process
(msg, kwargs)¶ Process the logging message and keyword arguments passed in to a logging call to insert contextual information. You can either manipulate the message itself, the keyword args or both. Return the message and kwargs modified (or not) to suit your needs.
Normally, you’ll only need to override this one method in a LoggerAdapter subclass for your specific needs.
-
Add tags to a testcase
Parameters:
-
tcfl.tc.
serially
()¶ Force a testcase method to run serially (vs
concurrently()
). Remember that methods run serially are executed first; by default those are the ones that
- take more than one target as arguments
- are evaluation methods
-
tcfl.tc.
concurrently
()¶ Force a testcase method to run concurrently after all the serial methods (vs decorator
serially()
). Remember that methods run concurrently are executed after the serial methods; by default those are the ones that:
- are not evaluation methods
- take only one target as argument (if you force two methods that share a target to run in parallel, it is your responsibility to ensure proper synchronization)
-
tcfl.tc.
target_want_add
(_tc, target_want_name, spec, origin, **kwargs)¶ Add a requirement for a target to a testcase instance
Given a testcase instance, add a requirement for it to need a target, filtered with the given specification (spec, which defaults to any), a name and optional arguments in the form of keywords.
This is equivalent to the
tcfl.tc.target()
decorator, which adds the requirement to the class, not to the instance. Please refer to it for the arguments.
-
tcfl.tc.
target
(spec=None, name=None, **kwargs)¶ Add a requirement for a target to a testcase instance
For each target this testcase will need, a filtering specification can be given (spec), a name (or it will default to targetN except for the first one, which is just target) and optional arguments in the form of keywords.
Of those optional arguments, the most important are the app_* arguments. An app_* argument supplies a source path for an application that (maybe) has to be configured, built and deployed to the target. The decorator will add phase methods to the testcase to configure, build and deploy the application. Depending on the application drivers installed, the application may or may not be built. FIXME make this explanation better.
Parameters: - spec (str) – specification to filter against the tags the remote target exposes.
- name (str) – name for the target (must not exist already). If none, first declared target is called target, the next target1, then target2 and so on.
- kwargs (dict) –
extra keyword arguments are allowed, which might be used in different ways that are still TBD. Main ones recognized:
- app_NAME = dict(BSP1: PATH1, BSP1: PATH2): specify a list
of paths to apps that shall be built and deployed to the
given BSPs by App builder app_NAME; App builders exist for
Zephyr, Arduino Sketch and other setups, so you don’t
manually have to build your apps. You can create your own
too.
When a board is being run in a multiple BSP mode, each BSP has to be added to an App builder if using the App builder support; otherwise it is an error condition.
- app_NAME = PATH: the same, but for when a single BSP is used; it applies to any BSP in a single-BSP-model target.
FIXME: add link to app builders
- app_NAME_options = STRING: Extra options to pass to the App builder. FIXME: support also BSP_options?
- mode: how to consider this target at the time of generating multiple permutations of targets to run a testcase:
- any: run the testcase on any target that can be found to match the specification
- one-per-type: run the testcase on one target of each type that meets the specification (so if five targets match the specification but they are all of the same type, only one will run it; however, if there are two different types in the set of five, one of each type will run it)
- all: run on every single target that matches the specification
Especially on testcases that require multiple targets, there can be a huge number of permutations of how to run the testcase to ensure maximum coverage of different combinations of targets; some experimentation is needed to decide how to tell TCF to run the testcase and balance how many resources are used. See the sketch after this list for an example.
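For instance, a hedged sketch of requesting a target with an App builder and a permutation mode; the spec string, app path and app_zephyr keyword are assumptions based on the description above:
>>> @tcfl.tc.target(spec = "zephyr_board",            # hypothetical tag filter
>>>                 app_zephyr = "path/to/some/app",  # hypothetical source path
>>>                 mode = "one-per-type")
>>> class my_zephyr_test(tcfl.tc.tc_c):
>>>     def eval(self, target):
>>>         target.expect("Hello World")              # made-up expected output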
-
tcfl.tc.
interconnect
(spec=None, name=None, **kwargs)¶ Add a requirement for an interconnect to a testcase instance
An interconnect is a target that binds two or more targets together and maybe provides interconnectivity services (networking or any other); we declare its need just like any other target; however, we add the name to a special list so it is easier to handle later.
The arguments are the same as to
tcfl.tc.target()
.
-
class
tcfl.tc.
tc_c
(name, tc_file_path, origin)¶ A testcase, with instructions for configuring, building, deploying, setting up, running, evaluating, tearing down and cleaning up.
Derive this class to create a testcase, implementing the different testcase methods to build, deploy and evaluate if it is considered a pass or a failure:
>>> class sometest(tcfl.tc.tc_c):
>>>
>>>     def eval_device_present(self):
>>>         if not os.path.exists("/dev/expected_device"):
>>>             raise tcfl.tc.error_e("Device not connected")
>>>
>>>     def eval_mode_correct(self):
>>>         s = os.stat("/dev/expected_device")
>>>         if s.st_mode & 0x644 == 0:
>>>             raise tcfl.tc.failed_e("wrong mode")
Note
the class will be ignored as a testcase if its name starts with _base_; this is useful to create common code which will be instantiated in another class without it being confused with a testcase.
The runner will call the testcase methods to evaluate the test; any failure/blockage causes the evaluation to stop and move on to the next testcase:
configure*() for getting source code, configuring a build, etc.
build*() for building anything that is needed to run the testcase
deploy*() for deploying the build products or artifacts needed to run the testcase to the different targets
For evaluating:
- setup*() to setup the system/fixture for an evaluation run
- start*() to start/power-on the targets or anything needed for the test case evaluation
- eval*() to actually do evaluation actions
- teardown*() for powering off
As well, any test*() methods will be run similarly, but for each, the sequence called will be setup/start/test/teardown (in contrast to eval methods, where they are run in sequence without calling setup/start/teardown in between).
clean*() for cleaning up (ran only if -L is passed on the command line)
- class_teardown is mostly used for self-testing and debugging; these are functions called once every single testcase of the same class has completed executing.
Methods can take no arguments or the names of one or more targets they will operate with/on. These targets are declared using the
tcfl.tc.target()
(for a normal target) andtcfl.tc.interconnect()
(for a target that interconnects/groups the rest of the targets together).The methods that take no targets will be called sequentially in alphabetical order (not in declaration order!). The methods that take different targets will be called in parallel (to maximize multiple cores, unless decorated with
tcfl.tc.serially()
). Evaluation functions are always called sequentially, except if decorated with concurrently(). The testcase methods use the APIs exported by this class and module:
to report information at the appropriate log level:
report_pass()
,report_fail()
,report_blck()
andreport_info()
raise an exception to indicate result of this method:
- pass: raise tcfl.tc.pass_e (or simply return)
- failed: raise tcfl.tc.failed_e
- error: raise tcfl.tc.error_e
- blocked: raise tcfl.tc.blocked_e; any other uncaught Python exception is also converted to this
- skipped: raise tcfl.tc.skip_e
run commands in the local machine with
shcmd_local()
; the command can be formatted with %(KEYWORD)[sd] that will be substituted with values found in kws
. Interact with the remote targets through instances of
target_c
that represent them:
via arguments to the method
via
targets
, a dictionary keyed by the names of the targets requested with thetarget()
and interconnect()
decorators; for example:
>>> @tcfl.tc.interconnect()   # named "ic" by default
>>> @tcfl.tc.target()         # named "target1" by default
>>> @tcfl.tc.target()         # named "target" by default
>>> class mytest(tcfl.tc.tc_c):
>>>     ...
>>>
>>>     def start(self, ic, target, target1):
>>>         ic.power.cycle()
>>>         target.power.cycle()
>>>         target1.power.cycle()
>>>
>>>     def eval_1(self, target):
>>>         target.expect("Hello world")
>>>
>>>     def eval_2(self):
>>>         target2 = self.target_group.target("target2")
>>>         target2.expect("Ready")
>>>
>>>     def eval_3(self):
>>>         ic = self.targets["ic"]
>>>         ic.expect("targets are online")
>>>
>>>     def teardown(self):
>>>         for _n, target in reversed(self.targets.items()):
>>>             target.power.off()
target_c
expose APIs to act on the targets, such as power control, serial console access, image deployment
-
kws
= None¶ Keywords for %(KEY)[sd] substitution specific to this testcase.
Note these do not include values gathered from remote targets (as they would collide with each other). Look at target.kws (tcfl.tc.target_c.kws) for that.
These can be used to generate strings based on information, as:
>>> print "Something %(FIELD)s" % target.kws >>> target.shcmd_local("cp %(FIELD)s.config final.config")
Fields available:
- runid: string specified by the user that applies to all the testcases
- srcdir and srcdir_abs: directory where this testcase was found
- thisfile: file where this testcase was found
- tc_hash: unique four letter ID assigned to this
testcase instance. Note that this is the same for all
the targets it runs on. A unique ID for each target of
the same testcase instance is the field tg_hash in the
target’s keywords
target.kws
(FIXME: generate, currently only done by app builders)
-
kws_origin
= None¶ Origin of the keyword in self.kws; the values for these are lists of places where the setting was set, or re-set
-
buffers_lock
= None¶ Lock to access
buffers
safely from multiple threads at the same time for the same testcase.
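A minimal sketch of how this could be used, assuming buffers_lock behaves like a standard threading lock (supports the context manager protocol):
>>> with self.buffers_lock:
>>>     # read or modify data under self.buffers while no other thread
>>>     # of this testcase is touching it
>>>     buffers_dir = self.buffers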
-
origin
= '/home/inaky/t/master-tcf.git/tcfl/tc.py:5983'¶
-
build_only
= []¶ List of places where we declared this testcase is build only
-
targets
= None¶ Target objects in which this testcase is running (keyed by target want name, as given to decorators tcfl.tc.target()
and tcfl.tc.interconnect()). Note this maps to self._target_groups_c.targets()
for convenience.
-
result_eval
= None¶ Result of the last evaluation run
When an evaluation is run (setup/start/eval/teardown), this variable reflects the evaluation status; it is meant to be used during the teardown phase, so for example, in case of failure, the teardown phase might decide to gather information about the current target's state.
-
result
= None¶ Result of the last run of all phases in this testcase
we might need to look at this in other testcases executed immediately after (as added with
post_tc_append()
).
-
report_file_prefix
= None¶ Report file prefix
When needing to create report file collateral of any kind, prefix it with this so it always shows in the same location for all the collateral related to this testcase:
>>> target.shell.file_copy_from("remotefile", >>> self.report_file_prefix + "remotefile")
will produce LOGDIR/report-RUNID:HASHID.remotefile if --log-dir LOGDIR -i RUNID was given on the command line.
>>> target.capture.get('screen', >>> self.report_file_prefix + "screenshot.png")
will produce LOGDIR/report-RUNID:HASHID.screenshot.png
-
is_static
()¶ Returns True if the testcase is static (needs no targets to execute), False otherwise.
-
report_pass
(message, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5)¶
-
report_error
(message, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5)¶
-
report_fail
(message, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5)¶
-
report_blck
(message, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5)¶
-
report_skip
(message, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5)¶
-
report_info
(message, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5)¶
-
report_data
(domain, name, value, expand=True)¶ Report measurable data
When running a testcase, if data is collected that has to be reported for later analysis, use this function to report it. This will be reported by the report driver in a way that makes it easy to collect later on.
Measured data is identified by a domain and a name, plus then the actual value.
A way to picture how this data can look once aggregated is as a table per domain, on which each invocation is a row and each column will be the values for each name.
Parameters: - domain (str) – to which domain this measurement applies (eg: “Latency Benchmark %(type)s”);
- name (str) – name of the value (eg: “context switch (microseconds)”); it is recommended to always add the unit the measurement represents.
- value – value to report for the given domain and name; any type can be reported.
- expand (bool) –
(optional) by default, the domain and name fields will be %(FIELD)s expanded with the keywords of the testcase or target. If False, it will not be expanded.
This enables you to, for example, specify a domain of "Latency measurements for target %(type)s", which will automatically create a different domain for each type of target.
-
report_tweet
(what, result, extra_report='', ignore_nothing=False, attachments=None, level=None, dlevel=0, alevel=2, ulevel=5, dlevel_failed=0, dlevel_blocked=0, dlevel_passed=0, dlevel_skipped=0, dlevel_error=0)¶
-
shcmd_local
(cmd, origin=None, reporter=None, logfile=None)¶ Run a shell command in the local machine, substituting %(KEYWORD)[sd] with keywords defined by the testcase.
-
classmethod
file_ignore_add_regex
(regex, origin=None)¶ Add a regex to match a file name to ignore when looking for testcase files
Parameters: - regex (str) – Regular expression to match against the file name (not path)
- origin (str) – [optional] string describing where this regular expression comes from (eg: FILE:LINENO).
-
classmethod
dir_ignore_add_regex
(regex, origin=None)¶ Add a regex to match a directory name to ignore when looking for testcase files
Parameters: - regex (str) – Regular expression to match against the directory name (not path)
- origin (str) – [optional] string describing where this regular expression comes from (eg: FILE:LINENO).
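For illustration, a hedged sketch of how these could be called from a configuration file (the patterns and origin strings are made up):
>>> # e.g. in a client configuration file
>>> tcfl.tc.tc_c.file_ignore_add_regex(r".*\.bak$", "conf_ignore.py:2")
>>> tcfl.tc.tc_c.dir_ignore_add_regex(r"^build.*", "conf_ignore.py:3")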
-
classmethod
driver_add
(_cls, origin=None, *args)¶ Add a driver to handle test cases (a subclass of tc_c)
A testcase driver is a subclass of
tcfl.tc.tc_c
which overrides the methods used to locate testcases and implements the different testcase configure/build/evaluation functions.
>>> import tcfl.tc
>>> class my_tc_driver(tcfl.tc.tc_c):
>>>     ...
>>> tcfl.tc.tc_c.driver_add(my_tc_driver)
Parameters: - _cls (tcfl.tc.tc_c) – testcase driver
- origin (str) – (optional) origin of this call
-
hook_pre
= []¶ (list of callables) a list of functions to call before starting execution of each test case instance (right before any phases are run)
Usable to do final testcase touch up, adding keywords needed for the site deployment, etc.
Note these will be called as methods in the order in the list, so the first argument will always be the testcase instance.
E.g.: in a TCF configuration file .tcf/conf_hook.py you can set:
>>> def _my_hook_fn(tc):
>>>     # Classify testcases based on category:
>>>     # - red
>>>     # - green
>>>     # - blue
>>>     #
>>>     # tc_name keyword has the path of the testcase, which
>>>     # we are using for the sake of example to categorize;
>>>     # keywords can be dumped by running `tcf run
>>>     # /usr/share/examples/test_dump_kws*py`.
>>>
>>>     name = tc.kws['tc_name']
>>>     categories = set()
>>>     for category in [ 'red', 'green', 'blue' ]:
>>>         # if test's path has CATEGORY, add it
>>>         if category in name:
>>>             categories.add(category)
>>>     if not categories:
>>>         categories.add('uncategorized')
>>>     tc.kw_set('categories', ",".join(categories))
>>>     tc.log.error("DEBUG categories: %s", ",".join(categories))
>>>
>>> tcfl.tc.tc_c.hook_pre.append(_my_hook_fn)
Warning
- this is a global variable for all testcases of all classes and instances assigned to run in different targets
- these functions will execute on different threads and processes, so do not use shared data or global variables.
- only add to this list from configuration files, never from testcases or testcase driver code.
-
type_map
= {}¶ (dict) a dictionary to translate target type names, from TYPE[:BSP] to another name to use when reporting as it is useful/convenient to your application (eg: if what you are testing prefers other type names); will be only translated if present. E.g.:
>>> tcfl.tc_c.type_map = {
>>>     # translate to Zephyr names
>>>     "arduino-101:x86": "arduino_101",
>>>     "arduino-101:arc": "arduino_101_ss",
>>> }
-
exception_to_result
= {<type 'exceptions.AssertionError'>: <class 'tcfl.tc.blocked_e'>}¶ Map exception types to results
This allows an exception raised during execution to be automatically mapped to a result type. Any testcase can define its own version of this to decide how to convert exceptions from the default (being considered a blockage) to skip, fail or pass:
>>> class _test(tcfl.tc.tc_c):
>>>     def configure_exceptions(self):
>>>         self.exception_to_result[OSError] = tcfl.tc.error_e
-
eval_repeat
= 1¶ How many times do we repeat the evaluation (for stress/MTBF)
-
eval_count
= 0¶ Which evaluation are we currently running (out of
eval_repeat
)
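As a hedged sketch, a testcase could request repeated evaluation for stress testing by overriding eval_repeat and checking eval_count (the repeat count is made up):
>>> class stress_tc(tcfl.tc.tc_c):
>>>     eval_repeat = 50     # run the evaluation sequence 50 times
>>>
>>>     def eval_iteration(self, target):
>>>         target.report_info("evaluation %d of %d"
>>>                            % (self.eval_count, self.eval_repeat))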
-
testcase_patchers
= []¶ List of callables that will be executed when a testcase is identified; these can modify as needed the testcase (eg: scanning for tags)
-
runid
= None¶
-
runid_visible
= ''¶
-
tmpdir
= '/tmp/tcf.run-eh_Xad'¶ temporary directory where testcases can drop things; this will be specific to each testcase instance (testcase and target group where it runs).
-
buffers
= None¶ temporary directory where to store information (serial console, whatever) that will be captured on each different evaluation; on each invocation of the evaluation, a new buffer dir will be allocated and code that captures things from the target will store captures in there.
-
jobs
= 1¶ Number of testcases running on targets
-
rt_all
= None¶
-
release
= True¶
-
report_mk_prefix
()¶ Update the prefix we use for the logging/reports when some parameter changes.
-
target_group
¶ Group of targets this testcase is being run on
-
tag_set
(tagname, value=None, origin=None)¶ Set a testcase tag.
Parameters: Note that there are a few tags that have special conventions:
component/COMPONENTNAME is a tag with value COMPONENTNAME and it is used to classify the testcases by component. Multiple tags like this might exist if the testcase belongs to multiple components. Note it should be a single word.
TCF will create a tag components with value COMPONENTNAME1 COMPONENTNAME2 … (space separated list of components) which shall match the component/COMPONENTx tags. The tag name contains the name of the testcase after testcase instantiation.
Set multiple testcase tags.
Parameters: Same notes as for
tag_set()
apply
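For example, a hedged sketch following the component/* convention described above (the component name is made up, and setting the tag from a configure method is an assumption):
>>> class my_tc(tcfl.tc.tc_c):
>>>     def configure_tags(self):
>>>         # classify this testcase under a (made-up) component
>>>         self.tag_set("component/networking", "networking")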
-
kw_set
(key, value, origin=None)¶ Set a testcase’s keyword and value
Parameters:
-
kw_unset
(kw)¶ Unset a string keyword for later substitution in commands
Parameters: kw (str) – keyword name
-
kws_set
(d, origin=None)¶ Set a bunch of testcase’s keywords and values
Parameters: d (dict) – A dictionary of keywords and values
-
tag_get
(tagname, value_default, origin_default=None)¶ Return a tuple (value, origin) with the value of the tag and where it was defined.
-
target_event
= <threading._Event object>¶
-
assign_timeout
= 1000¶
-
targets_active
(*skip_targets)¶ Mark each target this testcase uses as active
This is to be called when operations are being done in the background that the daemon can't see, and thus would not otherwise consider the target active (e.g.: you are copying a big file over SSH)
>>> class mytest(tcfl.tc.tc_c):
>>>     ...
>>>     def eval_some(self):
>>>         ...
>>>         self.targets_active()
>>>         ...
If any target is to be skipped, they can be passed as arguments:
>>> @tcfl.tc.interconnect()
>>> @tcfl.tc.target()
>>> @tcfl.tc.target()
>>> class mytest(tcfl.tc.tc_c):
>>>     ...
>>>     def eval_some(self, target):
>>>         ...
>>>         self.targets_active(target)
>>>         ...
-
finalize
(result)¶
-
mkticket
()¶
-
post_tc_append
(tc)¶ Append a testcase that shall be executed immediately after this testcase is done executing in the same target group.
This is a construct that can be used for:
- execute other testcases that have been detected as needed only during runtime
- reporting subtestcases of a main testcase (relying only on
the output of the main testcase execution), such as in
tcfl.tc_zephyr_sanity.tc_zephyr_subsanity_c
.
Parameters: tc (tc_c) – [instance of a] testcase to append; note this testcase will be executed in the same target group as this testcase is being executed. So the testcase has to declare the same targets (with the same names) or a subset of them. Example:
>>> @tcfl.tc.target("target1") >>> @tcfl.tc.target("target2") >>> @tcfl.tc.target("target3") >>> class some_tc(tcfl.tc.tc_c): >>> ... >>> def eval_something(self, target2): >>> new_tc = another_tc(SOMEPARAMS) >>> self.post_tc_append(new_tc) >>> >>> >>> @tcfl.tc.target("target2") >>> class another_tc(tcfl.tc.tc_c): >>> ... >>> def eval_something(self, target2): >>> self.report_info("I'm running on target2") >>> >>> @tcfl.tc.target("target1") >>> @tcfl.tc.target("target3") >>> class yet_another_tc(tcfl.tc.tc_c): >>> ... >>> def eval_something(self, target1, target3): >>> self.report_info("I'm running on target1 and target3")
-
file_regex
= <_sre.SRE_Pattern object>¶
-
classmethod
is_testcase
(path, _from_path)¶
-
class_result
= 0 (0 0 0 0 0)¶
-
classmethod
find_in_path
(tcs, path)¶ Given a path, scan it for files that contain testcase information and add them to the dictionary tcs keyed by the filename where they were found. Parameters: tcs (dict) – dictionary where to add the test cases found
Parameters: path (str) – path where to scan for test cases Returns: result_c with counts of tests passed/failed (zero, as at this stage we cannot know), blocked (due to error importing) or skipped(due to whichever condition).
-
tcfl.tc.
find
(args)¶ Discover test cases in a list of paths
-
tcfl.tc.
testcases_discover
(tcs_filtered, args)¶
-
tcfl.tc.
argp_setup
(arg_subparsers)¶
8.1.2. Test library (utilities for testcases)¶
Common utilities for test cases
Evaluate the build environment and make sure everything needed to build Zephyr apps is in place.
If not, return a dictionary defining a skip tag with the reason that can be fed directly to decorator
tcfl.tc.tags()
; usage:
>>> import tcfl.tc
>>> import qal
>>>
>>> @tcfl.tc.tags(**qal.zephyr_tests_tags())
>>> class some_test(tcfl.tc.tc_c):
>>>     ...
-
tcfl.tl.
console_dump_on_failure
(testcase)¶ If a testcase has errored, failed or blocked, dump the consoles of all the targets.
Parameters: testcase (tcfl.tc.tc_c) – testcase whose targets’ consoles we want to dump Usage: in a testcase’s teardown function:
>>> import tcfl.tc
>>> import tcfl.tl
>>>
>>> class some_test(tcfl.tc.tc_c):
>>>     ...
>>>
>>>     def teardown_SOMETHING(self):
>>>         tcfl.tl.console_dump_on_failure(self)
-
tcfl.tl.
setup_verify_slip_feature
(zephyr_client, zephyr_server, _ZEPHYR_BASE)¶ The Zephyr kernel we use needs to support CONFIG_SLIP_MAC_ADDR, so if any of the targets needs SLIP support, make sure that feature is Kconfigurable. Note we do this after building, because we need the full target's configuration file.
Parameters: - zephyr_client (tcfl.tc.target_c) – Client Zephyr target
- zephyr_server (tcfl.tc.target_c) – Server Zephyr target
- _ZEPHYR_BASE (str) – Path of Zephyr source code
Usage: in a testcase’s setup methods, before building Zephyr code:
>>> @staticmethod
>>> def setup_SOMETHING(zephyr_client, zephyr_server):
>>>     tcfl.tl.setup_verify_slip_feature(zephyr_client, zephyr_server,
>>>                                       tcfl.tl.ZEPHYR_BASE)
Look for a complete example in
../examples/test_network_linux_zephyr_echo.py
.
-
tcfl.tl.
teardown_targets_power_off
(testcase)¶ Power off all the targets used on a testcase.
Parameters: testcase (tcfl.tc.tc_c) – testcase whose targets we are to power off. Usage: in a testcase’s teardown function:
>>> import tcfl.tc
>>> import tcfl.tl
>>>
>>> class some_test(tcfl.tc.tc_c):
>>>     ...
>>>
>>>     def teardown_SOMETHING(self):
>>>         tcfl.tl.teardown_targets_power_off(self)
Note this is usually not necessary as the daemon will power off the targets when cleaning them up; usually when a testcase fails, you want to keep them on to be able to inspect them.
-
tcfl.tl.
tcpdump_enable
(ic)¶ Ask an interconnect to capture IP traffic with TCPDUMP
Note this is only possible if the server to which the interconnect is attached has access to it; if the interconnect is based on the vlan_pci driver, it will support it.
Note the interconnect must be power cycled after this for the setting to take effect. Normally you do this in the start method of a multi-target testcase:
>>> def start(self, ic, server, client):
>>>     tcfl.tl.tcpdump_enable(ic)
>>>     ic.power.cycle()
>>>     ...
-
tcfl.tl.
tcpdump_collect
(ic, filename=None)¶ Collects from an interconnect target the tcpdump capture
Parameters: - ic (tcfl.tc.target_c) – interconnect target
- filename (str) – (optional) name of the local file where to copy the tcpdump data to; defaults to report-RUNID:HASHID-REP.tcpdump (where REP is the repetition count)
-
tcfl.tl.
linux_os_release_get
(target, prefix='')¶ Return, as a dictionary, the contents of the target's /etc/os-release file (if it exists)
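For illustration, a hedged sketch of using the returned dictionary in an evaluation method (ID is a standard os-release field; the expected value is made up):
>>> def eval_check_os(self, target):
>>>     os_release = tcfl.tl.linux_os_release_get(target)
>>>     if os_release.get('ID') != 'clear-linux-os':   # made-up expected distro ID
>>>         raise tcfl.tc.skip_e("test only applies to the expected distro",
>>>                              dict(target = target))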
-
tcfl.tl.
linux_ssh_root_nopwd
(target, prefix='')¶ Configure an SSH daemon to allow login as root with no password
-
tcfl.tl.
deploy_linux_ssh_root_nopwd
(_ic, target, _kws)¶
-
tcfl.tl.
linux_ipv4_addr_get_from_console
(target, ifname)¶ Get the IPv4 address of a Linux Interface from the Linux shell using the ip addr show command.
Parameters: - target (tcfl.tc.target_c) – target on which to find the IPv4 address.
- ifname (str) – name of the interface for which we want to find the IPv4 address.
Raises: tcfl.tc.error_e – if it cannot find the IP address.
Example:
>>> import tcfl.tl
>>> ...
>>>
>>> @tcfl.tc.interconnect("ipv4_addr")
>>> @tcfl.tc.target("pos_capable")
>>> class my_test(tcfl.tc.tc_c):
>>>     ...
>>>     def eval(self, ic, target):
>>>         ...
>>>         ip4 = tcfl.tl.linux_ipv4_addr_get_from_console(target, "eth0")
>>>         ip4_config = target.addr_get(ic, "ipv4")
>>>         if ip4 != ip4_config:
>>>             raise tcfl.tc.failed_e(
>>>                 "assigned IPv4 addr %s is different than"
>>>                 " expected from configuration %s" % (ip4, ip4_config))
-
tcfl.tl.
sh_export_proxy
(ic, target)¶ If the interconnect ic defines a proxy environment, issue a shell command in target to export environment variables that configure it:
>>> class test(tcfl.tc.tc_c):
>>>
>>>     def eval_some(self, ic, target):
>>>         ...
>>>         tcfl.tl.sh_export_proxy(ic, target)
would yield a command such as:
$ export http_proxy=http://192.168.98.1:8888 https_proxy=http://192.168.98.1:8888 no_proxy=127.0.0.1,192.168.98.1/24,fc00::62:1/112 HTTP_PROXY=$http_proxy HTTPS_PROXY=$https_proxy NO_PROXY=$no_proxy
being executed in the target
-
tcfl.tl.
linux_wait_online
(ic, target, loops=10, wait_s=0.5)¶ Wait on the serial console until the system is assigned an IP
We make the assumption that once the system is assigned the IP that is expected on the configuration, the system has upstream access and thus is online.
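A hedged sketch of calling this from a start method (the loop counts shown are the defaults):
>>> def start(self, ic, target):
>>>     ic.power.cycle()
>>>     target.power.cycle()
>>>     # wait until the target reports the expected IP on the serial console
>>>     tcfl.tl.linux_wait_online(ic, target, loops=10, wait_s=0.5)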
8.1.3. Provisioning/deploying/flashing PC-class devices with a Provisioning OS¶
This module provides tools to image devices with a Provisioning OS.
The general operation mode for this is instructing the device to boot the Provisioning OS; at this point, the test script (or the user, via the tcf client command line) can interact with the POS over the serial console.
Then the device can be partitioned, formatted, etc. with the general Linux
command line. As well, we can provide an rsync server
that serves OS images to be flashed.
Booting to POS can be accomplished:
- by network boot and root over NFS
- by a special boot device pre-configured to always boot POS
- any other
Server side modules used actively by this system:
- DHCP server
ttbl.dhcp
: provides dynamic IP address assignment; it can be configured so a pre-configured IP address is always assigned to a target and will also provide PXE/TFTP boot services to boot into POS mode (working in conjunction with HTTP, TFTP and NFS servers). - rsync server
ttbl.rsync
: provides access to images to rsync into partitions (which is way faster than some other imaging methods when done over a 1Gbps link). - port redirector
ttbl.socat
: not strictly needed for POS, but useful to redirect ports out of the NUT to the greater Internet. This comes in handy if, as part of the testing, external software has to be installed or external services accessed.
Note installation in the server side is needed, as described in POS setup.
-
tcfl.pos.
image_spec_to_tuple
(i)¶
-
tcfl.pos.
image_list_from_rsync_output
(output)¶
-
tcfl.pos.
image_select_best
(image, available_images, target)¶
-
tcfl.pos.
target_power_cycle_to_pos_pxe
(target)¶
-
tcfl.pos.
target_power_cycle_to_normal_pxe
(target)¶
-
tcfl.pos.
mk_persistent_tcf_d
(target, subdirs=None)¶
-
tcfl.pos.
deploy_linux_kernel
(ic, target, _kws)¶ Deploy a Linux kernel tree from the local machine to the target's root filesystem
This is normally given to
target.pos.deploy_image
as:>>> target.kw_set("pos_deploy_linux_kernel", SOMELOCALLOCATION) >>> target.pos.deploy_image(ic, IMAGENAME, >>> extra_deploy_fns = [ tcfl.pos.deploy_linux_kernel ])
as it expects
kws['pos_deploy_linux_kernel']
which points to a local directory in the form:

- boot/*
- lib/modules/KVER/*
all those will be rsynced to the target's persistent root area (for speed) and from there to the root filesystem's /boot and /lib/modules. Anything else in the /boot/ and /lib/modules/ directories will be replaced with what comes from the kernel tree.

Low level details
When the target’s image has been flashed in place,
tcfl.pos.deploy_image
is asked to call this function.The client will rsync the tree from the local machine to the persistent space using
target.pos.rsync
, which also caches it in a persistent area to speed up multiple transfers.
-
tcfl.pos.
capability_fns
= {'boot_config': {'uefi': <function boot_config_multiroot at 0x7f6a711f28c0>}, 'boot_config_fix': {'uefi': <function boot_config_fix at 0x7f6a711f2938>}, 'boot_to_normal': {'pxe': <function target_power_cycle_to_normal_pxe at 0x7f6a711be488>}, 'boot_to_pos': {'pxe': <function target_power_cycle_to_pos_pxe at 0x7f6a711be410>}, 'mount_fs': {'multiroot': <function mount_fs at 0x7f6a711f20c8>}}¶ Functions to boot a target into POS
Different target drivers can be loaded and will add members to these dictionaries to extend the abilities of the core system to put targets in Provisioning OS mode.
This then allows a single test script to work with multiple target types without having to worry about details.
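For example, a configuration file or target driver could extend these dispatch tables directly; this is only a sketch, where the capability value name serial_menu and the function are hypothetical:

>>> import tcfl.pos
>>>
>>> def target_power_cycle_to_pos_serial_menu(target):
>>>     # hypothetical: drive a boot menu over the serial console so the
>>>     # target ends up booting the Provisioning OS
>>>     target.power.cycle()
>>>
>>> # targets tagged pos_capable = { 'boot_to_pos': 'serial_menu', ... }
>>> # would now be booted to POS with this function
>>> tcfl.pos.capability_fns['boot_to_pos']['serial_menu'] = \
>>>     target_power_cycle_to_pos_serial_menu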
-
tcfl.pos.
capability_register
(capability, value, fns)¶
-
class
tcfl.pos.
extension
(target)¶ Extension to
tcfl.tc.target_c
to handle Provisioning OS capabilities.-
cap_fn_get
(capability, default=None)¶ Return a target’s POS capability.
Parameters: - capability (str) – name of the capability, as defined in the target’s tag *pos_capable*.
- default (str) – (optional) default to use if not specified; DO NOT USE! WILL BE DEPRECATED!
-
boot_to_pos
(pos_prompt=None, timeout=60, boot_to_pos_fn=None)¶
-
boot_normal
(boot_to_normal_fn=None)¶ Power cycle the target (if needed) and boot to the normal OS (vs booting to the Provisioning OS).
-
mount_fs
(image, boot_dev)¶ Mount the target’s filesystems in /mnt
When completed, this function has (maybe) formatted/reformatted and mounted all of the target’s filesystems starting in /mnt.
For example, if the final system would have filesystems /boot, / and /home, this function would mount them on:
- / on /mnt/
- /boot on /mnt/boot
- /home on /mnt/home
This allows
deploy_image()
to rsync content into the final system.

Parameters:
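A minimal usage sketch (the image name and boot device are illustrative; they are normally derived from deploy_image()'s arguments):

>>> def deploy(self, ic, target):
>>>     target.pos.boot_to_pos()
>>>     # format (if needed) and mount the target's filesystems under /mnt,
>>>     # picking partitions based on the image to be installed
>>>     target.pos.mount_fs("fedora:workstation:28::x86_64", "sda")
>>>     # content can now be rsynced into the tree under /mnt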
-
rsyncd_start
(ic)¶ Start an rsync server on a target running Provisioning OS
This can be used to receive deployment files from any location needed to execute later in the target. The server is attached to the
/mnt
directory and the target is supposed to mount the destination filesystems there.

This is usually called automatically for the user by the likes of
deploy_image()
and others.It will create a tunnel from the server to the target’s port where the rsync daemon is listening. A client can then connect to the server’s port to stream data over the rsync protocol. The server address and port will be stored in the target’s keywords rsync_port and rsync_server and thus can be accessed with:
>>> print target.kws['rsync_server'], target.kws['rsync_port']
Parameters: ic (tcfl.tc.target_c) – interconnect (network) to which the target is connected.
-
rsync
(src=None, dst=None, persistent_name=None, persistent_dir='/persistent.tcf.d', path_append='/.', rsync_extra='')¶ rsync data from the local machine to a target
The local machine is the machine executing the test script (where tcf run was called).
This function will first rsync data to a location in the target (persistent storage
/persistent.tcf.d
) that will not be overridden when flashing images. Then it will rsync it from there to the final location.

This allows the content to be cached in between testcase executions that reimage the target. Thus, on the first run the whole source tree is transferred to the persistent area, but subsequent runs will already find it there even if the OS image has been reflashed (as the reflashing will not touch the persistent area). Of course, this assumes the previous executions didn't wipe the persistent area and that the whole disk was not corrupted.
This function can be used, for example, when wanting to deploy extra data to the target when using
deploy_image()
:

>>> @tcfl.tc.interconnect("ipv4_addr")
>>> @tcfl.tc.target("pos_capable")
>>> class _test(tcfl.tc.tc_c):
>>>     ...
>>>
>>>     @staticmethod
>>>     def _deploy_mygittree(_ic, target, _kws):
>>>         tcfl.pos.rsync(os.path.expanduser("~/somegittree.git"),
>>>                        dst = '/opt/somegittree.git')
>>>
>>>     def deploy(self, ic, target):
>>>         ic.power.on()
>>>         target.pos.deploy_image(
>>>             ic, "fedora::29",
>>>             extra_deploy_fns = [ self._deploy_mygittree ])
>>>
>>>     ...
In this example, the user has a cloned git tree in ~/somegittree.git that has to be flashed to the target into /opt/somegittree.git after ensuring the root file system is flashed with Fedora 29. deploy_image() will start the rsync server and then call _deploy_mygittree(), which will use target.pos.rsync to rsync from the user's machine to the target's persistent location (in /mnt/persistent.tcf.d/somegittree.git) and from there to the final location of /mnt/opt/somegittree.git. When the system boots it will, of course, be in /opt/somegittree.git.
Because
target.pos.rsyncd_start
has been called already, we now have these keywords available, which tell us where to connect:

>>> target.kws['rsync_server']
>>> target.kws['rsync_port']
as setup by calling
target.pos.rsyncd_start
on the target. Functions such astarget.pos.deploy_image
do this for you.Parameters: - src (str) – (optional) source tree/file in the local machine to be copied to the target’s persistent area. If not specified, nothing is copied to the persistent area.
- dst (str) – (optional) destination tree/file in the target machine; if specified, the file is copied from the persistent area to the final destination. If not specified, nothing is copied from the persistent area to the final destination.
- persistent_name (str) – (optional) name for the file/tree in the persistent area; defaults to the basename of the source file specification.
- persistent_dir (str) – (optional) name for the persistent area in the target, defaults to /persistent.tcf.d.
-
rsync_np
(src, dst, option_delete=False, path_append='/.', rsync_extra='')¶ rsync data from the local machine to a target
The local machine is the machine executing the test script (where tcf run was called).
Unlike
rsync()
, this function will rsync data straight from the local machine to the target's final destination, without using the persistent storage /persistent.tcf.d.

This function can be used, for example, to flash a whole distribution to the target; however, because that would be very slow,
deploy_image()
is used to transfer a distro as a seed from the server (faster) and then, from the local machine, just whatever changed (eg: some changes being tested in some package):

>>> @tcfl.tc.interconnect("ipv4_addr")
>>> @tcfl.tc.target("pos_capable")
>>> class _test(tcfl.tc.tc_c):
>>>     ...
>>>
>>>     def deploy_tree(_ic, target, _kws):
>>>         target.pos.rsync_np("/SOME/DIR/my-fedora-29", "/")
>>>
>>>     def deploy(self, ic, target):
>>>         ic.power.on()
>>>         target.pos.deploy_image(
>>>             ic, "fedora::29",
>>>             extra_deploy_fns = [ self.deploy_tree ])
>>>
>>>     ...
In this example, the target will be flashed to whatever fedora 29 is available in the server and then
/SOME/DIR/my-fedora-29
will be rsynced on top.

Parameters: - src (str) – (optional) source tree/file in the local machine to be copied to the target's persistent area. If not specified, nothing is copied to the persistent area.
- dst (str) – (optional) destination tree/file in the target machine; if specified, the file is copied from the persistent area to the final destination. If not specified, nothing is copied from the persistent area to the final destination.
- option_delete (bool) – (optional) Add the
--delete
option to delete anything in the target that is not present in the source (%(default)s).
-
rsyncd_stop
()¶ Stop an rsync server on a target running Provisioning OS
A server was started with
target.pos.rsyncd_start
; kill it gracefully.
-
deploy_image
(ic, image, boot_dev=None, root_part_dev=None, partitioning_fn=None, extra_deploy_fns=None, pos_prompt=None, timeout=60, timeout_sync=240, target_power_cycle_to_pos=None, boot_config=None)¶ Deploy an image to a target using the Provisioning OS
Parameters: - ic (tcfl.tc.tc_c) – interconnect off which we are booting the
Provisioning OS and to which
target
is connected. - image (str) –
name of an image available in an rsync server specified in the interconnect’s
pos_rsync_server
tag. Each image is specified asIMAGE:SPIN:VERSION:SUBVERSION:ARCH
, e.g:- fedora:workstation:28::x86_64
- clear:live:25550::x86_64
- yocto:core-image-minimal:2.5.1::x86
Note that you can specify a partial image name and the closest match to it will be selected. From the previous example, asking for fedora would auto select fedora:workstation:28::x86_64, assuming the target supports the x86_64 architecture.
- boot_dev (str) –
(optional) which is the boot device to use, where the boot loader needs to be installed in a boot partition. e.g.:
sda
for /dev/sda ormmcblk01
for /dev/mmcblk01.Defaults to the value of the
pos_boot_dev
tag. - root_part_dev (str) –
(optional) which is the device to use for the root partition. e.g:
mmcblk0p4
for /dev/mmcblk0p4 orhda5
for /dev/hda5.If not specified, the system will pick up one from all the different root partitions that are available, trying to select the one that has the most similar to what we are installing to minimize the install time.
- extra_deploy_fns –
list of functions to call after the image has been deployed. e.g.:
>>> def deploy_linux_kernel(ic, target, kws, kernel_file = None):
>>>     ...
the function will be passed keywords which contain values found out during this execution
Returns str: name of the image that was deployed (in case it was guessed)
- FIXME:
- increase in property bd.stats.client.sos_boot_failures and bd.stats.client.sos_boot_count (to get a baseline)
- tag bd.stats.last_reset to DATE
Note: you might want the interconnect power cycled
- ic (tcfl.tc.tc_c) – interconnect off which we are booting the
Provisioning OS and to which
-
-
tcfl.pos.
image_seed_match
(lp, goal)¶ Given two image/seed specifications, return the most similar one
>>> lp = {
>>>     'part1': 'clear:live:25550::x86-64',
>>>     'part2': 'fedora:workstation:28::x86',
>>>     'part3': 'rtk::91',
>>>     'part4': 'rtk::90',
>>>     'part5': 'rtk::114',
>>> }
>>> _seed_match(lp, "rtk::112")
>>> ('part5', 0.933333333333, 'rtk::114')
-
tcfl.pos.
deploy_tree
(_ic, target, _kws)¶ Rsync a local tree to the target after imaging
This is normally given to
target.pos.deploy_image
as:

>>> target.deploy_tree_src = SOMELOCALLOCATION
>>> target.pos.deploy_image(ic, IMAGENAME,
>>>                         extra_deploy_fns = [ tcfl.pos.deploy_linux_kernel ])
-
tcfl.pos.
deploy_path
(ic, target, _kws, cache=True)¶ Rsync a local tree to the target after imaging
This is normally given to
target.pos.deploy_image
as:

>>> target.deploy_path_src = self.kws['srcdir'] + "/collateral/movie.avi"
>>> target.deploy_path_dest = "/root"  # optional, defaults to /
>>> target.pos.deploy_image(ic, IMAGENAME,
>>>                         extra_deploy_fns = [ tcfl.pos.deploy_linux_kernel ])
-
class
tcfl.pos.
tc_pos0_base
(name, tc_file_path, origin)¶ A template for testcases that install an image in a target that can be provisioned with Provisioning OS.
Unlike
tc_pos_base
, this class needs the targets to be declared and called ic and target, such as:

>>> @tc.interconnect("ipv4_addr")
>>> @tc.target('pos_capable')
>>> class my_test(tcfl.tl.tc_pos0_base):
>>>     def eval(self, ic, target):
>>>         target.shell.run("echo Hello'' World",
>>>                          "Hello World")
Please refer to
tc_pos_base
for more information.-
image_requested
= None¶ Image we want to install in the target
Note this can be specialized in a subclass such as
>>> class my_test(tcfl.tl.tc_pos_base):
>>>
>>>     image_requested = "fedora:desktop:29"
>>>
>>>     def eval(self, ic, target):
>>>         ...
-
image
= 'image-not-deployed'¶ Once the image was deployed, this will be set with the name of the image that was selected.
-
deploy_image_args
= {}¶
-
login_user
= 'root'¶ Which user shall we login as
-
delay_login
= 0¶ How many seconds to delay before logging in once the login prompt is detected
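These knobs can be overridden in a subclass; the following is only a sketch, where the image name and user are illustrative and assume the deployed image provides such a user:

>>> class my_test(tcfl.tl.tc_pos0_base):
>>>
>>>     image_requested = "fedora:workstation:29"
>>>     login_user = "testuser"   # login as this user instead of root
>>>     delay_login = 2           # wait 2s after the login prompt shows
>>>
>>>     def eval(self, ic, target):
>>>         target.shell.run("id -un", "testuser")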
-
deploy_50
(ic, target)¶
-
start_50
(ic, target)¶
-
teardown_50
()¶
-
class_result
= 0 (0 0 0 0 0)¶
-
-
class
tcfl.pos.
tc_pos_base
(name, tc_file_path, origin)¶ A template for testcases that install an image in a target that can be provisioned with Provisioning OS.
This basic template deploys an image specified in the environment variable
IMAGE
or in self.image_requested, power cycles into it and waits for a prompt in the serial console.

This forcefully declares this testcase needs:
- a network that supports IPv4 (for provisioning over it)
- a target that supports Provisioning OS
if you want more control over said conditions, use tc_pos0_base, for which the targets have to be declared. Also, more knobs are available there.
To use:
>>> class my_test(tcfl.tl.tc_pos_base):
>>>     def eval(self, ic, target):
>>>         target.shell.run("echo Hello'' World",
>>>                          "Hello World")
All the methods (deploy, start, teardown) defined in the class are suffixed
_50
, so it is easy to do extra tasks before and after:

>>> class my_test(tcfl.tl.tc_pos_base):
>>>     def start_60(self, ic):
>>>         ic.release() # we don't need the network after imaging
>>>
>>>     def eval(self, ic, target):
>>>         target.shell.run("echo Hello'' World",
>>>                          "Hello World")
-
class_result
= 0 (0 0 0 0 0)¶
-
tcfl.pos.
cmdline_pos_capability_list
(args)¶
-
tcfl.pos.
cmdline_setup
(argsp)¶
This module provides capabilities to configure the boot of a UEFI system with the Provisioning OS.
One of the top level calls is boot_config_multiroot(),
which is
called by tcfl.pos.deploy_image
to configure the boot for a target
that just got an image deployed to it using the multiroot methodology.
-
tcfl.pos_uefi.
boot_config_multiroot
(target, boot_dev, image)¶ Configure the target to boot using the multiroot methodology
-
tcfl.pos_uefi.
boot_config_fix
(target)¶
The Provisioning OS multiroot methodology partitions a system with multiple root filesystems; different OSes are installed in each root so it is fast to switch from one to another to run things in automated fashion.
The key to the operation is that the server maintains a list of OS images available to be rsynced to the target’s filesystem. rsync can copy straight or transmit only the minimum set of needed changes.
This also speeds up deployment of an OS to the root filesystems, as by picking a root filesystem that has already installed one similar to the one to be deployed (eg: a workstation vs a server version), the amount of data to be transfered is greatly reduced.
The following scenarios are sorted from most to least data transfer (and thus slowest to fastest operation):
- can install on an empty root filesystem: in this case a full installation is done
- can refresh an existing root filesystem to the destination: some
things might be shared or the same and a partial transfer can be
done; this might be the case when:
- moving from a distro to another
- moving from one version to another of the same distro
- moving from one spin of one distro to another
- can update an existing root filesystem: in this case very little change is done and we are just verifying nothing was unduly modified.
-
tcfl.pos_multiroot.
mount_fs
(target, image, boot_dev)¶ Mounts a root filesystem on /mnt
The partition used as a root filesystem is picked up based on the image that is going to be installed; we look for one that has the most similar image already installed and pick that.
Returns: name of the root partition device
8.1.4. Other target interfaces¶
8.1.4.1. Copy files from and to the server’s user storage area¶
-
class
tcfl.target_ext_broker_files.
broker_files
(_target)¶ Extension to
tcfl.tc.target_c
to run methods to manage the files available in the target broker for the currently logged in user.

Use as:
>>> files = target.broker_files.list()
>>> target.broker_files.upload(REMOTE, LOCAL)
>>> target.broker_files.dnload(REMOTE, LOCAL)
>>> target.broker_files.delete(REMOTE)
Note these files are, for example:
- images for the server to flash into targets (usually handled with the tcfl.target_ext_images.images extension)
- copying specific log files from the server (eg: downloading TCP dump captures from tcpdump as done by the conf_00_lib.vlan_pci network element)

The storage area is common to all targets of the server for each user, thus multiple test cases running in parallel can access it at the same time. Use the testcase's hash to safely namespace:
>>> tc_hash = self.kws['tc_hash']
>>> target.broker_files.upload(tc_hash + "-NAME", LOCAL)
Presence of the broker_file attribute in a target indicates this interface is supported.
-
upload
(remote, local)¶ Upload a local file to a remote name
Parameters:
-
dnload
(remote, local)¶ Download a file to a local file name
Parameters:
-
delete
(remote)¶ Delete a remote file
Parameters: remote (str) – name of the file to remove from the server
-
list
()¶ List available files and their MD5 sums
8.1.4.2. Press and release buttons in the target¶
Extension to
tcfl.tc.target_c
to manipulate buttons connected to the target.

Buttons can be pressed, released, or operated in a sequence (eg: press button1, release button2, wait 0.25s, press button2, wait 1s, release button1).
>>> target.buttons.list()
>>> target.tunnel.press('button1')
>>> target.tunnel.release('button2')
>>> target.tunnel.sequence([
>>>     ( 'button1', 'press' ),
>>>     ( 'button2', 'release' ),
>>>     ( 'wait 1', 0.25 ),
>>>     ( 'button2', 'press' ),
>>>     ( 'wait 2', 1 ),
>>>     ( 'button1', 'release' ),
>>> ])
Note that for this interface to work, the target has to expose a buttons interface and list said buttons. You can use the command line:
$ tcf button-list TARGETNAME
to find the buttons available to a target and use
button-press
,button-release
andbutton-click
to manipulate from the command line.
8.1.4.3. Capture screenshots or video/audio stream from the target¶
-
class
tcfl.target_ext_capture.
extension
(target)¶ When a target supports the capture interface, its tcfl.tc.target_c object will expose target.capture, where the following calls can be made to capture data from it.
A streaming capturer will start capturing when
start()
is called and stop whenstop_and_get()
is called, bringing the capture file from the server to the machine executing tcf run.A non streaming capturer just takes a snapshot when
get()
is called.You can find available capturers with
list()
or:

$ tcf capture-list TARGETNAME
vnc0:ready
screen:ready
video1:not-capturing
video0:ready
a ready capturer is capable of taking screenshots only
or:
$ tcf list TARGETNAME | grep capture:
  capture: vnc0 screen video1 video0
-
start
(capturer)¶ Start capturing the stream with capturer capturer
(if this is not a streaming capturer, nothing happens)
>>> target.capture.start("screen_stream")
Parameters: capturer (str) – capturer to use, as listed in the target’s capture Returns: dictionary of values passed by the server
-
stop_and_get
(capturer, local_filename)¶ If this is a streaming capturer, stop streaming and return the captured data or if no streaming, take a snapshot and return it.
>>> target.capture.stop_and_get("screen_stream", "file.avi")
>>> target.capture.get("screen", "file.png")
>>> network.capture.get("tcpdump", "file.pcap")
Parameters: Returns: dictionary of values passed by the server
-
stop
(capturer)¶ If this is a streaming capturer, stop streaming and discard the captured content.
>>> target.capture.stop("screen_stream")
Parameters: capturer (str) – capturer to use, as listed in the target’s capture
-
get
(capturer, local_filename)¶ This is the same
stop_and_get()
.
-
list
()¶ List capturers available for this target.
>>> r = target.capture.list()
>>> print r
>>> {'screen': 'ready', 'audio': 'not-capturing', 'screen_stream': 'capturing'}
Returns: dictionary of capturers and their state
-
-
tcfl.target_ext_capture.
cmdline_capture_start
(args)¶
-
tcfl.target_ext_capture.
cmdline_capture_stop_and_get
(args)¶
-
tcfl.target_ext_capture.
cmdline_capture_stop
(args)¶
-
tcfl.target_ext_capture.
cmdline_capture_list
(args)¶
-
tcfl.target_ext_capture.
cmdline_setup
(argsp)¶
8.1.4.4. Raw access to the target’s serial consoles¶
-
class
tcfl.target_ext_console.
console
(target)¶ Extension to
tcfl.tc.target_c
to run methods from the console management interface to TTBD targets.

Use as:
>>> target.console.read()
>>> target.console.write()
>>> target.console.setup()
>>> target.console.list()
-
read
(console_id=None, offset=0, fd=None)¶ Read data received on the target’s console
Parameters: Returns: data read (or if written to a file descriptor, amount of bytes read)
-
size
(console_id=None)¶ Return the amount of bytes so far read from the console
Parameters: console_id (str) – (optional) console to read from
-
write
(data, console_id=None)¶ Write data to a console
Parameters: - data – data to write
- console_id (str) – (optional) console to write to
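For example, to send a command over the default console and then read back only the output produced after it; this is a sketch with the offset bookkeeping simplified:

>>> offset = target.console.size()       # bytes read so far
>>> target.console.write("uname -r\r\n")
>>> output = target.console.read(offset = offset)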
-
setup
(console_id=None, **kwargs)¶
-
list
()¶
-
8.1.4.5. Access target’s debugging capabilities¶
-
class
tcfl.target_ext_debug.
debug
(target)¶ Extension to
tcfl.tc.target_c
to run methods from the debug interface to TTBD targets.

Use as:
>>> target.debug.reset_halt()
>>> target.debug.resume()
etc …
-
start
()¶ Start debugging support on the target
-
info
()¶ Return a string with information about the target’s debugging support
-
halt
()¶ Issue a CPU halt to the target’s CPUs
-
reset
()¶ Issue a CPU reset to the target's CPUs and let them run
-
reset_halt
()¶ Issue a CPU reset and halt the target’s CPUs
-
resume
()¶ Issue a CPU resume to the target’s CPUs
-
stop
()¶ Stop debugging support on the target
-
openocd
(command)¶ Run an OpenOCD command (if supported)
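A typical debug session might look like the following sketch; what info() reports and whether OpenOCD commands are available depend on the target's debug support:

>>> target.debug.start()        # enable debugging support
>>> target.debug.reset_halt()   # reset the CPUs and leave them halted
>>> print target.debug.info()   # e.g.: how to connect a debugger
>>> target.debug.resume()       # let the CPUs run again
>>> target.debug.stop()         # disable debugging support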
-
8.1.4.6. Flash the target with fastboot¶
-
class
tcfl.target_ext_fastboot.
fastboot
(target)¶ Extension to
tcfl.tc.target_c
to run fastboot commands on the target via the server.

Use
run()
to execute a command on the target:

>>> target.fastboot.run("flash_pos", "partition_boot",
>>>                     "/home/ttbd/partition_boot.pos.img")
a target with the example configuration described in
ttbl.fastboot.interface
would run the command:

$ fastboot -s SERIAL flash partition_boot /home/ttbd/partition_boot.pos.img
on the target.
Note that which fastboot commands are allowed on the target is meant to be severely restricted via target-specific configuration to avoid compromising the system's security, without compromising flexibility.
You can list allowed fastboot commands with (from the example above):
$ tcf fastboot-list TARGETNAME
flash: flash partition_boot ^(.+)$
flash_pos: flash_pos partition_boot /home/ttbd/partition_boot.pos.img
-
run
(command_name, *args)¶
-
list
()¶
-
-
tcfl.target_ext_fastboot.
cmdline_fastboot
(args)¶
-
tcfl.target_ext_fastboot.
cmdline_fastboot_list
(args)¶
-
tcfl.target_ext_fastboot.
cmdline_setup
(argsp)¶
8.1.4.7. Flash the target with JTAGs and other mechanisms¶
-
class
tcfl.target_ext_images.
images
(target)¶ Extension to
tcfl.tc.target_c
to run methods from the image management interface to TTBD targets.

Use as:
>>> target.images.set()
Presence of the images attribute in a target indicates imaging is supported by it.
-
retries
= 4¶ When a deployment fails, how many times can we retry before failing
-
wait
= 4¶ When power cycling a target to retry a flashing operation, how many seconds to wait before powering it back on
-
upload_set
(images, wait=None, retries=None)¶ Upload and flash a set of images to the target
How this is done is HW specific to the target; however, upon return, the image is loaded in the target's memory, or flashed into some flash ROM or a hard drive.
Parameters:
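A usage sketch, assuming the images argument is a dictionary mapping image type to a local file name; the image type names used here are hypothetical, and get_types() can be used to find what the target actually accepts:

>>> # hypothetical image types and file names
>>> target.images.upload_set({
>>>     "kernel-x86": "build/zephyr/zephyr.bin",
>>>     "rom": "build/rom.bin",
>>> })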
-
get_types
()¶
-
8.1.4.8. Flash the target with ioc_flash_server_app¶
-
class
tcfl.target_ext_ioc_flash_server_app.
extension
(target)¶ Extension to
tcfl.tc.target_c
to run the ioc_flash_server_app command on a target via the server in a safe way.

To configure this interface on a target, see
ttbl.ioc_flash_server_app.interface
.-
run
(mode, filename, generic_id=None, baudrate=None)¶ Run the ioc_flash_server_app command on the target in the server in a safe way.
Parameters:
-
-
tcfl.target_ext_ioc_flash_server_app.
cmdline_ioc_flash_server_app
(args)¶
-
tcfl.target_ext_ioc_flash_server_app.
cmdline_setup
(argsp)¶
8.1.4.9. Power the target on or off¶
-
class
tcfl.target_ext_power.
power
(target)¶ Extension to
tcfl.tc.target_c
to run methods from the power control interface to TTBD targets.

Use as:
>>> target.power.on()
>>> target.power.off()
>>> target.power.cycle()
>>> target.power.reset()
-
on
()¶ Power on a target
-
get
()¶ Return a target’s power status, True if powered, False otherwise.
-
off
()¶ Power off a target
-
cycle
(wait=None)¶ Power cycle a target
-
reset
()¶ Reset a target
-
8.1.4.10. Run commands in a shell available on a target's serial console¶
Also allows basic file transmission over serial line.
8.1.4.10.1. Shell prompts¶
Waiting for a shell prompt is a much harder problem than it seems at first.
Problems:
- Background processes or (in the serial console, the kernel) printing lines in the middle.
- Even with line buffered output, when there are different CRLF conventions, a misplaced newline or carriage return can wreak havoc.
- As well, if a background process / kernel prints a message after the prompt is printed, a $ will no longer match. The \Z regex operator cannot be used for the same reason.
- CRLF conventions make it harder to use the ^ and $ regex metacharacters.
- ANSI sequences: a human doesn't see or notice them, but to the computer / regular expression they are there.
Thus, resorting to matching a single line is the best bet; however, it is almost impossible to guarantee that it is the last one, as the multiple formats of prompts could match other text.
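If the preconfigured patterns do not match a given system, the expected prompt can be narrowed per target; the following is a sketch where the pattern is just an example for a root shell prompt:

>>> import re
>>> # match something like 'root@hostname:~# ' on this target only
>>> target.shell.shell_prompt_regex = re.compile("root@.*# ")
>>> target.shell.up(user = 'root')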
-
tcfl.target_ext_shell.
shell_prompts
= ['[-/\\@_~: \\x1b=;\\[0-9A-Za-z]+ [\\x1b=;\\[0-9A-Za-z]*[#\\$][\\x1b=;\\[0-9A-Za-z]* ', '[^@]+@.*[#\\$] ', '[^:]+:.*[#\\$>]']¶ What is in a shell prompt?
-
class
tcfl.target_ext_shell.
shell
(target)¶ Extension to
tcfl.tc.target_c
for targets that support some kind of shell (Linux, Windows) to run some common remote commands without needing to worry about the details.

The target has to be set to boot into a console prompt, so password login has to be disabled.
>>> target.shell.up()
Waits for the shell to be up and ready; sets it up so that if an error happens, it will print an error message and raise a block exception. Note you can change what is expected as a
shell prompt
.

>>> target.shell.run("some command")
Remove remote files (if the target supports it) with:
>>> target.shell.file_remove("/tmp/filename")
Copy files to the target with:
>>> target.shell.file_copy_to("local_file", "/tmp/remote_file")
-
shell_prompt_regex
= <_sre.SRE_Pattern object>¶
-
linux_shell_prompt_regex
= <_sre.SRE_Pattern object>¶ Deprecated, use
shell_prompt_regex
-
up
(tempt=None, user=None, login_regex=<_sre.SRE_Pattern object>, delay_login=0, password=None, password_regex=<_sre.SRE_Pattern object>, shell_setup=True, timeout=120)¶ Wait for the shell in a console to be ready
Giving it ample time to boot, wait for a
shell prompt
and set up the shell so that if an error happens, it will print an error message and raise a block exception. Optionally login as a user and password.>>> target.shell.up(user = 'root', password = '123456')
Parameters: - tempt (str) – (optional) string to send before waiting for the loging prompt (for example, to send a newline that activates the login)
- user (str) – (optional) if provided, it will wait for login_regex before trying to login with this user name.
- password (str) – (optional) if provided, and a password prompt is found, send this password.
- login_regex (str) – (optional) if provided (string or compiled regex) and user is provided, it will wait for this prompt before sending the username.
- password_regex (str) – (optional) if provided (string or compiled regex) and password is provided, it will wait for this prompt before sending the password.
- delay_login (int) – (optional) wait this many seconds before sending the user name after finding the login prompt.
- shell_setup (bool) – (optional, default) setup the shell up by disabling command line editing (makes it easier for the automation) and set up hooks that will raise an exception if a shell command fails.
- timeout (int) – [optional] seconds to wait for the login prompt to appear
-
run
(cmd=None, expect=None, prompt_regex=None, output=False, output_filter_crlf=True, timeout=None, trim=False)¶ Runs some command as a shell command and wait for the shell prompt to show up.
If it fails, it will raise an exception. If you want to get the error code or not have it raise exceptions on failure, you will have to play shell-specific games, such as:
>>> target.shell.run("failing-command || true")
Files can be easily generated in unix targets with commands such as:
>>> target.shell.run("""
>>> cat > /etc/somefile <<EOF
>>> these are the
>>> file contents
>>> that I want
>>> EOF""")
or collecting the output:
>>> output = target.shell.run("ls -1 /etc/", output = True)
>>> for file in output.split('\r\n'):
>>>     target.report_info("file %s" % file)
>>>     target.shell.run("md5sum %s" % file)
Parameters: - cmd (str) – (optional) command to run; if none, only the expectations are waited for (if expect is not set, then only the prompt is expected).
- expect – (optional) output to expect (string or regex) before the shell prompt. This can also be a list of things to expect (in the given order)
- prompt_regex – (optional) output to expect (string or regex) as a shell prompt, which is always to be found at the end. Defaults to the preconfigured shell prompt (NUMBER $).
- output (bool) – (optional, default False) return the output of the command to the console; note the output includes the execution of the command itself.
- output_filter_crlf (bool) – (optional, default True) if we
are returning output, filter out
\r\n
to whatever our CRLF convention is. - trim (bool) – if
output
is True, trim the command and the prompt from the beginning and the end of the output respectively (True)
Returns str: if
output
is True, a string with the output of the command.

Warning
if
output_filter_crlf
is False, this output will be\r\n
terminated and it will be confusing because regexes won't work right away. A quick, dirty fix:

>>> output = output.replace('\r\n', '\n')
output_filter_crlf
enabled replaces this output with:

>>> output = output.replace('\r\n', target.crlf)
-
file_remove
(remote_filename)¶ Remove a remote file (if the target supports it)
-
files_remove
(*remote_filenames)¶ Remove multiple remote files (if the target supports it)
-
file_copy_to
(local_filename, remote_filename)¶ Send a file to the target via the console (if the target supports it)
Encodes the file to base64 and sends it via the console in chunks of 64 bytes (some consoles are kinda…unreliable) to a file in the target called /tmp/file.b64, which we then decode back to normal.
Assumes the target has python3; permissions are not maintained
Note
it is slow. The limits are not well defined; how big a file can be sent/received will depend on local and remote memory capacity, as things are read whole. This could be optimized to stream instead of reading it all at once, but sending a very big file over a cheap ASCII protocol is still not a good idea. You have been warned.
-
file_copy_from
(local_filename, remote_filename)¶ Receive a file from the target via the console (if the target supports it)
Encodes the file to base64 and sends it via the console in chunks of 64 bytes (some consoles are kinda…unreliable) to a file in the target called /tmp/file.b64, which we then decode back to normal.
Assumes the target has python3; permissions are not maintained
Note
it is slow. The limits are not well defined; how big a file can be sent/received will depend on local and remote memory capacity, as things are read whole. This could be optimized to stream instead of reading it all at once, but sending a very big file over a cheap ASCII protocol is still not a good idea. You have been warned.
-
8.1.4.11. Run commands to the target and copy files back and forth using SSH¶
-
class
tcfl.target_ext_ssh.
ssh
(target)¶ Extension to
tcfl.tc.target_c
for targets that support SSH to run remote commands via SSH or copy files around.Currently the target the target has to be set to accept passwordless login, either by:
disabling password for the target user (DANGEROUS!! use only on isolated targets)
storing SSH identities in SSH agents (FIXME: not implemented yet) and provisioning the keys via cloud-init or similar
Use as (full usage example in
/usr/share/tcf/examples/test_linux_ssh.py
):

As described in
IP tunnels
, upon which this extension builds, this will only work with a target with IPv4/6 connectivity, which means there has to be an interconnect powered on and reachable for the server and kept active, so the server doesn't power it off.

Ensure the interconnect is powered on before powering on the target; otherwise some targets won't acquire an IP configuration (as they will assume there is no interconnect); e.g., on start:
>>> def start(self, ic, target):
>>>     ic.power.on()
>>>     target.power.cycle()
>>>     target.shell.linux_shell_prompt_regex = re.compile('root@.*# ')
>>>     target.shell.up(user = 'root')
indicate to the tunneling system which IP address is to be used:
>>> target.tunnel.ip_addr = target.addr_get(ic, "ipv4")
Use SSH:
>>> exitcode, _stdout, _stderr = target.ssh.call("test -f file_that_should_exist")
>>> target.ssh.check_output("test -f file_that_should_exist")
>>> output = target.ssh.check_output("cat some_file")
>>> if 'what_im_looking_for' in output:
>>>     do_something()
>>> target.ssh.copy_to("somedir/local.file", "remotedir")
>>> target.ssh.copy_from("someremotedir/file", "localdir")
FIXME: provide pointers to a private key to use
Troubleshooting:
SSH fails to login; open the report file generated with tcf run, look at the detailed error output:
returncode will show as 255: login error. Do you have credentials loaded? Is the configuration in the target allowing you to login as such user with no password? Or do you have the SSH keys configured?:
E#1 @local  eval errored: ssh command failed: echo hello
E#1 @local  ssh_cmd: /usr/bin/ssh -vp 5400 -q -o BatchMode yes -o StrictHostKeyChecking no root@jfsotc10.jf.intel.com -t echo hello
...
E#1 @local  eval errored trace: error_e: ('ssh command failed: echo hello', {'ssh_cmd': '/usr/bin/ssh -vp 5400 -q -o BatchMode yes -o StrictHostKeyChecking no root@jfsotc10.jf.intel.com -t echo hello', 'output': '', 'cmd': ['/usr/bin/ssh', '-vp', '5400', '-q', '-o', 'BatchMode yes', '-o', 'StrictHostKeyChecking no', 'root@jfsotc10.jf.intel.com', '-t', 'echo hello'], 'returncode': 255})
E#1 @local  returncode: 255
To see verbose SSH output for debugging, append
-v
to_ssh_cmdline_options
:

>>> target.ssh._ssh_cmdline_options.append("-v")
-
host
= None¶ SSH destination host; this will be filled out automatically with any IPv4 or IPv6 address the target declares, but can be assigned to a new value if needed.
-
login
= None¶ SSH login identity; default to root login, as otherwise it would default to the login of the user running the daemon.
-
port
= None¶ SSH port to use
-
run
(cmd, nonzero_e=None)¶ Run a shell command over SSH, return exitcode and output
Similar to
subprocess.call()
; note SSH is normally run in verbose mode (unless-q
has been set it_ssh_cmdline_options
, so the stderr will contain SSH debug information.Parameters: - cmd (str) –
shell command to execute via SSH, substituting any
%(KEYWORD)[ds]
field from the target’s keywords intcfl.tc.target_c.kws
See how to find which fields are available.
- nonzero_e (tcfl.tc.exception) – exception to raise in case of non
zero exit code. Must be a subclass of
tcfl.tc.exception
(i.e.:tcfl.tc.failed_e
,tcfl.tc.error_e
,tcfl.tc.skip_e
,tcfl.tc.blocked_e
) or None (default) to not raise anything and just return the exit code.
Returns: tuple of
exitcode, stdout, stderr
, the latter two being tempfile file descriptors containing the standard output and standard error of running the command.

The stdout (or stderr) can be read with:
>>> stdout.read()
- cmd (str) –
-
call
(cmd)¶ Run a shell command over SSH, returning the output
Please see
run()
for argument description; the only difference is this function raises an exception if the call fails.
-
check_call
(cmd, nonzero_e=<class 'tcfl.tc.error_e'>)¶ Run a shell command over SSH, returning the output
Please see
run()
for argument description; the only difference is this function raises an exception if the call fails.
-
check_output
(cmd, nonzero_e=<class 'tcfl.tc.error_e'>)¶ Run a shell command over SSH, returning the output
Please see
run()
for argument description; the only difference is this function returns the stdout only if the call succeeds and raises an exception otherwise.
-
copy_to
(src, dst='', recursive=False, nonzero_e=<class 'tcfl.tc.error_e'>)¶ Copy a file or tree with SCP to the target from the client
Parameters: - src (str) – local file or directory to copy
- dst (str) – (optional) destination file or directory (defaults to root's home directory)
- recursive (bool) – (optional) copy recursively (needed for directories)
- nonzero_e (tcfl.tc.exception) – exception to raise in case of
non zero exit code. Must be a subclass of
tcfl.tc.exception
(i.e.:tcfl.tc.failed_e
,tcfl.tc.error_e
,tcfl.tc.skip_e
,tcfl.tc.blocked_e
) or None (default) to not raise anything and just return the exit code.
-
copy_from
(src, dst='.', recursive=False, nonzero_e=<class 'tcfl.tc.error_e'>)¶ Copy a file or tree with SCP from the target to the client
Parameters: - src (str) – remote file or directory to copy
- dst (str) – (optional) destination file or directory (defaults to current working directory)
- recursive (bool) – (optional) copy recursively (needed for directories)
- nonzero_e (tcfl.tc.exception) – exception to raise in case of
non zero exit code. Must be a subclass of
tcfl.tc.exception
(i.e.:tcfl.tc.failed_e
,tcfl.tc.error_e
,tcfl.tc.skip_e
,tcfl.tc.blocked_e
) or None (default) to not raise anything and just return the exit code.
8.1.4.12. Create and remove network tunnels to the target via the server¶
-
class
tcfl.target_ext_tunnel.
tunnel
(target)¶ Extension to
tcfl.tc.target_c
to create IP tunnels to targets with IP connectivity.Use by indicating a default IP address to use for interconnect ic or explicitly indicating it in the
add()
function:>>> target.tunnel.ip_addr = target.addr_get(ic, "ipv4") >>> target.tunnel.add(PORT) >>> target.tunnel.remove(PORT) >>> target.tunnel.list()
Note that for tunnels to work, the target has to be acquired and IP has to be up on it, which might require it to be connected to some IP network (it can be a TCF interconnect or any other network).
-
add
(port, ip_addr=None, proto=None)¶ Set up a TCP/UDP/SCTP v4 or v6 tunnel to the target
A local port of the given protocol in the server is forwarded to the target's port. Teardown with
remove()
.If the tunnel already exists, it is not recreated, but the port it uses is returned.
For example, to redirect the target's TCP4 port 3000 to a port in the server that provides
target
(target.kws['server']):

>>> server_name = target.rtb.parsed_url.hostname
>>> server_port = target.tunnel.add(3000)
Now connecting to
server_name:server_port
takes you to the target's port 3000.

Parameters: Returns int local_port: port in the server to connect to in order to access the target.
-
remove
(port, ip_addr=None, proto=None)¶ Tear down a TCP/UDP/SCTP v4 or v6 tunnel to the target previously created with
add()
.Parameters:
-
list
()¶ List existing IP tunnels
Returns: list of tuples (protocol, target-ip-address, port, port-in-server)
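For example, to create a tunnel and then report the tunnels currently in place; this is a sketch and port 22 is just an illustrative target port:

>>> target.tunnel.ip_addr = target.addr_get(ic, "ipv4")
>>> server_port = target.tunnel.add(22)
>>> for proto, ip_addr, port, port_in_server in target.tunnel.list():
>>>     target.report_info("%s tunnel to %s:%s via server port %s"
>>>                        % (proto, ip_addr, port, port_in_server))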
-
-
tcfl.target_ext_tunnel.
cmdline_tunnel_add
(args)¶
-
tcfl.target_ext_tunnel.
cmdline_tunnel_remove
(args)¶
-
tcfl.target_ext_tunnel.
cmdline_tunnel_list
(args)¶
-
tcfl.target_ext_tunnel.
cmdline_setup
(argsp)¶
8.1.5. TCF run Application builders¶
Application builders are a generic tool for building applications of different types.
They all use the same interface to make it easy and fast for the test
case writer to specify what has to be built for which BSP of which
target with a very simple specification given to the
tcfl.tc.target()
decorator:
>>> tcfl.tc.target(app_zephyr = { 'x86': "path/to/zephyr_app" },
>>> app_sketch = { 'arc': "path/to/sketch" })
>>> class mytestcase(tcfl.tc.tc_c):
>>> ...
which allows the testcase developer to point the app builders to the locations of the source code and the BSPs of the targets it shall run on, and have them deal with the details of inserting the right code to build, deploy, setup and start the testcase.
This allows the testcase writer to focus on writing the test application.
App builders:
- can be made once and reused multiple times
- they are plugins to the testcase system
- keep no state; they need to be able to gather everything from the parameters passed (this is needed so they can be called from multiple threads).
- are always called app_SOMETHING
Note implementation details on tcfl.app.app_c
; drivers can
be added with tcfl.app.driver_add()
.
Currently available application builders are described below.
-
tcfl.app.
import_mp_pathos
()¶
-
tcfl.app.
import_mp_std
()¶
-
tcfl.app.
args_app_src_check
(app_name, app_src)¶ Verify the source specification for a given App Driver
-
tcfl.app.
driver_add
(cls, name=None)¶ Add a new driver for app building
Note the driver will be called by its class name; it is recommended to name them app_something.
-
tcfl.app.
driver_valid
(name)¶
-
tcfl.app.
get_real_srcdir
(origin_filename, _srcdir)¶ Return the absolute version of _srcdir, which might be relative which the file described by origin_file.
-
tcfl.app.
configure
(ab, testcase, target, app_src)¶
-
tcfl.app.
build
(ab, testcase, target, app_src)¶
-
tcfl.app.
deploy
(images, ab, testcase, target, app_src)¶
-
tcfl.app.
setup
(ab, testcase, target, app_src)¶
-
tcfl.app.
start
(ab, testcase, target, app_src)¶
-
tcfl.app.
teardown
(ab, testcase, target, app_src)¶
-
tcfl.app.
clean
(ab, testcase, target, app_src)¶
-
class
tcfl.app.
app_c
¶ Subclass this to create an App builder, provide implementations only of what is needed.
The driver will be invoked by the test runner using the methods
tcfl.app.configure()
,tcfl.app.build()
,tcfl.app.deploy()
,tcfl.app.setup()
,tcfl.app.start()
,tcfl.app.teardown()
,tcfl.app.clean()
If your App builder does not need to implement one of them, simply do not define it in the class.
Targets with multiple BSPs
When the target contains multiple BSPs the App builders are invoked for each BSP in the same order as they were declared with the decorator
tcfl.tc.target()
. E.g.:

>>> @tcfl.tc.target(app_zephyr = { 'arc': 'path/to/zephyr_code' },
>>>                 app_sketch = { 'x86': 'path/to/arduino_code' })
We are specifying that the x86 BSP in the target has to run code to be built with the Arduino IDE/compiler and the arc core will run a Zephyr app, built with the Zephyr SDK.
If the target is being run in a BSP model where one or more of the BSPs are not used, the App builders are responsible for providing stub information with
tcfl.tc.target_c.stub_app_add()
. As well, if an app builder determines a BSP does not need to be stubbed, it can also remove it from the target's list with:

>>> del target.bsps_stub[BSPNAME]
Note this removal is done at the specific target level, as each target might have different models or needs.
Note you can use the dictionary
tcfl.tc.tc_c.buffers()
to store data to communicate amongst phases. This dictionary:- will be cleaned in between evaluation runs
- is not multi-threaded protected; take
tcfl.tc.tc_c.buffers_lock()
if you need to access it from different parallel execution methods (setup/start/eval/test/teardown methods are always executed serially); see the sketch after this list.
- take care not to start more than once; app builders are set up to start a target only if there is not a field started-TARGETNAME set to True.
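For example, an app builder (an app_c subclass) could stash a value in one phase and read it back in another; this is only a sketch, assuming buffers is a plain dictionary attribute protected by the buffers_lock lock described above, and the key name is illustrative:

>>> @staticmethod
>>> def setup(testcase, target, app_src):
>>>     with testcase.buffers_lock:
>>>         testcase.buffers['app_port'] = 8080    # illustrative value
>>>
>>> @staticmethod
>>> def start(testcase, target, app_src):
>>>     with testcase.buffers_lock:
>>>         port = testcase.buffers.get('app_port')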
-
static
configure
(testcase, target, app_src)¶
-
static
build
(testcase, target, app_src)¶
-
static
deploy
(images, testcase, target, app_src)¶
-
static
setup
(testcase, target, app_src)¶
-
static
start
(testcase, target, app_src)¶
-
static
teardown
(testcase, target, app_src)¶
-
static
clean
(testcase, target, app_src)¶
-
tcfl.app.
make_j_guess
()¶ How much parallelism?
In theory there is a make job server that can help throttle this, but in practice it also influences how much load building a bunch of TCs can generate, so…
So depending on how many jobs are already queued, decide how much -j we want to give to make.
-
tcfl.app_zephyr.
boot_delay
= {}¶ for each target type, an integer on how long we shall wait to boot Zephyr
-
class
tcfl.app_zephyr.
app_zephyr
¶ Support for configuring, building, deploying and evaluating a Zephyr-OS application.
To setup:
a toolchain capable of building Zephyr has to be installed in the system and the corresponding environment variables exported, such as:
- ZEPHYR_SDK_INSTALL_DIR for the Zephyr SDK
- ISSM_INSTALLATION_PATH for the Intel ISSM toolchain
- ESPRESSIF_TOOLCHAIN_PATH for the Espressif toolchain
- XTENSA_SDK for the Xtensa SDK
environment variables set:
- ZEPHYR_TOOLCHAIN_VARIANT (ZEPHYR_GCC_VARIANT before v1.11) pointing to the toolchain to use (zephyr, issm, espressif, xcc, etc…)
- ZEPHYR_BASE pointing to the path where the Zephyr tree is located
note these variables can be put in a TCF configuration file or they can also be specified as options to app_zephyr (see below).
Usage:
Declare in a target app_zephyr and point to the source tree and optionally, provide extra arguments to add to the Makefile invocation:
@tcfl.tc.target("zephyr_board", app_zephyr = 'path/to/app/source')
class my_test(tc.tc_c):
    ...
If extra makefile arguments are needed, a tuple that starts with the path and contains multiple strings can be used:
@tcfl.tc.target("zephyr_board",
                app_zephyr = ( 'path/to/app/source',
                               'ZEPHYR_TOOLCHAIN_VARIANT=zephyr',
                               'ZEPHYR_BASE=some/path',
                               'OTHEREXTRAARGSTOZEPHYRMAKE'))
class my_test(tc.tc_c):
    ...
to build multiple BSPs of the same target:
@tcfl.tc.target("type == 'arduino101'",
                app_zephyr = {
                    'x86': ( 'path/to/app/source/for/x86',
                             'ZEPHYR_TOOLCHAIN_VARIANT=zephyr',
                             'ZEPHYR_BASE=some/path',
                             'OTHEREXTRAARGSTOZEPHYRMAKE' ),
                    'arc': ( 'path/to/app/source/for/arc',
                             'ZEPHYR_TOOLCHAIN_VARIANT=zephyr',
                             'ZEPHYR_BASE=some/path',
                             'OTHEREXTRAARGSTOZEPHYRMAKE' )
                })
class my_test(tc.tc_c):
    ...
furthermore, common options can be specified in app_zephyr_options (note this is just a string versus a tuple), so the previous example can be simplified as:
@tcfl.tc.target("type == 'arduino101'",
                app_zephyr = {
                    'x86': ( 'path/to/app/source/for/x86', 'OTHER-X86-EXTRAS' ),
                    'arc': ( 'path/to/app/source/for/arc', 'OTHER-ARC-EXTRAS' )
                },
                app_zephyr_options =
                    'ZEPHYR_TOOLCHAIN_VARIANT=zephyr '
                    'ZEPHYR_BASE=some/path '
                    'OTHER-COMMON-EXTRAS')
class my_test(tc.tc_c):
    ...
The test creator can set the attributes (in the test class or in the target object):
zephyr_filter
zephyr_filter_origin
(optional)
to indicate a Zephyr Sanity Check style filter to apply before building, to be able to skip a test case if a logical expression on the Zephyr build configuration is not satisfied. Example:
@tcfl.tc.target("zephyr_board", app_zephyr = ...)
class my_test(tc.tc_c):
    zephyr_filter = "CONFIG_VALUE_X == 2000 and CONFIG_SOMETHING != 'foo'"
    zephyr_filter_origin = __file__
-
static
configure
(testcase, target, app_src)¶
-
static
build
(testcase, target, app_src)¶ Build a Zephyr App for whichever BSP is active on a target
-
static
deploy
(images, testcase, target, app_src)¶
-
static
setup
(testcase, target, app_src)¶
-
static
clean
(testcase, target, app_src)¶
-
class
tcfl.app_zephyr.
zephyr
(target)¶ Extension to
tcfl.tc.target_c
to add Zephyr specific APIs; this extension is activated only if any BSP in the target is to be loaded with Zephyr.-
static
sdk_keys
(arch, variant)¶ Figure out the architecture, calling convention and SDK prefixes for this target’s current BSP.
-
config_file_read
(name=None, bsp=None)¶ Open a config file and return its values as a dictionary
Parameters: - name (str) – (optional) name of the configuration file, default to %(zephyr_objdir)s/.config.
- bsp (str) –
(optional) BSP on which to operate; when the target is configured for a BSP model which contains multiple Zephyr BSPs, you will need to specify which one to modify.
This parameter can be omitted if only one BSP is available in the current BSP Model.
Returns: dictionary keyed by CONFIG_ name with its value.
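For example, to verify a setting in the build configuration; this is a sketch where the CONFIG_ name and the expected value are illustrative:

>>> config = target.zephyr.config_file_read()
>>> # values are keyed by CONFIG_ name; adjust the expected value to how
>>> # your build represents booleans (assumed here to be 'y')
>>> if config.get("CONFIG_UART_CONSOLE") != "y":
>>>     raise tcfl.tc.skip_e("UART console not enabled in this build")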
-
config_file_write
(name, data, bsp=None)¶ Write an extra config file called NAME.conf in the Zephyr’s App build directory.
Note this takes care to only write it if the data is new or the file does not exist, to avoid unnecessary rebuilds.
Parameters: - name (str) – Name for the configuration file; this has to be a valid filename; .conf will be added by the function.
- data (str) –
Data to include in the configuration file; this is (currently) valid kconfig data, which are lines of text with # acting as comment character; for example:
CONFIG_UART_CONSOLE_ON_DEV_NAME="UART_1"
- bsp (str) –
(optional) BSP on which to operate; when the target is configured for a BSP model which contains multiple Zephyr BSPs, you will need to specify which one to modify.
This parameter can be omitted if only one BSP is available in the current BSP Model.
Example
>>> if something:
>>>     target.zephyr.config_file_write("mytweaks",
>>>                                     'CONFIG_SOMEVAR=1\n'
>>>                                     'CONFIG_ANOTHER="VALUE"\n')
-
check_filter
(_objdir, arch, board, _filter, origin=None)¶ This is going to be called by the App Builder’s build function to evaluate if we need to filter out a build of a testcase. In any other case, it will be ignored.
Parameters:
-
static
-
class
tcfl.app_sketch.
app_sketch
¶ Driver to build Arduino Sketch applications for flashing into MCU’s BSPs.
Note the setup instructions.
-
static
configure
(testcase, target, app_src)¶
-
static
build
(testcase, target, app_src)¶ Build a Sketch App for whichever BSP is active on a target
-
static
deploy
(images, testcase, target, app_src)¶
-
static
clean
(testcase, target, app_src)¶
-
static
-
class
tcfl.app_manual.
app_manual
¶ This is an App Builder that tells the system the testcase will provide instructions to configure/build/deploy/eval/clean in the testcase methods.
It is used when we are combining App Builders to build for some BSPs with manual methods. Note it can also be used to manually add stubbing information with:
>>> for bsp_stub in 'BSP1', 'BSP2', 'BSP3':
>>>     target.stub_app_add(bsp_stub, app_manual, "nothing")
8.1.6. TCF run report drivers¶
8.1.6.1. Report infrastructure¶
Infrastructure for reporting test case results and progress in a modular way.
-
tcfl.report.
jinja2_xml_escape
(data)¶ Lame filter to XML-escape any characters that are not allowed in XML according to https://www.w3.org/TR/xml/#charsets
The allowed characters are:
#x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
The rest need to be escaped as &#HHHH;
-
class
tcfl.report.
report_c
(verbosity=1)¶ Report driver to write to stdout (for human consumption) and to a log file
Parameters: verbosity (int) – (optional) maximum verbosity to report; defaults to 1, for failures only -
verbosity_set
(verbosity)¶
-
classmethod
driver_add
(obj, origin=None)¶ Add a driver to handle other report mechanisms
A report driver is used by tcf run, the meta test runner, to report information about the execution of testcases.
A driver implements the reporting in whichever way it decides it needs to suit the application, uploading information to a server, writing it to files, printing it to screen, etc.
>>> import tcfl.report
>>> class my_report_driver(tcfl.report.report_c):
>>>     ...
>>> tcfl.report.report_c.driver_add(my_report_driver)
Parameters: - obj (tcfl.report.report_c) – object subclasss of :class:tcfl.report.report_c that implements the reporting.
- origin (str) – (optional) where is this being registered; defaults to the caller of this function.
-
classmethod
driver_rm
(obj)¶ Remove a report driver previously added with driver_add()
Parameters: origin (str) –
-
classmethod
report
(level, alevel, ulevel, _tc, tag, message, attachments)¶ Low level reporting
Parameters: - level (int) – report level for the main, one liner message; note report levels greater or equal to 1000 are using to pass control messages, so they might not be subject to normal verbosity control (for example, for a log file you might want to always include them).
- alevel (int) – report level for the attachments, or extra messages
- ulevel (int) – report level for unabridged attachments
- obj (str) – string identifying the reporting object
- prefix (str) – prefix for the message
- tag (str) – tag for this message (PASS, ERRR, FAIL, BLCK, INFO), all same length
- message (str) – message string
- dict – dictionary of attachments; either an open file, to report the contents of the file or list of strings or strings or whatever
When a testcase completes execution, it will issue a report with a message COMPLETION <result> (where result is passed, error, failed, blocked or skipped) and a very high verbosity level, which is meant for the driver to do any synchronization tasks it might need (eg: uploading the data to a database).
Likewise, when all the testcases are run, the global testcase reporter will use a COMPLETION <result> report. The global testcase reporter has an attribute skip_reports set to True and thus can be identified with:
if getattr(_tc, "skip_reports", False) == True:
    do_something_for_the_global_reporter()
-
-
class
tcfl.report.
report_console_c
(verbosity, log_dir, log_file=None, verbosity_logf=999)¶ Report driver to write to stdout (for human consumption) and to a log file
Parameters: verbosity_logf (int) – (optional) maximum verbosity to report to the logfile; defaults to all of them, but in some cases you might want to limit it to cut down on disk consumption.
-
class
tcfl.report.
file_c
(log_dir)¶ Report driver to write report files with information about a testcase.
The Jinja2 templating engine is used to gather templates and fill them out with information, so it can create text, HTML, Junit XML, etc using templates.
This driver saves log messages to separate files based on their
tcfl.msgid_c
code (which is unique to each testcase running separately on a thread). When it detects a "COMPLETION" message (a top level conclusion), it will generate a report for each configured template, using data from the testcase metadata and the output it saved to those separate files.

The default configuration (the text template) will generate files called
report-[RUNID:]ID.txt
files for each error/failure/blockage. To enable it for passed or skipped test cases:

>>> tcfl.report.file_c.templates['junit']['report_pass'] = False
>>> tcfl.report.file_c.templates['junit']['report_skip'] = False
The junit template (disabled by default) will generate
junit-[RUNID:]ID.xml
files with information from all the testcases executed based on the configuration settings below.To enable it for all conditions (or disable any replacing True with False):
>>> tcfl.report.file_c.templates['junit']['report_pass'] = True >>> tcfl.report.file_c.templates['junit']['report_skip'] = True >>> tcfl.report.file_c.templates['junit']['report_error'] = True >>> tcfl.report.file_c.templates['junit']['report_fail'] = True >>> tcfl.report.file_c.templates['junit']['report_block'] = True
See
templates
for more information.-
templates
= {'junit': {'report_fail': False, 'name': 'junit.j2.xml', 'report_skip': False, 'report_error': False, 'output_file_name': 'junit-%(runid)s:%(tc_hash)s.xml', 'report_block': False, 'report_pass': False}, 'text': {'report_fail': True, 'name': 'report.j2.txt', 'report_skip': False, 'report_error': True, 'output_file_name': 'report-%(runid)s:%(tc_hash)s.txt', 'report_block': True, 'report_pass': False}}¶ To create more templates, add a new dictionary:
>>> tcfl.report.file_c.templates['MYTEMPLATE'] = dict(
>>>     name = 'report.j2.txt',
>>>     output_file_name = 'report-%(runid)s:%(tc_hash)s.txt',
>>>     report_pass = False,
>>>     report_skip = False,
>>>     report_error = True,
>>>     report_fail = True,
>>>     report_block = True
>>> )
- name (str): name of a Jinja2 template file available on .tcf, ~/.tcf, /etc/tcf or /usr/share/tcf/ (this is the configuration path and will change if it has another configuration prefix FIXME: reference).
- output_file_name (str): Python template that defines the name of the output file. The fields are the testcase keywords and those described below for templates.
- report_pass (bool): report (or not) if the testcase passes
- report_fail (bool): report (or not) if the testcase fails
- report_error (bool): report (or not) if the testcase errors
- report_block (bool): report (or not) if the testcase blocks
- report_skip (bool): report (or not) if the testcase skips
Creating templates
The Jinja2 templating mechanism allows for extensions to create any kind of file you might need.
To create templates, as described above, define a dictionary that describes it and create a template file.
To quickly change the existing ones, you can use Jinja2 template inheritance; for example, the default report.j2.txt:

{% extends "report-base.j2.txt" %}
which uses all the default settings in
report-base.j2.txt
and can use things like block replacing to add more information:{% extends "report-base.j2.txt" %} {%- block HEADER_PREFIX -%} Add some more information here that will end up in the final report. {%- endblock -%}
Jinja2 will replace that in the final report in the placeholder for {%- block HEADER_PREFIX -%}.

Fields available (to be used as Jinja2 variables with {{ FIELD }} and other Jinja2 operators):

{{ msg_tag }}: testcase’s result (
PASS
,FAIL
,ERRR
,SKIP
,BLCK
), also as{{ result }}
and{{ result_past }}
formatted for text in present and past tense (eg: pass vs passed){{ message }}: message that came with the top level report (
COMPLETION passed|failed|error|failed|skip|block
any variable defined in the
tcfl.config
space is mapped totcfl_config_
; for example {{ tcfl_config_urls }} which maps totcfl.config.urls
Only variables of the following types are exported: integers, strings, lists, dictionaries and tuples.
{{ t_option }} a value that indicates what has to be given to tcf to select the targets where the testcase was run.
{{ log }}: an iterator to the contents of log files that returns three fields: - message identifier - target group name - message itself
Can be used as:
{% for ident, tgname, message in log -%} {{ "%-10s %-25s %s" | format(ident, tgname, message) }} {% endfor %}
Depending on the destination format, you can pipe this through Jinja2 filters to escape certain characters. For example, there is:
escape
which escapes suitable for HTMLxml_escape
which escapes suitable for XML
which can be used as:
{% for ident, tgname, message in log -%} {{ "%-10s %-25s %s" | format(ident, tgname, message) | xml_escape }} {% endfor %}
{{ targets }}: list of targets used for the testcases, with fields:
want_name
(str): name the testcase gave to this target (e.g.: target)fullid
(str): name of the actual target at the server (e.g.: SERVERNAME/qz43i-x86)type
(str): type of the target (e.g.: qemu-linux-XYZ)
Extending and modifying keywords
Hook functions can be configured to execute before the testcase is launched; they can be used to extend the keywords available to the templates, among other things.
-
hooks
= []¶ List of hook functions to call before generating a report
For example:
def my_hook(obj, testcase, kws):
    # obj is the report driver instance, testcase the testcase being
    # reported, kws the keyword dictionary used by the templates
    assert isinstance(obj, tcfl.report.file_c)
    assert isinstance(testcase, tcfl.tc.tc_c)
    assert isinstance(kws, dict)
    kws['some new keyword'] = SOMEVALUE

tcfl.report.file_c.hooks.append(my_hook)
Note this is done for all the templates; do not use global variables, as these functions might be called from multiple threads.
-
-
class
tcfl.report_mongodb.
report_mongodb_c
¶ Report results of testcase execution into a MongoDB database
The database used is pointed to by MongoDB URL
url
and namedb_name
.Testcase execution results
The results of execution (pass/errr/fail/skip/block) are stored in a collection called results.
For each testcase and the targets where it is run (identified by a hashid) we generate a document; each report done for that hashid is a record in said document, with any attachments stored.
Each result document is keyed by runid:hashid and structured as:
- result:
- runid:
- hashid
- tc_name
- target_name
- target_types
- target_server
- timestamp
- targets: dict keyed by target name
- TARGETNAME:
- id
- server
- type
- bsp_model
- TARGETNAME:
- results: list of
- timestamp
- ident
- level
- tag
- message
- attachments
Notes:
- When a field is missing we don’t insert it, to save space; it has to be considered an empty string (if one was expected) or not present
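As an illustration only, one such result document could look like the following (all values are made up; the document key itself, runid:hashid, is not shown):

result_doc = {
    "runid": "ci-1234",
    "hashid": "abcd",
    "tc_name": "tests/test_something.py",
    "target_name": "qz43i-x86",
    "target_types": "qemu-linux-XYZ",
    "target_server": "https://SERVERNAME:5000",
    "timestamp": "2019-04-01 10:00:00",
    "targets": {
        "target": {
            "id": "qz43i-x86",
            "server": "https://SERVERNAME:5000",
            "type": "qemu-linux-XYZ",
            "bsp_model": "x86",
        },
    },
    "results": [
        {
            "timestamp": "2019-04-01 10:00:05",
            "ident": "abcdB",
            "level": 2,
            "tag": "PASS",
            "message": "COMPLETION passed",
            "attachments": {},
        },
    ],
}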
Usage:
Ensure you have access to a MongoDB server at
HOST:PORT
, where you can create (or there is already) a database calledDATABASENAME
.Create a TCF configuration file
{/etc/tcf,~,.}/.tcf/conf_mongodb.py
with:import tcfl.report import tcfl.report_mongodb m = tcfl.report_mongodb.report_mongodb_c() m.url = "mongodb://HOST:PORT" # Or a more complex mongodb URL m.db_name = "DATABASENAME" m.collection_name = "COLLECTIONNAME" # Optional: modify the record before going in m.complete_hooks.append(SOMEHOOKFUNCTION) tcfl.report.report_c.driver_add(m)
Troubleshooting
When giving SSL and passwords in the URL, the connection fails with messages such as ConfigurationError: command SON(…) failed: auth failed
The installation of PyMongo on your system might be too old; we need version 3 or later.
-
url
= None¶ MongoDB URL where to connect to
-
extra_params
= {}¶ MongoDB client extra params, as described in
pymongo.mongo_client.MongoClient
; you want to use this to configure SSL, such as:

tcfl.report_mongodb.report_mongodb_c.extra_params = dict(
    ssl_keyfile = PATH_TO_KEY_FILE,
    ssl_certfile = PATH_TO_CERT_FILE,
    ssl_ca_certs = PATH_TO_CA_FILE,
)
-
db_name
= None¶ Name of the database to which to connect
-
collection_name
= None¶ Name of the collection in the database to fill out
-
complete_hooks
= []¶
8.2. TCF client configuration¶
8.2.1. Configuration API for tcf¶
-
tcfl.config.
path
= []¶ The list of paths where we find configuration information
Path where shared files are stored
-
tcfl.config.
urls
= []¶ List of URLs to servers we are working with
each entry is a tuple of:
- URL (str): the location of the server
- SSL verification (bool): if we are obeying SSL certificate verification
- aka (str): short name for the server
- ca_path (str): path to certificates
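A small sketch, assuming the field order above, of walking the configured servers; the exact sense of the boolean (obey vs. ignore SSL verification) is as described in the list:

import tcfl.config

# print each configured server and whether SSL verification applies
for url, ssl_verify, aka, ca_path in tcfl.config.urls:
    print("%s (%s): SSL verification %s"
          % (aka, url, "enabled" if ssl_verify else "disabled"))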
-
tcfl.config.
url_add
(url, ssl_ignore=False, aka=None, ca_path=None)¶ Add a TTBD server
Parameters:
-
tcfl.config.
load
(config_path=None, config_files=None, state_path='~/.tcf', ignore_ssl=True)¶ Load the TCF Library configuration
This is needed before you can access from your client program any other module.
Parameters: - config_path – list of strings containing UNIX-style paths (DIR:DIR) to look for config files (conf_*.py) that will be loaded in alphabetical order. An empty path clears the current list.
- config_files – list of extra config files to load
- state_path (str) – (optional) path where to store state
- ignore_ssl (bool) – (optional) whether to ignore SSL verification or not (useful for self-signed certs)
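A minimal sketch of a standalone client script using the two calls above; the server URL is a placeholder:

import tcfl.config

# parse conf_*.py files from the default configuration paths
tcfl.config.load()
# register an extra server for this run (self-signed certificate)
tcfl.config.url_add("https://ttbd.example.com:5000", ssl_ignore = True)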
8.3. TCF client internals¶
-
class
tcfl.
msgid_c
(s=None, s_encode=None, l=4, root=None, phase=None, depth=None, parent=None)¶ Accumulate data local to the current running thread.
This is used to generate a random ID (four chars) at the beginning of the testcase run in a thread by instantiating a local object of this class. As we call deeper into functions to do different parts, we instantiate more objects that add random characters to said ID just for that call (when the object created goes out of scope, the ID returns to what it was).
Thus, as the call chain gets deeper, the message IDs go:
abcd abcdef abcdefgh abcdefghij
this allows for easy identification / lookup on a log file or classification.
Note we also keep a depth (useful for increasing the verbosity of log messages) and a phase, which we use to record the phase in which we are running, so log messages don’t have to specify it.
Note this is to be used as:
with msgid_c(ARGS): do stuff... msgid_c.ident() msgid_c.phase() msgid_c.depth()
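A sketch of the nesting behaviour described above (the actual characters are random; the values in the comments are illustrative):

import tcfl

with tcfl.msgid_c():
    outer = tcfl.msgid_c.ident()       # e.g. "abcd"
    with tcfl.msgid_c():
        inner = tcfl.msgid_c.ident()   # e.g. "abcdef": outer ID plus more chars
    # back in the outer scope, ident() returns "abcd" again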
-
tls
= <thread._local object>¶
-
classmethod
cls_init
()¶
-
classmethod
encode
(s, l)¶
-
classmethod
generate
(l=4)¶
-
classmethod
depth
()¶
-
classmethod
phase
()¶
-
classmethod
ident
()¶
-
classmethod
current
()¶
-
classmethod
parent
()¶
-
-
tcfl.
origin_get
(depth=1)¶
-
tcfl.
origin_get_object
(o)¶
-
tcfl.
origin_get_object_path
(o)¶
8.3.1. Expecting things that have to happen¶
This module implements an expecter object: something that is told to expect things to happen and what to do when they happen (or don’t).
It is a combination of a poor man’s select() and Tk/Tcl Expect.
We cannot use select() or Tk/TCL Expect or Python’s PyExpect because:
- we need to listen to many things over HTTP connections and the library is quite simplistic in that sense, so there is maybe no point in hooking up a pure event system.
- we need to be able to listen to poll for data and evaluate it from one or more sources (like serial port, sensor, network data, whatever) in one or more targets all at the same time.
- it is simple, and works quite well
Any given testcase has an expecter object associated with it that can be used to wait for a list of events to happen in one or more targets. This allows, for example, during the execution of a testcase with multiple nodes, to always have pollers reading (e.g.) their serial consoles and evaluators making sure no kernel panics are happening in any of them, while at the same time checking for the output that should be coming from them.
The ‘expecter’ object can also be associated with just a single target, for a simpler interface when access to only one target is needed.
-
class
tcfl.expecter.
expecter_c
(log, testcase, poll_period=0.25, timeout=30)¶ Object that is told to expect things to happen and what to do when they happen (or not)
When calling run(), a loop is run repeatedly, waiting poll_period seconds between polling periods, until a given timeout elapses.

On each loop run, a bunch of functions are run. Functions are added with
add()
and removed with remove().

Each function polls and stores data, evaluates said data, or both. It can then end the loop by raising an exception. It is also possible that nothing of interest happened, in which case it evaluates nothing and won’t cause the loop to end. See
add()
for more details. Some of those functions can be considered ‘expectations’ that have to pass for the full loop to be considered successful. A boolean argument to
add()
clarifies that. All those ‘expectations’ have to pass before the run can be considered successful. The loop will time out and fail if no evaluating function raises an exception to get out of it.
Rationale
This allows implementing simple usages, like waiting for something to come off any console with a default:
>>> target.wait('STRING', TIMEOUT, console = None)
which also checks for other things we know can come from the OS in a console, like abort strings, kernel panics or dumps, for which we know we should abort immediately with a specific message.
FIXME:
it has to be easy to use and still providing things like
>>> target.wait('STRING', TIMEOUT, console = None) -> { True | False } >>> target.expecter.add(console_rx, (STRING, console),) >>> target.expect('STRING', TIMEOUT) -> raise error/fail/block >>> target.on_rx('STRING', raise failure function)
-
buffers
= None¶ dictionary for poll/eval functions to store data from run to run of the loop that can be examined for evaluation; will be cleared every time the
run()
function is called.
-
active_period
= None¶ Each this many seconds, touch the targets to indicate the server we are actively using them (in case the pollers are not polling every target)
-
buffers_persistent
= None¶ dictionary for poll/eval functions to store data from run to run of the loop that can be examined for evaluation; will NOT be cleared every time the
run()
function is called.
-
have_to_pass
= None¶ Number of expectations that have to pass for a run to be successful
-
ts0
= None¶ Time base, to calculate relative timestamps; when we call run(), we reinitialized it, but we also set it here for when we call the poller outside of run()
-
timeout
¶ Time in seconds the
run()
will consider we have timed out if no polling/evaluation function raises an exception to complete the loop
-
poll_period
¶ Time in seconds the
run()
function waits before calling all the polling/evaluation functions
-
console_get_file
(target, console=None)¶ Returns: file descriptor for the file that contains the currently read console. Note the pointer in this file descriptor shall not be modified as it might be being used by expectations. If you need to read from the file, dup it:
>>> f_existing = self.tls.expecter.console_get_file(target, console_id) >>> f = open(f_existing.name)
-
add
(has_to_pass, functor, arguments, origin=None)¶ Add a function to the list of things to poll/evaluate
These functions shall either poll, evaluate or both:
- poll data and store it in the dictionary buffers or anywhere else where it can be accessed later; use a unique key into the dictionary.
- evaluate some previously polled data or whichever system condition and raise an exception to indicate what happened (from the set tcfl.tc.pass_e, tcfl.tc.blocked_e, tcfl.tc.error_e, tcfl.tc.failed_e, tcfl.tc.skip_e).
Eval functions can check their own timeouts and raise an exception to signal it (normally
tcfl.tc.error_e
). It is also possible that nothing of interest to this evaluation function happened, and thus it will evaluate nothing.
Parameters: has_to_pass (bool) – In order to consider the whole expect sequence a pass, this functor has to declare its evaluation passes by returning anything but None or by raising tcfl.tc.pass_e
.Raises: to stop the run()
loop, raisetcfl.tc.pass_e
,tcfl.tc.blocked_e
,tcfl.tc.error_e
ortcfl.tc.skip_e
Returns: ignored
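A hedged sketch of the add()/run() pattern: register an evaluation function that inspects data a poller stored in buffers and raises tcfl.tc.pass_e when the expectation is met. The buffer key and the assumption that the functor is called with the expecter object followed by the registered arguments (as the pollers below suggest) are illustrative only, not part of the API contract.

import tcfl.tc

def check_for_banner(expecter, target, banner):
    # evaluate previously polled data; do nothing if it is not there yet
    data = expecter.buffers.get("console-output", "")
    if banner in data:
        raise tcfl.tc.pass_e("banner %s found" % banner)

# has_to_pass = True: run() only succeeds once this raises pass_e
# expecter.add(True, check_for_banner, (target, "Booting Zephyr OS"))
# expecter.run()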
-
remove
(functor, arguments)¶
-
log
(msg, attachments=None)¶
-
run
(timeout=None)¶ Run the expectation loop on the testcase until all expectations pass or the timeout is exceeded.
Parameters: timeout (int) – (optional) maximum time to wait for all expectations to be met (defaults to tcfl.expecter.expecter_c.timeout
)
-
power_on_post
(target=None)¶ Reinitialize things that need flushing for a new power on
-
-
tcfl.expecter.
console_mk_code
(target, console)¶
-
tcfl.expecter.
console_mk_uid
(target, what, console, _timeout, result)¶
-
tcfl.expecter.
console_rx_poller
(expecter, target, console=None)¶ Poll a console
-
tcfl.expecter.
console_rx_flush
(expecter, target, console=None, truncate=False)¶ Reset all the console read markers to 0
When we (for example) power cycle, we start capturing from zero, so we need to reset all the buffers of what we read.
8.3.2. Client API for accessing ttbd’s REST API¶
This API provides a way to access the REST API exposed by the ttbd daemon; it is divided into two main blocks:
rest_target_broker
: abstracts a remote ttbd server and provides methods to run stuff on targets and connect/disconnect things on/from targets.

rest_*() methods that take a namespace of arguments, look up the target object, map it to a remote server, execute the method and then print the result to the console.
This breakup is a wee bit arbitrary; it could use some cleanup.
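A hedged usage sketch of the first block, assuming rest_tb_target_list() returns a list of target dictionaries; the URL and state path are placeholders:

import os
import tcfl.ttb_client

rtb = tcfl.ttb_client.rest_target_broker(
    os.path.expanduser("~/.tcf"),          # where cookies/state are kept
    "https://ttbd.example.com:5000",
    ignore_ssl = True)
for rt in rtb.rest_tb_target_list(all_targets = True):
    print(rt.get('id'), rt.get('type'))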
-
tcfl.ttb_client.
import_mp_pathos
()¶
-
tcfl.ttb_client.
import_mp_std
()¶
-
tcfl.ttb_client.
tls_var
(name, factory, *args, **kwargs)¶
-
class
tcfl.ttb_client.
rest_target_broker
(state_path, url, ignore_ssl=False, aka=None, ca_path=None)¶ Create a proxy for a target broker, optionally loading state (like cookies) previously saved.
Parameters: - state_path (str) – Path prefix where to load state from
- url (str) – URL for which we are loading state
- ignore_ssl (bool) – Ignore server’s SSL certificate validation (use for self-signed certs).
- aka (str) – Short name for this server; defaults to the hostname (sans domain) of the URL.
- ca_path (str) – Path to SSL certificate or chain-of-trust bundle
Returns: True if information was loaded for the URL, False otherwise
-
API_VERSION
= 1¶
-
API_PREFIX
= '/ttb-v1/'¶
-
classmethod
rts_cache_flush
()¶
-
tb_state_save
(filepath)¶ Save cookies in path so they can be loaded when the object is created.
Parameters: path (str) – Filename where to save to
-
send_request
(method, url, data=None, files=None, stream=False, raw=False, timeout=480)¶ Send a request to the server using url and data, save the cookies generated from the request, check the connection for issues and raise an exception or return the response object.
Parameters: Returns: response object
Return type:
-
login
(email, password)¶
-
logout
()¶
-
validate_session
(validate=False)¶
-
rest_tb_target_list
(all_targets=False, target_id=None)¶ List targets in this server
Parameters:
-
rest_tb_target_update
(target_id)¶ Update information about a target
Parameters: target_id (str) – ID of the target to operate on Returns: updated target tags
-
rest_tb_target_acquire
(rt, ticket='')¶
-
rest_tb_target_active
(rt, ticket='')¶
-
rest_tb_target_enable
(rt, ticket='')¶
-
rest_tb_target_disable
(rt, ticket='')¶
-
rest_tb_thing_plug
(rt, thing, ticket='')¶
-
rest_tb_thing_list
(rt, ticket='')¶
-
rest_tb_thing_unplug
(rt, thing, ticket='')¶
-
rest_tb_target_release
(rt, ticket='', force=False)¶
-
rest_tb_property_set
(rt, prop, value, ticket='')¶
-
rest_tb_property_get
(rt, prop, ticket='')¶
-
rest_tb_target_ip_tunnel_add
(rt, ip_addr, port, proto, ticket='')¶
-
rest_tb_target_ip_tunnel_remove
(rt, ip_addr, port, proto, ticket='')¶
-
rest_tb_target_ip_tunnel_list
(rt, ticket='')¶
-
rest_tb_target_power_on
(rt, ticket='')¶
-
rest_tb_target_power_off
(rt, ticket='')¶
-
rest_tb_target_reset
(rt, ticket='')¶
-
rest_tb_target_power_cycle
(rt, ticket='', wait=None)¶
-
rest_tb_target_power_get
(rt)¶
-
rest_tb_target_images_set
(rt, images, ticket='')¶ Write/configure images to the targets (depending on the target)
Parameters: images (dict) – Dictionary of image types and filenames, like in ttbl.test_target_images_mixin.images_set()
.Raises: Exception in case of errors
-
rest_tb_file_upload
(remote_filename, local_filename)¶
-
rest_tb_file_dnload
(remote_filename, local_filename)¶ Download a remote file from the broker to a local file
Parameters: remote_filename (str) – filename in the broker’s user storage area Params str local_filename: local filename where to download it
-
rest_tb_file_dnload_to_fd
(fd, remote_filename)¶ Download a remote file from the broker to a local file
Parameters: remote_filename (str) – filename in the broker’s user storage area Params int fd: file descriptor where to write the data to
-
rest_tb_file_delete
(remote_filename)¶
-
rest_tb_file_list
()¶ Return a dictionary of files names available to the user in the broker and their sha256 hash.
-
rest_tb_target_console_read
(rt, console, offset, ticket='')¶
-
rest_tb_target_console_size
(rt, console, ticket='')¶
-
rest_tb_target_console_read_to_fd
(fd, rt, console, offset, max_size=0, ticket='')¶
-
rest_tb_target_console_write
(rt, console, data, ticket='')¶
-
rest_tb_target_debug_info
(rt, ticket='')¶
-
rest_tb_target_debug_start
(rt, ticket='')¶
-
rest_tb_target_debug_stop
(rt, ticket='')¶
-
rest_tb_target_debug_halt
(rt, ticket='')¶
-
rest_tb_target_debug_reset
(rt, ticket='')¶
-
rest_tb_target_debug_reset_halt
(rt, ticket='')¶
-
rest_tb_target_debug_resume
(rt, ticket='')¶
-
rest_tb_target_debug_openocd
(rt, command, ticket='')¶
-
tcfl.ttb_client.
rest_init
(path, url, ignore_ssl=False, aka=None)¶ Initialize access to a remote target broker.
Parameters: Returns: True if information was loaded for the URL, False otherwise
-
tcfl.ttb_client.
rest_shutdown
(path)¶ Shutdown REST API, saving state in path.
Parameters: path (str) – Path to where to save state information
-
tcfl.ttb_client.
rest_login
(args)¶ Login into remote servers.
Parameters: args (argparse.Namespace) – login arguments like -q (quiet) or userid. Returns: True if it can be logged into at least 1 remote server.
-
tcfl.ttb_client.
rest_logout
(args)¶
-
tcfl.ttb_client.
rest_target_print
(rt, verbosity=0)¶ Print information about a REST target taking into account the verbosity level from the logging module
Parameters: rt (dict) – object describing the REST target to print
-
tcfl.ttb_client.
rest_target_list_table
(args, spec)¶ List all the targets in a table format, appending * if powered up, ! if owned.
-
tcfl.ttb_client.
rest_target_list
(args)¶
-
tcfl.ttb_client.
rest_target_find_all
(all_targets=False)¶ Return descriptors for all the known remote targets
Parameters: all_targets (bool) – Include or not disabled targets Returns: list of remote target descriptors (each being a dictionary).
-
tcfl.ttb_client.
rest_target_acquire
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Returns: dictionary of tags Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_enable
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_disable
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_property_set
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_property_get
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_release
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_power_on
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_power_off
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_reset
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_debug_halt
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_debug_reset
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_debug_reset_halt
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_debug_resume
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_debug_openocd
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_power_cycle
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_images_set
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_tb_target_images_upload
(rtb, _images)¶ Upload images from a list of images
Parameters: - rtb (dict) – Remote Target Broker
- _images –
list of images, which can be specified as:
- string with
"IMAGE1:FILE1 IMAGE2:FILE2..."
- list or set of strings
["IMAGE1:FILE1", "IMAGE2:FILE2", ...]
- list or set of tuples
[("IMAGE1", "FILE1"), ("IMAGE2", "FILE2"), ...]
- string with
Returns: list of remote images (that can be fed straight to
tcfl.ttb_client.rest_target_broker.rest_tb_target_images_set()
)
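For reference, the three equivalent ways of specifying the image list described above (image types and file names are placeholders):

images_as_string = "IMAGE1:FILE1 IMAGE2:FILE2"
images_as_list = ["IMAGE1:FILE1", "IMAGE2:FILE2"]
images_as_tuples = [("IMAGE1", "FILE1"), ("IMAGE2", "FILE2")]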
-
tcfl.ttb_client.
rest_tb_target_images_upload_set
(rtb, rt, _images, ticket='')¶ Parameters: - rtb – Remote Target Broker
- rt – Remote Target descriptor
- _images –
list of images, which can be specified as:
- string with
"IMAGE1:FILE1 IMAGE2:FILE2..."
- list or set of strings
["IMAGE1:FILE1", "IMAGE2:FILE2", ...]
- list or set of tuples
[("IMAGE1", "FILE1"), ("IMAGE2", "FILE2"), ...]
- string with
-
tcfl.ttb_client.
rest_target_images_upload_set
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_power_get
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_broker_file_upload
(args)¶ FIXME
-
tcfl.ttb_client.
rest_broker_file_dnload
(args)¶ Download a file from a target broker
-
tcfl.ttb_client.
rest_broker_file_delete
(args)¶ FIXME
-
tcfl.ttb_client.
rest_broker_file_list
(args)¶ Print a list of files names available to the user in the broker and their sha256 hash.
-
tcfl.ttb_client.
rest_target_console_read
(args)¶
-
tcfl.ttb_client.
rest_target_console_write
(args)¶
-
tcfl.ttb_client.
rest_target_debug_info
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_debug_start
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_debug_stop
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_thing_plug
(args)¶
-
tcfl.ttb_client.
rest_target_thing_unplug
(args)¶
-
tcfl.ttb_client.
rest_target_thing_list
(args)¶
-
tcfl.util.
healthcheck_power
(rtb, rt)¶
-
tcfl.util.
healthcheck
(args)¶
-
tcfl.util.
argp_setup
(arg_subparsers)¶
8.3.3. Zephyr’s SanityCheck testcase.ini driver for testcase integration¶
This implements a driver to run Zephyr’s Sanity Check testcases
(described with a testcase.ini file) without having to implement any
new descriptions. Details are explained in
tc_zephyr_sanity_c
.
-
exception
tcfl.tc_zephyr_sanity.
ConfigurationError
¶
-
class
tcfl.tc_zephyr_sanity.
SanityConfigParser
(filename)¶ Class to read architecture and test case .ini files with semantic checking
This is only used for the old .ini support
Instantiate a new SanityConfigParser object
Parameters: filename (str) – Source .ini file to read -
sections
()¶ Get the set of sections within the .ini file
Returns: a list of string section names
-
get_section
(section, valid_keys)¶ Get a dictionary representing the keys/values within a section
Parameters: - section (str) – The section in the .ini file to retrieve data from
- valid_keys (dict) –
A dictionary representing the intended semantics for this section. Each key in this dictionary is a key that could be specified, if a key is given in the .ini file which isn’t in here, it will generate an error. Each value in this dictionary is another dictionary containing metadata:
- ”default”: default value if not given
- ”type”: data type to convert the text value to. Simple types supported are “str”, “float”, “int” and “bool”, which will get converted to the respective Python data types. “set” and “list” may also be specified, which will split the value by whitespace (but keep the elements as strings). Finally, “list:<type>” and “set:<type>” may be given, which will perform a type conversion after splitting the value up.
- ”required”: if true, raise an error if not defined. If false and “default” isn’t specified, a type conversion will be done on an empty string.
Returns: A dictionary containing the section key-value pairs with type conversion and default values filled in per valid_keys
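A hedged sketch of the valid_keys metadata format described above; the file name, section keys and defaults are purely illustrative:

import tcfl.tc_zephyr_sanity

parser = tcfl.tc_zephyr_sanity.SanityConfigParser("testcase.ini")
valid_keys = {
    "tags":       { "type": "set",  "default": set(), "required": False },
    "timeout":    { "type": "int",  "default": 60,    "required": False },
    "build_only": { "type": "bool", "default": False, "required": False },
}
for section in parser.sections():
    data = parser.get_section(section, valid_keys)
    # data now holds type-converted values with defaults filled in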
-
-
class
tcfl.tc_zephyr_sanity.
harness_c
¶ A test harness for a Zephyr test
In the Zephyr SanityCheck environment, a harness is a set of steps to verify that a testcase did the right thing.
The default harness just verifies if PROJECT EXECUTION FAILED or PROJECT EXECUTION SUCCESSFUL was printed (which is done in
tc_zephyr_sanity_c.eval_50()
).However, if a harness is specified in the testcase/sample YAML with:
harness: HARNESSNAME harness_config: field1: value1 field2: value2 ...
then tc_zephyr_sanity_c._dict_init() will create a harness object of class _harness_HARNESSNAME_c and set it to
tc_zephyr_sanity_c.harness
. Then, during the evaluation phase, we’ll run it intc_zephyr_sanity_c.eval_50()
.The harness object has
evaluate()
which is called to implement the harness on the testcase and target it is running on.For each type of harness, there is a class for it implementing the details of it.
-
evaluate
(_testcase)¶
-
-
class
tcfl.tc_zephyr_sanity.
tc_zephyr_subsanity_c
(name, tc_file_path, origin, zephyr_name, parent, attachments=None)¶ Subtestcase of a Zephyr Sanity Check
A Zephyr Sanity Check testcase might be composed of one or more subtestcases.
We run them all in a single shot using
tc_zephyr_sanity_c
and when done, we parse the output (tc_zephyr_sanity_c._subtestcases_grok) and, for each subtestcase, we create one of these subtestcase objects and queue it to be executed in the same target where the main testcase was run. This is only a construct to ensure they are reported as separate testcases; we already know if they passed, errored or failed, so all we do is report as such.
-
configure_50
()¶
-
eval_50
()¶
-
static
clean
()¶
-
class_result
= 0 (0 0 0 0 0)¶
-
-
class
tcfl.tc_zephyr_sanity.
tc_zephyr_sanity_c
(name, tc_file_path, origin, zephyr_name, subcases)¶ Test case driver specific to Zephyr project testcases
This will generate test actions based on Zephyr project testcase.ini files.
See Zephyr’s sanitycheck --help for details on the format of these testcase configuration files. A single testcase.ini may specify one or more test cases.
This rides on top of
tcfl.tc.tc_c
driver; tags are translated, whitelist/excludes are translated to target selection language and a single target is declared (for cases that are not unit tests).

is_testcase()
looks fortestcase.ini
files, parses up usingSanityConfigParser
to load it up into memory and calls_dict_init()
to set values and generate the target (when needed) and setup the App Zephyr builder.This is how we map the different testcase.ini sections/concepts to
tcfl.tc.tc_c
data:extra_args = VALUES
: handled asapp_zephyr_options
, passed to the Zephyr App Builder.extra_configs = LIST
: list of extra configuration settingstestcase source is assumed to be in the same directory as the
testcase.ini
file. Passed to the Zephyr App Builder withapp_zephyr
.timeout = VALUE
: use to set the timeout in the testcase expect loop.tags = TAGS
: added to the tags list, with an originskip
: skipped right away with antcfl.tc.skip_e
exceptionslow
: converted to a tag
: added asself.build_only
(arch,platform)_(whitelist,exclude)
: what testcase.ini calls arch is a bsp in TCF parlance and platform maps to the zephyr_board parameter the Zephyr test targets export in their BSP specific tags. Thus, our spec becomes something like:

( bsp == "ARCH1" or bsp == "ARCH2" ) and not ( bsp == "ARCH3" or bsp == "ARCH4" )
arch_whitelist = ARCH1 ARCH2
mapped to@targets += bsp:^(ARCH1|ARCH2)$
arch_exclude = ARCH1 ARCH2
mapped to@targets += bsp:(?!^(ARCH1|ARCH2)$)
platform_whitelist = PLAT1 PLAT2
mapped to@targets += board:^(PLAT1|PLAT2)$
platform_exclude = PLAT1 PLAT2
mapped to@targets += board:(?!^(PLAT1|PLAT2)$)
config_whitelist
andfilter
: filled into the args stored in the testcase, which then get passed as part of the kws[config_whitelist] dictionary. The build process then calls the action_eval_skip() method to test if the TC has to be skipped after creating the base config.
-
harness
= None¶ Harness to run
-
subtestcases
= None¶ Subtestcases identified as part of this (possibly container) testcase.
-
unit_test_output
= None¶ Filename of the output of the unit test case; when we run a unit testcase, the output does not come from the console system, as it runs local, but from a local file.
-
configure_00
()¶
-
patch_tags
= {}¶ Dictionary of tags that we want to add to given test cases; the key is the name of the testcase; if the testcase name ends with the same value as in here, then the given list of boolean tags will be patched as True; e.g.:
{ "dir1/subdir2/testcase.ini#testname" : [ 'ignore_faults', 'slow' ] }
usually this will be setup in a
{/etc/tc,~/.tcf.tcf}/conf_zephy.py
configuration file as:tcfl.tc_zephyr_sanity.tc_zephyr_sanity_c.patch_tags = { "tests/legacy/kernel/test_static_idt/testcase.ini#test": [ 'ignore_faults' ], ... }
-
patch_hw_requires
= {}¶ Dictionary of hw_requires values that we want to add to given test cases; the key is the name of the testcase – if the testcase name ends with the same value as in here, then the given list of hw_requires will be appended as requirements to the target; eg:
{ "dir1/subdir2/testcase.ini#testname" : [ 'fixture_1' ], "dir1/subdir2/testcase2.ini#testname" : [ 'fixture_2' ] }
usually this will be setup in a
{/etc/tc,~/.tcf.tcf}/conf_zephy.py
configuration file as:tcfl.tc_zephyr_sanity.tc_zephyr_sanity_c.patch_hw_requires = { "dir1/subdir2/testcase.ini#testname" : [ 'fixture_1' ], "dir1/subdir2/testcase2.ini#testname" : [ 'fixture_2' ], ... }
-
classmethod
schema_get_file
(path)¶
-
classmethod
schema_get
(filename)¶
-
build_00_tc_zephyr
()¶
-
build_unit_test
()¶ Build a Zephyr Unit Test in the local machine
-
eval_50
()¶
-
classmethod
data_harvest
(domain, name, regex, main_trigger_regex=None, trigger_regex=None, origin=None)¶ Configure a data harvester
After a Zephyr sanity check is executed successfully, the output of each target is examined by the data harvesting engine to extract data to store in the database with
tcfl.tc.tc_c.report_data()
The harvester is a very simple state machine controlled by up to three regular expressions whose objective is to extract a value that will be reported to the database as a domain/name/value triad.
A domain groups together multiple name/value pairs that are related (for example, latency measurements).
Each line of output will be matched by each of the entries registered with this function.
All arguments (except for origin) will expand ‘%(FIELD)s’ with values taken from the target’s keywords (
tcfl.tc.target_c.kws
).Parameters: - domain (str) – to which domain this measurement applies (eg: “Latency Benchmark %(type)s”); It is recommended this is used to aggregate values to different types of targets.
- name (str) – name of the value (eg: “context switch (microseconds)”)
- regex (str) – regular expression to match against each line of the target’s output. A Python regex ‘(?P<value>SOMETHING)` has to be used to point to the value that has to be extracted (eg: “context switch time (?P<value>[0-9]+) usec”).
- main_trigger_regex (str) – (optional) only look for regex if this regex has already been found. This trigger is then considered active for the rest of the output. This is used to enable searching only once a banner in the output indicates that the measurements are about to follow (e.g.: “Latency Benchmark starts here”).
- trigger_regex (str) –
(optional) only look for regex if this regex has already been found. However, once regex is found, then this trigger is deactivated. This is useful when the measurements are reported in two lines:
measuring context switch like this measurement is X usecs
and thus the regex could catch multiple lines because another measurement is:
measuring context switch like that measurement is X usecs
the regex measurement is (?P<value>[0-9]) usecs would catch both, but by giving it a trigger_regex of measuring context switch like this, then it will catch only the first, as once it is found, the trigger is removed.
- origin (str) – (optional) where these values are coming from; if not specified, it will be the call site for the function.
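Putting the parameters together, a registration call could look like this (the values are the examples used in the parameter descriptions above, typically placed in a TCF configuration file):

import tcfl.tc_zephyr_sanity

tcfl.tc_zephyr_sanity.tc_zephyr_sanity_c.data_harvest(
    "Latency Benchmark %(type)s",
    "context switch (microseconds)",
    r"context switch time (?P<value>[0-9]+) usec",
    main_trigger_regex = "Latency Benchmark starts here")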
-
subtc_results_valid
= ('PASS', 'FAIL', 'SKIP')¶
-
subtc_regex
= <_sre.SRE_Pattern object>¶
-
teardown_subtestcases
()¶ Given the output of the testcases, parse subtestcases for each target
-
teardown
()¶
-
clean
()¶
-
filename_regex
= <_sre.SRE_Pattern object>¶
-
filename_yaml_regex
= <_sre.SRE_Pattern object>¶
-
classmethod
is_testcase
(path, _from_path)¶
-
class_result
= 0 (0 0 0 0 0)¶
Driver to run Clear Linux BBT test suite
The main TCF testcase scanner walks files looking for automation
scripts / testcase scripts and will call
tc_clear_bbt_c.is_testcase()
for each *.t file in a directory. The driver will generate one testcase per directory, which
will execute all the .t
in there and then execute all the .t
in the any-bundle subdirectory.
The testcases created are instances of tc_clear_bbt_c
; this
class will allocate one interconnect/network and one
*pos_capable* target. In said target it will
install Clear OS (from an image server in the interconnect) during the
deploy phase.
Once the installation is done, it will install any required bundles
and execute all the .t
files in the directory followed by all the
.t
in the any-bundle top level directory.
The output of each .t
execution is parsed with
tap_parse_output()
to generate for each a subcase (an instance
of subcases
) which will report the
individual result of that subcase execution.
Setup steps
To improve the deployment of the BBT tree, a copy can be kept in the server’s rsync image area for initial seeding; to setup, execute in the server:
$ mkdir -p /home/ttbd/images/misc
$ git clone URL/bbt.git /home/ttbd/images/misc/bbt.git
-
tcfl.tc_clear_bbt.
tap_parse_output
(output)¶ Parse TAP into a dictionary
Parameters: output (str) – TAP formatted output Returns: dictionary keyed by test subject containing a dictionary of key/values: - lines: list of line numbers in the output where data was found - plan_count: test case number according to the TAP plan - result: result of the testcase (ok or not ok) - directive: if any directive was found, the text for it - output: output specific to this testcase
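A hedged sketch of consuming the returned dictionary, using the field names listed above; the capture file name is a placeholder:

import tcfl.tc_clear_bbt

with open("bats-output.log") as f:      # placeholder: captured TAP output
    tap_text = f.read()
for subject, data in tcfl.tc_clear_bbt.tap_parse_output(tap_text).items():
    print("%s: %s (TAP plan #%s)"
          % (subject, data['result'], data['plan_count']))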
-
class
tcfl.tc_clear_bbt.
tc_taps_subcase_c_base
(name, tc_file_path, origin, parent)¶ Report each subcase result of running a list of TAP testcases
Given an entry of data from the output of
tap_parse_output()
, create a fake testcase that is just used to report results of the subcase.This is used by
tc_clear_bbt_c
to report each TAP subcase individually for reporting control.-
update
(result, data)¶
-
configure_50
()¶
-
eval_50
()¶
-
static
clean
()¶
-
class_result
= 0 (0 0 0 0 0)¶
-
-
tcfl.tc_clear_bbt.
ignore_ts
= []¶ Ignore t files
List of individual .t files to ignore, since we can’t filter those on the command line; this can be done in a config file:
>>> tcfl.tc_clear_bbt.ignore_ts = [ >>> 'bundles/XYZ/somefile.t', >>> 'bundles/ABC/someother.t', >>> '.*/any#somefile.sometestcase", >>> ]
or from the command line, by setting the BBT_IGNORE_TS environment variable:
$ export BBT_IGNORE_TS="bundles/XYZ/somefile.t #bundles/ABC/someother.t .*/any#somefile.sometestcase" $ tcf run bbt.git/bundles/XYZ bbt.git/bundles/ABC
Note all entries will be compiled as Python regular expressions that have to match from the beginning. A whole .t file can be excluded with:
>>> 'bundles/XYZ/somefile.t'
where as a particular testcase in said file:
>>> 'bundles/XYZ/somefile.subcasename'
note those subcases will still be executed (there is no way to tell the bats tool to ignore them), but their results will be ignored.
-
tcfl.tc_clear_bbt.
bundle_run_pre_sh
= {'bat-perl-basic-perl-use.t': ['export PERL_CANARY_STABILITY_NOPROMPT=1']}¶ Commands to execute before running bats on each .t file (key by .t file name or bundle-under-test name).
Note these will be executed in the bundle directory and templated with
STR % testcase.kws
.
-
tcfl.tc_clear_bbt.
bundle_t_map
= {'bat-dev-tooling.t.autospec_nano': 'build-package.t.autospec_nano'}¶ Sometimes this works in conjunction with bundle_path_map above, when a .t file is actually calling another one (maybe in another directory; then you need an entry in bundle_path_map) to rename the directory to match the entry of this one.
-
class
tcfl.tc_clear_bbt.
tc_clear_bbt_c
(path, t_file_path)¶ Driver to load Clear Linux BBT test cases
A BBT test case is specified in bats format (https://github.com/sstephenson/bats) in a
FILENAME.t
This driver gets called by the core testcase scanning system through the entry point
is_testcase()
in quite a simplistic way: if it detects the file is FILENAME.t, it decides it is valid and creates a class instance off the file path.

The class instance serves as a testcase script that will:
in the deployment phase (deploy method):
Request a Clear Linux image to be installed in the target system using the provisioning OS.
Deploy the BBT tree to the target’s
/opt/bbt.git
so testcases have all the dependencies they need to run (at this point we assume the git tree is available). It assumes the BBT tree has a specific layout:
DIR/SUBDIR/SUBSUBDIR[/...]/NAME/*.t any-bundles/*.t
on the start phase:
- power cycle the target machine to boot and login into Clear
- install the software-testing bundle and any others specified in an optional ‘requirements’ file. Maybe use a mirror for swupd.
on the evaluation phase:
- run bats on the
FILENAME.t
which we have copied to/opt/bbt.git
. Parse the output into subcases to report their results individually using tc_taps_subcase_c_base.
-
capture_boot_video_source
= 'screen_stream'¶ Shall we capture a boot video if possible?
-
configure_00_set_relpath_set
(target)¶
-
configure_10
()¶
-
image
= 'clear'¶ Specification of image to install
default to whatever is configured on the environment (if any) for quick setup; otherwise it can be configured in a TCF configuration file by adding:
>>> tcfl.tc_clear_bbt.tc_clear_bbt_c.image = "clear::24800"
-
swupd_url
= None¶ swupd mirror to use
>>> tcfl.tc_clear_bbt.tc_clear_bbt_c.swupd_url = \ >>> "http://someupdateserver.com/update/"
Note this can use keywords exported by the interconnect, eg:
>>> tcfl.tc_clear_bbt.tc_clear_bbt_c.swupd_url = \ >>> "http://%(MYFIELD)s/update/"
where:
$ tcf list -vv nwa | grep MYFIELD MYFIELD: someupdateserver.com
-
image_tree
= None¶
-
swupd_debug
= False¶ Do we add debug output to swupd?
-
mapping
= {'not ok': 1 (0 0 1 0 0), 'ok': 1 (1 0 0 0 0), 'skip': 1 (0 0 0 0 1), 'todo': 1 (0 1 0 0 0)}¶ Mapping from TAPS output to TCF conditions
This can be adjusted globally for all testcases or per testcase:
>>> tcfl.tc_clear_bbt.tc_clear_bbt_c.mapping['skip'] \ >>> = tcfl.tc.result_c(1, 0, 0, 0, 0) # pass
or for an specific testcase:
>>> tcobject.mapping['skip'] = 'BLCK'
-
boot_mgr_disable
= False¶ Disable efibootmgr and clr-boot-manager
-
fix_time
= None¶ if environ SWUPD_FIX_TIME is defined, set the target’s time to the client’s time
-
deploy
(ic, target)¶
-
start
(ic, target)¶
-
eval
(ic, target)¶
-
teardown_50
()¶
-
static
clean
()¶
-
ignore_stress
= True¶ (bool) ignores stress testcases
-
paths
= {}¶
-
filename_regex
= <_sre.SRE_Pattern object>¶
-
classmethod
is_testcase
(path, _from_path)¶
-
class_result
= 0 (0 0 0 0 0)¶
8.4. Target metadata¶
Each target has a list of metadata associated with it, some of it common to all targets, some of it driver or target type specific; you can get it on the command line with tcf list -vvv TARGETNAME, or in a test script in the dictionary tcfl.tc.target_c.rt (for Remote Target), or more generally in the keyword dictionary tcfl.tc.target_c.kws.
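As a hedged sketch (assuming the common tcfl.tc.target() declaration decorator), a test script can read these dictionaries directly; the field names follow section 8.4.1:

import tcfl.tc

@tcfl.tc.target()
class _test(tcfl.tc.tc_c):
    def eval(self, target):
        # rt/kws expose the target's metadata, e.g. its id and type
        self.report_info("target %s is a %s"
                         % (target.rt['id'], target.kws['type']))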
Metadata is specified:
in the server’s read only configuration by setting tags to the target during creation of the
ttbl.test_target
object, by passing a dictionary tottbl.config.target_add()
>>> ttbl.config.target_add( >>> ttbl.tt.tt_serial(....), >>> tags = { >>> 'linux': True, >>> ... >>> 'pos_capable': True, >>> 'pos_boot_interconnect': "nwb", >>> 'pos_boot_dev': "sda", >>> 'pos_partsizes': "1:20:50:15", >>> 'linux_serial_console_default': 'ttyUSB0' >>> }, >>> target_type = "Intel NUC5i5425OU")
or by calling
ttbl.test_target.tags_update()
on an already created target>>> ttbl.config.targets['nwb'].tags_update({ >>> 'mac_addr': '00:50:b6:27:4b:77' >>> })
during runtime, from the client with tcf property-set:
$ tcf property-set TARGETNAME PROPERTY VALUE
or calling
tcfl.tc.target_c.property_set()
:>>> target.property_set("PROPERTY", "VALUE")
8.4.1. Common metadata¶
bios_boot_time (int): approx time in seconds the system takes to boot before it can be half useful (like BIOS can interact, etc).
Considered as zero if missing.
id (str): name of the target
fullid (str): Full name of the target that includes the server’s short name (AKA); SERVERAKA/ID.
TARGETNAME (bool) True
bsp_models (list of str): ways in which the BSPs in a target (described in the bsps dictionary) can be used.
If a target has more than one BSP, how can they be combined? e.g:
- BSP1
- BSP2
- BSP1+2
- BSP1+3
would describe that in a target with three BSPs, 1 and 2 can be used individually or the target can operate using 1+2 or 1+3 together (but not 3+2 or 1+2+3).
bsps (dictionary of dictionaries keyed by BSP name): describes each BSP the target contains
A target that is capable of computing (eg: an MCU board vs let’s say, a toaster) would describe a BSP; each BSP dictionary contains the following keys:
- cmdline (str): [QEMU driver] command line used to boot a QEMU target
- zephyr_board (str): [Zephyr capable targets] identifier to use for building Zephyr OS applications for this board as the BOARD parameter to the Zephyr build process.
- zephyr_kernelname (str): [Zephyr capable targets] name of the file to use as Zephyr image resulting from the Zephyr OS build process.
- sketch_fqbn (str): [Sketch capable targets] identifier to use for building Arduino applications for this board.
- sketch_kernelname (str): [Sketch capable targets] name of the file to use as image resulting from the Sketch build process.
disabled (bool): True if the target is disabled, False otherwise.
fixture_XYZ (bool): when present and True, the target exposes feature (or a test fixture) named XYZ
interconnects (dictionary of dictionaries keyed by interconnect name):
When a target belongs to an interconnect, there will be an entry here naming the interconnect. Note the interconnect might be in another server, not necessarily in the same server as the target is.
Each interconnect might have the following (or other fields) with address assignments, etc:
- bt_addr (str): Bluetooth Address (48bits HH:HH:HH:HH:HH:HH, where HH are two hex digits) that will be assigned to this target in this interconnect (when describing a Bluetooth interconnect)
- mac_addr (str): Ethernet Address (48bits HH:HH:HH:HH:HH:HH, where HH are two hex digits) that will be assigned to this target in this interconnect (when describing ethernet or similar interconnects)
- ipv4_addr (str): IPv4 Address (32bits, DDD.DDD.DDD.DDD, where DDD are decimal integers 0-255) that will be assigned to this target in this interconnect
- ipv4_prefix_len (int): length in bits of the network portion of the IPv4 address
- ipv6_addr (str): IPv6 Address (128bits, standard ipv6 colon format) that will be assigned to this target in this interconnect
- ipv6_prefix_len (int): length in bits of the network portion of the IPv6 address
idle_poweroff (int): seconds the target will be idle before the system will automatically power it off (if 0, it will never be powered off).
interfaces (list of str): list of interface names
interfaces_names (str): list of interface names as a single string separated by spaces
mutex (str): who is the current owner of the target
owner (str): who is the current owner of the target
path (str): path where the target state is maintained
things (list of str): list of names of targets that can be plugged/unplugged to/from this target.
type (str): type of the target
8.4.2. Interface specific metadata¶
- consoles (list of str): [console interface] names of serial consoles supported by the target
- debug-BSP-gdb-tcp-port (int): [debug interface] TCF port on which to reach a GDB remote stub for the given BSP (depending on target capability).
- images-TYPE-QUALIFIER (str): [imaging interface] file name of the image that was flashed for a given type and qualifier (e.g. images-kernel-arc with a value of /var/cache/ttbd-production/USERNAME/somefile.elf was an image flashed as a kernel for the ARC architecture).
- openocd.path (str): [imaging interface] path of the OpenOCD implementation being used
- openocd.pid (unsigned): [imaging interface] PID of the OpenOCD process driving this target
- openocd.port (unsigned): [imaging interface] Base TCP port where we can connect to the OpenOCD process driving this target
- powered (bool): [power control interface] True if the target is powered up, False otherwise.
- power_state (bool): [power control interface] ‘on’ if the target is powered up, ‘off’ otherwise. (FIXME: this has to be unified with powered)
8.4.3. Driver / target type specific metadata¶
hard_recover_rest_time (unsigned): [ttbl.tt.tt_flasher driver, OpenOCD targets] time the target has to be kept off when power-cycling to recover after a failed reset, reset halt or reset after power-cycle when flashing.
When the flasher (usually OpenOCD) cannot make the target comply, the driver will power cycle it to try to get it to a well known state.
linux (bool): True if this is a target that runs linux
quark_se_stub (bool): FIXME: DEPRECATED
qemu_bios_image (str): [QEMU driver] file name used for the target’s BIOS (depending on configuration)
qemu_ro_image (str): [QEMU driver] file name used for the target’s read-only image (depending on configuration)
qemu-image-kernel-ARCH (str): [QEMU driver] file used as a kernel to boot a QEMU target (depending on configuration)
qemu-cmdline-ARCH (str): [QEMU driver] command line used to launch the QEMU process implementing the target (depending on configuration)
ifname (str): [QEMU driver / SLIP] interface created to hookup the SLIP networking tun/tap into the vlan to connect to external networks or other VMs [FIXME: make internal]
slow_flash_factor (int): [ttbl.tt.tt_flasher driver, OpenOCD targets] amount to scale up the timeout to flash into an OpenOCD capable target. Some targets have a slower flashing interface and need more time.
tunslip-ARCH-pid (int): [QEMU driver] PID of the process implementing tunslip for a QEMU target.
ram_megs (int): Megs of RAM supported by the target
ssh_client (bool): True if the target supports SSH
8.4.4. Provisioning OS specific metadata¶
linux_serial_console_default: the name of the device that is the system’s serial console and is connected to TCF’s first console.
If DEVICE (eg: ttyS0) is given, Linux will be booted with the argument console=DEVICE,115200.
linux_options_append: string describing options to append to a Linux kernel boot command line.
pos_capable: dictionary describing a target as able to boot into a Provisioning OS to perform target provisioning.
Keys are the same as described in
tcfl.pos.capability_fns
(e.g: boot_to_pos, boot_config, etc)Values are only one of each of each second level keys in the
tcfl.pos.capability_fns
dictionary (e.g.: pxe, uefi…). This indicates to the system which methodologies have to be used for the target to get into Provisioning OS mode, configure the bootloader, etc.
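As a hedged illustration using only the capability and value names mentioned above (boot_to_pos/pxe, boot_config/uefi); a real target will declare whichever capabilities tcfl.pos.capability_fns defines:

>>> ttbl.config.targets['TARGETNAME'].tags_update({
>>>     'pos_capable': {
>>>         'boot_to_pos': 'pxe',
>>>         'boot_config': 'uefi',
>>>     },
>>> })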
pos_http_url_prefix: string describing the prefix to send for loading a Provisioning OS kernel/initramfs. See here.
Python’s
%(NAME)s
codes can be used to substitute values from the target’s tags or the interconnect’s.Example:
pos_http_url_prefix = "http://192.168.97.1/ttbd-pos/%(bsp)s/"
bsp
is commonly used, as the images for one architecture won’t work for another. bsp
is taken from the target’s tagbsp
. If not present, the first BSP (in alphabetical order) declared in the target tagsbsps
will be used.
pos_image: string describing the image used to boot the target in POS mode; defaults to tcf-live.
For each image, in the server,
ttbl.dhcp.pos_cmdline_opts
describes the kernel options to append to the kernel image, which is expected to be found in POS_HTTP_URL_PREFIX/vmlinuz-POS_IMAGE (see pos_http_url_prefix above).
uefi_boot_manager_ipv4_regex: allows specifying a Python regular expression that describes the format/name of the UEFI boot entry that will PXE boot off the network. For example:
>>> ttbl.config.targets['PC-43j'].tags_update({ >>> 'uefi_boot_manager_ipv4_regex': 'UEFI Network' >>> })
The function tcfl.pos_uefi._efibootmgr_setup() can use this if the defaults do not work and target.pos.deploy_image() reports:

Cannot find IPv4 boot entry, enable manually

even after the PXE boot entry has been enabled manually.
Note this will be compiled into a Python regex.
8.5. ttbd Configuration API for targets¶
-
conf_00_lib.
arduino101_add
(name=None, fs2_serial=None, serial_port=None, ykush_url=None, ykush_serial=None, variant=None, openocd_path='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/openocd', openocd_scripts='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/share/openocd/scripts', debug=False, build_only=False)¶ Configure an Arduino 101 for the fixture described below
This Arduino101 fixture includes a Flyswatter2 JTAG which allows flashing, debugging and a YKUSH power switch for power control.
Add to a server configuration file:
arduino101_add( name = "arduino101-NN", fs2_serial = "arduino101-NN-fs2", serial_port = "/dev/tty-arduino101-NN", ykush_url = "http://USER:PASSWORD@HOST/SOCKET", ykush_serial = "YKXXXXX")
restart the server and it yields:
$ tcf list local/arduino101-NN
Parameters: - name (str) – name of the target
- fs2_serial (str) – USB serial number for the FlySwatter2 (defaults to TARGETNAME-fs2)
- serial_port (str) – name of the serial port (defaults to
/dev/tty-TARGETNAME
) - ykush_serial (str) – USB serial number of the YKUSH hub.
- ykush_url (str) –
(optional) URL for the DLWPS7 power controller to the YKUSH. If None, the YKUSH is considered always on. See
dlwps7_add()
.FIXME: take a PC object so something different than a DLWPS7 can be used.
Overview
To power on the target, first we power the YKUSH, then the Flyswatter, then the serial port and then the board itself. And thus we need to wait for each part to correctly show up in the system after we power it up (or power off). Then the system starts OpenOCD to connect it (via the JTAG) to the board.
Powering on/off the YKUSH is optional, but highly recommended.
See the rationale for this complicated setup.
Bill of materials
- an available port on a DLWPS7 power switch (optional)
- a Yepkit YKUSH power-switching hub (see bill of materials in
ykush_targets_add()
- an Arduino101 (note it must have original firmware; if you need to reset it, follow these instructions).
- a USB A-Male to B-female for power to the Arduino 101
- a USB-to-TTL serial cable for the console (power)
- three M/M jumper cables
- A Flyswatter2 for flashing and debugging
- Flash a new serial number on the Flyswatter2 following the instructions.
- a USB A-Male to B-female for connecting the Flyswatter to the YKush (power and data)
- An ARM-JTAG 20-10 adapter miniboard and flat ribbon cable (https://www.olimex.com/Products/ARM/JTAG/ARM-JTAG-20-10/) to connect the JTAG to the Arduino101’s jtag port.
Connecting the test target fixture
- connect the Arduino’s USB port to the YKUSH downstream port 3
- Flyswatter2 JTAG:
connect the USB port to the YKUSH downstream port 1
flash a new serial number on the Flyswatter2 following the instructions.
This is needed to distinguish multiple Flyswatter2 JTAGs connected in the same system, as they all come flashed with the same number (FS20000).
connect the ARM-JTAG 20-10 adapter cable to the FlySwatter2 and to the Arduino101.
Note the flat ribbon cable has to be properly aligned; the red cable indicates pin #1. The board connectors might have a dot, a number 1 or some sort of marking indicating where pin #1 is.
If your ribbon cable has no red cable, just choose one end as one and align it on both boards to be pin #1.
- connect the USB-to-TTY serial adapter to the YKUSH downstream port 2
- connect the USB-to-TTY serial adapter to the Arduino 101 with the
M/M jumper cables:
- USB FTDI Black (ground) to Arduino101’s serial ground pin
- USB FTDI White (RX) to the Arduino101’ TX
- USB FTDI Green (TX) to Arduino101’s RX.
- USB FTDI Red (power) is left open, it has 5V.
- connect the YKUSH to the server system and to power as
described in
ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: arduino101-NN (where NN is a number)
- Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg
for a quick find], see
ykush_targets_add()
. - Configure udev to add a name for the serial device that
represents the USB-to-TTY dongle connected to the target so we can
easily find it at
/dev/tty-TARGETNAME
. Different options for USB-to-TTY dongles with or without a USB serial number.
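For illustration only, and assuming a dongle that exposes a USB serial number (DONGLESERIAL is a placeholder, not a real value), such a udev rule could follow the same pattern used for other boards later in this document:
# example sketch: name the USB-to-TTY dongle /dev/tty-arduino101-NN
SUBSYSTEM == "tty", ENV{ID_SERIAL_SHORT} == "DONGLESERIAL", SYMLINK += "tty-arduino101-NN"
Remember to reload udev's configuration (udevadm control --reload-rules) and re-plug the dongle so the symlink shows up.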
-
conf_00_lib.
a101_dfu_add
(name, serial_number, ykush_serial, ykush_port_board, ykush_port_serial=None, serial_port=None)¶ Configure an Arduino 101
This is an Arduino101 fixture that uses an YKUSH hub for power control, with or without a serial port (via external USB-to-TTY serial adapter) and requires no JTAG, using DFU mode for flashing. It allows flashing the BLE core.
Add to a server configuration file (eg:
/etc/ttbd-production/conf_10_targets.py:
):a101_dfu_add("a101-NN", "SERIALNUMBER", "YKNNNNN", PORTNUMBER, [ykush_port_serial = PORTNUMBER2,] [serial_port = "/dev/tty-a101-NN"])
restart the server and it yields:
$ tcf list local/arduino101-NN
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the Arduino 101
- ykush_serial (str) – USB serial number of the YKUSH hub used for power control
- ykush_port_board (int) – number of the YKUSH downstream port where the board is connected.
- ykush_port_serial (int) – (optional) number of the YKUSH downstream port where the board’s serial port is connected. If not specified, it will be considered there is no serial port.
- serial_port (str) – (optional) name of the serial port
(defaults to
/dev/tty-NAME
)
Overview
The Arduino 101 is powered via the USB connector. The Arduino 101 does not export a serial port over the USB connector–applications loaded onto it might create a USB serial port, but this is not necessarily so all the time.
Thus, for ease of use this fixture connects an optional external USB-to-TTY dongle to the TX/RX/GND lines of the Arduino 101 that allows a reliable serial console to be present.
When the serial dongle is in use, the power rail needs to first power up the serial dongle and then the board.
Per this rationale, current leakage and full power down needs necessitate this setup, so that all power to all cables connected to the board (power and serial) can be cut.
This fixture uses
ttbl.tt.tt_dfu
to implement the target; refer to it for implementation details.Bill of materials
- two available ports on an YKUSH power switching hub (serial YKNNNNN); only one if the serial console will not be used.
- an Arduino 101 board
- a USB A-Male to micro-B male cable (for board power)
- (optional) a USB-to-TTY serial port dongle
- (optional) three M/M jumper cables
Connecting the test target fixture
- (if not yet connected), connect the YKUSH to the server system
and to power as described in
ykush_targets_add()
- connect the Arduino 101’s USB port to the YKUSH downstream port PORTNUMBER
- (if a serial console will be connected) connect the USB-to-TTY serial adapter to the YKUSH downstream port PORTNUMBER2
- (if a serial console will be connected) connect the USB-to-TTY
serial adapter to the Arduino 101 with the M/M jumper cables:
- USB FTDI Black (ground) to Arduino 101’s serial ground pin (fourth pin from the bottom)
- USB FTDI White (RX) to the Arduino 101’s TX.
- USB FTDI Green (TX) to Arduino 101’s RX.
- USB FTDI Red (power) is left open, it has 5V.
Configuring the system for the fixture
Choose a name for the target: a101-NN (where NN is a number)
(if needed) Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see
ykush_targets_add()
.Find the board’s serial number.
Note these boards, when freshly plugged in, will only stay in DFU mode for five seconds and then boot Zephyr (or whichever OS they have), so the USB device will disappear. You need to run lsusb (or whichever command you are using) quickly, or monitor the kernel output with dmesg -w.
Configure udev to add a name for the serial device that represents the USB-to-TTY dongle connected to the target so we can easily find it at
/dev/tty-a101-NN
. Different options for USB-to-TTY dongles with or without a USB serial number.Add to the configuration file (eg:
/etc/ttbd-production/conf_10_targets.py
):a101_dfu_add("a101-NN", "SERIALNUMBER", "YKNNNNN", PORTNUMBER, ykush_port_serial = PORTNUMBER2, serial_port = "/dev/tty-a101-NN")
-
conf_00_lib.
esp32_add
(name, serial_number, ykush_serial, ykush_port_board, serial_port=None)¶ Configure an ESP-32 MCU board
The ESP-32 is a Tensilica-based MCU, implementing two Xtensa CPUs. This fixture uses a YKUSH hub for power control, with serial console and power over the USB cable, which is also used to flash using
esptool.py
from the ESP-IDF framework. See instructions in
ttbl.tt.tt_esp32
to install and configure prerequisites in the server.Add to a server configuration file (eg:
/etc/ttbd-production/conf_10_targets.py:
):esp32_add("esp32-NN", "SERIALNUMBER", "YKNNNNN", PORTNUMBER)
restart the server and it yields:
$ tcf list local/esp32-NN
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the esp32
- ykush_serial (str) – USB serial number of the YKUSH hub used for power control
- ykush_port_board (int) – number of the YKUSH downstream port where the board is connected.
- serial_port (str) – (optional) name of the serial port
(defaults to
/dev/tty-NAME
)
Overview
The ESP32 offers the same USB connector for serial port and flashing.
Bill of materials
- one available port on an YKUSH power switching hub (serial YKNNNNN)
- an ESP32 board
- a USB A-Male to micro-B male cable
Connecting the test target fixture
- (if not yet connected), connect the YKUSH to the server system
and to power as described in
ykush_targets_add()
- connect the esp32’s USB port to the YKUSH downstream port PORTNUMBER
Configuring the system for the fixture
See instructions in
ttbl.tt.tt_esp32
to install and configure prerequisites in the server.Choose a name for the target: esp32-NN (where NN is a number)
(if needed) Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see
ykush_targets_add()
.Find the board’s serial number.
Note these boards usually have a serial number of 001; it can be updated easily to a unique serial number following these steps.
Configure udev to add a name for the serial device for the board’s serial console so it can be easily found at
/dev/tty-TARGETNAME
. Follow these instructions using the boards’ serial number.
-
conf_00_lib.
mv_add
(name=None, fs2_serial=None, serial_port=None, ykush_serial=None, ykush_port_board=None, ykush_port_serial=None, openocd_path='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/openocd', openocd_scripts='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/share/openocd/scripts', debug=False)¶ Configure a Quark D2000 for the fixture described below.
The Quark D2000 development board includes a Flyswatter2 JTAG which allows flashing and debugging; it requires two upstream connections to a YKUSH power-switching hub: one for power and JTAG, and another for the serial console.
Add to a server configuration file:
mv_add(name = "mv-NN", fs2_serial = "mv-NN-fs2", serial_port = "/dev/tty-mv-NN", ykush_serial = "YKXXXXX", ykush_port_board = N1, ykush_port_serial = N2)
restart the server and it yields:
$ tcf list local/mv-NN
Parameters: - name (str) – name of the target
- fs2_serial (str) – USB serial number for the FlySwatter2 (should be TARGETNAME-fs2 [FIXME: default to that])
- serial_port (str) – name of the serial port [FIXME: default to /dev/tty-TARGETNAME]
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
- ykush_port_serial (int) – number of the YKUSH downstream port where the board’s serial port is connected.
Overview
The Quark D2000 board comes with a builtin JTAG / Flyswatter, whose port can be programmed. The serial port is externally provided via a USB-to-TTY dongle.
However, because of this, to power the test target up, the power rail needs to first power up the serial dongle and then the board. There is also a delay until the internal JTAG device can be accessed, so the system waits before starting OpenOCD to connect (via the JTAG) to the board.
Per this rationale, current leakage and full power down needs necessitate this setup, so that all power to all cables connected to the board (power and serial) can be cut.
Bill of materials
- two available ports on an YKUSH power switching hub (serial YKNNNNN)
- a Quark D2000 reference board
- a USB A-Male to micro-B male cable (for board power)
- a USB-to-TTY serial port dongle
- three M/M jumper cables
Connecting the test target fixture
- connect the Quark D2000’s USB-ATP port with the USB A-male to B-micro to YKUSH downstream port N1 for powering the board
- connect the USB-to-TTY serial adapter to the YKUSH downstream port N2
- connect the USB-to-TTY serial adapter to the Quark D2000 with the
M/M jumper cables:
- USB FTDI Black (ground) to board’s serial ground pin
- USB FTDI White (RX) to the board’s serial TX pin
- USB FTDI Green (TX) to board’s serial RX pin
- USB FTDI Red (power) is left open, it has 5V.
- connect the YKUSH to the server system and to power as
described in
ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: mv-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and
run dmesg for a quick find], see
ykush_targets_add()
. - Flash a new serial number on the Flyswatter2 following the instructions.
- Configure udev to add a name for the serial device that
represents the USB-to-TTY dongle connected to the target so we can
easily find it at
/dev/tty-TARGETNAME
. Different options for USB-to-TTY dongles with or without a USB serial number.
- Ensure the board is flashed with the Quark D2000 ROM (as described here).
-
conf_00_lib.
nrf5x_add
(name, serial_number, family, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/usr/bin/openocd', openocd_scripts='/usr/share/openocd/scripts', debug=False)¶ Configure a NRF51 board for the fixture described below
The NRF51 is an ARM M0-based development board. It includes a builtin JTAG which allows flashing and debugging; it only requires one upstream connection to a YKUSH power-switching hub for power, serial console and JTAG.
Add to a server configuration file:
nrf5x_add(name = "nrf51-NN", serial_number = "SERIALNUMBER", ykush_serial = "YKXXXXX", ykush_port_board = N)
restart the server and it yields:
$ tcf list local/nrf51-NN
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the board
- family (str) – Family of the board (nrf51_blenano, nrf51_pca10028, nrf52840_pca10056, nrf52_blenano2, nrf52_pca10040)
- serial_port (str) – (optional) name of the serial port, which
defaults to
/dev/tty-TARGETNAME
. - ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
Overview
Per this rationale, current leakage and full power down needs necessitate this setup, so that all power to all cables connected to the board (power and serial) can be cut.
Bill of materials
- a nrf51 board
- a USB A-Male to micro-B male cable (for board power, JTAG and console)
- one available port on an YKUSH power switching hub (serial YKNNNNN)
Connecting the test target fixture
- connect the nrf51 board’s USB port with the USB A-male to B-micro to YKUSH downstream port N
- ensure the battery is disconnected
- connect the YKUSH to the server system and to power as
described in
ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: nrf51-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and
run dmesg for a quick find], see
ykush_targets_add()
. - Find the board’s serial number
- Configure udev to add a name for the serial device for the
board’s serial console so it can be easily found at
/dev/tty-TARGETNAME
. Follow these instructions using the boards’ serial number.
-
conf_00_lib.
qemu_pos_add
(target_name, nw_name, mac_addr, ipv4_addr, ipv6_addr, consoles=None, disk_size='30G', mr_partsizes='1:4:5:5', sd_iftype='virtio', extra_cmdline='', ram_megs=2048)¶ Add a QEMU virtual machine capable of booting over Provisioning OS.
This target supports a serial console (ttyS0) and a single hard drive that gets fully reinitialized every time the server is restarted.
Note this target uses a UEFI bios and defines UEFI storage space; this is needed so the right boot order is maintained.
Add to a server configuration file
/etc/ttbd-*/conf_*.py
>>> target = qemu_pos_add("qemu-x86-64-05a",
>>>                       "nwa",
>>>                       mac_addr = "02:61:00:00:00:05",
>>>                       ipv4_addr = "192.168.95.5",
>>>                       ipv6_addr = "fc00::61:05")
Extra parameters can be added by using the extra_cmdline argument, for example, to add a VNC display:
>>> extra_cmdline = "-display vnc=0.0.0.0:0",
Adding to other networks:
>>> ttbl.config.targets['nuc-43'].add_to_interconnect(
>>>     'nwb', dict(
>>>         mac_addr = "02:62:00:00:00:05",
>>>         ipv4_addr = "192.168.98.5", ipv4_prefix_len = 24,
>>>         ipv6_addr = "fc00::62:05", ipv6_prefix_len = 112))
Parameters: - target_name (str) – name of the target to create
- nw_name (str) – name of the network to which this target will be connected that provides Provisioning OS services.
- mac_addr (str) –
MAC address for this target (fake one). Will be given to the virtual device created and can’t be the same as any other MAC address in the system or the networks. It is recommended to be in the format:
>>> 02:HX:00:00:00:HY
where HX and HY are two hex digits
- disk_size (str) – (optional) size specification for the target’s hard drive, as understood by QEMU’s qemu-img create program.
- consoles (list(str)) – serial consoles to create (defaults to just one, which is also the minimum).
- ram_megs (int) – (optional) size of memory in megabytes
- mr_partsizes (str) – (optional) specification for partition sizes for the multiroot Provisioning OS environment. FIXME: document link
- extra_cmdline (str) – a string with extra command line to add; %(FIELD)s supported (target tags).
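Putting the pieces above together, a configuration call could look like the following sketch; the target name, addresses and the VNC option are illustrative values modeled on the examples in this section, not canonical ones:
>>> target = qemu_pos_add("qemu-x86-64-06a",
>>>                       "nwa",
>>>                       mac_addr = "02:61:00:00:00:06",
>>>                       ipv4_addr = "192.168.95.6",
>>>                       ipv6_addr = "fc00::61:06",
>>>                       extra_cmdline = "-display vnc=0.0.0.0:0")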
-
conf_00_lib.
frdm_add
(name=None, serial_number=None, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/usr/bin/openocd', openocd_scripts='/usr/share/openocd/scripts', debug=False)¶ Configure a FRDM board for the fixture described below
The FRDM k64f is an ARM-based development board. It includes a builtin JTAG which allows flashing and debugging; it only requires one upstream connection to a YKUSH power-switching hub for power, serial console and JTAG.
Add to a server configuration file:
frdm_add(name = "frdm-NN", serial_number = "SERIALNUMBER", serial_port = "/dev/tty-frdm-NN", ykush_serial = "YKXXXXX", ykush_port_board = N)
restart the server and it yields:
$ tcf list local/frdm-NN
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the FRDM board
- serial_port (str) – name of the serial port [FIXME: default to /dev/tty-TARGETNAME]
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
Overview
Per this rationale, current leakage and full power down needs necessitate this setup, so that all power to all cables connected to the board (power and serial) can be cut.
Bill of materials
- a FRDM k64f board
- a USB A-Male to micro-B male cable (for board power, JTAG and console)
- one available port on an YKUSH power switching hub (serial YKNNNNN)
Connecting the test target fixture
- connect the FRDM’s OpenSDA port with the USB A-male to B-micro to YKUSH downstream port N
- connect the YKUSH to the server system and to power as
described in
ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: frdm-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and
run dmesg for a quick find], see
ykush_targets_add()
. - Find the board’s serial number
- Configure udev to add a name for the serial device for the
board’s serial console so it can be easily found at
/dev/tty-TARGETNAME
. Follow these instructions using the boards’ serial number.
Warning
Ugly magic here. The FRDMs sometimes boot into some bootloader upload mode (with a different USB serial number) from which the only way to get them out is by power-cycling it.
So the power rail for this thing is set with a Power Controller object that does the power cycle itself (pc_board) and then another that looks for a USB device with the right serial number (serial_number). If it fails to find it, it executes an action and waits for it to show up. The action is power cycling the USB device with the pc_board power controller. Lastly, in the power rail, we have the glue that opens the serial ports to the device and the flasher object that start/stops OpenOCD.
Yup, I dislike computers too.
-
conf_00_lib.
arduino2_add
(name=None, serial_number=None, serial_port=None, ykush_serial=None, ykush_port_board=None)¶ Configure an Arduino Due board for the fixture described below
The Arduino Due is an ARM-based development board. It includes a builtin flasher that requires the bossac tool. A single cable is used for flashing, serial console and power.
Add to a server configuration file:
arduino2_add(name = "arduino2-NN", serial_number = "SERIALNUMBER", serial_port = "/dev/tty-arduino2-NN", ykush_serial = "YKXXXXX", ykush_port_board = N)
restart the server and it yields:
$ tcf list local/arduino2-NN
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the board
- serial_port (str) – name of the serial port (defaults to /dev/tty-TARGETNAME).
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
Overview
Per this rationale, current leakage and full power down needs necessitate this setup, so that all power to all cables connected to the board (power and serial) can be cut.
Bill of materials
- an Arduino Due board
- a USB A-Male to micro-B male cable (for board power, flashing and console)
- one available port on an YKUSH power switching hub (serial YKNNNNN)
Connecting the test target fixture
- connect the Arduino Due’s OpenSDA (?) port with the USB A-male to B-micro to YKUSH downstream port N
- connect the YKUSH to the server system and to power as
described in
ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: arduino2-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and
run dmesg for a quick find], see
ykush_targets_add()
. - Find the board’s serial number
- Configure udev to add a name for the serial device for the
board’s serial console so it can be easily found at
/dev/tty-TARGETNAME
. Follow these instructions using the boards’ serial number.
-
conf_00_lib.
ma_add
(name=None, serial_number=None, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/openocd', openocd_scripts='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/share/openocd/scripts', debug=False)¶
-
conf_00_lib.
quark_c1000_add
(name=None, serial_number=None, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/openocd', openocd_scripts='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/share/openocd/scripts', debug=False, variant='qc10000_crb', target_type='ma')¶ Configure a Quark C1000 for the fixture described below
The Quark C1000 development board has a built-in JTAG which allows flashing, debugging, thus it only requires an upstream connection to a YKUSH power-switching hub for power, serial console and JTAG.
This board has a USB serial number and should not require any flashing of the USB descriptors for setup.
Add to a server configuration file:
quark_c1000_add(name = "qc10000-NN", serial_number = "SERIALNUMBER", ykush_serial = "YKXXXXX", ykush_port_board = N)
restart the server and it yields:
$ tcf list local/qc10000-NN
earlier versions of these boards can be added with the ma_add() and ah_add() versions of this function.
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the Quark C1000 board
- serial_port (str) – name of the serial port [FIXME: default to /dev/tty-TARGETNAME]
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
- variant (str) – variant of ROM version and address map as defined in (FIXME) flasher configuration.
Overview
Per this rationale, current leakage and full power down needs necessitate this setup, so that all power to all cables connected to the board (power and serial) can be cut.
Bill of materials
- one available port on an YKUSH power switching hub (serial YKNNNNN)
- a Quark C1000 reference board
- a USB A-Male to micro-B male cable (for board power, JTAG and console)
Connecting the test target fixture
- connect the Quark C1000’s FTD_USB port with the USB A-male to B-micro to YKUSH downstream port N
- connect the YKUSH to the server system and to power as
described in
ykush_targets_add()
Configuring the system for the fixture
Choose a name for the target: qc10000-NN (where NN is a number)
(if needed) Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see
ykush_targets_add()
.Find the board’s serial number
Configure udev to add a name for the serial device for the board’s serial console so it can be easily found at
/dev/tty-TARGETNAME
. Follow these instructions using the boards’ serial number. Note, however, that these boards might present two serial ports to the system, one of which later converts to another interface. So, in order to avoid configuration issues, the right port has to be explicitly specified with ENV{ID_PATH} == "*:1.1":
# Force second interface, first is for JTAG/update
SUBSYSTEM == "tty", ENV{ID_SERIAL_SHORT} == "IN0521621", ENV{ID_PATH} == "*:1.1", SYMLINK += "tty-TARGETNAME"
-
conf_00_lib.
nios2_max10_add
(name, device_id, serial_port_serial_number, pc_board, serial_port=None)¶ Configure an Altera MAX10 NIOS-II
The Altera MAX10 is used to implement a NIOS-II CPU; it has a serial port, JTAG for flashing and power control.
The USB serial port is based on a FTDI chipset with a serial number, so it requires no modification. However, the JTAG connector has no serial number and can be addressed only by path.
Add to a server configuration file:
nios2_max10_add("max10-NN", "CABLEID", "SERIALNUMBER", ttbl.pc.dlwps7("http://admin:1234@HOST/PORT"))
restart the server and it yields:
$ tcf list local/max10-NN
Parameters: - name (str) – name of the target
- cableid (str) –
identification of the JTAG for the board; this can be determined using the jtagconfig tool from the Quartus Programming Tools; make sure only a single board is connected to the system and powered on and run:
$ jtagconfig
1) USB-BlasterII [2-2.1]
   031050DD   10M50DA(.|ES)/10M50DC
Note USB-BlasterII [2-2.1] is the cable ID for said board.
Warning
this cable ID is path dependent. Moving any of the USB cables (including the upstream hubs), including changing the ports to which the cables are connected, will change the cableid and will require re-configuration.
- serial_number (str) – USB serial number for the serial port of the MAX10 board.
- serial_port (str) – name of the serial port [defaults to /dev/tty-TARGETNAME]
- pc (ttbl.tt_power_control_impl) – power controller to switch on/off the MAX10 board.
Bill of materials
- Altera MAX10 reference board
- Altera MAX10 power brick
- a USB A-Male to mini-B male cable (for JTAG)
- a USB A-Male to mini-B male cable (for UART)
- an available power socket in a power controller like the
Digital Loggers Web Power Switch
- two USB ports leading to the server
Connecting the test target fixture
- connect the power brick to the MAX10 board
- connect the power plug to port N of the power controller POWERCONTROLLER
- connect a USB cable to the UART connector in the MAX10; connect to the server
- connect a USB cable to the JTAG connector in the MAX10; connect to the server
- ensure the DIP SW2 (back of board) are all OFF except for 3 that has to be on and that J7 (front of board next to coaxial connectors) is open.
Configuring the system for the fixture
Ensure the system is setup for MAX10 boards:
Choose a name for the target: max10-NN (where NN is a number)
Configure udev to add a name for the serial device for the board’s serial console so it can be easily found at
/dev/tty-TARGETNAME
. Follow these instructions using the boards’ serial number; e.g.:SUBSYSTEM == "tty", ENV{ID_SERIAL_SHORT} == "AC0054PT", SYMLINK += "tty-max10-46"
-
conf_00_lib.
stm32_add
(name=None, serial_number=None, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/usr/bin/openocd', openocd_scripts='/usr/share/openocd/scripts', model=None, zephyr_board=None, debug=False)¶ Configure an Nucleo/STM32 board
The Nucleo / STM32 are ARM-based development boards. They include a builtin JTAG which allows flashing and debugging; they only require one upstream connection to a YKUSH power-switching hub for power, serial console and JTAG.
Add to a server configuration file:
stm32_add(name = "stm32f746-67", serial_number = "066DFF575251717867114355", ykush_serial = "YK23406", ykush_port_board = 3, model = "stm32f746")
restart the server and it yields:
$ tcf list local/stm32f746-67
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the board
- serial_port (str) – (optional) name of the serial port (defaults to /dev/tty-TARGETNAME).
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board is connected.
- openocd_path (str) –
(optional) path to where the OpenOCD binary is installed (defaults to system’s).
Warning
Zephyr SDK 0.9.5’s version of OpenOCD is not able to flash some of these boards.
- openocd_scripts (str) – (optional) path to where the OpenOCD scripts are installed (defaults to system’s).
- model (str) –
String which describes this model to the OpenOCD configuration. This matches the model of the board in the packaging. E.g:
- stm32f746
- stm32f103
see below for the mechanism to add more via configuration
- zephyr_board (str) – (optional) string to configure as the board model used for Zephyr builds. In most cases it will be inferred automatically.
- debug (bool) – (optional) operate in debug mode (more verbose log from OpenOCD) (defaults to false)
Overview
Per this rationale, current leakage and full power down needs necessitate this setup, so that all power to all cables connected to the board (power and serial) can be cut.
Bill of materials
- one STM32* board
- a USB A-Male to micro-B male cable (for board power, flashing and console)
- one available port on an YKUSH power switching hub (serial YKNNNNN)
Connecting the test target fixture
- connect the STM32 micro USB port with the USB A-male to B-micro to YKUSH downstream port N
- connect the YKUSH to the server system and to power as
described in
ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: stm32MODEL-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and
run dmesg for a quick find], see
ykush_targets_add()
. - Find the board’s serial number
- Configure udev to add a name for the serial device for the
board’s serial console so it can be easily found at
/dev/tty-TARGETNAME
. Follow these instructions using the boards’ serial number. - Add the configuration block described at the top of this documentation and restart the server
Extending configuration for new models
Models not supported by current configuration can be expanded by adding a configuration block such as:
import ttbl.flasher

ttbl.flasher.openocd_c._addrmaps['stm32f7'] = dict(
    arm = dict(load_addr = 0x08000000)
)

ttbl.flasher.openocd_c._boards['stm32f746'] = dict(
    addrmap = 'stm32f7',
    targets = [ 'arm' ],
    target_id_names = { 0: 'stm32f7x.cpu' },
    write_command = "flash write_image erase %(file)s %(address)s",
    config = """
#
# openocd.cfg configuration from
# zephyr.git/boards/arm/stm32f746g_disco/support/openocd.cfg
#
source [find board/stm32f7discovery.cfg]

$_TARGETNAME configure -event gdb-attach {
	echo "Debugger attaching: halting execution"
	reset halt
	gdb_breakpoint_override hard
}

$_TARGETNAME configure -event gdb-detach {
	echo "Debugger detaching: resuming execution"
	resume
}
"""
)

stm32_models['stm32f746'] = dict(zephyr = "stm32f746g_disco")
-
conf_00_lib.
nucleo_add
(name=None, serial_number=None, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/usr/bin/openocd', openocd_scripts='/usr/share/openocd/scripts', debug=False)¶ Configure a Nucleo F10 board
This is a backwards compatibility function, please use
stm32_add()
.
-
conf_00_lib.
ykush_targets_add
(ykush_serial, pc_url, powered_on_start=None)¶ Given the serial number for an YKUSH hub connected to the system, set up a number of targets to manually control it.
- (maybe) one target to control the whole hub
- One target per port YKNNNNN-1 to YKNNNNN-3 to control the three ports individually; this is used to debug powering up different parts of a target.
ykush_targets_add("YK34567", "http://USER:PASSWD@HOST/4")
yields:
$ tcf list
local/YK34567
local/YK34567-base
local/YK34567-1
local/YK34567-2
local/YK34567-3
To use then the YKUSH hubs as power controllers, create instances of
ttbl.pc_ykush.ykush
:ttbl.pc_ykush.ykush("YK34567", PORT)
where PORT is 1, 2 or 3.
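For example, a sketch of wiring one of those ports into another target's power rail, following the power_control list pattern used elsewhere in this document (the target name and port number are illustrative placeholders):
ttbl.config.target_add(
    ttbl.tt.tt_power("example-board-01",
                     power_control = [
                         # power the board through port 2 of YKUSH YK34567
                         ttbl.pc_ykush.ykush("YK34567", 2),
                     ]))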
Parameters: - ykush_serial (str) – USB Serial Number of the hub (finding).
- pc_url (str) –
Power Control URL
- A DLWPS7 URL (
ttbl.pc.dlwps7
), if given, will create a target YKNNNNN to power on or off the whole hub and wait for it to connect to the system. It will also create one called YKNNNNN-base that allows powering it off or on, but will not wait for the USB device to show up in the system (useful for poking the power control to the hub when it is failing to connect to the system)
- If None, no power control targets for the whole hub will be created. It will just be expected the hub is connected permanently to the system.
- powered_on_start (bool) –
what to do with the power on the downstream ports:
- None: leave them as they are
- False: power them off
- True: power them on
Bill of materials
YKUSH hub and its serial number
Note the hub itself has no serial number, but an internal device connected to its downstream port number 4 does have the YK34567 serial number.
a USB A-male to mini-B male cable for power
a USB brick for power
- (optional) a DLWPS7 power switch to control the hub’s power
- or an always-on connection to a power plug
a USB A-male to micro-B male cable for upstream USB connectivity
an upstream USB B-female port to the server (in a hub or root hub)
Note the YKNNNNN targets are always tagged idle_poweroff = 0 (so they are never automatically powered off) but not skip_cleanup; the latter would mean they are never released when idle, and if a recovery fails somewhere, no one would be able to re-acquire them to recover.
-
conf_00_lib.
usbrly08b_targets_add
(serial_number, target_name_prefix=None, power=False)¶ Set up individual power control targets for each relay of a Devantech USB-RLY08B
See below for configuration steps
Parameters: Bill of materials
- A Devantech USB-RLY08B USB relay controller (https://www.robot-electronics.co.uk/htm/usb_rly08btech.htm)
- a USB A-Male to B-female to connect it to the server
- an upstream USB A-female port to the server (in a hub or root hub)
Connecting the relay board to the system
- Connect the USB A-Male to the free server USB port
- Connect the USB B-Male to the relay board
Configuring the system for the fixture
- Choose a prefix name for the target (eg: re00) or let it be the default (usbrly08b-SERIALNUMBER).
- Find the relay board’s serial number (more methods)
- Ensure the device node for the board is accessible by the user
or groups running the daemon. See
ttbl.usbrly08b.pc
for details.
To create individual targets to control each individual relay, add in a configuration file such as
/etc/ttbd-production/conf_10_targets.py
:usbrly08b_targets_add("00023456")
which yields, after restarting the server:
$ tcf list -a
local/usbrly08b-00023456-01
local/usbrly08b-00023456-02
local/usbrly08b-00023456-03
local/usbrly08b-00023456-04
local/usbrly08b-00023456-05
local/usbrly08b-00023456-06
local/usbrly08b-00023456-07
To use the relays as power controllers on a power rail for another target, create instances of
ttbl.usbrly08b.pc
:ttbl.usbrly08b.pc("0023456", RELAYNUMBER)
where RELAYNUMBER is 1 - 8, which matches the number of the relay etched on the board.
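As an illustrative sketch only (the target name is a placeholder), such an instance can be placed in a target's power rail the same way as the other power controllers shown in this document:
ttbl.config.target_add(
    ttbl.tt.tt_power("example-fixture-03",
                     power_control = [
                         # relay #3 of the USB-RLY08B with serial number 00023456
                         ttbl.usbrly08b.pc("00023456", 3),
                     ]))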
-
conf_00_lib.
emsk_add
(name=None, serial_number=None, serial_port=None, brick_url=None, ykush_serial=None, ykush_port=None, openocd_path='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/openocd', openocd_scripts='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/share/openocd/scripts', debug=False, model=None)¶ Configure a Synopsys EM Starter Kit (EMSK) board configured for an EM* SoC architecture, with a power brick and a YKUSH USB port providing power control.
The board includes a builtin JTAG which allows flashing and debugging; it only requires one upstream connection to a YKUSH power-switching hub for power, serial console and JTAG.
Add to a server configuration file:
emsk_add(name = "emsk-NN", serial_number = "SERIALNUMBER", ykush_serial = "YKXXXXX", ykush_port_board = N, model = "emsk7d")
restart the server and it yields:
$ tcf list local/emsk-NN
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the board
- serial_port (str) – name of the serial port (defaults to /dev/tty-TARGETNAME).
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
- brick_url (str) – URL for the power switch to which the EMSK’s power brick is connected (this assumes for now you are using a DLWPS7 for power, so the URL will be in the form http://user:password@hostname/port).
- model (str) –
SOC model configured in the board with the blue DIP switches (from emsk7d [default], emsk9d, emsk11d).
DIP1  DIP2  DIP3  DIP4  Model
off   off               em7d
on    off               em9d
off   on                em11d
(on means DIP down, towards the board)
Overview
Per this rationale, current leakage and full power down needs necessitate this setup, so that all power to all cables connected to the board (power and serial) can be cut.
Bill of materials
- a EM Starter Kit board and its power brick
- a USB A-Male to micro-B male cable (for board power, flashing and console)
- one available port on a switchable power hub
- one available port on an YKUSH power switching hub (serial YKNNNNN)
Connecting the test target fixture
- connect the EMSK’s micro USB port with the USB A-male to B-micro to YKUSH downstream port N
- connect the YKUSH to the server system and to power as
described in
ykush_targets_add()
- Connect the power brick to the EMSK’s power barrel
- Connect the power brick to the available power in the power switch
Configuring the system for the fixture
- Choose a name for the target: emsk-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and
run dmesg for a quick find], see
ykush_targets_add()
. - Find the board’s serial number
- Configure udev to add a name for the serial device for the
board’s serial console so it can be easily found at
/dev/tty-TARGETNAME
. Follow these instructions using the boards’ serial number.
-
conf_00_lib.
dlwps7_add
(hostname, powered_on_start=None, user='admin', password='1234')¶ Add test targets to individually control each of a DLWPS7’s sockets
The DLWPS7 needs to be set up and configured; this function creates targets that expose the individual sockets for debugging.
Add to a configuration file
/etc/ttbd-production/conf_10_targets.py
(or similar):dlwps7_add("sp6")
yields:
$ tcf list
local/sp6-1
local/sp6-2
local/sp6-3
local/sp6-4
local/sp6-5
local/sp6-6
local/sp6-7
local/sp6-8
Power controllers for targets can be implemented instantiating an
ttbl.pc.dlwps7
:pc = ttbl.pc.dlwps7("http://admin:1234@spM/O")
where O is the outlet number as it shows in the physical unit and spM is the name of the power switch.
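For instance, mirroring the pc_impl pattern shown later in this document for the default networks, such an instance can be appended to an existing power control rail; this is a sketch only and assumes the target already has a power rail (the switch name sp6 and outlet 3 are placeholders):
# add outlet #3 of power switch sp6 to the power rail of network target 'nwa'
ttbl.config.targets['nwa'].pc_impl.append(
    ttbl.pc.dlwps7("http://admin:1234@sp6/3"))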
Parameters: Overview
Bill of materials
- a DLWPS7 unit and power cable connected to power plug
- a network cable
- a connection to a network switch to which the server is also connected (nsN)
Connecting the power switch
Ensure you have configured a class C network (192.168.X.0/24) with static IP addresses, to which maybe only this server has access, to connect IP-controlled power switches.
Follow these instructions to create a network.
You might need a new Ethernet adaptor to connect to said network (might be PCI, USB, etc).
connect the power switch to said network
assign a name to the power switch and add it along its IP address in
/etc/hosts
; the convention is to call them spY, where Y is a number and sp stands for Switch; Power.Warning
if your system uses proxies, you need to add spY also to the no_proxy environment variable in
/etc/bashrc
to avoid the daemon trying to access the power switch through the proxy, which will not work.with the names
/etc/hosts
, refer to the switches by name rather than by IP address.
Configuring the system
Choose a name for the power switch (spM), where M is a number
The power switch starts with IP address 192.168.0.100; it needs to be changed to 192.168.X.M:
Connect to nsN
Ensure the server access to 192.168.0.100 by adding this routing hack:
# ifconfig nsN:2 192.168.0.0/24
With lynx or a web browser, from the server, access the switch’s web control interface:
$ lynx http://192.168.0.100
Enter the default user admin, password 1234, select ok and indicate A to always accept cookies
Hit enter to refresh the link redirecting to 192.168.0.100/index.htm, scroll down to Setup and select it. In all these steps, make sure to hit submit for each individual change.
Look up the IP address setup, change it to 192.168.X.M (where M matches spM), gateway 192.168.X.1; hit the submit button next to it.
Disable the security lockout in section Delay
Set Wrong password lockout to zero minutes
Turn on setting power after power loss:
Power Loss Recovery Mode > When recovering after power loss select Turn all outlets on
Extra steps needed for newer units (https://dlidirect.com/products/new-pro-switch)
The new refreshed unit looks the same, but has wifi connectivity and plenty of new features, some of which need tweaking; log in to the setup page again and, for each of these, set the value(s) and hit submit before going to the next one:
Access settings (quite important, as this allows the driver to access the unit the same way as the previous generation of the product):
ENABLE: allow legacy plaintext login methods
Note in (3) below it is explained why this is not a security problem in this kind of deployments.
remove the routing hack:
# ifconfig nsN:2 down
The unit’s default admin username and password are kept per original (admin, 1234):
- They are deployed in a dedicated network switch that is internal to the server; no one has access but the server users (targets run on another switch).
- they use HTTP Basic Auth, so they might as well not use authentication
Add an entry in
/etc/hosts
for spM so we can refer to the DLWPS7 by name instead of IP address:192.168.4.X spM
-
class
conf_00_lib.
vlan_pci
¶ Power controller to implement networks on the server side.
Supports:
connecting the server to physical networks with physical devices (normal or VLAN networks)
creating internal virtual networks with macvtap http://virt.kernelnewbies.org/MacVTap so VMs running in the host can get into said networks.
When a physical device is also present, it is used as the upper device (instead of a bridge) so traffic can flow from physical targets to the virtual machines in the network.
tcpdump capture of network traffic
This behaves as a power control implementation that when turned:
- on: sets up the interfaces, brings them up, start capturing
- off: stops all the network devices, making communication impossible.
Capturing with tcpdump
Can be enabled setting the target’s property tcpdump:
$ tcf property-set TARGETNAME tcpdump FILENAME
this will have the target dump all captured traffic to a file called FILENAME in the daemon file storage area for the user who owns the target. The file can then be recovered with:
$ tcf broker-file-download FILENAME
FILENAME must be a valid file name, with no directory components.
Note
Note this requires the property tcpdump being registered in the configuration with
>>> ttbl.test_target.properties_user.add('tcpdump')
so normal users can set/unset it.
Example configuration (see naming networks):
>>> ttbl.config.interconnect_add(
>>>     ttbl.tt.tt_power("nwa", vlan_pci()),
>>>     tags = {
>>>         'ipv4_addr': '192.168.97.1',
>>>         'ipv4_prefix_len': 24,
>>>         'ipv6_addr': 'fc00::61:1',
>>>         'ipv6_prefix_len': 112,
>>>         'mac_addr': '02:61:00:00:00:01:',
>>>     })
Now QEMU targets (for example), can declare they are part of this network and upon start, create a tap interface for themselves:
$ ip link add link _bnwa name tnwaTARGET type macvtap mode bridge
$ ip link set tnwaTARGET address 02:01:00:00:00:IC_INDEX up
which then is given to QEMU as an open file descriptor:
-net nic,model=virtio,macaddr=02:01:00:00:00:IC_INDEX -net tap,fd=FD
(
ttbl.tt_qemu2.tt_qemu
andZephyr
VMs already implement this behaviour).Notes:
- keep target names short, as they will be used to generate network interface names and those are limited in size (usually to about 12 chars?), eg tnwaTARGET comes from nwa being the name of the network target/interconnect, TARGET being the target connected to said interconnect.
- IC_INDEX: is the index of the TARGET in the interconnect/network;
it is recommended, for simplicity, to make them match with the mac
address, IP address and target name, so for example:
- targetname: pc-04
- ic_index: 04
- ipv4_addr: 192.168.1.4
- ipv6_addr: fc00::1:4
- mac_addr: 02:01:00:00:00:04
If a tag named mac_addr is given, containing the MAC address of a physical interface in the system, then it will be taken over as the point of connection to external targets. Connectivity from any virtual machine in this network will be extended to said network interface, effectively connecting the physical and virtual targets.
Warning
DISABLE Network Manager’s (or any other network manager) control of this interface, otherwise it will interfere with it and network will not operate.
Follow these steps
System setup:
- ttbd must be run with CAP_NET_ADMIN so it can create network
interfaces. For that, either add to systemd’s
/etc/systemd/system/ttbd@.service
CapabilityBoundingSet = CAP_NET_ADMIN
AmbientCapabilities = CAP_NET_ADMIN
or as root, give ttbd the capability:
# setcap cap_net_admin+pie /usr/bin/ttbd
udev’s /etc/udev/rules.d/ttbd-vlan:
SUBSYSTEM == "macvtap", ACTION == "add", DEVNAME == "/dev/tap*", GROUP = "ttbd", MODE = "0660"
This is needed so the tap devices can be accessed by user ttbd, which is the user that runs the daemon.
Remember to reload udev’s configuration with udevadm control --reload-rules.
This is already taken care by the RPM installation.
Fixture setup
Select a network interface to use (it can be a USB or PCI interface); find out its MAC address with ip link show.
add the tag mac_addr with said address to the tags of the target object that represents the network to which said interface is to be connected; for example, for a network called nwc
ttbl.config.target_add(
    ttbl.tt.tt_power('nwc', vlan_pci()),
    tags = dict(
        mac_addr = "a0:ce:c8:00:18:73",
        ipv6_addr = 'fc00::13:1', ipv6_prefix_len = 112,
        ipv4_addr = '192.168.13.1', ipv4_prefix_len = 24,
    )
)
ttbl.config.targets['NAME'].tags['interfaces'].append('interconnect_c')
or for an existing network (such as the configuration’s default nwa):
# eth dongle mac 00:e0:4c:36:40:b8 is assigned to NWA
ttbl.config.targets['nwa'].tags_update(dict(mac_addr = '00:e0:4c:36:40:b8'))
Furthermore, default networks nwa, nwb and nwc are defined to have a power control rail (versus an individual power controller), so it is possible to add another power controller to, for example, power on or off a network switch:
ttbl.config.targets['nwa'].pc_impl.append( ttbl.pc.dlwps7("http://USER:PASSWORD@sp5/8"))
This creates a power controller to switch on or off plug #8 on a Digital Loggers Web Power Switch named sp5 and makes it part of the nwa power control rail. Thus, when powered on, it will bring the network up and also turn on the network switch.
add the tag vlan to also be a member of an ethernet VLAN network (requires also a mac_addr):
ttbl.config.interconnect_add(
    ttbl.tt.tt_power('nwc', vlan_pci()),
    tags = dict(
        mac_addr = "a0:ce:c8:00:18:73",
        vlan = 30,
        ipv6_addr = 'fc00::13:1', ipv6_prefix_len = 112,
        ipv4_addr = '192.168.13.1', ipv4_prefix_len = 24))
in this case, all packets on the interface with MAC address a0:ce:c8:00:18:73 will be tagged with VLAN ID 30.
lastly, for each target connected to that network, update its tags to indicate it:
ttbl.config.targets['TARGETNAME-NN'].tags_update(
    {
        'ipv4_addr': "192.168.10.30",
        'ipv4_prefix_len': 24,
        'ipv6_addr': "fc00::10:30",
        'ipv6_prefix_len': 112,
    },
    ic = 'nwc')
By convention, the server is .1, the QEMU Linux virtual machines are set from .2 to .10 and the QEMU Zephyr virtual machines from .30 to .45. Physical targets are set to start at 100.
Note the networks for targets and infrastructure have to be kept separated.
-
power_on_do
(target)¶ Flip the power on
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
class
conf_00_lib.
tt_qemu_zephyr
(id, bsps, tags={})¶ Implement a QEMU test target that can run Zephyr kernels and display the output over a serial port.
Supports power control, serial console and image flashing interfaces.
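A minimal registration sketch, assuming the (id, bsps, tags) constructor above and that this helper is visible in the server configuration file; the target name and BSP list are illustrative placeholders, not a canonical recipe:
ttbl.config.target_add(
    # a QEMU Zephyr target exposing a single x86 BSP
    tt_qemu_zephyr("qz-01-x86", [ 'x86' ]))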
-
conf_00_lib.
sam_xplained_add
(name=None, serial_number=None, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/usr/bin/openocd', openocd_scripts='/usr/share/openocd/scripts', debug=False, target_type='sam_e70_xplained')¶ Configure a SAM E70/V71 boards for the fixture described below
The SAM E70/V71 xplained is an ARM-based development board. Includes a builtin JTAG which allows flashing, debugging; it only requires one upstream connection to a YKUSH power-switching hub for power, serial console and JTAG.
Add to a server configuration file:
sam_xplained_add( name = "sam-e70-NN", serial_number = "SERIALNUMBER", serial_port = "/dev/tty-same70-NN", ykush_serial = "YKXXXXX", ykush_port_board = N, target_type = "sam_e70_xplained") # or sam_v71_xplained
restart the server and it yields:
$ tcf list local/sam-e70-NN local/sam-v71-NN
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the SAM board
- serial_port (str) – (optional) name of the serial port (defaults to
/dev/tty-TARGETNAME
) - ykush_serial (str) – USB serial number of the YKUSH hub where it is connected to for power control.
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
- target_type (str) – the target type “sam_e70_xplained” or “sam_v71_xplained”
Overview
Per this rationale, current leakage and full power down needs necessitate this setup, so that all power to all cables connected to the board (power and serial) can be cut.
Bill of materials
- a SAM E70 or V71 xplained board
- a USB A-Male to micro-B male cable (for board power, JTAG and console)
- one available port on an YKUSH power switching hub (serial YKNNNNN)
Connecting the test target fixture
Ensure the SAM E70 is properly setup:
Using Atmel’s SAM-BA In-system programmer, change the boot sequence and reset the board in case there is a bad image; this utility can be also used to recover the board in case it gets stuck.
Download from Atmel’s website (registration needed) and install.
Note
This is not open source software
Close the erase jumper (in the SAM E70 that’s J200 and in the SAM V71 it is J202; in both cases, it is located above the CPU when you rotate the board so you can read the CPU’s labeling in a normal orientation).
Connect the USB cable to the target’s target USB port (the one next to the Ethernet connector) and to a USB port that is known to be powered on.
Ensure power is on by verifying the orange LED lights up on the Ethernet RJ-45 connector.
Wait 10 seconds
Open the erase jumper J202 to stop erasing
Open SAM-BA 2.16
Note on Fedora 25 you need to run sam-ba_64 from the SAM-BA package.
Select which serial port is that of the SAM e70 connected to the system. Use lsusb.py -ciu to locate the tty/ttyACM device assigned to your board:
$ lsusb.py -ciu
...
2-1      03eb:6124 02  2.00  480MBit/s 100mA 2IFs (Atmel Corp. at91sam SAMBA bootloader)
  2-1:1.0   (IF) 02:02:00 1EP  (Communications:Abstract (modem):None) cdc_acm tty/ttyACM2
  2-1:1.1   (IF) 0a:00:00 2EPs (CDC Data:) cdc_acm
...
(in this example
/dev/tty/ttyACM2
).Select board at91same70-explained, click connect.
Choose the flash tab and, in the scripts drop-down menu, choose boot from Flash (GPNVM1) and then execute.
Exit SAM-BA
connect the SAM E70/V71’s Debug USB port with the USB A-male to B-micro to YKUSH downstream port N
connect the YKUSH to the server system and to power as described in
ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: sam-e70-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and
run dmesg for a quick find], see
ykush_targets_add()
. - Find the board’s serial number
- Configure udev to add a name for the serial device for the
board’s serial console so it can be easily found at
/dev/tty-TARGETNAME
. Follow these instructions using the board’s serial number.
-
conf_00_lib.
simics_zephyr_cmds
= '$disk_image = "%(simics_hd0)s"\n$cpu_class = "pentium-pro"\n$text_console = TRUE\nrun-command-file "%%simics%%/targets/x86-440bx/x86-440bx-pci-system.include"\ncreate-telnet-console-comp $system.serconsole %(simics_console_port)d\nconnect system.serconsole.serial cnt1 = system.motherboard.sio.com[0]\ninstantiate-components\nsystem.serconsole.con.capture-start "%(simics_console)s"\nc\n'¶ Commmands to configure Simics to run a simulation for Zephyr by default
Fields available
via string formatting%(FIELD)L
-
conf_00_lib.
simics_zephyr_add
(name, simics_cmds='$disk_image = "%(simics_hd0)s"\n$cpu_class = "pentium-pro"\n$text_console = TRUE\nrun-command-file "%%simics%%/targets/x86-440bx/x86-440bx-pci-system.include"\ncreate-telnet-console-comp $system.serconsole %(simics_console_port)d\nconnect system.serconsole.serial cnt1 = system.motherboard.sio.com[0]\ninstantiate-components\nsystem.serconsole.con.capture-start "%(simics_console)s"\nc\n')¶ Configure a virtual Zephyr target running inside Simics
Simics is a platform simulator available from Wind River Systems; it can be used to implement a virtual machine environment that will be treated as a target.
Add to your configuration file
/etc/ttbd-production/conf_10_targets.py
:simics_zephyr_add("szNN")
restart the server and it yields:
$ tcf list local/szNN
Parameters: name (str) – name of the target (naming best practices). Overview
A Simics invocation in a standalone workspace will be created by the server to run for each target when it is powered on. This driver currently supports only booting an ELF target and console output (no console input or debugging). For more details, see
ttbl.tt.simics
.Note the default Simics settings for Zephyr are defined in
simics_zephyr_cmds
and you can create target which use a different Simics configuration by specifying it as a string in parameter simics_cmd.Bill of materials
Simics installed in your server machine
ttbl.tt.simics
expects a global environment variable SIMICS_BASE_PACKAGE defined to point to where Simics (and its extension packages) have been installed; e.g.:SIMICS_BASE_PACKAGE=/opt/simics/5.0/simics-5.0.136
-
conf_00_lib.
tinytile_add
(name, serial_number, ykush_serial, ykush_port_board, ykush_port_serial=None, serial_port=None)¶ Configure a tinyTILE for the fixture described below.
The tinyTILE is a miniaturization of the Arduino/Genuino 101 (see https://www.zephyrproject.org/doc/boards/x86/tinytile/doc/board.html).
The fixture used by this configuration uses a YKUSH hub for power switching, no debug/JTAG interface and allows for an optional external serial port using an USB-to-TTY serial adapter.
Add to a server configuration file:
tinytile_add("ti-NN", "SERIALNUMBER", "YKNNNNN", PORTNUMBER, [ykush_port_serial = N2,] [serial_port = "/dev/tty-NAME"])
restart the server and it yields:
$ tcf list local/ti-NN
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the tinyTILE
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board is connected.
- ykush_port_serial (int) – (optional) number of the YKUSH downstream port where the board’s serial port is connected.
- serial_port (str) – (optional) name of the serial port
(defaults to
/dev/tty-NAME
)
Overview
The tinyTILE is powered via the USB connector. The tinyTILE does not export a serial port over the USB connector–applications loaded onto it might create a USB serial port, but this is not necessarily so all the time.
Thus, for ease of use this fixture connects an optional external USB-to-TTY dongle to the TX/RX/GND lines of the tinyTILE that allows a reliable serial console to be present. To allow for proper MCU board reset, this serial port has to be also power switched on the same YKUSH hub (to avoid ground derivations).
For the serial console output to be usable, the Zephyr app’s configuration has to be altered to change the console to said UART. The client side needs to be aware of that (via configuration, for example, to the Zephyr App Builder).
When the serial dongle is in use, the power rail needs to first power up the serial dongle and then the board.
Per this rationale, current leakage and full power down needs necessitate this setup, so that all power to all cables connected to the board (power and serial) can be cut.
This fixture uses
ttbl.tt.tt_dfu
to implement the target; refer to it for implementation details.Bill of materials
- two available ports on an YKUSH power switching hub (serial YKNNNNN); only one if the serial console will not be used.
- a tinyTILE board
- a USB A-Male to micro-B male cable (for board power)
- a USB-to-TTY serial port dongle
- three M/M jumper cables
Connecting the test target fixture
- (if not yet connected), connect the YKUSH to the server system
and to power as described in
ykush_targets_add()
- connect the Tiny Tile’s USB port to the YKUSH downstream port N1
- (if a serial console will be connected) connect the USB-to-TTY serial adapter to the YKUSH downstream port N2
- (if a serial console will be connected) connect the USB-to-TTY
serial adapter to the Tiny Tile with the M/M jumper cables:
- USB FTDI Black (ground) to Tiny Tile’s serial ground pin (fourth pin from the bottom)
- USB FTDI White (RX) to the Tiny Tile’s TX.
- USB FTDI Green (TX) to Tiny Tile’s RX.
- USB FTDI Red (power) is left open, it has 5V.
Configuring the system for the fixture
Choose a name for the target: ti-NN (where NN is a number)
(if needed) Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see
ykush_targets_add()
.Find the board’s serial number.
Note these boards, when freshly plugged in, will only stay in DFU mode for five seconds and then boot Zephyr (or whichever OS they have), so the USB device will disappear. You need to run lsusb (or whichever command you are using) quickly, or monitor the kernel output with dmesg -w.
Configure udev to add a name for the serial device that represents the USB-to-TTY dongle connected to the target so we can easily find it at
/dev/tty-TARGETNAME
. Different options for USB-to-TTY dongles with or without a USB serial number.
-
conf_00_lib.
capture_screenshot_vnc
= <ttbl.capture.generic_snapshot object>¶ A capturer to take screenshots from VNC
Note the fields are target’s tags and others specified in
ttbl.capture.generic_snapshot
andttbl.capture.generic_stream
.
-
conf_00_lib.
nw_default_targets_add
(letter, pairs=5)¶ Add the default targets to a configuration
This adds a configuration which consists of a network and @pairs pairs of QEMU Linux VMs (one without upstream NAT connection, one with).
The network index nw_idx will be used to assign IP addresses (192.168.IDX.x and fc00::IDX:x)
IP address assignment:
- .1 is the server (this machine)
- .2 - .10: virtual Linux machines
- .30 - .45: virtual Zephyr machines
- .100 - .255: real HW targets
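For illustration, a server configuration file might instantiate network b with three pairs of VMs (arguments follow the signature above; the values are just examples):
nw_default_targets_add("b", pairs = 3)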
-
class
conf_00_lib.
minnowboard_EFI_boot_grub_pc
(console_name=None)¶ A power control interface that directs EFI to boot grub
When something (with a serial console that can access EFI) is powering up, this looks at the output. If it takes us to the EFI shell, it manually runs fs0:\EFI\BOOT\bootx64, which shall launch the automatic grub process.
It relies on
../ttbd/setup-efi-grub2-elf.sh
making grub2 print the banner TCF Booting kernel-HEXID.elf
. Intended for the Minnowboard; place it in the power rail right after the element that powers up the board.
-
power_on_do
(target)¶ Flip the power on
-
power_off_do
(_target)¶ Flip the power off
-
reset_do
(_target)¶ Do a reset
This would ideally trigger a hard reset without a power cycle; but it can default to a power-cycle. The power has to be on for this to be usable.
-
power_get_do
(_target)¶ Return the power state
-
-
conf_00_lib.
minnowboard_add
(name, power_controller, usb_drive_serial, usbrly08b_serial, usbrly08b_bank, serial_port=None)¶ Configure a Minnowboard for use with Zephyr
The Minnowboard is an open hardware board that can be used to run Linux, Zephyr and other OSes. This configuration supports power control, a serial console and image flashing.
Add to a server configuration file (note the serial numbers and paths are examples that you need to adapt to your configuration):
ttbl.config.target_add(
    ttbl.tt.tt_power(
        "minnowboard-NN-disk",
        power_control = [ ttbl.usbrly08b.plugger("00023456", 0) ],),
    tags = { 'skip_cleanup': True } )
ttbl.config.targets['minnowboard-56-disk'].disable('')

minnowboard_add("minnowboard-NN",
                power_controller = ttbl.pc.dlwps7("http://admin:1234@sp06/6"),
                usb_drive_serial = "76508A8E",
                usbrly08b_serial = "00023456",
                usbrly08b_bank = 0)
restart the server and it yields:
$ tcf list local/minnowboard-NN
Parameters: - name (str) – name of the target
- power_controller (ttbl.tt_power_control_impl) –
an implementation of a power controller that can power the Minnowboard off or on, for example a DLWPS7:
ttbl.pc.dlwps7("http://admin:1234@sp06/6")
- usb_drive_serial (str) – USB Serial number for the USB boot drive that is multiplexed to the Minnowboard and the server host as per the Overview below.
- usbrly08b_serial (str) – USB Serial number for the USBRLY8b board that is going to be used to multiplex the USB boot drive from the minnowboard to the server host.
- usbrly08b_bank (int) – relay bank number (#0 will use relays 1, 2, 3 and 4, #1 will use 5, 6, 7 and 8).
- serial_port (str) – (optional) name of the serial port
(defaults to
/dev/tty-NAME
)
Overview
The Minnowboard provides a serial port which is used to control the BIOS (when needed) and to access the OS. Any AC power controller can be used to power on/off the Minnowboard’s power brick.
The target type implemented here can only boot ELF kernels and is implemented using the
grub2 loader
. In summary, a USB drive is used as a boot drive that is multiplexed using a USBRLY8b relay bank from the Minnowboard to the server:- when the Minnowboard is off, the USB drive is connected to the server, so it can be setup / partitioned / formatted / flashed
- when Minnowboard is on, the USB drive is connected to it.
Bill of materials
- A Minnowboard and its power brick
- An open socket on an AC power switch, like the
Digital Logger Web Power Switch 7
- A USB serial cable terminated with 6 way header (eg: https://www.amazon.com/Converter-Terminated-Galileo-BeagleBone-Minnowboard/dp/B06ZYPLFNB) preferably with a serial number (easier to configure)
- a USB drive (any size will do)
- four relays on a USBRLY08b USB relay bank (https://www.robot-electronics.co.uk/htm/usb_rly08btech.htm) [either 1, 2, 3 and 4 or 5, 6, 7 and 8]
- One USB Type A female to male cable, one USB Type A male cable
- Two USB ports into the server
Connecting the test target fixture
connect the Minnowboard's power brick to the socket in the AC power switch and to the board's DC input
connect the USB serial cable to the Minnowboard’s TTY connection and to the server
Cut the USB-A male-to-female cable and separate the four lines on each end; likewise, cut and separate the four lines on the other USB-A male cable; follow the detailed instructions in
ttbl.usbrly08b.plugger
where:- Ensure the USBRLY8B is properly connected and setup as per
ttbl.usbrly08b.rly08b
- DUT is the USB-A female where we’ll connect the USB drive, plug the USB drive to it. Label as minnowboard-NN boot
- Host A1/ON/NO is the USB-A male connector we’ll connect to the Minnowboard’s USB 2.0 port – label as minnowboard-NN ON and plug to the board
- Host A2/OFF/NC is the USB-A male connector we’ll connect to the server’s USB port – label as minnowboard-NN OFF and plug to the server.
Note tinning the cables for better contact will reduce the chance of the USB device misbehaving.
It is critical to get this part right; the cable connected to the NC terminals has to be what is connected to the server when the target is off.
It is recommended to test this thoroughly in a separate system first.
Ensure the Minnowboard MAX is flashed with 64 bit firmware, otherwise it will fail to boot.
To update it, connect it to a solid power (so TCF doesn’t power it off in the middle), download the images from https://firmware.intel.com/projects/minnowboard-max (0.97 as of writing this) and follow the instructions.
Configuring the system for the fixture
Choose a name for the target: minnowboard-NNx (see naming best practices).
Find the serial number of the USB drive, blank it to ensure it is properly initialized; example, in this case being /dev/sdb:
$ lsblk -nro NAME,TRAN,SERIAL | grep USB-DRIVE-SERIAL
sdb usb USB-DRIVE-SERIAL
$ dd if=/dev/zero of=/dev/sdb
Find the serial number of the USBRLY8b and determine the relay bank to use; bank 0 uses relays 1, 2, 3 and 4, bank 1 uses relays 5, 6, 7 and 8.
Configure udev to add a name for the serial device for the board’s serial console so it can be easily found at
/dev/tty-TARGETNAME
. Follow these instructions using the serial dongle’s serial number; e.g.:
SUBSYSTEM == "tty", ENV{ID_SERIAL_SHORT} == "AC0054PT", \
  SYMLINK += "tty-minnowboard-NN"
Connect the Minnowboard to a display [vs the serial port, to make it easier] and with a keyboard, plug the USB drive directly and boot into the BIOS.
Enter into the Boot Options Manager, ensure EFI USB Device is enabled; otherwise, add the option.
Depending on the BIOS revision, the save/commit mechanism might be tricky to get right, so double check it by rebooting and entering the BIOS again to verify that EFI booting from USB is enabled.
In the same Boot Options Manager, change the boot order to consider the EFI booting from USB option to be the first thing booted.
Note that the EFI Bios tends to reset the boot drive order if at some point it fails to detect it, so a boot coercer like
minnowboard_EFI_boot_grub_pc
is needed. This will work around the issue.
- FIXME:
- FIXME: need to have it re-plug the dongle if the server doesn’t see it
Troubleshooting:
UEFI keeps printing:
map: Cannot find required map name.
Make sure:
the USB relay plugger is properly connected and the drive is re-directed to the target when powered on, to the server when off
the drive is not DOS formatted, it needs a GPT partition table. Wipe it hard and re-deploy (eg: running tcf run) so it will be re-flashed from the ground up:
# dd if=/dev/zero of=/dev/DEVICENODE bs=$((1024 * 1024)) count=100
The Minnowboard is picky and some drives misbehave with it even if they work fine in other machines; try replacing the drive.
UEFI will do nothing when BOOTX64 is executed:
Shell> fs0:\EFI\BOOT\bootx64
Shell>
Double check the Minnowboard is flashed with a 64 bit firmware; see above in Connecting the test target fixture.
8.6. ttbd Configuration API¶
Configuration API for ttbd
-
ttbl.config.
defaults_enabled
= True¶ Parse defaults configuration blocks protected by:
if ttbl.config.defaults_enabled:
This is done so that a sensible configuration can be shipped by default that is easy to deactivate in a local configuration file.
This is important because the default configuration defines three networks (nwa, nwb and nwc); if those same defaults are enabled on multiple servers, clients will assume they are one network spread across servers when they are in fact different networks.
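To deactivate those defaults, a local configuration file parsed before the guarded blocks (the file name below is hypothetical) could set the flag to False:
# e.g. /etc/ttbd-production/conf_00_disable_defaults.py
import ttbl.config
ttbl.config.defaults_enabled = False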
-
ttbl.config.
processes
= 20¶ Number of processes to start
How many servers shall be started, each being able to run a request in parallel. Defaults to 20, but can be increased if HW is not being very cooperative.
(this is currently a hack, we plan to switch to a server that can spawn them more dynamically).
-
ttbl.config.
instance
= ''¶ Name of the current ttbd instance
Multiple separate instances of the daemon can be started, each named differently (or nothing).
-
ttbl.config.
instance_suffix
= ''¶ Filename suffix for the current ttbd instance
Per
instance
, this defines the string that is appended to different configuration files/paths that have to be instance specific but cannot be some sort of directory. Normally this is -INSTANCE (unless INSTANCE is empty).
-
ttbl.config.
target_add
(target, _id=None, tags=None, target_type=None)¶ Add a target to the list of managed targets
Parameters: - target (ttbl.test_target) – target to add
- tags (dict) – Dictionary of tags that apply to the target (all tags are strings)
- name (str) – name of the target, by default taken from the target object
- target_type (str) – string describing type of the target; by default it’s taken from the object’s type.
-
ttbl.config.
interconnect_add
(ic, _id=None, tags=None, ic_type=None)¶ Add a target interconnect
An interconnect is just another target that offers interconnection services to other targets.
Parameters: - ic (ttbl.interconnect_c) – interconnect to add
- _id (str) – name of the interconnect, by default taken from the object itself.
- _tags (dict) – Dictionary of tags that apply to the target (all tags are strings)
- ic_type (str) – string describing type of the interconnect; by default it’s taken from the object’s type.
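For illustration, registering a bare interconnect might look like this (a sketch; real configurations normally use richer interconnect classes and tags):
>>> ttbl.config.interconnect_add(
>>>     ttbl.interconnect_c("nwx"),
>>>     ic_type = "ethernet")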
-
ttbl.config.
add_authenticator
(a)¶ Add an authentication methodology, eg:
Parameters: a (ttbl.authenticator_c) – authentication engine
>>> ttbl.config.add_authenticator(ttbl.ldap_auth.ldap_user_authenticator("ldap://" ...))
-
ttbl.config.
target_max_idle
= 30¶ Maximum time a target is idle before it is powered off (seconds)
-
ttbl.config.
target_owned_max_idle
= 300¶ Maximum time an acquired target is idle before it is released (seconds)
-
ttbl.config.
cleanup_files_period
= 60¶ Time period (in seconds) after which the clean-up function is called
-
ttbl.config.
cleanup_files_maxage
= 86400¶ Age (in seconds) of a file after which it will be deleted
-
ttbl.config.
tcp_port_range
= (1025, 65530)¶ Which TCP port range we can use
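A local server configuration file might tune these globals, for example (values are illustrative only):
import ttbl.config
ttbl.config.target_max_idle = 600          # power off idle targets after 10 min
ttbl.config.target_owned_max_idle = 900    # release idle acquired targets after 15 min
ttbl.config.tcp_port_range = (20000, 30000)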
8.7. ttbd internals¶
Internal API for ttbd
Note class names defining general interfaces are expected to end in
_mixin, to be recognized by the auto-lister in
ttbl.config.target_add()
-
exception
ttbl.
test_target_e
¶ A base for all operations regarding test targets.
-
exception
ttbl.
test_target_busy_e
(target)¶
-
exception
ttbl.
test_target_not_acquired_e
(target)¶
-
exception
ttbl.
test_target_release_denied_e
(target)¶
-
exception
ttbl.
test_target_not_admin_e
(target)¶
-
class
ttbl.
test_target_logadapter_c
(logger, extra)¶ Prefix to test target logging the name of the target and if acquired, the current owner.
This is useful to correlate logs in the server and the client when diagnosing issues.
Initialize the adapter with a logger and a dict-like object which provides contextual information. This constructor signature allows easy stacking of LoggerAdapters, if so desired.
You can effectively pass keyword arguments as shown in the following example:
adapter = LoggerAdapter(someLogger, dict(p1=v1, p2="v2"))
-
process
(msg, kwargs)¶ Process the logging message and keyword arguments passed in to a logging call to insert contextual information. You can either manipulate the message itself, the keyword args or both. Return the message and kwargs modified (or not) to suit your needs.
Normally, you’ll only need to override this one method in a LoggerAdapter subclass for your specific needs.
-
-
ttbl.
who_split
(who)¶ Returns a tuple with target owner specification split in two parts, the userid and the ticket. The ticket will be None if the orders specification doesn’t contain it.
-
class
ttbl.
thing_plugger_mixin
¶ Define how to plug things (targets) into other targets
A thing is a target that can be, in any form, connected to another target. For example, a USB device to a host, where both the USB device and host are targets. This is so that we can make sure they are owned by someone before plugging, as it can alter state.
-
plug
(target, thing)¶ Plug thing into target
Caller must own both target and thing
Parameters: - target (ttbl.test_target) – target where to plug
- thing (ttbl.test_target) – thing to plug into target
-
unplug
(target, thing)¶ Unplug thing from target
Caller must own target (not thing necessarily)
Parameters: - target (ttbl.test_target) – target where to unplug from
- thing (ttbl.test_target) – thing to unplug
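A minimal sketch of a plugger implementation (hypothetical; real pluggers such as ttbl.usbrly08b.plugger actuate hardware in these methods):
>>> import ttbl
>>>
>>> class noop_plugger(ttbl.thing_plugger_mixin):    # hypothetical example
>>>     def plug(self, target, thing):
>>>         # a real implementation would switch cables/relays here
>>>         pass
>>>     def unplug(self, target, thing):
>>>         # and undo it here
>>>         pass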
-
-
class
ttbl.
tt_interface
¶ -
request_process
(target, who, method, call, args, user_path)¶ Process a request into this interface from a proxy / brokerage
When the ttbd daemon is exporting access to a target via any interface (e.g: REST over Flask or D-Bus or whatever), this implements a bridge to pipe those requests into this interface.
Parameters: - target (test_target) – target upon which we are operating
- who (str) – user who is making the request
- method (str) – ‘POST’, ‘GET’, ‘DELETE’ or ‘PUT’ (mapping to HTTP requests)
- call (str) – interface’s operation to perform (it’d map to the different methods the interface exposes)
- args (dict) – dictionary of key/value with the arguments to the call, some might be JSON encoded.
- user_path (str) – Path to where user files are located
Returns: dictionary of results, call specific e.g.:
>>> dict( >>> output = "something", >>> value = 43 >>> )
For an example, see
ttbl.buttons.interface
.
-
-
class
ttbl.
test_target
(_test_target__id, _tags=None, _type=None)¶ -
state_path
= '/var/run/ttbd'¶
-
files_path
= '__undefined__'¶ Path where files are stored
-
properties_user
= set(['pos_mode', <_sre.SRE_Pattern object>, 'pos_repartition', 'pos_reinitialize', 'tcpdump'])¶ Properties that normal users (non-admins) can set when owning a target and that will be reset when releasing a target (except if listed in
properties_keep_on_release
) Note this is a global variable that can be specialized to each class/target.
-
properties_keep_on_release
= set([<_sre.SRE_Pattern object>, 'linux_options_append'])¶ A test target base class
-
id
= None¶ Target name/identifier
-
things
= None¶ references to the targets that implement things that can be plugged to this target.
-
thing_to
= None¶ List of targets this target is a thing to
-
fsdb
= None¶ filesystem database of target state; the multiple daemon processes use this to store information that reflects the target’s state.
-
kws
= None¶ Keywords that can be used to substitute values in commands, messages. Target’s tags are translated to keywords here.
ttbl.config.target_add()
will update this with the final list of tags.
-
release_hooks
= None¶ Functions to call when the target is released (things like removing tunnels the user created, resetting debug state, etc); this is meant to leave the target’s state pristine so that it does not affect the next user that acquires it. Each interface will add as needed, so it gets executed upon
release()
, under the owned lock.
-
thing_methods
= None¶ Methods used to plug/unplug things to/from targets, keyed by method name and value being a tuple with two functions (the plug function and the unplug function).
Said functions take as arguments the thing name and the thing descriptor from the target’s tags.
-
interface_origin
= None¶ Keep places where interfaces were registered from
-
type
¶
-
get_id
()¶
-
add_to_interconnect
(ic_id, ic_tags=None)¶ Add a target to an interconnect
Parameters: - ic_id (str) –
name of the interconnect; might be present in this server or another one.
If named
IC_ID#INSTANCE
, this is understood as this target has multiple connections to the same interconnect (via multiple physical or virtual network interfaces).No instance name (no
#INSTANCE
) means the default, primary connection.Thus, a target that can instantiate multiple virtual machines, for example, might want to declare them here if we need to pre-determine and pre-assign those IP addresses.
- ic_tags (dict) – (optional) dictionary of tags describing the tags for this target on this interconnect.
-
tags_update
(d)¶ Update the tags assigned to a target
This will ensure the tags described in d are given to the target and all the values associated to them updated (such as interconnect descriptions, addresses, etc).
Parameters: d (dict) – dictionary of tags and the values to set or update
It can be used to add tags to a target after it is added to the configuration, such as with:
>>> arduino101_add("a101-03", ...)
>>> ttbl.config.targets["a101-03"].tags_update(dict(val = 34))
-
owner_get
()¶ Return who the current owner of this target is
Returns: object describing the owner
-
timestamp_get
()¶
-
timestamp
()¶ Update the timestamp on the target to record last activity time
-
acquire
(who)¶ Assign the test target to user who unless it is already taken by someone else.
Parameters: who (str) – User that is claiming the target Raises: test_target_busy_e
if already taken
-
enable
(who=None)¶ Enable the target (so it will be regularly used)
Parameters: who (str) – Deprecated
-
disable
(who=None)¶ Disable the target (so it will not be regularly used)
It still can be used, but it will be filtered out by the client regular listings.
Parameters: who (str) – Deprecated
-
property_set
(prop, value)¶ Set a target’s property
Parameters:
-
property_set_locked
(who, prop, value)¶ Set a target’s property (must be locked by the user)
Parameters:
-
property_get
(prop, default=None)¶ Get a target’s property
Parameters:
-
property_get_locked
(who, prop, default=None)¶ Get a target’s property
Parameters:
-
property_is_user
(name)¶ Return True if a property is considered a user property (no admin rights are needed to set it or read it).
Returns: bool
-
property_keep_value
(name)¶ Return True if a user property’s value needs to be kept.
-
thing_add
(name, plugger)¶ Define a thing that can be un/plugged to this target
Parameters: - name (str) – name of an existing target in this server that is considered to be a thing to this target
- plugger (ttbl.thing_plugger_mixin) –
object that has methods to do the physical action of plugging/unplugging the thing to the target.
For example, this can be an instance of
ttbl.usbrly08b.plugger
.
-
thing_plug
(who, thing_name)¶ Connect a thing to the target
Parameters: The user who is plugging must own this target and the thing.
-
thing_unplug
(who, thing_name)¶ Disconnect a thing from the target.
Parameters: The user who is unplugging must own this target, but doesn’t necessarily need to own the thing.
Note that when you release the target, all the things connected to it are released, even if you don’t own the things.
-
thing_list
(who)¶ List the things available for connection and their current connection state
-
ip_tunnel_list
(who)¶ List existing IP tunnels
Returns: list of tuples (protocol, target-ip-address, port, port-in-server)
-
ip_tunnel_add
(who, ip_addr, port, proto)¶ Setup a TCP/UDP/SCTP v4 or v6 tunnel to the target
A local port of the given protocol in the server is forwarded to the target’s port. Stop with
ip_tunnel_remove()
.If the tunnel already exists, it is not recreated, but the port it uses is returned.
Parameters: Returns int local_port: port in the server where to connect to in order to access the target.
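Hypothetical usage from server-side code that owns the target (who being the owner specification; address and port are examples):
>>> local_port = target.ip_tunnel_add(who, "192.168.1.4", 22, "tcp")
>>> # connect to the server's local_port to reach the target's port 22
>>> target.ip_tunnel_remove(who, "192.168.1.4", 22, "tcp")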
-
ip_tunnel_remove
(who, ip_addr, port, proto='tcp')¶ Teardown a TCP/UDP/SCTP v4 or v6 tunnel to the target previously created with
ip_tunnel_add()
.Parameters:
-
release
(who, force=False)¶ Release the ownership of this target.
If the target is not owned by anyone, it does nothing.
Parameters: Raises: test_target_not_acquired_e
if not taken
-
target_owned_and_locked
(**kwds)¶ Ensure the target is locked and owned for an operation that requires exclusivity
Parameters: who – User that is calling the operation Raises: test_target_not_acquired_e
if the target is not acquired by anyone,test_target_busy_e
if the target is owned by someone else.
-
target_is_owned_and_locked
(who)¶ Returns if a target is locked and owned for an operation that requires exclusivity
Parameters: who – User that is calling the operation Returns: True if @who owns the target or is admin, False otherwise or if the target is not owned
-
interface_add
(name, obj)¶ Adds object as an interface to the target accessible as
self.name
Parameters: - name (str) – interface name, must not already exist
and a valid Python identifier as we’ll be calling functions
as
target.name.function()
- obj (tt_interface) – interface implementation, an instance
of
tt_interface
which provides the details and methods to call plusttbl.tt_interface.request_process()
to handle calls from proxy/brokerage layers.
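A minimal sketch of registering a custom interface (hypothetical names; see ttbl.buttons.interface for a real implementation):
>>> class example_if(ttbl.tt_interface):             # hypothetical
>>>     def request_process(self, target, who, method, call, args, user_path):
>>>         # dispatch 'call' to whatever operations this interface exposes
>>>         return dict(result = "ok")
>>>
>>> target.interface_add("example", example_if())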
-
-
class
ttbl.
interconnect_impl_c
¶
-
class
ttbl.
interconnect_c
(name, ic_impl=None, _tags=None, _type=None)¶ Define an interconnect as a target that provides connectivity services to other targets.
-
ic_impl
= None¶ Interconnect implementation
-
-
class
ttbl.
tt_power_control_impl
¶ -
exception
retry_all_e
(wait=None)¶ Exception raised when a power control implementation operation wants the whole power rail reinitialized
-
power_cycle_raw
(target, wait=2)¶ Do a raw power cycle
This does no pre/post actions, just power cycles this implementation; used for recovery strategies.
This is called by the likes of
ttbl.pc.delay_til_usb_device
.Parameters: - target (test_target) – target on which to act
- wait (int) – time to wait between power off and on
-
power_on_do
(target)¶ Flip the power on
-
reset_do
(target)¶ Do a reset
This would ideally trigger a hard reset without a power cycle; but it can default to a power-cycle. The power has to be on for this to be usable.
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
exception
-
class
ttbl.
tt_power_control_mixin
(impl=None)¶ This is the power control interface
This allows a target to be fully powered off, on, power cycled or reset.
To run functions before power-off or after power-on, add functions that take the target object as self to the power_(on|off)_(post|pre)_fns lists.
Power control is implemented with
ttbl.tt_power_control_mixin
, which can be subclassed or given a subclass of an implementation object (ttbl.tt_power_control_impl
) or a list of them (to create a power rail).
Power rails allow creating complex power configurations where a target requires many things to be done in a specific sequence to power up (and vice versa to power down). For example, for the Arduino101 boards, the following happens:
power_control = [
    ttbl.pc_ykush.ykush("YK20954", 2),     # Flyswatter2
    # delay power-on until the flyswatter2 powers up as a USB device
    ttbl.pc.delay_til_usb_device(serial = "FS20000"),
    # delay on power off until the serial console device is gone,
    # to make sure it re-opens properly later. It also helps the
    # main board reset properly.
    ttbl.pc.delay_til_file_gone(off = "/dev/tty-arduino101-02"),
    ttbl.pc_ykush.ykush("YK20954", 0),     # serial port
    ttbl.cm_serial.pc(),                   # plug serial ports
    ttbl.pc_ykush.ykush("YK20954", 1),     # board
],
this is a six-component power rail, which:
- powers a port in a USB hub to power up the Flyswatter2
(firmware flasher) (with a
ttbl.pc_ykush.ykush
) - waits for the USB device representing said device to show up in
the system (with
ttbl.pc.delay_til_usb_device
). - (only used during power off) delay until said file is not
present in the system anymore
(
ttbl.pc.delay_til_file_gone
); this is used when we power off something (like a USB serial device) and we know that as a consequence, udev will remove a device node from the system. - powers up a serial port
- connects the serial ports of the target (once it has been powered up in the previous step)
- powers up the target itself
The power off sequence runs in the inverse, first powering off the target and then the rest of the components.
These power rails are often necessary for very small, low power devices, which can get residual power leaks from anywhere; thus anything connected to them has to be powered off (in a specific sequence) to ensure the board fully powers off.
The optional tag idle_poweroff can be given to the target to control how long the target has to be idle before it is powered off. If 0, it will never be automatically powered off upon idleness. Defaults to
ttbl.config.target_max_idle
.-
power_on
(who)¶
-
reset
(who)¶ Reset the target (or power it on if off)
-
power_off
(who)¶
-
power_cycle
(who, wait=None)¶ Power cycle the target, guaranteeing that at the end, it is powered on.
Parameters:
-
power_state
= []¶
-
power_rail_get
()¶ Return the state of each item of the power rail which powers this target.
Returns list(bool): list of power states for the power rail that powers this target.
-
power_rail_get_any
()¶ Return True if any power rail element is on, False otherwise.
-
power_get
()¶ Return True if all power rail elements are turned on and thus the target is on, False otherwise.
-
class
ttbl.
test_target_console_mixin
¶ bidirectional input/output channel
FIXME:
- has to allow us to control a serial port (escape sequences?)
- buffering? shall it read everything since power on?
- serial port not necessarily the endpoint, but ability to set baud rate and such must be considered
- more than one channel? console_id defaults to None, the first one defined by the target, which is always available
-
exception
test_target_console_e
¶
-
console_do_read
(console_id=None, offset=0)¶ Parameters: offset – Try to read output from given offset Returns: an iterator with the output that HAS TO BE DISPOSED OF with del once done. Raises: IndexError if console_id is not implemented, ValueError if all is not supported.
-
console_do_size
(console_id=None)¶
-
console_do_write
(data, console_id=None)¶ Parameters: data – byte string to write to the console
-
console_do_setup
(console_id=None, **kwargs)¶ Set console-specific parameters
-
console_do_list
()¶ Return list of available console_ids
-
console_read
(who, console_id=None, offset=0)¶ Parameters: offset (int) – Try to read output from given offset Returns: an iterator with the output that HAS TO BE DISPOSED OF with del once done. Raises: IndexError if console_id is not implemented, ValueError if all is not supported. Note the target does not have to be acquired to read off its console. FIXME: makes sense?
-
console_size
(_who, console_id=None)¶ Return how many bytes have been read from the console so far
Parameters: console_id (str) – (optional) name of the console Returns: number of bytes read
-
console_write
(who, data, console_id=None)¶ Parameters: data – byte string to write to the console
-
console_setup
(who, console_id=None, **kwargs)¶ Set console-specific parameters
-
console_list
()¶ Return list of available console_ids
-
exception
expect_e
¶
-
exception
expect_failed_e
¶
-
exception
expect_timeout_e
¶
-
expect
(expectations, timeout=20, console_id=None, offset=0, max_buffering=4096, poll_period=0.5, what='')¶ Wait for any of a list of things to come off the given console
Waits for a maximum timeout to receive from a given console any of the list of things provided as expectations, checking with a poll_period.
Parameters: - expectations – list of strings and/or compiled regular
expressions for which to wait. It can also be the constant
EXPECT_TIMEOUT
, in which case a timeout is a valid event to expect (instead of causing an exception).
- timeout (int) – (optional, default 20) maximum time in seconds to wait for any of the expectations.
- offset (int) – (optional, default 0) offset into the console from which to read.
- max_buffering (int) – (optional, default 4k) how much to look back into the read console data.
- poll_period (float) – (optional) how often to poll for new data
- what (str) – (optional) string describing what we are waiting for (for error messages/logs).
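Hypothetical usage, waiting for a prompt on the default console:
>>> import re
>>> target.expect([ "login: ", re.compile("[Bb]oot failed") ],
>>>               timeout = 30, what = "waiting for the login prompt")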
-
expect_sequence
(sequence, timeout=20, offset=0, console_id=None)¶ Execute a list of expect/send commands to a console
Each step in the list can first send something, then expect to receive something to pass (or to fail) and then move on to the next step until the whole sequence is completed.
Parameters: - sequence –
List of dictionaries with the parameters:
- receive: string/regex of what is expected
- fail: (optional) strings/regex of something that if received will cause a failure
- send: (optional) string to send after receiving pass
- wait: (optional) integer of time to wait before sending send
- delay: (optional) when sending send, delay delay seconds in between characters (useful for slow readers)
e.g.:
[
  dict(receive = re.compile("Expecting number [0-9]+"),
       fail = re.compile("Error reading"),
       send = "I am good",
       wait = 1.3, delay = 0.1),
  dict(receive = re.compile("Expecting number 1[0-9]+"),
       fail = re.compile("Error reading"))
]
- timeout (int) – (optional, default 20s) maximum time in seconds the operation has to conclude
- offset (int) – (optional, default 0) offset from which to read in the console.
- console_id (int) – (optional) console to use to read/write
Returns: Nothing if everything went well, otherwise raises exceptions
expect_timeout_e
orexpect_failed_e
.- sequence –
-
class
ttbl.
test_target_images_mixin
¶ -
exception
error
¶
-
exception
unsupported_image_e
¶
-
image_type_check
(image_type)¶
-
image_do_set
(image_type, image_name)¶ Take file image_name from target-broker storage for the current user and write it to the target as image-type.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
images_do_set
(images)¶ Called once image_do_set() has been called for every image.
This is for targets that might need to flash all images at the same time, or perform some post-flash steps.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
images_set
(who, images)¶ Set a series of images in the target so it can boot
Parameters: images (dict) – dictionary of image type names and image file names. The file names are names of files uploaded to the broker. Raises: Exception on failure
-
image_get
(image_type)¶
-
exception
-
class
ttbl.
tt_debug_impl
¶ Debug object implementation
-
debug_do_start
(tt)¶ Start the debugging support for the target
-
debug_do_stop
(tt)¶ Stop the debugging support for the target
-
debug_do_halt
(tt)¶
-
debug_do_reset
(tt)¶
-
debug_do_reset_halt
(tt)¶
-
debug_do_resume
(tt)¶
-
debug_do_info
(tt)¶ Returns a string with information on how to connect to the debugging target
-
debug_do_openocd
(tt, command)¶ Send a command to OpenOCD and return its output (if the target supports it).
-
-
class
ttbl.
tt_debug_mixin
(impl=None)¶ Generic debug interface to start and stop debugging on a target.
When debug is started before the target is powered up, then upon power up, the debugger stub shall wait for a debugger to connect before continuing execution.
When debug is started while the target is executing, the target shall not be stopped and the debugging stub shall permit a debugger to connect and interrupt the target upon connection.
Each target provides its own debug methodology; to find out how to connect, issue a debug-info command to find out where to connect to.
-
debug_start
(who)¶ Start debugging the target
If called before powering, the target will wait for the debugger to connect before starting the kernel.
-
debug_halt
(who)¶ Halt the target’s CPUs
-
debug_reset
(who)¶ Reset the target’s CPUs
-
debug_reset_halt
(who)¶ Reset and halt the target’s CPUs
-
debug_resume
(who)¶ Resume the target
This is called to instruct the target to resume execution, following any kind of breakpoint or stop that halted it.
-
debug_info
(who)¶ Return information about how to connect to the target to debug it
-
debug_stop
(who)¶ Stop debugging the target
This might not do anything on the target until power off, or it might disconnect the debugger currently connected.
-
debug_openocd
(who, command)¶ Run an OpenOCD command on the target’s controller (if the target supports it).
-
-
ttbl.
open_close
(*args, **kwds)¶
-
class
ttbl.
authenticator_c
¶ Base class that defines the interface for an authentication system
Upon calling the constructor, it defines a set of roles that will be returned by the
login()
if the tokens to be authenticated are valid-
static
login
(token, password, **kwargs)¶ Validate an authentication token exists and the password is valid.
If it is, extract whichever information from the authentication system is needed to determine if the user represented by the token is allowed to use the infrastructure and with which category (as determined by the role mapping)
Returns: None if user is not allowed to log in, otherwise a dictionary with user’s information: - roles: set of strings describing roles the user has
FIXME: left as a dictionary so we can add more information later
-
exception
error_e
¶
-
exception
unknown_user_e
¶
-
exception
invalid_credentials_e
¶
-
static
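A minimal sketch of an authenticator (hypothetical; real implementations such as ttbl.ldap_auth query an actual backend and never hardcode credentials):
>>> import ttbl
>>> import ttbl.config
>>>
>>> class letmein_auth_c(ttbl.authenticator_c):       # hypothetical example
>>>     @staticmethod
>>>     def login(token, password, **kwargs):
>>>         # accept a single hardcoded user; for illustration only
>>>         if token == "guest" and password == "guest":
>>>             return dict(roles = set([ 'user' ]))
>>>         return None
>>>
>>> ttbl.config.add_authenticator(letmein_auth_c())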
-
ttbl.
daemon_pid_add
(pid)¶
-
ttbl.
daemon_pid_check
(pid)¶
-
ttbl.
daemon_pid_rm
(pid)¶
-
ttbl.
usb_find_dev_by_serial
(cache, log, device_name, serial_number)¶
-
ttbl.
usb_find_sibling_by_serial
(serial, port, log=None)¶ Given a USB device A (with a serial number), find if there is a USB device B that is connected to the same hub as A on a given port and return it’s bus number and address
-
ttbl.
usb_find_by_bus_address
(bus, address)¶ Return a USB device descriptor given its bus and address
Parameters:
-
class
ttbl.fsdb.
fsdb
(location)¶ This is a very simple file-system based ‘DB’ with atomic access
- Atomic access is implemented by storing values in the target of symlinks
- the data stored is strings
- the amount of data stored is thus limited (to 1k in OSX, 4k in Linux/ext3, maybe others depending on the FS).
Why? Because creating a symlink needs only one system call and is atomic; the same goes for reading it. Thus, for small values, it is very efficient.
Initialize the database to be saved in the given location directory
Parameters: location (str) – Directory where the database will be kept -
exception
exception
¶
-
uuid_ns
= UUID('28bf148d-2b92-460c-b5af-4a087eeeceaf')¶
-
keys
(pattern=None)¶ List the fields/keys available in the database
Parameters: pattern (str) – (optional) pattern against the key names must match, in the style of fnmatch
. By default, all keys are listed.
-
set
(field, value)¶ Set a field in the database
Parameters:
-
get
(field, default=None)¶
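Hypothetical usage (the location directory must already exist; names and values are examples):
>>> import ttbl.fsdb
>>>
>>> db = ttbl.fsdb.fsdb("/var/run/ttbd/some-target")
>>> db.set("owner", "someuser:ticket1234")
>>> db.get("owner")          # -> "someuser:ticket1234"
>>> db.keys("own*")          # keys matching an fnmatch pattern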
8.7.1. Target types drivers¶
-
class
ttbl.tt.
tt_serial
(id, power_control, serial_ports, _tags=None, target_type=None)¶ A generic test target, power switched with a pluggable power control implementation and with one or more serial ports.
Example configuration:
>>> ttbl.config.target_add(
>>>     tt_serial(
>>>         "minnow-01",
>>>         power_control = ttbl.pc.dlwps7("http://URL"),
>>>         serial_ports = [
>>>             { "port": "/dev/tty-minnow-01", "baudrate": 115200 }
>>>         ]),
>>>     tags = {
>>>         'build_only': True,
>>>         'bsp_models': { 'x86': None },
>>>         'bsps': {
>>>             'x86': dict(board = 'minnowboard',
>>>                         console = "")
>>>         }
>>>     },
>>>     target_type = "minnow_max")
With a udev configuration that generated the
/dev/tty-minnow-01
name such as/etc/udev/rules.d/SOMETHING.rules
:SUBSYSTEM == "tty", ENV{ID_SERIAL_SHORT} == "SERIALNUMBER", GROUP = "SOMEGROUP", MODE = "0660", SYMLINK += "tty-minnow-01"
Parameters: - power_control – an instance of an implementation of the power_control_mixin used to implement power control for the target. Use ttbl.pc.manual() for manual power control that requires user interaction.
- serial_ports – list of serial port dictionaries, specified
as for
serial.serial_for_url()
with a couple of extras as specified inttbl.cm_serial
.
-
class
ttbl.tt.
tt_power
(id, power_control, power=None)¶ A generic test target for just power control
>>> ttbl.config.target_add(
>>>     ttbl.tt.tt_power(name, ttbl.pc.dlwps7(URL), power = None),
>>>     tags = dict(idle_poweroff = 0))
Parameters: power (bool) – if specified, switch the power of the target upon initialization; True powers it on, False powers it off, None does nothing.
-
class
ttbl.tt.
tt_power_lc
(id, power_control, power=None, consoles=None)¶ A generic test target for just power control and fake loopback consoles
>>> ttbl.config.target_add(
>>>     ttbl.tt.tt_power(name, ttbl.pc.dlwps7(URL), power = None))
Parameters: - power (bool) – if specified, switch the power of the target upon initialization; True powers it on, False powers it off, None does nothing.
- consoles – see
ttbl.cm_loopback.cm_loopback
.
-
class
ttbl.tt.
tt_arduino2
(_id, serial_port, power_control=None, bossac_cmd=None)¶ Test target for a target flashable with the bossac tool (mostly Arduino Due)
Requirements
Needs a connection to the USB programming port
Uses the bossac utility built on the arduino branch from https://github.com/shumatech/BOSSA/tree/arduino; requires it to be installed in the path
bossac_cmd
(defaults to system path). Supports kernel{,-arm}
images:
$ git clone https://github.com/shumatech/BOSSA.git bossac.git
$ cd bossac.git
$ make -k
$ sudo install -o root -g root bin/bossac /usr/local/bin
TTY devices need to be properly configured permission wise for bossac and serial console to work; for such, choose a Unix group which can get access to said devices and add udev rules such as:
# Arduino2 boards: allow reading USB descriptors
SUBSYSTEM=="usb", ATTR{idVendor}=="2a03", ATTR{idProduct}=="003d", GROUP="GROUPNAME", MODE = "660"
# Arduino2 boards: allow reading serial port
SUBSYSTEM == "tty", ENV{ID_SERIAL_SHORT} == "SERIALNUMBER", GROUP = "GROUPNAME", MODE = "0660", SYMLINK += "tty-TARGETNAME"
The theory of operation is quite simple. According to https://www.arduino.cc/en/Guide/ArduinoDue#toc4, the Due will erase the flash if you open the programming port at 1200bps and then start a reset process and launch the flash when you open the port at 115200. This is not so clear in the URL above, but this is what experimentation found.
So for flashing, we’ll take over the console, set the serial port to 1200bps, wait a wee bit and then call bossac.
We need power control to fully reset the Arduino Due when it gets in a tight spot (and to save power when not using it). There is no reset, we just power cycle – found no way to do a reset in SW without erasing the flash.
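The flashing sequence described above could be reproduced by hand roughly like this (a sketch, not the driver's actual code; bossac option spelling and port naming vary between versions):
>>> import time, subprocess
>>> import serial
>>>
>>> def flash_due(serial_port, image_path):
>>>     # opening the programming port at 1200bps triggers the erase/reset
>>>     port = serial.Serial(serial_port, 1200)
>>>     time.sleep(0.5)
>>>     port.close()
>>>     # then bossac writes, verifies and boots the new image
>>>     subprocess.check_call([ "bossac", "--port=" + serial_port,
>>>                             "-e", "-w", "-v", "-b", image_path, "-R" ])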
Parameters: - _id (str) – name identifying the target
- serial_port (str) – File name of the device node representing the serial port this device is connected to.
- power_control (ttbl.tt_power_control_impl) – power controller (if any)
- bossac_cmd – (optional) path and file where to find the bossac utility.
-
bossac_cmd
= 'bossac'¶ Command to call to execute the BOSSA command line flasher
-
image_do_set
(image_type, image_name)¶ Just validates the image types are ok. The flashing happens in images_do_set().
Parameters: Raises: Any exception on failure
-
images_do_set
(images)¶ Called once image_do_set() has been called for every image.
This is for targets that might need to flash all images at the same time, or perform some post-flash steps.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
class
ttbl.tt.
tt_esp32
(_id, serial_number, power_control, serial_port)¶ Test target for ESP32 Tensilica-based MCUs that use the ESP-IDF framework
Parameters: - _id (str) – name identifying the target
- serial_number (str) – Unique USB serial number of the device (can be updated with http://cp210x-program.sourceforge.net/)
- power_control – Power control implementation or rail
(
ttbl.tt_power_control_impl
or list of such) - serial_port (str) – Device name of the serial port where the console will be found. This can be set with udev to be a constant name.
The base code will convert the ELF image to the required bin image using the
esptool.py
script. Then it will flash it via the serial port.Requirements
The ESP-IDF framework, of which
esptool.py
is used to flash the target; to install:
$ cd /opt
$ git clone --recursive https://github.com/espressif/esp-idf.git
(note the
--recursive
!! it is needed so all the submodules are picked up)configure path to it globally by setting
esptool_path
in a /etc/ttbd-production/conf_*.py file:
import ttbl.tt
ttbl.tt.tt_esp32.esptool_path = "/opt/esp-idf/components/esptool_py/esptool/esptool.py"
Note you will also most likely need this in the client to compile code for the board.
Permissions to use USB devices in /dev/bus/usb are needed; ttbd usually runs with group root, which shall be enough.
Needs power control for proper operation; FIXME: pending to make it operate without power control, using
esptool.py
.
-
esptool_path
= '__unconfigured__tt_esp32.esptool_path__'¶
-
images_do_set
(images)¶ Called once image_do_set() has been called for every image.
This is for targets that might need to flash all images at the same time, or perform some post-flash steps.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
class
ttbl.tt.
tt_flasher
(_id, serial_ports, flasher, power_control)¶ Test target flashable, power switchable, with debugging
Any target which supports the
ttbl.flasher.flasher_c
interface can be used, mostly OpenOCD targets.How we use this, is for example:
>>> flasher_openocd = ttbl.flasher.openocd_c("frdm_k64f", FRDM_SERIAL,
>>>                                          openocd10_path, openocd10_scripts)
>>> ttbl.config.target_add(
>>>     ttbl.tt.tt_flasher(
>>>         NAME,
>>>         serial_ports = [
>>>             "pc",
>>>             dict(port = "/dev/tty-NAME", baudrate = 115200)
>>>         ],
>>>         flasher = flasher_obj,
>>>         power_control = [
>>>             ttbl.pc_ykush.ykush(YKUSH_SERIAL, YKUSH_PORT),
>>>             # delay until device comes up
>>>             ttbl.pc.delay_til_usb_device(FRDM_SERIAL),
>>>             ttbl.cm_serial.pc(),    # Connect serial ports
>>>             flasher_openocd,        # Start / stop OpenOCD
>>>         ]
>>>     ),
>>>     tags = {
>>>         'bsp_models' : { 'arm': None },
>>>         'bsps' : {
>>>             "arm": dict(board = "frdm_k64f", kernelname = 'zephyr.bin',
>>>                         kernel = [ "micro", "nano" ],
>>>                         console = "", quark_se_stub = "no"),
>>>         },
>>>         'slow_flash_factor': 5,   # Flash verification slow
>>>         'flash_verify': 'False',  # Or disable it ...
>>>     },
>>>     target_type = "frdm_k64f")
Parameters: - _id (str) – target name
- serial_ports – list of serial port dictionaries,
specified as for
serial.serial_for_url()
with a couple of extras as specified inttbl.cm_serial
. - flasher (ttbl.flasher.flasher_c) – flashing object that provides access to deploy images and debug control
- power_control – an instance of an implementation of the power_control_mixin used to implement power control for the target. Use ttbl.pc.manual() for manual power control that requires user interaction.
-
exception
error
¶
-
debug_do_start
(tt_ignored)¶ Start the debugging support for the target
-
debug_do_halt
(_)¶
-
debug_do_reset
(_)¶
-
debug_do_reset_halt
(_)¶
-
debug_do_resume
(_)¶
-
debug_do_stop
(_)¶ Stop the debugging support for the target
-
debug_do_info
(_)¶ Returns a string with information on how to connect to the debugging target
-
debug_do_openocd
(_, command)¶ Send a command to OpenOCD and return its output (if the target supports it).
-
target_reset_halt
(for_what='')¶
-
target_reset
(for_what='')¶
-
power_on_do_post
()¶
-
power_off_do_pre
()¶
-
reset_do
(_)¶
-
image_do_set
(image_type, image_name)¶ Take file image_name from target-broker storage for the current user and write it to the target as image-type.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
images_do_set
(images)¶ Called once image_do_set() has been called for every image.
This is for targets that might need to flash all images at the same time, or perform some post-flash steps.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
class
ttbl.tt.
tt_dfu
(_id, serial_number, power_control, power_control_board, serial_ports=None)¶ Test target for devices flashable with DFU Utils
Requirements
- Needs a connection to the USB port that exposes a DFU interface upon boot
- Uses the dfu-utils utility, available for most (if not all) Linux distributions
- Permissions to use USB devices in /dev/bus/usb are needed; ttbd usually runs with group root, which shall be enough.
- Needs power control for proper operation
Parameters: - _id (str) – name identifying the target
- power_control (ttbl.tt_power_control_impl or list of such) – Power control implementation or rail
- power_control_board – power controller just for the board; this is the component in the power control rail that controls the board only (versus other parts such as serial ports or pseudo-power-controllers that wait for the USB device to pop up).
Note the tags to the target must include, on each supported BSP, a tag named dfu_interface_name listing the name of the altsetting of the DFU interface to which the image for said BSP needs to be flashed.
This can be found, when the device exposes the DFU interfaces, with the lsusb -v command; for example, for a tinyTILE (output summarized for clarity):
$ lsusb -v
...
Bus 002 Device 110: ID 8087:0aba Intel Corp.
Device Descriptor:
  bLength            18
  bDescriptorType     1
  ...
  Interface Descriptor:
    bInterfaceClass      254 Application Specific Interface
    bInterfaceSubClass     1 Device Firmware Update...
    iInterface             4 x86_rom
  Interface Descriptor:
    bInterfaceClass      254 Application Specific Interface
    bInterfaceSubClass     1 Device Firmware Update...
    iInterface             5 x86_boot
  Interface Descriptor:
    bInterfaceClass      254 Application Specific Interface
    bInterfaceSubClass     1 Device Firmware Update
    iInterface             6 x86_app
  Interface Descriptor:
    bInterfaceClass      254 Application Specific Interface
    bInterfaceSubClass     1 Device Firmware Update
    iInterface             7 config
  Interface Descriptor:
    bInterfaceClass      254 Application Specific Interface
    bInterfaceSubClass     1 Device Firmware Update
    iInterface             8 panic
  Interface Descriptor:
    bInterfaceClass      254 Application Specific Interface
    bInterfaceSubClass     1 Device Firmware Update
    iInterface             9 events
  Interface Descriptor:
    bInterfaceClass      254 Application Specific Interface
    bInterfaceSubClass     1 Device Firmware Update
    iInterface            10 logs
  Interface Descriptor:
    bInterfaceClass      254 Application Specific Interface
    bInterfaceSubClass     1 Device Firmware Update
    iInterface            11 sensor_core
  Interface Descriptor:
    bInterfaceClass      254 Application Specific Interface
    bInterfaceSubClass     1 Device Firmware Update
    iInterface            12 ble_core
In this case, the three cores available are x86 (x86_app), arc (sensor_core) and ARM (ble_core).
Example
A Tiny Tile can be connected, without exposing a serial console:
>>> pc_board = ttbl.pc_ykush.ykush("YK22909", 1)
>>>
>>> ttbl.config.target_add(
>>>     tt_dfu("ti-01",
>>>            serial_number = "5614010001031629",
>>>            power_control = [
>>>                pc_board,
>>>                ttbl.pc.delay_til_usb_device("5614010001031629"),
>>>            ],
>>>            power_control_board = pc_board),
>>>     tags = {
>>>         'bsp_models': { 'x86+arc': ['x86', 'arc'], 'x86': None, 'arc': None},
>>>         'bsps' : {
>>>             "x86": dict(zephyr_board = "tinytile",
>>>                         zephyr_kernelname = 'zephyr.bin',
>>>                         dfu_interface_name = "x86_app",
>>>                         console = ""),
>>>             "arm": dict(zephyr_board = "arduino_101_ble",
>>>                         zephyr_kernelname = 'zephyr.bin',
>>>                         dfu_interface_name = "ble_core",
>>>                         console = ""),
>>>             "arc": dict(zephyr_board = "arduino_101_sss",
>>>                         zephyr_kernelname = 'zephyr.bin',
>>>                         dfu_interface_name = 'sensor_core',
>>>                         console = "")
>>>         },
>>>     },
>>>     target_type = "tile"
>>> )
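For reference, flashing an image to one of those altsettings by hand with dfu-util would look roughly like this (illustrative; not necessarily the exact command line the driver builds):
$ dfu-util -S 5614010001031629 -a x86_app -D zephyr.bin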
-
images_do_set
(images)¶ Just validates the image types are ok. The flashing happens in images_do_set().
Parameters: Raises: Any exception on failure
-
image_do_set
(t, n)¶ Take file image_name from target-broker storage for the current user and write it to the target as image-type.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
class
ttbl.tt.
tt_max10
(_id, device_id, power_control, serial_port=None)¶ Test target for an Altera MAX10
This allows flashing images to an Altera MAX10, using the Quartus tools, freely downloadable from http://dl.altera.com.
Exports the following interfaces:
- power control (using any AC power switch, such as the
Digital Web Power Switch 7
) - serial console
- image (in hex format) flashing (using the Quartus Prime tools package)
Multiple instances at the same time are supported; however, due to the JTAG interface not exporting a serial number, addressing has to be done by USB path, which is risky (as it will change when the cable is plugged to another port or might be enumerated in a different number).
Note that:
- when flashing LED1 blinks green/blue
- the blue power switch must be pressed, to ensure the board is ON when we switch the AC power to the power brick on
- SW2 DIP bank on the back of the board has to be all OFF (down) except for 3, that has to be ON (this comes from the Zephyr Altera MAX10 configuration)
- J7 (at the front of the board, next to the coaxial connectors) has to be open
Pending:
- CPU design hardcoded to use Zephyr’s – it shall be possible to flash it
-
quartus_path
= '__unconfigured__tt_max10.quartus_path__'¶ Path where the Quartus Programmer binaries have been installed
Download Quartus Prime Programmer and Tools from http://dl.altera.com/17.1/?edition=lite&platform=linux&download_manager=direct
Install to e.g /opt/intelFPGA/17.1/qprogrammer/bin.
Configure in /etc/ttbd-production/conf_00_max10.py:
import ttbl.tt
ttbl.tt.tt_max10.quartus_path = "/opt/intelFPGA/17.1/qprogrammer/bin"
-
input_sof
= '__unconfigured__tt_max10.input_sof__'¶ Path to where the NIOS Zephyr CPU image has been installed
Download the CPU image to /var/lib/ttbd:
$ wget -O /var/lib/ttbd/ghrd_10m50da.sof \ https://github.com/zephyrproject-rtos/zephyr/raw/master/arch/nios2/soc/nios2f-zephyr/cpu/ghrd_10m50da.sof
- Configure in /etc/ttbd-production/conf_00_max10.py:
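A sketch of that setting, following the same pattern as quartus_path above:
import ttbl.tt
ttbl.tt.tt_max10.input_sof = "/var/lib/ttbd/ghrd_10m50da.sof"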
-
quartus_cpf_template
= '<?xml version="1.0" encoding="US-ASCII" standalone="yes"?>\n<cof>\n\t<output_filename>${OUTPUT_FILENAME}</output_filename>\n\t<n_pages>1</n_pages>\n\t<width>1</width>\n\t<mode>14</mode>\n\t<sof_data>\n\t\t<user_name>Page_0</user_name>\n\t\t<page_flags>1</page_flags>\n\t\t<bit0>\n\t\t\t<sof_filename>${SOF_FILENAME}<compress_bitstream>1</compress_bitstream></sof_filename>\n\t\t</bit0>\n\t</sof_data>\n\t<version>10</version>\n\t<create_cvp_file>0</create_cvp_file>\n\t<create_hps_iocsr>0</create_hps_iocsr>\n\t<auto_create_rpd>0</auto_create_rpd>\n\t<rpd_little_endian>1</rpd_little_endian>\n\t<options>\n\t\t<map_file>1</map_file>\n\t</options>\n\t<MAX10_device_options>\n\t\t<por>0</por>\n\t\t<io_pullup>1</io_pullup>\n\t\t<config_from_cfm0_only>0</config_from_cfm0_only>\n\t\t<isp_source>0</isp_source>\n\t\t<verify_protect>0</verify_protect>\n\t\t<epof>0</epof>\n\t\t<ufm_source>2</ufm_source>\n\t\t<ufm_filepath>${KERNEL_FILENAME}</ufm_filepath>\n\t</MAX10_device_options>\n\t<advanced_options>\n\t\t<ignore_epcs_id_check>2</ignore_epcs_id_check>\n\t\t<ignore_condone_check>2</ignore_condone_check>\n\t\t<plc_adjustment>0</plc_adjustment>\n\t\t<post_chain_bitstream_pad_bytes>-1</post_chain_bitstream_pad_bytes>\n\t\t<post_device_bitstream_pad_bytes>-1</post_device_bitstream_pad_bytes>\n\t\t<bitslice_pre_padding>1</bitslice_pre_padding>\n\t</advanced_options>\n</cof>\n'¶
-
quartus_pgm_template
= '/* Quartus Prime Version 16.0.0 Build 211 04/27/2016 SJ Lite Edition */\nJedecChain;\n\tFileRevision(JESD32A);\n\tDefaultMfr(6E);\n\n\tP ActionCode(Cfg)\n\t\tDevice PartName(10M50DAF484ES) Path("${POF_DIR}/") File("${POF_FILE}") MfrSpec(OpMask(1));\n\nChainEnd;\n\nAlteraBegin;\n\tChainType(JTAG);\nAlteraEnd;'¶
-
images_do_set
(images)¶ Called once image_do_set() has been called for every image.
This is for targets that might need to flash all images at the same time, or perform some post-flash steps.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
image_do_set
(image_type, image_name)¶ Take file image_name from target-broker storage for the current user and write it to the target as image-type.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
- power control (using any AC power switch, such as the
-
class
ttbl.tt.
grub2elf
(_id, power_controller, usb_drive_serial, usbrly08b_serial, usbrly08b_bank, serial_port, boot_coercer=None)¶ Boot anything that can take an ELF image with grub2
Overview
A platform that can EFI boot off a multiplexed boot USB drive; this drive:
- when connected to the target, acts as boot drive which boots into grub2 which multiboots into whatever ELF binary we gave it
- when connected to the server, we partition, format, install grub2 and the ELF kernel to be booted.
An eight-port USBRLY8 relay bank acting as a USB switcher, each relay switching one of the four USB lines from target to server, using
ttbl.usbrly08b.plugger
:- the USB-A female cable is connected to the C relay terminals
- the USB-A male cable for the server is connected to the NC relay terminals
- the USB-A male cable for the client is connected to the NO relay terminal
- a target that EFI/boots and can boot off a USB drive
Limitations:
- kinda hardcoded x86-64, shall be easy to fix
Methodology
The power rail for the target ensures that when the target is powered on, the USB boot drive is connected to the target by the USB multiplexor. When the target is off, the USB boot drive is connected to the server.
The imaging process in
image_do_set()
will make sure the USB drive is connected to the server (by powering off the target) and then use the helper script /usr/share/tcf/setup-efi-grub2-elf.sh
to flash the ELF kernel to the drive (it will also create the grub2 boot structure); for this we need the drive’s USB serial number and the ELF file to boot.
Upon boot, the boot drive will be detected and booted by default, as the grub configuration is set to just boot that ELF kernel.
For cases where BIOS interaction with the console might be necessary, a boot coercer can be implemented in the form of a power control implementation that in its power_on_do() method talks to the serial port to do whatever is needed. See for example
conf_00_lib.minnowboard_EFI_boot_grub_pc
which does so for Minnowboards.Setup
the helper script
/usr/share/tcf/setup-efi-grub2-elf.sh
is used to partition, configure and set up the USB drive; it is run with sudo (via the sudo configuration script
)The daemon will require specific capabilities for being able to run sudo (CAP_SETGID, CAP_SETUID, CAP_SYS_ADMIN, CAP_FOWNER, CAP_DAC_OVERRIDE) setup in
/etc/systemd/system/ttbd@.service
.Ensure the following packages are available in the system:
- parted
- dosfstools
- grub2-efi-x64-cdboot and grub2-efi-x64-modules
- util-linux
Identify the serial number for the USB drive; plug it to a machine and issue:
$ lsblk -o "NAME,SERIAL,VENDOR,MODEL"
NAME  SERIAL    VENDOR    MODEL
sdb   AOJROZB8  JetFlash  Transcend 8GB
sdj   76508A8E  JetFlash  Transcend 8GB
...
(for this example, ours is 76508A8E, /dev/sdj)
blank the USB drive (NOTE!!! This will destroy the drive’s contents):
$ dd if=/dev/zero of=/dev/sdj
Create a power controller
Setup the target’s BIOS to boot by default off the USB drive
See
conf_00_lib.minnowboard_add()
for an example instantiation.-
image_types_valid
= ('kernel', 'kernel-x86')¶
-
image_do_set
(image_type, image_name)¶ Take file image_name from target-broker storage for the current user and write it to the target as image-type.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
images_do_set
(images)¶ Called once image_do_set() has been called for every image.
This is for targets that might need to flash everything at the same time, or that need some post-flash steps.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
class
ttbl.tt.
simics
(_id, simics_cmds, _tags=None, image_size_mb=100)¶ Driver for a target based on Simics simulation of a platform
Currently this driver is quite basic and supports only the image and console management interfaces:
- images are only supported as an ELF file that is booted by grub2 when simics boots from a hard disk image generated on the fly.
- the only supported console is a serial output (no input)
System setup
In a configuration file (e.g. /etc/environment), set the base package for Simics:
SIMICS_BASE_PACKAGE=/opt/simics/5.0/simics-5.0.136
Note that all the packages and extensions installed there must have been registered with the global Simics configuration, as Simics will execute under the user the daemon runs as (usually ttbd).
Note that the installation of Simics and any extra packages needed can be done automagically with:
$ destdir=/opt/simics/5.0
$ mkdir -p $destdir
# --batch: no questions asked, just proceed
# -a: auto select packages and register them
$ ./install-simics.pl --batch -a --prefix $destdir \
    package-1000-5.0.136-linux64.tar.gz.aes KEY-1000 \
    package-1001-5.0.54-linux64.tar.gz.aes KEY-1001 \
    package-1010-5.0.59-linux64.tar.gz.aes KEY-1010 \
    package-1012-5.0.24-linux64.tar.gz.aes KEY-1012 \
    package-2018-5.0.31-linux64.tar.gz.aes KEY-2018 \
    package-2075-5.0.50-linux64.tar.gz.aes KEY-2075
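A minimal instantiation sketch, not from the original documentation: the Simics command script is entirely installation-specific and the script below is a made-up placeholder, as is the target type:
>>> import ttbl.config, ttbl.tt
>>>
>>> # hypothetical Simics command script; the commands and any
>>> # %(variable)s substitutions available (see simics_vars) depend on
>>> # the local Simics installation and the platform being simulated
>>> simics_script = "run-command-file targets/my-platform/my-platform.simics\n"
>>>
>>> ttbl.config.target_add(
>>>     ttbl.tt.simics("simics-01", simics_script),
>>>     target_type = "simics-x86")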
-
exception
error_e
¶
-
exception
simics_start_e
¶
-
base_package
= None¶ location of the base Simics installation in the file system; by default this is taken from the SIMICS_BASE_PACKAGE environment variable, if it exists; it can also be set in a configuration file as:
>>> ttbl.tt.simics.base_package = "/some/path/simics-5.0.136"
-
simics_vars
= None¶ Variables that can be expanded in the Simics configuration script passed as an argument
-
image_types_valid
= ('kernel', 'kernel-x86')¶
-
image_do_set
(image_type, image_name)¶ Take file image_name from target-broker storage for the current user and write it to the target as image-type.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
images_do_set
(images)¶ Called once image_do_set() has been called for every image.
This is for targets that might need to flash everything at the same time, or that need some post-flash steps.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
power_on_do
(target)¶ Flip the power on
-
power_off_do
(_target)¶ Flip the power off
-
power_get_do
(_target)¶ Return the power state
-
console_do_list
()¶ Return list of available console_ids
-
console_do_read
(console_id=None, offset=0)¶ Parameters: offset – Try to read output from given offset Returns: an iterator with the output that HAS TO BE DISPOSED OF with del once done. Raises: IndexError if console_id is not implemented, ValueError if all is not supported.
-
console_do_write
(_data, _console_id=None)¶ Parameters: data – byte string to write to the console
Note
This is now deprecated and replaced by ttbl.tt_qemu2.
-
class
ttbl.tt_qemu.
qmp_c
(sockfile)¶ Dirty handler for the Qemu Monitor Protocol that allows us to run QMP commands and report on status.
-
command
(command, **kwargs)¶
-
-
class
ttbl.tt_qemu.
tt_qemu
(id, bsps, _tags)¶ Implement a test target that runs under a QEMU subprocess.
Supports power control, serial consoles and image flashing.
A subclass of this must provide a command line to start QEMU, as described in
qemu_cmdlines
.For the console read interface to work, the configuration must export a logfile called BSP-console.read in the state directory. For write to work, it must provide a socket BSP-console.write:
# Serial console tt_qemu.py can grok
-chardev socket,id=ttyS0,server,nowait,path=%(path)s/%(bsp)s-console.write,logfile=%(path)s/%(bsp)s-console.read
-serial chardev:ttyS0
Using power_on_pre, power_on_post and power_off_pre functions, one can add functionality without modifying this file.
Parameters: bsps – list of BSPs to start in parallel (normally only one is set); information has to be present in the tags description for it to be valid, as well as command lines for starting each. If more than one BSP is available, this is the equivalent of having a multicore machine that can be asymmetric. This is mostly used for testing.
WARNING
Note this might be called from different processes that share the same configuration; hence we can’t rely on any runtime storage. Any values that are needed are kept in a filesystem database (self.fsdb). We start the process with Python’s subprocess.Popen(), but then it goes into the background; if the parent dies, the main server process will reap it (as prctl() has set SIG_IGN on SIGCHLD) and any subprocess of the main process might kill it when power-off is called on it.
Examples
-
qemu_cmdlines
= None¶ Command line to launch QEMU, keyed by the BSP it implements (eg: ‘x86’, ‘arc’ or ‘arm’).
Note this can contain %(FIELD)[sd] to replace values coming from
self.kws
.- derivative classes can add values in their __init__() methods
- default values:
- path: location of the directory where state is kept
- targetname: name of this target
- bsp: BSP on which we are acting (as a target might have multiple BSPs)
power_on_do() will add a few command line options: a QMP socket that we use to talk to QEMU, a pidfile, and a GDB socket (for debugging).
Note this is per-instance and not per-class as each instance might add different command line switches based on its networking needs (for example)
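As a hedged illustration (not from the original documentation), a derivative class’s __init__() could populate this dictionary along these lines; the QEMU binary path and machine options are placeholders, while the console chardev follows the format shown above:
>>> # inside a derivative class's __init__(); values are illustrative only
>>> self.qemu_cmdlines = dict(
>>>     x86 =
>>>     "/usr/bin/qemu-system-x86_64 -enable-kvm -m 512 -nographic "
>>>     # serial console wired the way tt_qemu.py expects it
>>>     # (a write socket plus a read logfile in the state directory)
>>>     "-chardev socket,id=ttyS0,server,nowait,"
>>>     "path=%(path)s/%(bsp)s-console.write,"
>>>     "logfile=%(path)s/%(bsp)s-console.read "
>>>     "-serial chardev:ttyS0")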
-
debug_do_start
(tt_ignored)¶ Start the debugging support for the target
-
debug_do_stop
(tt_ignored)¶ Stop the debugging support for the target
-
debug_do_info
(tt_ignored)¶ Returns a string with information on how to connect to the debugging target
-
image_do_set
(image_type, image_name)¶ Take file image_name from target-broker storage for the current user and write it to the target as image-type.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
images_do_set
(images)¶ Called once image_do_set() has been called for every image.
This is for targets that might need to flash everything at the same time, or that need some post-flash steps.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
power_on_do
(_target)¶ Flip the power on
-
power_off_do
(_target)¶ Flip the power off
-
power_get_do
(_target)¶ Return the power state
-
console_do_list
()¶ Return list of available console_ids
-
console_do_read
(console_id=None, offset=0)¶ Parameters: offset – Try to read output from given offset Returns: an iterator with the output that HAS TO BE DISPOSED OF with del once done. Raises: IndexError if console_id is not implemented, ValueError if all is not supported.
-
console_do_size
(console_id=None)¶
-
console_do_write
(data, console_id=None)¶ Parameters: data – byte string to write to the console
-
-
class
ttbl.tt_qemu.
plugger
(name, **kwargs)¶ Plugger class to plug external devices to QEMU VMs
Parameters: kwargs (dict) – parameters for
qmp_c.command()
’s device_add method, which for example, could be:- driver = “usb-host”
- hostbus = BUSNUMBER
- hostaddr = USBADDRESS
-
plug
(target, thing)¶ Plug thing into target
Caller must own both target and thing
Parameters: - target (ttbl.test_target) – target where to plug
- thing (ttbl.test_target) – thing to plug into target
-
unplug
(target, thing)¶ Unplug thing from target
Caller must own target (not thing necessarily)
Parameters: - target (ttbl.test_target) – target where to unplug from
- thing (ttbl.test_target) – thing to unplug
General driver to create targets that implement a virtual machine using QEMU.
Note
This replaces ttbl.tt_qemu, which is now deprecated.
ttbl.tt_qemu2.tt_qemu is a generic class that implements a target which supports the following interfaces:
- power control
- serial consoles
- image flashing
- debugging
- networking
- provisioning mode
It is, however, a raw building block which requires extra configuration. To create targets, use conf_00_lib.qemu_pos_add() (for example).
-
class
ttbl.tt_qemu2.
qmp_c
(sockfile)¶ Dirty handler for the Qemu Monitor Protocol that allows us to run QMP commands and report on status.
-
command
(command, **kwargs)¶
-
-
class
ttbl.tt_qemu2.
plugger
(name, **kwargs)¶ Plugger class to plug external devices to QEMU VMs
Parameters: kwargs (dict) – parameters for
qmp_c.command()
’s device_add method, which for example, could be:- driver = “usb-host”
- hostbus = BUSNUMBER
- hostaddr = USBADDRESS
-
plug
(target, thing)¶ Plug thing into target
Caller must own both target and thing
Parameters: - target (ttbl.test_target) – target where to plug
- thing (ttbl.test_target) – thing to plug into target
-
unplug
(target, thing)¶ Unplug thing from target
Caller must own target (not thing necessarily)
Parameters: - target (ttbl.test_target) – target where to unplug from
- thing (ttbl.test_target) – thing to unplug
-
class
ttbl.tt_qemu2.
tt_qemu
(name, qemu_cmdline, consoles=None, _tags=None)¶ Implement a test target that runs under a QEMU subprocess.
Supports power control, serial consoles and image flashing.
Parameters: - qemu_cmdline (str) –
command line to start QEMU, as described in
tt_qemu.qemu_cmdline
. FIXME: describe better
- consoles (list(str)) –
names of serial consoles to start by adding command line configuration. Note each string needs to be a simple string [a-zA-Z0-9].
For the console read interface to work, the configuration must export a logfile called
NAME-console.read
in the state directory. For write to work, it must provide a socketNAME-console.write
:-chardev socket,id=ttyS0,server,nowait,path=%(path)s/NAME-console.write,logfile=%(path)s/NAME-console.read -serial chardev:ttyS0
Using power_on_pre, power_on_post and power_off_pre functions, one can add functionality without modifying this file.
WARNING
Note this might be called from different processes that share the same configuration; hence we can’t rely on any runtime storage. Any values that are needed are kept in a filesystem database (self.fsdb). We start the process with Python’s subprocess.Popen(), but then it goes into the background; if the parent dies, the main server process will reap it (as prctl() has set SIG_IGN on SIGCHLD) and any subprocess of the main process might kill it when power-off is called on it.
Examples
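A hedged configuration sketch, not from the original documentation: the QEMU binary path, options, disk image and target type are hypothetical (see conf_00_lib.qemu_pos_add() for a fully worked, supported configuration):
>>> import ttbl.config, ttbl.tt_qemu2
>>>
>>> ttbl.config.target_add(
>>>     ttbl.tt_qemu2.tt_qemu(
>>>         "qemu-01",
>>>         # placeholder command line; %(path)s expands to the state directory
>>>         "/usr/bin/qemu-system-x86_64 -enable-kvm -m 1024 -nographic"
>>>         " -drive file=%(path)s/hd.qcow2,if=virtio",
>>>         consoles = [ "ttyS0" ]),
>>>     target_type = "qemu-linux-x86_64")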
-
qemu_cmdline
= None¶ Command line to launch QEMU
-
qemu_cmdline_append
= None¶ Runtime additions to QEMU’s command line
-
debug_do_start
(_tt_ignored)¶ Start the debugging support for the target
-
debug_do_stop
(_tt_ignored)¶ Stop the debugging support for the target
-
debug_do_info
(_tt_ignored)¶ Returns a string with information on how to connect to the debugging target
-
image_do_set
(image_type, image_name)¶ Take file image_name from target-broker storage for the current user and write it to the target as image-type.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
images_do_set
(images)¶ Called once image_do_set() has been called for every image.
This is for targets that might need to flash everything at the same time, or that need some post-flash steps.
Raises: Any exception on failure This function has to be specialized for each target type. Upon finishing, it has to leave the target in the powered off state.
-
power_on_do
(_target)¶ Flip the power on
-
power_off_do
(_target)¶ Flip the power off
-
power_get_do
(_target)¶ Return the power state
-
console_do_list
()¶ Return list of available console_ids
-
console_do_read
(console_id=None, offset=0)¶ Parameters: offset – Try to read output from given offset Returns: an iterator with the output that HAS TO BE DISPOSED OF with del once done. Raises: IndexError if console_id is not implemented, ValueError if all is not supported.
-
console_do_size
(console_id=None)¶
-
console_do_write
(data, console_id=None)¶ Parameters: data – byte string to write to the console
- qemu_cmdline (str) –
-
class
ttbl.flasher.
flasher_c
¶ Interface to flash and debug a target
Implementations shall derive from here to provide the actual functionality, such as for example,
openocd_c
.-
exception
error
¶
-
start
()¶
-
stop
()¶
-
image_write
(image_type, file_name, timeout_factor=1, verify=True)¶
-
image_erase
(image_type, size)¶
-
image_hash
(image_type, size, timeout_factor=1)¶
-
target_halt
(targets=None, for_what='')¶
-
target_reset
(for_what='')¶
-
target_reset_halt
(for_what='')¶
-
target_resume
(targets=None, for_what='')¶
-
debug
()¶
-
static
openocd_cmd
(cmd)¶
-
test_target_link
(tt)¶ Tell this flasher who is our target – we can’t do that in __init__() because we don’t necessarily know it at the time, so the class that uses us must do it.
The implementation might use it or not, their choice.
-
exception
-
class
ttbl.flasher.
action_logadapter_c
(logger, extra)¶ -
process
(msg, kwargs)¶ Process the logging message and keyword arguments passed in to a logging call to insert contextual information. You can either manipulate the message itself, the keyword args or both. Return the message and kwargs modified (or not) to suit your needs.
Normally, you’ll only need to override this one method in a LoggerAdapter subclass for your specific needs.
-
-
class
ttbl.flasher.
openocd_c
(_board_name, serial=None, openocd_path='openocd', openocd_scripts='/usr/share/openocd/scripts', debug=False)¶ This is a flasher object that uses OpenOCD to provide flashing and GDB server support.
The object starts an OpenOCD instance (that runs as a daemon) – it does this behaving as a power-control implementation that is plugged at the end of the power rail.
To execute commands, it connects to the daemon via TCL and runs them using the
'capture "OPENOCDCOMMAND"'
TCL command (FIXME: is there a better way?). The telnet port is open for manual debugging (check your firewall! no passwords!); the GDB ports are also available.
The class knows the configuration settings for different boards (as given in the board_name parameter). It is also possible to point it to specific OpenOCD paths when different builds / versions need to be used.
Note how entry points from the flasher_c class all start with underscore. Functions
__SOMETHING()
are those that have to be called with a_expect_mgr
context taken [see comments on__send_command
for the reason.Parameters: board_name (str) – name of the board to use, to select proper configuration parameters. Needs to be declared in ttbl.flasher.openocd_c._boards. -
exception
error
¶
-
exception
error_eof
¶
-
exception
error_timeout
¶
-
hack_reset_halt_after_init
= None¶ Immediately after running the OpenOCD initialization sequence, reset halt the board.
This is meant to be used when we know we are power cycling before flashing. The board will start running as soon as we power it on, thus we ask OpenOCD to stop it immediately after initializing. There is still a big window of time in which the board can get itself into a bad state by running its own code.
(bool, default False)
-
hack_reset_after_init
= None¶ Immediately after running the OpenOCD initialization sequence, reset the board.
This is meant to be used for hacking around some boards that don’t start OpenOCD properly unless this is done.
(bool, default False)
-
exception
expect_connect_e
¶
-
power_on_do
(target)¶ Flip the power on
-
reset_do
(target)¶ Do a reset
This would ideally trigger a hard reset without a power cycle; but it can default to a power-cycle. The power has to be on for this to be usable.
-
power_off_do
(target)¶ Flip the power off
-
openocd_cmd
(cmd)¶
-
power_get_do
(target)¶ Return the power state
-
exception
8.7.2. User access control and authentication¶
-
class
ttbl.user_control.
User
(userid, fail_if_new=False)¶ Implement a database of users that are allowed to use this system
The information in this database is obtained from authentication systems and is just stored locally for caching; it is mainly used to set roles for users.
-
file_access_lock
= <thread.lock object>¶
-
exception
user_not_existant_e
¶ Exception raised when information about a user cannot be located
-
state_dir
= None¶
-
static
is_authenticated
()¶
-
static
is_active
()¶
-
static
is_anonymous
()¶
-
is_admin
()¶
-
get_id
()¶
-
set_role
(role)¶
-
has_role
(role)¶
-
save_data
()¶
-
static
load_user
(userid)¶
-
static
create_filename
(userid)¶ Makes a safe filename based on the user ID
-
static
search_user
(userid)¶
-
-
class
ttbl.user_control.
local_user
(**kwargs)¶ Define a local anonymous user that we can use to skip authentication in certain situations (when the user starts the daemon as such, for example).
See https://flask-login.readthedocs.org/en/latest/#anonymous-users for the Flask details.
-
save_data
()¶
-
is_authenticated
()¶
-
is_anonymous
()¶
-
is_active
()¶
-
is_admin
()¶
-
-
class
ttbl.auth_ldap.
authenticator_ldap_c
(url, roles=None)¶ Use LDAP to authenticate users
To configure, create a config file that looks like:
>>> import ttbl.auth_ldap
>>>
>>> add_authenticator(ttbl.auth_ldap.authenticator_ldap_c(
>>>     "ldap://URL:PORT",
>>>     roles = {
>>>         'role1': { 'users': [ "john", "lamar", ],
>>>                    'groups': [ "Occupants of building 3" ]
>>>         },
>>>         'role2': { 'users': [ "anthony", "mcclay" ],
>>>                    'groups': [ "Administrators",
>>>                                "Knights who say ni" ]
>>>         },
>>>     }))
The roles dictionary determines who gets to be an admin or who gets access to XYZ resources.
This will give john, lamar and any user in the group Occupants of building 3 the role role1.
Likewise, anthony, mcclay and any user who is a member of either the group Administrators or the group Knights who say ni are given the role role2.
Parameters: -
login
(email, password, **kwargs)¶ Validate an email|token/password combination and pull which roles it has assigned
Returns: set listing the roles the token/password combination has according to the configuration Return type: set Raises: authenticator_c.invalid_credentials_e if the token/password is not valid Raises: authenticator_c.error_e if any kind of error during the process happens
-
-
class
ttbl.auth_localdb.
authenticator_localdb_c
(name, users)¶ Use a simple DB to authenticate users
To configure, create a config file that looks like:
>>> import ttbl.auth_localdb
>>>
>>> add_authenticator(ttbl.auth_localdb.authenticator_localdb_c(
>>>     "NAME",
>>>     [
>>>         ['user1', 'password1', 'role1', 'role2', 'role3'...],
>>>         ['user2', 'password2', 'role1', 'role4', 'role3' ],
>>>         ['user3', None, 'role2', 'role3'...],
>>>         ['user4', ],
>>>     ]))
Each item in the users list is a list containing:
- the user id (userX)
- the password in plaintext (FIXME: add digests); if empty, then the user has no password.
- list of roles (roleX)
Parameters: -
login
(email, password, **kwargs)¶ Validate an email|token/password combination and pull which roles it has assigned
Returns: set listing the roles the token/password combination has according to the configuration Return type: set Raises: authenticator_c.invalid_credentials_e if the token/password is not valid Raises: authenticator_c.error_e if any kind of error during the process happens
-
class
ttbl.auth_party.
authenticator_party_c
(roles=None, local_addresses=None)¶ Life is a party! Authenticator that allows anyone to log in and be an admin.
To configure, create a config file that looks like:
>>> import ttbl.auth_party
>>>
>>> add_authenticator(ttbl.auth_party.authenticator_party_c(
>>>     [ 'admin', 'user', 'role3', ...],
>>>     local_addresses = [ '127.0.0.1', '192.168.0.2' ] ))
>>>
Where you list the roles that everyone will get all the time.
Normally you want this only for debugging or for local instances. Note you can set a list of local addresses to match against (strings or regular expressions), which will enforce that authentication is only allowed from those addresses.
FIXME: check connections are coming only from localhost
-
login
(email, password, **kwargs)¶ Validate an email|token/password combination and pull which roles it has assigned
Kwargs: ‘remote_addr’ set to a string describing the IP address where the connection comes from. Returns: set listing the roles the token/password combination has according to the configuration Return type: set Raises: authenticator_c.invalid_credentials_e if the token/password is not valid Raises: authenticator_c.unknown_user_e if there are remote addresses initialized and the request comes from a non-local address. Raises: authenticator_c.error_e if any kind of error during the process happens
-
8.7.3. Console Management Interface¶
-
class
ttbl.
test_target_console_mixin
bidirectional input/output channel
FIXME:
- has to allow us to control a serial port (escape sequences?)
- buffering? shall it read everything since power on?
- serial port not necessarily the endpoint, but ability to set baud rate and such must be considered
- more than one channel? console_id defaults to None, the first one defined by the target, which is always available
-
exception
test_target_console_e
-
console_do_read
(console_id=None, offset=0) Parameters: offset – Try to read output from given offset Returns: an iterator with the output that HAS TO BE DISPOSED OF with del once done. Raises: IndexError if console_id is not implemented, ValueError if all is not supported.
-
console_do_size
(console_id=None)
-
console_do_write
(data, console_id=None) Parameters: data – byte string to write to the console
-
console_do_setup
(console_id=None, **kwargs) Set console-specific parameters
-
console_do_list
() Return list of available console_ids
-
console_read
(who, console_id=None, offset=0) Parameters: offset (int) – Try to read output from given offset Returns: an iterator with the output that HAS TO BE DISPOSED OF with del once done. Raises: IndexError if console_id is not implemented, ValueError if all is not supported. Note the target does not have to be acquired to read off its console. FIXME: makes sense?
-
console_size
(_who, console_id=None) Return how many bytes have been read from the console so far
Parameters: console_id (str) – (optional) name of the console Returns: number of bytes read
-
console_write
(who, data, console_id=None) Parameters: data – byte string to write to the console
-
console_setup
(who, console_id=None, **kwargs) Set console-specific parameters
-
console_list
() Return list of available console_ids
-
exception
expect_e
-
exception
expect_failed_e
-
exception
expect_timeout_e
-
EXPECT_TIMEOUT
= -1 Consider a timeout a valid event instead of raising exceptions in
expect()
.
-
expect
(expectations, timeout=20, console_id=None, offset=0, max_buffering=4096, poll_period=0.5, what='') Wait for any of a list of things to come off the given console
Waits for a maximum timeout to receive from a given console any of the list of things provided as expectations, checking with a poll_period.
Parameters: - expectations – list of strings and/or compiled regular expressions for which to wait. It can also be the constant EXPECT_TIMEOUT, in which case a timeout is a valid event to expect (instead of causing an exception).
. - timeout (int) – (optional, default 20) maximum time in seconds to wait for any of the expectations.
- offset (int) – (optional, default 0) offset into the console from which to read.
- max_buffering (int) – (optional, default 4k) how much to look back into the read console data.
- poll_period (float) – (optional) how often to poll for new data
- what (str) – (optional) string describing what we are waiting for (for error messages/logs).
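For example (a hedged sketch, not from the original documentation; target stands for an object implementing this mixin and the prompt strings are made up), a server-side driver hook could wait for a login prompt on the default console:
>>> import re
>>>
>>> # wait up to 30s for either a login prompt or an error banner
>>> target.expect([ "login: ", re.compile("ERROR: .*") ],
>>>               timeout = 30, what = "waiting for the login prompt")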
-
expect_sequence
(sequence, timeout=20, offset=0, console_id=None) Execute a list of expect/send commands to a console
Each step in the list can first send something, then expect to receive something to pass (or to fail) and then move on to the next step until the whole sequence is completed.
Parameters: - sequence –
List of dictionaries with the parameters:
- receive: string/regex of what is expected
- fail: (optional) strings/regex of something that if received will cause a failure
- send: (optional) string to send after receiving pass
- wait: (optional) integer of time to wait before sending send
- delay: (optional) when sending send, delay delay seconds in between characters (useful for slow readers)
e.g.:
[
    dict(receive = re.compile("Expecting number [0-9]+"),
         fail = re.compile("Error reading"),
         send = "I am good",
         wait = 1.3, delay = 0.1),
    dict(receive = re.compile("Expecting number 1[0-9]+"),
         fail = re.compile("Error reading"))
]
- timeout (int) – (optional, default 20s) maximum time in seconds the operation has to conclude
- offset (int) – (optional, default 0) offset from which to read in the console.
- console_id (int) – (optional) console to use to read/write
Returns: Nothing if everything went well, otherwise raises exceptions
expect_timeout_e
orexpect_failed_e
.- sequence –
-
class
ttbl.cm_serial.
cm_serial
(state_dir, specs)¶ Implement console/s over serial ports or network connections using the Python serial module
The consoles may be opened right after power on (if the target supports the power-on interface) and closed right before power off; this is needed for device nodes that are powered off AT the same time as the target. This also means that you can lose console output from the time the target is powered on until the serial port is attached to the system.
FIXME
Parameters: - specs (dict) –
list of serial ports or dictionaries describing how to open serial ports
- string “post-open”: the serial ports have to be open after
the target powers up.
This is needed for ports whose power is tied to the target’s power (thus, when the target is off, the port is gone).
NOTE: you will lose serial output since the time the target is powered up until the serial port starts being monitored.
- string “pc”: there is a power-control unit (cm_serial.pc)
tied to the power-control rail for this target that will
turn ports on and off.
This is needed for ports whose power is tied to the target’s power (thus, when the target is off, the port is gone).
NOTE: you will lose serial output since the time the target is powered up until the serial port starts being monitored.
- A dictionary describing how to open a serial port eg:
{
    "port": DEVNAME,
    "console": CONSOLENAME,
    "baudrate": 115200,
    "bytesize": EIGHTBITS,
    "parity": PARITY_NONE,
    "stopbits": STOPBITS_ONE,
    "timeout": 2,
    "rtscts": True,
}
see the documentation for
serial
for more information, as the elements of this dictionary match the keyword arguments to serial.serial_for_url().
.Note
console
is the name of the console associated to this port; the first one registered is always considered the default and assigned the name “default” if none given.
- save_log (bool) – shall the server record anything that comes through this serial port?
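A hedged instantiation sketch, not from the original documentation; the state directory, device node and console name are hypothetical:
>>> import ttbl.cm_serial
>>>
>>> serial_console = ttbl.cm_serial.cm_serial(
>>>     "/var/lib/ttbd/target-01",          # target state directory
>>>     [
>>>         "pc",                           # ports powered with the target
>>>         {
>>>             "port": "/dev/tty-target-01",
>>>             "console": "serial0",
>>>             "baudrate": 115200,
>>>         },
>>>     ])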
-
consoles_close
()¶
-
consoles_open
()¶ Open all serial ports assigned to the target
-
consoles_reset
()¶ Truncate all the logfiles (usually called when running a reset)
-
console_do_list
()¶ Return list of available console_ids
-
console_do_read
(console_id=None, offset=0)¶ Parameters: offset – Try to read output from given offset Returns: an iterator with the output that HAS TO BE DISPOSED OF with del once done. Raises: IndexError if console_id is not implemented, ValueError if all is not supported.
-
console_do_size
(console_id=None)¶
-
console_do_write
(data, console_id=None)¶ Parameters: data – byte string to write to the console
-
console_takeover
(**kwds)¶ Indicate the console serial background port reading thread that it has to stop reading from the port.
>>> with object.console_takeover(CONSOLEID) as descr, log:
>>>     # ... operate the descr serial object and log file
When the with statement is left, the background reader takes control of the port again.
- specs (dict) –
-
class
ttbl.cm_serial.
pc
¶ Treat a target’s serial ports like a power controller, so that they are opened when the power control object is powering up all the power control implementations that give power to a target.
This is used when a serial port adapter has to be powered up before powering up the target it is connected to. The power control implementation would be a list of three objects:
- power control implementation for the serial port
- this object
- power control implementation for the target
-
power_on_do
(target)¶ Flip the power on
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
class
ttbl.cm_loopback.
cm_loopback
(state_dir, names=None)¶ Implement console/s over serial ports or network connections using the Python serial module
The consoles may be opened right after power on (if the target supports the power-on interface) and closed right before power off; this is needed for device nodes that are powered off AT the same time as the target. This also means that you can lose console output from the time the target is powered on until the serial port is attached to the system.
Parameters: names (dict) – string or list of strings with the names of the consoles to open. -
consoles_close
()¶
-
consoles_open
()¶
-
consoles_reset
()¶
-
console_do_list
()¶ Return list of available console_ids
-
console_do_read
(console_id=None, offset=0)¶ Parameters: offset – Try to read output from given offset Returns: an iterator with the output that HAS TO BE DISPOSED OF with del once done. Raises: IndexError if console_id is not implemented, ValueError if all is not supported.
-
console_do_write
(data, console_id=None)¶ Parameters: data – byte string to write to the console
-
console_takeover
(**kwds)¶
-
-
ttbl.cm_logger.
setup
()¶ FIXME
-
ttbl.cm_logger.
spec_add
(logfile_name, spec)¶ FIXME
-
ttbl.cm_logger.
spec_write
(logfile_name, data=None, filename=None)¶ Write to a file descriptor monitored by a logger
Parameters:
-
ttbl.cm_logger.
spec_rm
(logfile_name)¶
-
ttbl.cm_logger.
spec_reset
(logfile_name)¶
8.7.4. Debugging Interface¶
-
class
ttbl.
tt_debug_mixin
(impl=None) Generic debug interface to start and stop debugging on a target.
When debug is started before the target is powered up, then upon power up, the debugger stub shall wait for a debugger to connect before continuing execution.
When debug is started while the target is executing, the target shall not be stopped and the debugging stub shall permit a debugger to connect and interrupt the target upon connection.
Each target provides its own debug methodolody; to find out how to connect, issue a debug-info command to find out where to connect to.
-
debug_start
(who) Start debugging the target
If called before powering, the target will wait for the debugger to connect before starting the kernel.
-
debug_halt
(who) Resume the target’s CPUs after a breakpoint (or similar) stop
-
debug_reset
(who) Reset the target’s CPUs
-
debug_reset_halt
(who) Reset the target’s CPUs
-
debug_resume
(who) Resume the target
This is called to instruct the target to resume execution, following any kind of breakpoint or stop that halted it.
-
debug_info
(who) Return information about how to connect to the target to debug it
-
debug_stop
(who) Stop debugging the target
This might not do anything on the target until power off, or it might disconnect the debugger currently connected.
-
debug_openocd
(who, command) Run an OpenOCD command on the target’s controller (if the target supports it).
-
8.7.5. Power Control Interface¶
-
class
ttbl.
tt_power_control_mixin
(impl=None) This is the power control interface
This allows a target to be fully powered off, on, power cycled or reset.
To run functions before power-off or after power-on, add functions that take the target object as self to the power_(on|off)_(post|pre)_fns lists.
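For instance (a hedged sketch, not from the original documentation; the target name and the settle delay are made up), a server configuration file could append a hook that runs right after the power rail is fully on:
>>> import time
>>>
>>> def _on_post_settle(target):
>>>     # give the hardware a couple of seconds to settle after power on
>>>     time.sleep(2)
>>>
>>> ttbl.config.targets['devicename'].power_on_post_fns.append(_on_post_settle)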
Power control is implemented with
ttbl.tt_power_control_mixin
, which can be subclassed or given a subclass of an implementation object (ttbl.tt_power_control_impl
) or a list of them (to create a power rail).
Power rails allow creating complex power configurations where a target requires many things to be done in a specific sequence to power up (and vice versa to power down). For example, for the Arduino101 boards, the following happens:
power_control = [
    ttbl.pc_ykush.ykush("YK20954", 2),    # Flyswatter2
    # delay power-on until the flyswatter2 powers up as a USB device
    ttbl.pc.delay_til_usb_device(serial = "FS20000"),
    # delay on power off until the serial console device is gone,
    # to make sure it re-opens properly later. It also helps the
    # main board reset properly.
    ttbl.pc.delay_til_file_gone(off = "/dev/tty-arduino101-02"),
    ttbl.pc_ykush.ykush("YK20954", 0),    # serial port
    ttbl.cm_serial.pc(),                  # plug serial ports
    ttbl.pc_ykush.ykush("YK20954", 1),    # board
],
this is a six-component power rail, which:
- powers a port in a USB hub to power up the Flyswatter2
(firmware flasher) (with a
ttbl.pc_ykush.ykush
) - waits for the USB device representing said device to show up in
the system (with
ttbl.pc.delay_til_usb_device
). - (only used during power off) delay until said file is not
present in the system anymore
(
ttbl.pc.delay_til_file_gone
); this is used when we power off something (like a USB serial device) and we know that as a consequence, udev will remove a device node from the system. - powers up a serial port
- connects the serial ports of the target (once it has been powered up in the previous step)
- powers up the target itself
The power off sequence runs in the inverse, first powering off the target and then the rest of the components.
These power rails are often necessary for very small, low-power devices that can pick up residual power leaks from anywhere; thus anything connected to them has to be powered off (in a specific sequence) to ensure the board fully powers off.
The optional tag idle_poweroff can be given to the target to control how long the target has to be idle before it is powered off. If 0, it will never be automatically powered off upon idleness. Defaults to
ttbl.config.target_max_idle
.-
power_on
(who)
-
reset
(who) Reset the target (or power it on if off)
-
power_off
(who)
-
power_cycle
(who, wait=None) Power cycle the target, guaranteeing that at the end, it is powered on.
Parameters:
-
power_state
= []
-
power_rail_get
() Return the state of each item of the power rail which powers this target.
Returns list(bool): list of power states for the power rail that powers this target.
-
power_rail_get_any
() Return True if any power rail element is on, False otherwise.
-
power_get
() Return True if all power rail elements are turned on and thus the target is on, False otherwise.
- powers a port in a USB hub to power up the Flyswatter2
(firmware flasher) (with a
8.7.5.1. Power control module to start DHCP daemon when a network is powered on¶
-
ttbl.dhcp.
tftp_dir
= '/var/lib/tftpboot'¶ Directory where the TFTP tree is located
-
ttbl.dhcp.
syslinux_path
= '/usr/share/syslinux'¶ Directory where the syslinux tree is located
-
ttbl.dhcp.
template_rexpand
(text, kws)¶ Expand Python keywords in a template repeatedly until none are left.
If there are substitution fields in the config text, replace them with the keywords; repeat until there are none left (as some of the keywords might bring in new substitution keys).
Stop after ten iterations.
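A hedged usage sketch with made-up keywords, showing how one substitution can pull in another before everything resolves:
>>> kws = dict(
>>>     url = "http://%(server)s/boot",
>>>     server = "192.168.1.1")
>>> ttbl.dhcp.template_rexpand("fetch %(url)s", kws)
'fetch http://192.168.1.1/boot'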
-
ttbl.dhcp.
pxe_architectures
= {'efi-bc': {'copy_files': ['/usr/share/syslinux/efi64/', '/home/ttbd/public_html/x86_64/vmlinuz-tcf-live', '/home/ttbd/public_html/x86_64/initramfs-tcf-live'], 'boot_filename': 'syslinux.efi', 'rfc_code': '00:07'}, 'efi-x86_64': {'copy_files': ['/usr/share/syslinux/efi64/', '/home/ttbd/public_html/x86_64/vmlinuz-tcf-live', '/home/ttbd/public_html/x86_64/initramfs-tcf-live'], 'boot_filename': 'syslinux.efi', 'rfc_code': '00:09'}, 'x86': {'copy_files': ['/usr/share/syslinux/lpxelinux.0', '/usr/share/syslinux/ldlinux.c32'], 'boot_filename': 'lpxelinux.0', 'rfc_code': '00:00'}}¶ List of PXE architectures we support
This is a dictionary keyed by architecture name (ARCHNAME); the value is a dictionary keyed by the following keywords
rfc_code
(str) a hex string in the format “HH:HH”, documenting a PXE architecture as described in https://datatracker.ietf.org/doc/rfc4578/?include_text=1 (section 2.1).This is used directly for the ISC DHCP configuration of the option architecture-type:
Code   Arch Name    Description
-----  -----------  --------------------
00:00  x86          Intel x86PC
00:01               NEC/PC98
00:02               EFI Itanium
00:03               DEC Alpha
00:04               Arc x86
00:05               Intel Lean Client
00:06               EFI IA32
00:07  efi-bc       EFI BC (byte code)
00:08               EFI Xscale
00:09  efi-x86_64   EFI x86-64
boot_filename
(str): name of the file sent over PXE to a target when it asks what to boot. This will be converted to TFTP path/ttbd-INSTANCE/ARCHNAME/BOOT_FILENAME
which will be requested by the target.copy_files
(list of str): list of files or directories that have to copy/rsynced toTFTPDIR/ttbd-INSTANCE/ARCHNAME
; everything needed for the client to bootBOOT_FILENAME
has to be listed here for them to be copied and made available over TFTP.This allows to patch this in runtime based on the site configuration and Linux distribution
The DHCP driver, when powered on, will create
TFTPDIR/ttbd-INSTANCE/ARCHNAME
, rsync the files or trees incopy_files
to it and then symlinkTFTPDIR/ttbd-INSTANCE/ARCHNAME/pxelinux.cfg
toTFTPDIR/ttbd-INSTANCE/pxelinux.cfg
(as the configurations are common to all the architectures).To extend in the system configuration, add to any server configuration file in
/etc/ttbd-INSTANCE/conf_*.py
; for example, to use another bootloader for eg,x86
>>> import ttbl.dhcp
>>> ...
>>> ttbl.dhcp.pxe_architectures['x86']['copy_files'].append(
>>>     '/usr/local/share/syslinux/lpxelinux1.0')
>>> ttbl.dhcp.pxe_architectures['x86']['boot_filename'] = 'lpxelinux1.0'
-
class
ttbl.dhcp.
pci
(if_addr, if_net, if_len, ip_addr_range_bottom, ip_addr_range_top, mac_ip_map=None, allow_unmapped=False, debug=False, ip_mode=4)¶ -
exception
error_e
¶
-
exception
start_e
¶
-
dhcpd_path
= '/usr/sbin/dhcpd'¶ This class implements a power control unit that can be made part of a power rail for a network interconnect.
When turned on, it starts a DHCP daemon to serve the network.
With a configuration such as:
import ttbl.dhcp

ttbl.config.targets['nwa'].pc_impl.append(
    ttbl.dhcp.pci("fc00::61:1", "fc00::61:0", 112,
                  "fc00::61:2", "fc00::61:fe", ip_mode = 6))
It would start a DHCP IPv6 server on fc00::61:1, network fc00::61:0/112, serving IPv6 addresses from :2 to :fe.
-
power_on_do
(target)¶ Start DHCPd servers on the network interface described by target
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
exception
-
ttbl.dhcp.
pos_cmdline_opts
= {'tcf-live': ['initrd=%(pos_http_url_prefix)sinitramfs-%(pos_image)s ', 'rd.live.image', 'selinux=0', 'audit=0', 'ip=dhcp', 'root=/dev/nfs', 'rd.luks=0', 'rd.lvm=0', 'rd.md=0', 'rd.dm=0', 'rd.multipath=0', 'ro', 'plymouth.enable=0 ', 'loglevel=2']}¶ List of string with Linux kernel command options to be passed by the bootloader
-
ttbl.dhcp.
power_on_pre_pos_setup
(target)¶
-
class
ttbl.pc.
nil
(id)¶ -
power_on_do
(target)¶ Flip the power on
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
-
class
ttbl.pc.
manual
(id)¶ Implement a manual power control interface that prompts the user to flip the power switch themselves.
-
power_on_do
(target)¶ Flip the power on
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
-
class
ttbl.pc.
delay
(on=0, off=0)¶ Introduce artificial delays when calling on/off/get to allow targets to settle.
This is meant to be used in a stacked list of power implementations given to a power control interface.
-
power_on_do
(target)¶ Flip the power on
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
-
class
ttbl.pc.
delay_til_file_gone
(poll_period=0.25, timeout=25, on=None, off=None, get=None)¶ Delay until a file disappears.
This is meant to be used in a stacked list of power implementations given to a power control interface.
-
power_on_do
(target)¶ Flip the power on
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
-
class
ttbl.pc.
delay_til_file_appears
(filename, poll_period=0.25, timeout=25, action=None, action_args=None)¶ Delay until a file appears.
This is meant to be used in a stacked list of power implementations given to a power control interface.
-
power_on_do
(target)¶ Flip the power on
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
-
class
ttbl.pc.
delay_til_usb_device
(serial, when_powering_on=True, want_connected=True, poll_period=0.25, timeout=25, action=None, action_args=None)¶ Delay power-on until a USB device dis/appears.
This is meant to be used in a stacked list of power implementations given to a power control interface.
Parameters: - serial (str) – Serial number of the USB device to monitor
- when_powering_on (bool) – Check when powering on if True (default) or when powering off (if false)
- want_connected (bool) – when checking, we want the device to be connected (True) or disconnected (False)
- action (collections.Callable) – action to execute when the
device is not found, before waiting. Note the first parameter
passed to the action is the target itself and then any other
parameter given in
action_args
- action_args – tuple of parameters to pass to
action
.
-
exception
not_found_e
¶ Exception raised when a USB device is not found
-
backend
= None¶
-
power_on_do
(target)¶ Flip the power on
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
class
ttbl.pc.
dlwps7
(_url, reboot_wait_s=0.5)¶ Implement a power control interface to the Digital Logger’s Web Power Switch 7
Parameters: - _url (str) –
URL describing the unit and outlet number, in the form:
http://USER:PASSWORD@HOST:PORT/OUTLETNUMBER
where USER and PASSWORD are valid accounts set in the Digital Logger’s Web Power Switch 7 administration interface with access to the OUTLETNUMBER.
- reboot_wait (float) – Seconds to wait when power cycling an outlet from off to on (defaults to 0.5s) or after powering up.
Access language documented at http://www.digital-loggers.com/http.html.
If you get an error like:
Exception: Cannot find ‘<!-- state=(?P<state>[0-9a-z][0-9a-z]) lock=[0-9a-z][0-9a-z] -->’ in power switch response
this might mean that you are going through a proxy that is interfering; in some cases the proxy was messing up the authentication and imposing javascript execution that made the driver fail.
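A hedged instantiation sketch, not from the original documentation; the credentials, hostname and outlet number are placeholders. The resulting object would normally be placed in a target’s power rail:
>>> import ttbl.pc
>>>
>>> # outlet 5 of a switch reachable as sp7, with made-up admin credentials
>>> ttbl.pc.dlwps7("http://admin:1234@sp7/5")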
-
power_on_do
(target)¶ Flip the power on
-
power_off_do
(target)¶ Flip the power off
-
power_cycle_do
(target, wait=0)¶
-
state_regex
= <_sre.SRE_Pattern object>¶
-
power_get_do
(target)¶ Get the power status for the outlet
The unit returns the power state when querying the
/index.htm
path… as a comment inside the HTML body of the response. Chuckle.
So we look for:
<!-- state=XY lock=ANY -->
XY is the hex bitmap of states against the outlet number. ANY is the hex lock bitmap (outlets that can’t change).
- _url (str) –
-
class
ttbl.pc_ykush.
ykush
(ykush_serial, port)¶ A power control implementation using an YKUSH switchable hub https://www.yepkit.com/products/ykush
This is mainly devices that are USB powered and the ykush hub is used to control the power to the ports.
Note this device appears as a child connected to the YKUSH hub, with vendor/device IDs 0x04d8:f2f7 (Microchip Technology)
You can find the right one with lsusb.py -ciu:
usb1            1d6b:0002 09 2.00 480MBit/s 0mA 1IF  (Linux 4.3.3-300.fc23.x86_64 xhci-hcd xHCI Host Controller 0000:00:14.0) hub
 1-2            2001:f103 09 2.00 480MBit/s 0mA 1IF  (D-Link Corp. DUB-H7 7-port USB 2.0 hub) hub
  1-2.5         0424:2514 09 2.00 480MBit/s 2mA 1IF  (Standard Microsystems Corp. USB 2.0 Hub) hub
   1-2.5.4      04d8:f2f7 00 2.00 12MBit/s 100mA 1IF  (Yepkit Lda. YKUSH YK20345)
    1-2.5.4:1.0 (IF) 03:00:00 2EPs (Human Interface Device:No Subclass:None)
Note the Yepkit Lda. YKUSH YK20345; YK20345 is the serial number.
To avoid permission issues:
choose a Unix group that the daemon will be running under
add a UDEV rule to
/etc/udev/rules.d/90-tcf.rules
(or other name):# YKUSH power switch hubs SUBSYSTEM=="usb", ATTR{idVendor}=="04d8", ATTR{idProduct}=="f2f7", GROUP="GROUPNAME", MODE = "660"
restart UDEV, replug your hubs:
$ sudo udevadm control --reload-rules
Parameters: -
exception
notfound_e
¶
-
backend
= None¶
-
power_on_do
(target)¶ Flip the power on
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
class
ttbl.pc_ykush.
plugger
(ykush_serial, port)¶ Plugger to connect/disconnect a USB device with a YKUSH
-
plug
(target, _thing)¶ Plug thing into target
Caller must own both target and thing
Parameters: - target (ttbl.test_target) – target where to plug
- thing (ttbl.test_target) – thing to plug into target
-
unplug
(target, _thing)¶ Unplug thing from target
Caller must own target (not thing necessarily)
Parameters: - target (ttbl.test_target) – target where to unplug from
- thing (ttbl.test_target) – thing to unplug
-
8.7.5.2. Power control module to start a rsync daemon when a network is powered-on¶
-
class
ttbl.rsync.
pci
(address, share_name, share_path, port=873, uid=None, gid=None, read_only=True)¶ -
exception
error_e
¶
-
exception
start_e
¶
-
path
= '/usr/bin/rsync'¶ This class implements a power control unit that starts an rsync daemon to serve one path to a network.
Thus, when the associated target is powered on, the rsync daemon is started; when off, rsync is killed.
E.g.: an interconnect gets an rsync server to share some files that targets might use:
>>> ttbl.config.interconnect_add( >>> ttbl.tt.tt_power('nwa', [ >>> ttbl.rsync.pci("192.168.43.1", 'images', >>> '/home/ttbd/images'), >>> vlan_pci() >>> ]), >>> tags = dict( >>> rsync_server = '192.168.43.1::images', >>> ..., >>> ic_type = "ethernet" >>> )
-
power_on_do
(target)¶ Start the daemon, generating first the config file
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
exception
8.7.5.3. Power control module to start a socat daemon when a network is powered-on¶
This socat daemon can provide tunneling services to allow targets to access outside isolated test networks via the server.
-
class
ttbl.socat.
pci
(proto, local_addr, local_port, remote_addr, remote_port)¶ -
exception
error_e
¶
-
exception
start_e
¶
-
path
= '/usr/bin/socat'¶ This class implements a power control unit that can forward ports in the server to other places in the network.
It can be used to provide access points in the NUTs (Networks Under Test) for the testcases to access.
For example, given a NUT represented by
NWTARGET
which has an IPv4 address of 192.168.98.1 in the ttbd server, a port redirection from port 8080 to an external proxy server proxy-host.in.network:8080 would be implemented as:>>> ttbl.config.targets[NWTARGET].pc_impl.append( >>> ttbl.socat.pci('tcp', >>> '192.168.98.1', 8080, >>> 'proxy-host.in.network', 8080))
Then to facilitate the work of test scripts, it’d make sense to export tags that explain where the proxy is:
>>> ttbl.config.targets[NWTARGET].tags_update({ >>> 'ftp_proxy': 'http://192.168.98.1:8080', >>> 'http_proxy': 'http://192.168.98.1:8080', >>> 'https_proxy': 'http://192.168.98.1:8080', >>> })
-
power_on_do
(target)¶ Flip the power on
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
exception
-
class
ttbl.usbrly08b.
rly08b
(serial_number)¶ A power control implementation for the USB-RLY08B relay controller https://www.robot-electronics.co.uk/htm/usb_rly08btech.htm.
This serves as a base for other drivers to implement
per relay power controllers
, USB pluggers as *thing* or power controllers.
This device offers eight relays for AC and DC. Whether the relays are on or off is controlled by a byte-oriented serial protocol over an FTDI chip that shows up as:
$ lsusb.py -iu
...
 1-1.1.1      04d8:ffee 02 2.00 12MBit/s 100mA 2IFs (Devantech Ltd. USB-RLY08 00023456)
  1-1.1.1:1.0 (IF) 02:02:01 1EP  (Communications:Abstract (modem):AT-commands (v.25ter)) cdc_acm tty/ttyACM0
  1-1.1.1:1.1 (IF) 0a:00:00 2EPs (CDC Data:) cdc_acm
...
Note the 00023456 is the serial number.
To avoid permission issues, it can either:
The default rules in most Linux platforms will make the device node owned by group dialout, so make the daemon have that supplementary GID.
add a UDEV rule to
/etc/udev/rules.d/90-ttbd.rules
(or other name):SUBSYSTEM == "tty", ENV{ID_SERIAL_SHORT} == "00023456", GROUP="GROUPNAME", MODE = "660"
restart udev:
$ sudo udevadm control --reload-rules
replug your hubs so the rule is set.
Parameters: -
exception
not_found_e
¶
-
backend
= None¶
-
class
ttbl.usbrly08b.
pc
(serial_number, relay)¶ -
power_on_do
(target)¶ Flip the power on
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
-
-
class
ttbl.usbrly08b.
plugger
(serial_number, bank)¶ Implement a USB multiplexor/plugger that allows a DUT to be connected to Host A when plugged and to Host B when unplugged. It follows that it can work as a USB cutter if Host A is left disconnected.
It also implements a power control implementation: when powered off, it plugs to Host B and when powered on, it plugs to Host A. Likewise, if Host B is disconnected, when off the DUT is effectively disconnected. This serves, for example, to connect a USB storage drive to a target that will be able to access it when turned on; when off, the drive can be connected to another machine that can, e.g., use it to flash software.
This uses a
rly08b
relay bank to do the switching.Parameters: System setup details
A USB connection is four cables: VCC (red), D+ (white), D- (green), GND (black) plus a shielding wrapping it all.
A relay has three terminals: NO, C and NC.
- ON means C and NC are connected
- OFF means C and NO are connected
- (it is recommended to label the cable connected to NO as OFF/PLUGGED and the one to NC as ON/UNPLUGGED)
We use the USB-RLY8B, which has eight individual relays, so we can switch two devices between two USB hosts each.
We connect the DUT’s cables and host cables as follows:
DUT1         pin   Host A1/ON    pin   Host B1/OFF   pin
VCC (red)    1C    VCC (red)     1NO   VCC (red)     1NC
D+ (white)   2C    D+ (white)    2NO   D+ (white)    2NC
D- (green)   3C    D- (green)    3NO   D- (green)    3NC
GND (black)  4C    GND (black)   4NO   GND (black)   4NC

DUT2         pin   Host A2/ON    pin   Host B2/OFF   pin
VCC (red)    5C    VCC (red)     5NO   VCC (red)     5NC
D+ (white)   6C    D+ (white)    6NO   D+ (white)    6NC
D- (green)   7C    D- (green)    7NO   D- (green)    7NC
GND (black)  8C    GND (black)   8NO   GND (black)   8NC

For example, to switch an Arduino 101 between a NUC and the TTBD server that flashes and controls it:
- DUT (C) is our Arduino 101,
- Host B (NC) is another NUC machine in the TCF infrastructure
- Host A (NO) is the TTBD server (via the YKUSH port)
For a pure USB cutter (where we don’t need the connection to a TTBD server on MCU boards that expose a separate debugging cable for power and flashing), we’d connect the USB port like:
- DUT (C) is the MCU’s USB port
- Host B (NC) is the NUC machine in the TCF infrastructure
- Host A (NO) is left disconnected
Note
switching ONLY the VCC and GND connections (always leaving D+ and D- connected to Host A so that Host B does not make a data connection and is only used to supply power) does not work.
Host A still detects the power differential in D+/D- and thinks there is a device; it tries to enable it, fails and disables the port.
Note
We can’t turn them on or off at the same time because the HW doesn’t allow setting a mask, and we could override settings for the other ports we are not controlling here; another server process might be tweaking the other ports.
Configuration details
Example:
To connect a USB device from system A to system B, so power off means connected to B, power-on connected to A, add to the configuration:
ttbl.config.target_add( ttbl.tt.tt_power( "devicename", power_control = [ ttbl.usbrly08b.plugger("00023456", 0), ]), target_type = "device-switcher" )
Thus to connect to system B:
$ tcf acquire devicename $ tcf power-off devicename
Thus to connect to system A:
$ tcf power-on devicename
Example:
If system B is the ttbd server, then you can refine it to test the USB device is connecting/disconnecting.
To connect a USB drive to a target before the target is powered on (in this example, a NUC mini-PC with a USB drive connected to boot off it), the configuration block would be:
ttbl.config.target_add( ttbl.tt.tt_power( "nuc-43", power_control = [ # Ensure the dongle is / has been connected to the server ttbl.pc.delay_til_usb_device("7FA50D00FFFF00DD", when_powering_on = False, want_connected = True), ttbl.usbrly08b.plugger("00023456", 0), # Ensure the dongle disconnected from the server ttbl.pc.delay_til_usb_device("7FA50D00FFFF00DD", when_powering_on = True, want_connected = False), # power on the target ttbl.pc.dlwps7("http://admin:1234@SPNAME/SPPORT"), # let it boot ttbl.pc.delay(2) ]), tags = { 'linux': True, 'bsp_models': { 'x86_64': None }, 'bsps': { 'x86_64': { 'linux': True, } } }, target_type = "nuc-linux-x86_64" )
Note that the serial number 7FA50D00FFFF00DD is that of the USB drive and 00023456 is the serial number of the USB-RLY8b board which implements the switching (in this case we use bank 0 of relays, from 1 to 4).
Example:
An Arduino 101 is connected to a NUC mini-PC as a USB device using the thing interface that we can control from a script or command line:
In this case we create an interconnect that wraps all the targets together (the Arduino 101, the NUC) to indicate they operate together and the configuration block would be:
ttbl.config.interconnect_add(ttbl.test_target("usb__nuc-02__a101-04"),
                             ic_type = "usb__host__device")
ttbl.config.targets['nuc-02'].add_to_interconnect('usb__nuc-02__a101-04')
ttbl.config.targets['a101-04'].add_to_interconnect('usb__nuc-02__a101-04')
ttbl.config.targets['nuc-02'].thing_add('a101-04',
                                        ttbl.usbrly08b.plugger("00033085", 1))
Where 00033085 is the serial number for the USB-RLY8b which implements the USB plugging/unplugging (in this case we use bank 1 of relays, from 5 to 8)
-
plug
(target, _thing)¶ Plug thing into target
Caller must own both target and thing
Parameters: - target (ttbl.test_target) – target where to plug
- thing (ttbl.test_target) – thing to plug into target
-
unplug
(target, _thing)¶ Unplug thing from target
Caller must own target (not thing necessarily)
Parameters: - target (ttbl.test_target) – target where to unplug from
- thing (ttbl.test_target) – thing to unplug
-
power_on_do
(target)¶ Flip the power on
-
power_off_do
(target)¶ Flip the power off
-
power_get_do
(target)¶ Return the power state
8.7.6. Other interfaces¶
8.7.6.1. Interface to press buttons in a target¶
Implementation interface for a button driver
Buttons interface to the core target API
An instance of this gets added as an object to the main target with:
>>> ttbl.config.targets['android_tablet'].interface_add(
>>>     "buttons",
>>>     ttbl.buttons.interface(
>>>         power = ttbl.usbrly08b.button("00023456", 4),
>>>         vol_up = ttbl.usbrly08b.button("00023456", 3),
>>>         vol_down = ttbl.usbrly08b.button("00023456", 2),
>>>     )
>>> )
where in this case the buttons are implemented with an USB-RLY08B relay board.
This, for example, can be used to instrument the power, volume-up and volume-down buttons of a tablet to control power switching. In the case of most Android tablets, the power rail then becomes:
>>> ttbl.config.target_add(
>>>     ttbl.tt.tt_power("android_tablet", [
>>>         ttbl.buttons.pci_buttons_released(
>>>             [ "vol_up", "vol_down", "power" ]),
>>>         ttbl.buttons.pci_button_sequences(
>>>             sequence_off = [
>>>                 ( 'power', 'press' ),
>>>                 ( 'vol_down', 'press' ),
>>>                 ( 'resetting', 11 ),
>>>                 ( 'vol_down', 'release' ),
>>>                 ( 'power', 'release' ),
>>>             ],
>>>             sequence_on = [
>>>                 ( 'power', 'press' ),
>>>                 ( 'powering', 5 ),
>>>                 ( 'power', 'release' ),
>>>             ]
>>>         ),
>>>         ttbl.pc.delay_til_usb_device("SERIALNUMBER"),
>>>         ttbl.adb.pci(4036, target_serial_number = "SERIALNUMBER"),
>>>     ]),
>>>     tags = dict(idle_poweroff = 0),
>>>     target_type = "ANDROID TABLET'S TYPE"
>>> )
>>>
>>> ttbl.config.targets['android_tablet'].interface_add(
>>>     "buttons",
>>>     ttbl.buttons.interface(
>>>         power = ttbl.usbrly08b.button("00023456", 4),
>>>         vol_up = ttbl.usbrly08b.button("00023456", 3),
>>>         vol_down = ttbl.usbrly08b.button("00023456", 2)
>>>     )
>>> )
Parameters: impls (dict) – dictionary keyed by button name, whose values are instantiations of button drivers inheriting from
ttbl.buttons.impl
. Names have to be valid Python symbol names.
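For illustration, here is a minimal sketch of a hypothetical button driver. Note the press()/release() hook names and the missing base-class initialization are assumptions about the ttbl.buttons.impl contract (check that class for the exact methods to implement), and the GPIO details are purely illustrative:

>>> import ttbl.buttons
>>>
>>> class gpio_button_c(ttbl.buttons.impl):     # hypothetical driver
>>>     def __init__(self, gpio_line):
>>>         # base class initialization omitted in this sketch
>>>         self.gpio_line = gpio_line          # illustrative only
>>>     def press(self, target, button):
>>>         # here the real driver would drive the GPIO line to close
>>>         # the button contact
>>>         pass
>>>     def release(self, target, button):
>>>         # here the real driver would release the GPIO line to open
>>>         # the button contact
>>>         pass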
Execute a sequence of button actions on a target
List buttons on a target
Process a request into this interface from a proxy / brokerage
When the ttbd daemon is exporting access to a target via any interface (e.g.: REST over Flask or D-Bus or whatever), this implements a bridge to pipe those requests into this interface.
Parameters: - target (test_target) – target upon which we are operating
- who (str) – user who is making the request
- method (str) – ‘POST’, ‘GET’, ‘DELETE’ or ‘PUT’ (mapping to HTTP requests)
- call (str) – interface’s operation to perform (it’d map to the different methods the interface exposes)
- args (dict) – dictionary of key/value with the arguments to the call, some might be JSON encoded.
- user_path (str) – Path to where user files are located
Returns: dictionary of results, call specific e.g.:
>>> dict(
>>>     output = "something",
>>>     value = 43
>>> )
For an example, see
ttbl.buttons.interface
.
Power control implementation that clicks a button as a step to power on or off something on a target.
Flip the power on
Flip the power off
Return the power state
Power control implementation that executes a button sequence on power on, another on power off.
Flip the power on
Flip the power off
Return the power state
Power control implementation that ensures a list of buttons are not pressed before powering on a target.
Flip the power on
Flip the power off
Return the power state
8.7.6.2. Stream and snapshot capture interface¶
This module implements an interface to capture things in the server and then return them to the client.
This can be used to, for example:
capture screenshots of a screen, by connecting the target’s output to a framegrabber, for example:
…
and then running something such as ffmpeg on its output
capture a video stream (with audio) when the controller can say when to start and when to end
capture network traffic with tcpdump
-
class
ttbl.capture.
impl_c
(stream, mimetype)¶ Implementation interface for a capture driver
The target will list the available capturers in the capture tag.
Parameters: -
start
(target, capturer)¶ If this is a streaming capturer, start capturing the stream
Usually starts a program that is active, capturing to a file until the
stop_and_get()
method is called.Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, usually nothing
-
stop_and_get
(target, capturer)¶ If this is a streaming capturer, stop streaming and return the captured data or take a snapshot and return it.
This stops the capture of the stream and returns the file, or takes a snapshot capture and returns it.
Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, including the data; to stream a large file, include a member in this dictionary called stream_file pointing to the file’s path; eg:
>>> return dict(stream_file = CAPTURE_FILE)
-
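To make the driver contract above concrete, here is a minimal sketch of a hypothetical streaming capturer that records network traffic with tcpdump (one of the use cases listed above); the mimetype value, the capture file location, the use of target.id for naming and treating the stream argument as a boolean are assumptions for illustration only:

>>> import subprocess
>>> import ttbl.capture
>>>
>>> class tcpdump_c(ttbl.capture.impl_c):       # hypothetical driver
>>>     def __init__(self, netif):
>>>         # stream = True: this capturer records until stop_and_get()
>>>         ttbl.capture.impl_c.__init__(
>>>             self, True, "application/vnd.tcpdump.pcap")
>>>         self.netif = netif
>>>         self.p = None
>>>     def start(self, target, capturer):
>>>         # capture file location is an assumption of this sketch
>>>         self.capture_file = "/var/tmp/capture-%s-%s.pcap" \
>>>             % (target.id, capturer)
>>>         self.p = subprocess.Popen(
>>>             [ "tcpdump", "-i", self.netif, "-w", self.capture_file ])
>>>         return {}
>>>     def stop_and_get(self, target, capturer):
>>>         if self.p:
>>>             self.p.terminate()
>>>             self.p.wait()
>>>             self.p = None
>>>         return dict(stream_file = self.capture_file)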
-
class
ttbl.capture.
interface
(**impls)¶ Interface to capture something in the server related to a target
An instance of this gets added as an object to the target object with:
>>> ttbl.config.targets['qu05a'].interface_add(
>>>     "capture",
>>>     ttbl.capture.interface(
>>>         vnc0 = ttbl.capture.vnc(PORTNUMBER),
>>>         vnc0_stream = ttbl.capture.vnc_stream(PORTNUMBER),
>>>         hdmi0 = ttbl.capture.ffmpeg(...),
>>>         screen = "vnc0",
>>>         screen_stream = "vnc0_stream",
>>>     )
>>> )
Note how screen has been made an alias of vnc0 and screen_stream an alias of vnc0_stream.
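With that configuration, a client can request a capture through the alias just as it would through the capturer's own name; for example (mirroring the tcf capture-get usage shown later in this section, and assuming the VNC capturer produces a PNG snapshot):

$ tcf capture-get qu05a screen file.png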
Parameters: impls (dict) – dictionary keyed by capturer name, whose values are instantiations of capture drivers inheriting from
ttbl.capture.impl_c
or names of other capturers (to serve as aliases). Names have to be valid Python symbol names.
-
start
(who, target, capturer)¶ If this is a streaming capturer, start capturing the stream
Parameters: - who (str) – user who owns the target
- target (ttbl.test_target) – target on which we are capturing
- capturer (str) – capturer to use, as registered in
ttbl.capture.interface
.
Returns: dictionary of values to pass to the client
-
stop_and_get
(who, target, capturer)¶ If this is a streaming capturer, stop streaming and return the captured data or if no streaming, take a snapshot and return it.
Parameters: - who (str) – user who owns the target
- target (ttbl.test_target) – target on which we are capturing
- capturer (str) – capturer to use, as registered in
ttbl.capture.interface
.
Returns: dictionary of values to pass to the client
-
list
(target)¶ List capturers available on a target
Parameters: target (ttbl.test_target) – target on which we are capturing
-
request_process
(target, who, method, call, args, _user_path)¶ Process a request into this interface from a proxy / brokerage
When the ttbd daemon is exporting access to a target via any interface (e.g.: REST over Flask or D-Bus or whatever), this implements a bridge to pipe those requests into this interface.
Parameters: - target (test_target) – target upon which we are operating
- who (str) – user who is making the request
- method (str) – ‘POST’, ‘GET’, ‘DELETE’ or ‘PUT’ (mapping to HTTP requests)
- call (str) – interface’s operation to perform (it’d map to the different methods the interface exposes)
- args (dict) – dictionary of key/value with the arguments to the call, some might be JSON encoded.
- user_path (str) – Path to where user files are located
Returns: dictionary of results, call specific e.g.:
>>> dict(
>>>     output = "something",
>>>     value = 43
>>> )
For an example, see
ttbl.buttons.interface
.
-
-
class
ttbl.capture.
vnc
(port)¶ Implementation of a capture driver that takes screenshots of the target’s screen over VNC
-
start
(target, capturer)¶ If this is a streaming capturer, start capturing the stream
Usually starts a program that is active, capturing to a file until the
stop_and_get()
method is called.Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, usually nothing
-
stop_and_get
(target, capturer)¶ If this is a streaming capturer, stop streaming and return the captured data or take a snapshot and return it.
This stops the capture of the stream and returns the file, or takes a snapshot capture and returns it.
Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, including the data; to stream a large file, include a member in this dictionary called stream_file pointing to the file’s path; eg:
>>> return dict(stream_file = CAPTURE_FILE)
-
-
class
ttbl.capture.
ffmpeg
(video_device)¶ -
start
(target, capturer)¶ If this is a streaming capturer, start capturing the stream
Usually starts a program that is active, capturing to a file until the
stop_and_get()
method is called.Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, usually nothing
-
stop_and_get
(target, capturer)¶ If this is a streaming capturer, stop streaming and return the captured data or take a snapshot and return it.
This stops the capture of the stream and returns the file, or takes a snapshot capture and returns it.
Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, including the data; to stream a large file, include a member in this dictionary called stream_file pointing to the file’s path; eg:
>>> return dict(stream_file = CAPTURE_FILE)
-
-
class
ttbl.capture.
generic_snapshot
(name, cmdline, mimetype, pre_commands=None, extension='')¶ This is a generic snapshot capturer which can be used to invoke any program that captures a snapshot.
For example, in a server configuration file, define a capturer that will connect to VNC and take a screenshot:
>>> capture_screenshot_vnc = ttbl.capture.generic_snapshot(
>>>     "%(id)s VNC @localhost:%(vnc_port)s",
>>>     # need to make sure vnc_port is defined in the target's tags
>>>     "gvnccapture -q localhost:%(vnc_port)s %(output_file_name)s",
>>>     mimetype = "image/png"
>>> )
Then attach the capture interface to the target with:
>>> ttbl.config.targets['TARGETNAME'].interface_add(
>>>     "capture",
>>>     ttbl.capture.interface(
>>>         vnc0 = capture_screenshot_vnc,
>>>         ...
>>>     )
>>> )
Now the command:
$ tcf capture-get TARGETNAME vnc0 file.png
will download to
file.png
a capture of the target’s screen via VNC. Parameters: - name (str) –
name for error messages from this capturer.
E.g.: %(id)s HDMI
- cmdline (str) –
commandline to invoke the capturing the snapshot.
E.g.: ffmpeg -i /dev/video-%(id)s; in this case udev has been configured to create a symlink called /dev/video-TARGETNAME so we can uniquely identify the device associated to screen capture for said target.
- mimetype (str) – MIME type of the capture output, eg image/png
- pre_commands (list) –
(optional) list of commands (str) to execute before the command line to, for example, set parameters; e.g.:
>>> pre_commands = [
>>>     # set some video parameter
>>>     "v4l-ctl -i /dev/video-%(id)s -someparam 45",
>>> ]
Note all string parameters are %(keyword)s expanded from the target’s tags (as reported by tcf list -vv TARGETNAME), such as:
- output_file_name: name of the file where to dump the capture output; file shall be overwritten.
- id: target’s name
- type: target’s type
- … (more with tcf list -vv TARGETNAME)
Parameters: extension (str) – (optional) string to append to the file name, for example an extension. This is needed because some capture programs insist on guessing the file type from the file name and balk if there is no proper extension; e.g.:
>>> extension = ".png"
avoid adding the extension to the command name you are asking to execute, as the system needs to know the full file name.
System configuration
It is highly recommended to configure udev to generate device nodes named after the target’s name, to make configuration simpler and to isolate the system from changes in the device enumeration order.
For example, adding to /etc/udev/rules.d/90-ttbd.rules:
SUBSYSTEM == "video4linux", ACTION == "add", KERNEL=="video*", ENV{ID_SERIAL} == "SERIALNUMBER", SYMLINK += "video-TARGETNAME"
where SERIALNUMBER is the serial number of the device that captures the screen for TARGETNAME. Note it is recommended to call the video interface video-SOMETHING so that tools such as ffmpeg won’t be confused.
-
start
(target, capturer)¶ If this is a streaming capturer, start capturing the stream
Usually starts a program that is active, capturing to a file until the
stop_and_get()
method is called.Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, usually nothing
-
stop_and_get
(target, capturer)¶ If this is a streaming capturer, stop streaming and return the captured data or take a snapshot and return it.
This stops the capture of the stream and returns the file, or takes a snapshot capture and returns it.
Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, including the data; to stream a large file, include a member in this dictionary called stream_file pointing to the file’s path; eg:
>>> return dict(stream_file = CAPTURE_FILE)
-
class
ttbl.capture.
generic_stream
(name, cmdline, mimetype, pre_commands=None, wait_to_kill=1)¶ This is a generic stream capturer which can be used to invoke any program that captures the stream for a while.
For example, in a server configuration file, define a capturer that will record video with ffmpeg from a camera that is pointing to the target’s monitor or an HDMI capturer:
>>> capture_vstream_ffmpeg_v4l = ttbl.capture.generic_stream(
>>>     "%(id)s screen",
>>>     "ffmpeg -i /dev/video-%(id)s-0"
>>>     " -f avi -qscale:v 10 -y %(output_file_name)s",
>>>     mimetype = "video/avi",
>>>     wait_to_kill = 0.25,
>>>     pre_commands = [
>>>         "v4l2-ctl -d /dev/video-%(id)s-0 -c focus_auto=0"
>>>     ]
>>> )
Then attach the capture interface to the target with:
>>> ttbl.config.targets['TARGETNAME'].interface_add(
>>>     "capture",
>>>     ttbl.capture.interface(
>>>         hdmi0_vstream = capture_vstream_ffmpeg_v4l,
>>>         ...
>>>     )
>>> )
Now, when the client runs to start the capture:
$ tcf capture-start TARGETNAME hdmi0_vstream
will execute in the server the pre-commands:
$ v4l2-ctl -d /dev/video-TARGETNAME-0 -c focus_auto=0
and then start recording with:
$ ffmpeg -i /dev/video-TARGETNAME-0 -f avi -qscale:v 10 -y SOMEFILE
so that when we decide it is done, in the client:
$ tcf capture-get TARGETNAME hdmi0_vstream file.avi
it will stop recording and download the video file with the recording to file.avi.
Parameters: - name (str) – name for error messages from this capturer
- cmdline (str) – commandline to invoke the capturing of the stream
- mimetype (str) – MIME type of the capture output, eg video/avi
- pre_commands (list) – (optional) list of commands (str) to execute before the command line, to for example, set volumes.
- wait_to_kill (int) – (optional) time to wait since we send a SIGTERM to the capturing process until we send a SIGKILL, so it has time to close the capture file. Defaults to one second.
Note all string parameters are %(keyword)s expanded from the target’s tags (as reported by tcf list -vv TARGETNAME), such as:
- output_file_name: name of the file where to dump the capture output; file shall be overwritten.
- id: target’s name
- type: target’s type
- … (more with tcf list -vv TARGETNAME)
For more information, look at
ttbl.capture.generic_snapshot
.-
start
(target, capturer)¶ If this is a streaming capturer, start capturing the stream
Usually starts a program that is active, capturing to a file until the
stop_and_get()
method is called.Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, usually nothing
-
stop_and_get
(target, capturer)¶ If this is a streaming capturer, stop streaming and return the captured data or take a snapshot and return it.
This stops the capture of the stream and returns the file, or takes a snapshot capture and returns it.
Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, including the data; to stream a large file, include a member in this dictionary called stream_file pointing to the file’s path; eg:
>>> return dict(stream_file = CAPTURE_FILE)
8.7.6.3. Interface to flash the target using fastboot¶
-
class
ttbl.fastboot.
interface
(usb_serial_number, allowed_commands)¶ Interface to execute fastboot commands on target
An instance of this gets added as an object to the main target with something like:
>>> ttbl.config.targets['targetname'].interface_add(
>>>     "fastboot",
>>>     ttbl.fastboot.interface(
>>>         "R1J56L1006ba8b",
>>>         {
>>>             # Allow a command called `flash_pos`; the command
>>>             #
>>>             #   flash_pos partition_boot /home/ttbd/partition_boot.pos.img
>>>             #
>>>             # will be replaced with:
>>>             #
>>>             #   flash partition_boot /home/ttbd/partition_boot.pos.img
>>>             #
>>>             # anything else will be rejected
>>>             "flash_pos": [
>>>                 ( "flash_pos", "flash" ),
>>>                 "partition_boot",
>>>                 "/home/ttbd/partition_boot.pos.img"
>>>             ],
>>>             # Allow a command called `flash`; the command
>>>             #
>>>             #   flash partition_boot FILENAME
>>>             #
>>>             # will be replaced with:
>>>             #
>>>             #   flash partition_boot /var/lib/ttbd-INSTANCE/USERNAME/FILENAME
>>>             #
>>>             # anything else will be rejected
>>>             "flash": [
>>>                 "flash",
>>>                 "partition_boot",
>>>                 ( re.compile("^(.+)$"), "%USERPATH%/\g<1>" )
>>>             ],
>>>         }
>>>     )
>>> )
This allows controlling which commands can be executed in the server using fastboot, allowing access to the server’s user storage area (to which files can be uploaded using the tcf broker-upload command or
tcfl.tc.target_c.broker_files.upload
).The server configuration will decide which commands can be executed or not (a quick list can be obtained with tcf fastboot-list TARGETNAME).
Parameters: - usb_serial_number (str) – serial number of the USB device
under which the target exposes the fastboot interface. E.g.:
"R1J56L1006ba8b"
. - allowed_commands (dict) –
Commands that can be executed with fastboot. This is a KEY/VALUE list. Each KEY is a command name (which doesn’t necessarily need to map to a fastboot command itself). The VALUE is a list of arguments to fastboot.
The user must send the same number of arguments as in the VALUE list.
Each entry in the VALUE list is either a string or a regular expression. Whatever the user sends must match the string or regular expression; otherwise it will be rejected.
The entry can be a tuple ( STR|REGEX, REPLACEMENT ) that allows replacing what the user sends (using
re.sub()
). In the example above:>>> ( re.compile("^(.+)$"), "%USERPATH%/\g<1>" )
it is meant to take a filename uploaded to the server’s user storage area. A match is done on the totality of the argument (i.e., the file name) and then
\g<1>
in the substitution string is replaced by that match (group #1), to yield%USERPATH%/FILENAME
.Furthermore, the following substitutions are done on the final strings before passing the arguments to fastboot:
%USERPATH%
will get replaced by the current user path
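As a worked illustration of the tuple replacement described above, applying the same regular expression and substitution string with re.sub() to a user-supplied file name yields the prefixed path (the file name here is just an example):

>>> import re
>>>
>>> pattern = re.compile("^(.+)$")
>>> # the whole argument becomes group #1 and is prefixed with %USERPATH%/
>>> pattern.sub(r"%USERPATH%/\g<1>", "partition_boot.pos.img")
>>> # -> '%USERPATH%/partition_boot.pos.img'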
Warning
There is potential to compromise the system’s security if wide access is given to touch files or execute commands without filtering what the user sends. Be very restrictive about which commands and arguments are whitelisted.
-
path
= '/usr/bin/fastboot'¶ path to the fastboot binary
can be changed globally:
>>> ttbl.fastboot.interface.path = "/some/other/fastboot"
or for a specific instance
>>> ttbl.config.targets['TARGETNAME'].fastboot.path = "/some/other/fastboot"
-
request_process
(target, who, method, call, args, _user_path)¶ Process a request into this interface from a proxy / brokerage
When the ttbd daemon is exporting access to a target via any interface (e.g.: REST over Flask or D-Bus or whatever), this implements a bridge to pipe those requests into this interface.
Parameters: - target (test_target) – target upon which we are operating
- who (str) – user who is making the request
- method (str) – ‘POST’, ‘GET’, ‘DELETE’ or ‘PUT’ (mapping to HTTP requests)
- call (str) – interface’s operation to perform (it’d map to the different methods the interface exposes)
- args (dict) – dictionary of key/value with the arguments to the call, some might be JSON encoded.
- user_path (str) – Path to where user files are located
Returns: dictionary of results, call specific e.g.:
>>> dict(
>>>     output = "something",
>>>     value = 43
>>> )
For an example, see
ttbl.buttons.interface
.
8.7.6.4. Interface to flash the target using ioc_flash_server_app¶
-
class
ttbl.ioc_flash_server_app.
interface
(tty_path)¶ Remote tool interface
An instance of this gets added as an object to the main target with:
>>> ttbl.config.targets['TARGETNAME'].interface_add( >>> "ioc_flash_server_app", >>> ttbl.ioc_flash_server_app.interface("/dev/tty-TARGETNAME-FW") >>> )
Where
/dev/tty-TARGETNAME-FW
is the serial line for the IOC firmware interface for TARGETNAME. Note this requires the Intel Platform Flash Tool installed on your system; this driver expects the binary to be available in the location described by
path
.Parameters: tty_path (str) – path to the target’s IOC firmware serial port -
path
= '/opt/intel/platformflashtool/bin/ioc_flash_server_app'¶ path to the binary
can be changed globally:
>>> ttbl.ioc_flash_server_app.interface.path = "/some/other/ioc_flash_server_app"
or for a specific instance
>>> ttbl.config.targets['TARGETNAME'].ioc_flash_server_app._path = "/some/other/ioc_flash_server_app"
-
allowed_modes
= ('fabA', 'fabB', 'fabC', 'grfabab', 'grfabc', 'grfabd', 'grfabe', 'hadfaba', 'kslfaba', 'generic', 'w', 't')¶ allowed operation modes
these translate directly to the command line option
-MODE
- fabA
- fabB
- fabC
- grfabab
- grfabc
- grfabd
- grfabe
- hadfaba
- kslfaba
- generic (requires the generic_id parameter too)
- w
- t
-
run
(who, target, baudrate, mode, filename, _filename, generic_id)¶
-
request_process
(target, who, method, call, args, user_path)¶ Process a request into this interface from a proxy / brokerage
When the ttbd daemon is exporting access to a target via any interface (e.g.: REST over Flask or D-Bus or whatever), this implements a bridge to pipe those requests into this interface.
Parameters: - target (test_target) – target upon which we are operating
- who (str) – user who is making the request
- method (str) – ‘POST’, ‘GET’, ‘DELETE’ or ‘PUT’ (mapping to HTTP requests)
- call (str) – interface’s operation to perform (it’d map to the different methods the interface exposes)
- args (dict) – dictionary of key/value with the arguments to the call, some might be JSON encoded.
- user_path (str) – Path to where user files are located
Returns: dictionary of results, call specific e.g.:
>>> dict(
>>>     output = "something",
>>>     value = 43
>>> )
For an example, see
ttbl.buttons.interface
.
-
8.7.7. Common helper library¶
This module implements a simple expression language.
The grammar for this language is as follows:
- expression ::= expression "and" expression
              | expression "or" expression
              | "not" expression
              | "(" expression ")"
              | symbol "==" constant
              | symbol "!=" constant
              | symbol "<" number
              | symbol ">" number
              | symbol ">=" number
              | symbol "<=" number
              | symbol "in" list
              | symbol
- list ::= "[" list_contents "]"
- list_contents ::= constant
                 | list_contents "," constant
- constant ::= number
            | string
When symbols are encountered, they are looked up in an environment dictionary supplied to the parse() function.
For the case where
expression ::= symbol
it evaluates to true if the symbol is defined to a non-empty string.
For all comparison operators, if the config symbol is undefined, it will be treated as a 0 (for > < >= <=) or an empty string “” (for == != in). For numerical comparisons it doesn’t matter if the environment stores the value as an integer or string, it will be cast appropriately.
Operator precedence, starting from lowest to highest:
- or (left associative)
- and (left associative)
- not (right associative)
- all comparison operators (non-associative)
The ‘:’ operator compiles the string argument as a regular expression, and then returns a true value only if the symbol’s value in the environment matches. For example, if CONFIG_SOC=”quark_se” then
filter = CONFIG_SOC : "quark.*"
would match it.
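For illustration, here is a minimal sketch of evaluating expressions with the parse() function documented below; the environment values are made up, and the result is only used as a true/false value as described above:

>>> import commonl.expr_parser
>>>
>>> env = dict(CONFIG_SOC = "quark_se", CONFIG_RAM_SIZE = "32")
>>> # comparison operators bind tighter than "and"; the numeric comparison
>>> # casts the string "32" as described above
>>> if commonl.expr_parser.parse('CONFIG_SOC : "quark.*" and CONFIG_RAM_SIZE > 16', env):
>>>     print("matches")
>>> # an undefined symbol compares as the empty string for ==, so this is false
>>> if not commonl.expr_parser.parse('CONFIG_FOO == "bar"', env):
>>>     print("CONFIG_FOO is undefined or not 'bar'")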
-
commonl.expr_parser.
t_HEX
(t)¶ 0x[0-9a-fA-F]+
-
commonl.expr_parser.
t_INTEGER
(t)¶ \d+
-
commonl.expr_parser.
t_error
(t)¶
-
commonl.expr_parser.
p_expr_or
(p)¶ expr : expr OR expr
-
commonl.expr_parser.
p_expr_and
(p)¶ expr : expr AND expr
-
commonl.expr_parser.
p_expr_not
(p)¶ expr : NOT expr
-
commonl.expr_parser.
p_expr_parens
(p)¶ expr : OPAREN expr CPAREN
-
commonl.expr_parser.
p_expr_eval
(p)¶ expr : SYMBOL EQUALS const | SYMBOL NOTEQUALS const | SYMBOL GT number | SYMBOL LT number | SYMBOL GTEQ number | SYMBOL LTEQ number | SYMBOL IN list | SYMBOL IN SYMBOL | SYMBOL COLON STR
-
commonl.expr_parser.
p_expr_single
(p)¶ expr : SYMBOL
-
commonl.expr_parser.
p_list
(p)¶ list : OBRACKET list_intr CBRACKET
-
commonl.expr_parser.
p_list_intr_single
(p)¶ list_intr : const
-
commonl.expr_parser.
p_list_intr_mult
(p)¶ list_intr : list_intr COMMA const
-
commonl.expr_parser.
p_const
(p)¶ const : STR | number
-
commonl.expr_parser.
p_number
(p)¶ number : INTEGER | HEX
-
commonl.expr_parser.
p_error
(p)¶
-
commonl.expr_parser.
ast_sym
(ast, env)¶
-
commonl.expr_parser.
ast_sym_int
(ast, env)¶
-
commonl.expr_parser.
ast_expr
(ast, env)¶
-
commonl.expr_parser.
parse
(expr_text, env)¶ Given a text representation of an expression in our language, use the provided environment to determine whether the expression is true or false
8.7.7.1. TCOB Fixture setup instructions¶
The TCOB board is a rectangular board, considered to be:
- facing up (DIP switches 1, 2, 3, and 4 facing the viewer)
- top left is the side where the power barrel connector J4 is
- bottom is the side where jumpers J1, J2, J3, J6 and J7, J8, J9 are
+---------------------------------------------------------------+
|+----+ +--J11----------------+ +---J10---------------+ |
|| J4 | | X11 X10 X9... X1 X0 | | X11 X10 X9... X1 X0 | |
|+----+ +---------------------+ +---------------------+ |
| +------------------+ +-----------------+ |
| +----+ | DIP3 | | DIP2 | |
| | J5 | +------------------+ +-----------------+ +---+ |
| +----+ | D | |
| | I | |
| | P | |
| | 4 | |
| +---------+ +---+ |
| | DIP1 | |
| +---------+ |
| +----+----+----+ +----+ +----+----+----+ |
| | J2 | J3 | J6 | | J1 | | J7 | J8 | J9 | |
| +----+----+----+ +----+ +-----+ +----+----+----+ |
+-----------------------------------| J13 |---------------------+
+-----+
Sensing pins banks
These are the pins that are connected to the Arduino header for testing
J11 is a bank of 12 pins on the top left of the board, named X0 through X11 (from right to left)
J11’s I2C 3-bit address is controlled by J2/J3/J6. Shorting the two left pins of the jumper means 1, shorting the two right pins means 0. The address is computed as:
J2 << 2 | J3 << 1 | J6
J10 is a bank of 12 pins on the top right of the board, named X0 through X11 (from right to left)
J10’s I2C address is controlled by J7/J8/J9; same address assignment strategy as for J11.
To address the sensing pins XN we use the tuple (addr, N), where addr is the address of the J11 or J10 block where the pin is physically located. Thus, if J11 has assigned address 3, J11’s X5 would be (3, 5).
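A small sketch of the jumper-to-address computation and the pin addressing just described (the helper function name is illustrative, not part of commonl.tcob):

>>> def tcob_block_address(j_msb, j_mid, j_lsb):
>>>     # each jumper contributes one bit; for J11 that is
>>>     # J2 << 2 | J3 << 1 | J6, per the formula above
>>>     return j_msb << 2 | j_mid << 1 | j_lsb
>>>
>>> tcob_block_address(0, 1, 1)
>>> # -> 3, e.g. a J11 block jumpered 0/1/1 (011)
>>> # pin X5 of that block is then addressed as the tuple (3, 5)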
DIP configuration to allow control of the TCOB via the I2C protocol
- DIP3: turn on 1, 2 and 4 (I2C SCL, SDA and GND)
- DIP2: turn on 6 (reset)
- DIP3: FIXME? turn on 3, 9, 10 on first TCOB layer
- DIP4: turn on 2, 3, 4 and 5 to use the Arduino Due SPI port for pins MISO/MOSI/SCK/CS (FIXME: not clear).
Power configuration
- default power supply: short J5 two bottom pins (we use this by default)
- external (barrel) 12 VDC power supply: open all of J5 pins
- J1: reference voltage
- short J1 if the Arduino (FIXME: DUT? controller?) logic voltage level is 3.3V
- connect to 5V for reference voltage otherwise
Stacking
TCOBs are designed to be stacked, with the top-level J1, J5 and J21 being passed through. The top-level board shall be the one used to short J1 and J5 if such is the configuration.
Connection map
We will give the TCOB controller a connection map and a pin map, describing which sensing pins the different Arduino pins are connected to; for example:
_common_connections = dict(
    # switch 0
    DUT_SCL = "0 0",
    DUT_SDA = "0 1",
    SPI_0_MOSI = "0 2",
)
pin map: FIXME, what is it for?
This indicates that pin X0 of the bank whose I2C address is 0 is to be connected to the Arduino header’s SCL line. By definition, it means there has to be a sensing pin bank with address 0.
For most configurations described below, we stack two TCOBs:
- top TCOB: J11 is #1 (001), J10 is #0 (000)
- bottom TCOB: J11 is #3 (011), J10 is #2 (010)
-
commonl.tcob.
validate_pin_map
(pin_map)¶
-
commonl.tcob.
validate_connections
(connections)¶