The DAQ Manager has extensive APIs for database access and for REST access to the manager server. Also provided is a set of user interface libraries that supply canned views. These can be coupled either with standard model/controller APIs to build user interfaces, or with application-specific model/controller code to produce custom GUIs.
At present these APIs only supply Tcl bindings. As time goes on, it is likely that additional bindings will be written and documented here and in the reference pages.
The organization of this chapter is as follows:
Manager Configuration Database API documents the components of the DAQ manager configuration database API. This API not only supports database manipulation but includes code used by the manager.
Manager REST client API documents the components of the manager REST client API. This API allows you to create custom code to interact with the manager.
Each of these sections will list and describe the packages provided in each category and provide copious references to man pages that provide detailed descriptions of their public interfaces.
This section provides documentation for Tcl packages that give direct access to the experiment configuration database used by the manager. In fact, these packages also contain the manager server, minus its REST interface packages.
The Tcl packages described in this section are available in the
NSCLDAQ Tcl library tree. This is defined by the $DAQTCLLIBS environment
variable when you source in daqsetup.bash from
NSCLDAQ 12.0 and later. You can add this directory to your path
by either adding it to the TCLLIBPATH environment variable when running
tclsh or explicitly including it in auto_path
prior to sourcing your packages. The next pair of examples illustrate
both options.
Example 46-1. Setting the TCLLIBPATH environment variable for tclsh
# On the shell command line:
TCLLIBPATH="$TCLLIBPATH $DAQTCLLIBS" tclsh

# Or via a persistent environment variable:
TCLLIBPATH="$TCLLIBPATH $DAQTCLLIBS"
export TCLLIBPATH
tclsh
Note that the method shown appends the DAQTCLLIBS environment variable value to any existing directories in TCLLIBPATH. This environment variable is a space separated list of directories (a valid Tcl list) that are the top levels of the directory trees searched for packages.
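The other option is to append the library directory to auto_path from within the script itself, before requiring any packages:

```tcl
# Append the NSCLDAQ Tcl library directory to the package search path.
# env is Tcl's global array of environment variables; the :: prefix
# forces global scope, so this works even inside a proc body.
lappend ::auto_path $::env(DAQTCLLIBS)
package require containers
```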
This code snippet takes advantage of the fact that environment
variables in Tcl are stored in a global array named env
whose indices are variable names and values the values. Using ::
in front of these variable names forces them to be evaluated in the
global scope allowing this code snippet to work even in the body of a
proc.
The database API packages are described in subsections listed below:
The containers package section describes a package that allows the manipulation of container definitions, as well as the ability to start a persistent container defined in the database on an arbitrary node that can access the container image and the user's home directory tree.
The programs package section describes a package that supports the definition of programs in the database and can manipulate them as well.
The sequence package section describes a package that supports manipulation of the state machine and the sequences attached to state transitions.
The kvstore package section describes a package that can manipulate the key value store.
The auth package section describes support for an authorization database.
The eventloggers package section describes a package that supports the definition of event loggers.
Note that configuration databases are SQLite3 database files. Thus a pre-requisite for all of these packages is the sqlite3 package. More details will be provided in the sections below.
The containers package is provided to manipulate the definitions of containers and to start and stop persistent containers defined in a configuration database. A common parameter to many of the exported procs is an sqlite3 database command. The example below shows what you need to do to create this command.
Example 46-3. Creating an SQLite3 database command
package require sqlite3
...
sqlite3 db mydbfile.db
This code snippet pulls in the sqlite3 Tcl package and later uses the sqlite3 command to associate the command db with the database file mydbfile.db. The sqlite3 command, by default, will create a new database file; however, since the database must have a specific schema before it can be used by this package, you should not rely on this capability but use mg_mkconfig to create configuration databases instead.
The remainder of this section will briefly describe the capability of the package along with examples. Reference material on the package can be found at: containers.
Container definitions have the following attributes:
Each container definition has a unique name. This name is used to identify the container definition throughout the API.
Each container has an image file. This is a singularity image that, when activated, produces a containerized environment within which programs can run.
Each container definition can have an initialization script. This is a shell script that is run prior to running any program in the container. One common use for this script is to source an appropriate daqsetup.bash into the shell that runs programs in the container.
Each container definition has a possibly empty list of mount points. A mount point is a host file or directory and where it should appear in the containerized environment. Singularity's documentation calls these bind points.
The singularity configuration provides a set of default mount points that meet most needs, however some mount points depend on the container. For example, a containerized environment for Debian 10 will probably need a directory tree of the NSCL software compiled for that distribution, while one for Debian 8 will need a different directory tree.
Containers can be created, removed, listed, tested for, activated, and deactivated. A program can be run in an activated container.
Here's an example of a typical container creation we might have at the FRIB. We want a container that will run the image /usr/opt/buster.img and, for each program run in it, will have the environment variables set up for NSCLDAQ-12.0. This will require an initialization script (to set up those variables) and the following code:
Example 46-4. Creating a container definition for buster
#---------------- Contents of ~/daq12.0.sh ---------------
#!/bin/bash
. /usr/opt/daq/12.0/daqsetup.bash
#---------------------------------------------------------

#---------------- Contents of mkcontainer.tcl ------------
lappend auto_path $env(DAQTCLLIBS)
package require sqlite3
package require containers

sqlite3 db myconfig.db
container::add db buster /usr/opt/buster.img ~/daq12.0.sh \
    [list [list /usr/opt/opt-buster /usr/opt]]
db close
Let's pick this apart.
Note that, as we shall see later, the path used for daqsetup.bash is the path at which this file will appear in the activated container.
The binding we create will make the host directory tree /usr/opt/opt-buster visible in the activated containers as /usr/opt.
This matches our FRIB convention of storing built NSCLDAQ software for a specific container type in /usr/opt/opt-containertype which, for proper use should appear at /usr/opt.
Defining a container does nothing more than that. To use the containerized environment it defines, you must activate the container and subsequently run programs in it. You can then shut down a container by deactivating it. The example below makes a ring buffer named fox in the system spdaq99. We override the default ring buffer data size and create a 16 Mbyte ring buffer.
Example 46-5. Using containers To Make a RingBuffer.
lappend auto_path $env(DAQTCLLIBS)
package require sqlite3
package require containers

sqlite3 db myconfig.db
set containerFd [container::activate db buster spdaq99]
set programFd [container::run buster spdaq99 \
    {$DAQBIN/ringbuffer create fox --datasize=16m}]
puts [gets $programFd]
close $programFd
container::deactivate spdaq99 buster
close $containerFd
The variable programFd is a file descriptor that receives output and errors from the SSH command used to run the program.
Note the use of DAQBIN in the command. First, the command is enclosed in braces so that Tcl does not substitute $DAQBIN in the calling script. Second, because in our container definition we've arranged for daqsetup.bash to be run, the environment variable DAQBIN is defined in the container. Finally, because the container package writes a script to run the program, the string $DAQBIN will get properly substituted there.
Having captured and output the ringbuffer output/error messages, we close the file descriptor.
Note that the package's idea of which containers are active depends on internal data rather than on any system information. Therefore, if you activate a container in one program and then run a program in it from another, you may fail.
Note also that activating a container and running programs in it depends on being able to run the ssh command without a password prompt. It is therefore important that your ~/.ssh/authorized_keys file has a public key for your ssh identity. Search the web for "SSH without password" for information about how to set this up.
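A typical setup looks something like the following sketch; the key type and whether home directories are shared across hosts depend on your site, so treat these commands as an illustration rather than a recipe:

```shell
# Generate a key pair (accept the defaults; leave the passphrase
# empty if your site's policy allows it):
ssh-keygen -t ed25519

# If home directories are shared across hosts, appending your own
# public key to your authorized_keys file is sufficient:
cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys

# For hosts with separate home directories, install the key remotely
# (spdaq99 is just the example host used in this chapter):
ssh-copy-id spdaq99
```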
The programs package provides support for making and manipulating program definitions, for running and killing programs in hosts (with or without containers), and for capturing program output. The package, if needed, activates containers using the containers package we just described to run programs within a containerized environment.
Reference material on the programs package can be found at: programs
A program is defined by the following mandatory data:
An integer that is unique across all programs.
This name is used to identify the program primarily to people. The name must be unique among all programs.
The filesystem path to an executable entity. Executable entities can be binary programs or they can be scripts.
If the program runs containerized, the executable entity must be locatable within the containerized environment. This means that if the path to the script is incomplete, the PATH variable in the container must be able to locate the program. Alternatively, if the path to the program is complete, but the image of the filesystem within the container is different from that in the host, the path within the container must be used.
The type of the program. Programs can be Transitory, which means it is expected they will exit. There are also two types of persistent programs (programs that are not expected to exit): Critical programs are required for the data acquisition system to function properly. If a Critical program exits, the DAQ manager forces the DAQ system to shutdown. Persistent programs are not expected to exit but, if they do, data taking can continue.
The DNS name of the host in which the program will be run. You should specify exactly the name of the host in which to run the program and not use localhost.
Several optional bits of data can be associated with a program and define the environment in which the program runs. Not all of these items are used in this implementation of the system.
The name of a container in which the program will run. This container must be defined in the containers package.
An initialization script run prior to running the program. This is not yet used. The contents of this script are pulled into the database. Thus changes to the initialization script (once it's implemented) will not be seen until the program is re-defined.
By not used, I mean that the GUI systems to edit program definitions don't provide a mechanism for providing this script. At program activation time, any initialization script provided is used.
Since the contents of the script are sucked into the database, it's important to provide the path to the script at the time the program is defined.
Not currently used. If the program provides a REST service, its name should be provided here.
A list of environment name, value pairs which will be put into the program's environment before it is started.
For example, when a Tcl script is the program, you may need to supply a TCLLIBPATH=$DAQTCLLIBS environment definition
The working directory in which the program will be started. If not provided, you should make no assumptions about the working directory the program will be run in.
These bits of optional data are used to construct the command used to run the program.
These are the program options and, optionally, the values needed by those options. For example, for a Readout, an option might be --ring with a value like fox. It is legal for options not to have a value (for example --oneshot). Options are considered to be unordered.
Parameters are placed on the command line following all options. They are considered to be a list of ordered values. An example of program parameters might be the name of the host in which the manager is running.
Note that, since many of these items are used to construct a script to run the command, environment substitutions are supported; e.g., if daqsetup has been run in the context in which the program runs, $DAQBIN can be used to specify the path to an NSCLDAQ program.
Let's look at some of the operations you can perform on programs. We're going to assume that there's a container defined named buster whose initialization script runs daqsetup.bash.
Adding a Program. To add a program running VMUSBReadout in spdaq99, for example, we might do the following:
Example 46-6. Adding a New Program Definition
lappend auto_path $env(DAQTCLLIBS)
package require sqlite3
package require programs

sqlite3 db myconfig.db
program::add db readcrate1 \$DAQBIN/VMUSBReadout Critical spdaq99 \
    [dict create \
        container buster \
        options [list {--daqconfig ~/spdaq99/daqconfig.tcl} \
                      {--ctlconfig /dev/null} \
                      {--ring=crate1} \
                      {--sourceid 99} \
                      {--initscript ~/spdaq99/startrest.tcl} \
        ] \
        environment {TCLLIBPATH $DAQTCLLIBS} \
        service ReadoutREST \
    ]
db close
The directory $DAQTCLLIBS is appended to auto_path so that the programs package can be found.
Note that while this command will create a database if it does not exist, the resulting database will not have any table definitions. To create an experiment configuration database file, you should instead use the NSCLDAQ mg_mkconfig command.
The next two parameters are the name and program path respectively. The name must be unique over all programs or an error will be thrown. The program path must be valid within the environment in which the program will be run. Note the use of the backslash character to prevent variable substitution when specifying that the VMUSBReadout command lives in the NSCLDAQ binaries directory.
The program type is Critical since it is a readout and likely the experiment cannot run without it.
The program will run in the host spdaq99
The optional data are specified as a dict. The key of the dict specifies the data provided and the value the data itself. The container key specifies the program will be started containerized and will run in the buster container.
The options key specifies a list of option/value pairs passed to the program when it is run. Each option is a one- or two-element sublist containing, in order, the option name and value. If an option does not require a value, it is specified as a single-element sublist.
The option/value pairs in this example should be familiar to users of VMUSBReadout, with the possible exception of --initscript, which supplies a Tcl script sourced by the program's Tcl interpreter to start the REST server component.
The environment key provides environment variables. In this case we ensure that the DAQ Tcl libraries are in the library search path. This will be needed by ~/spdaq99/startrest.tcl.
While not yet used, we set the service name used by the REST interface so that when this is used by the system, we won't need any changes.
The contents of the ~/spdaq99/startrest.tcl script are simple:
Example 46-7. Initialization Script to start Readout REST servers
package require ReadoutREST
close stdin
Listing Program Definitions. There are two entries for getting program definitions. getdef returns a dict that describes a program definition when given a database handle and the name of a program. listDefinitions returns a list of those dicts, one for each program definition.
The example below lists the name of each program, where it runs and the image it runs. See programs for a full description of the dict used to describe program definitions.
Example 46-8. Listing Program Definitions
lappend auto_path $env(DAQTCLLIBS)
package require sqlite3
package require programs

sqlite3 db myconfig.db
set inf [program::listDefinitions db]
foreach p $inf {
    set name [dict get $p name]
    set host [dict get $p host]
    set path [dict get $p path]
    puts "$name runs $path in $host"
}
db close
This example should be relatively self-explanatory; listDefinitions only requires a database command ensemble connected to a properly formatted database.
Running a Program. Once a program has been defined it can be run. When you run a program, if it has a container specified, the package will first activate the container. Regardless, it will write a script to run the program. The program itself is run over an ssh pipe, and the file descriptor for that pipe's output and stderr is returned to the caller.
program::run establishes an output handler for the pipe. Optionally the caller can supply an output handler as well. Regardless, it's important, for those output handlers, that at some point an event loop is entered to allow those file handlers to be dispatched.
In the example below, we run two programs: readcrate1 and unimportant. We'll catch and output the output/error of readcrate1 and let the default handler take care of unimportant.
When readcrate1 exits (detected by an eof on the pipe), we'll exit as well.
Example 46-9. Starting Programs in the programs Package
lappend auto_path $env(DAQTCLLIBS)
package require sqlite3
package require programs

sqlite3 db myconfig.db
set done 0

proc ReadCrate1OutputHandler {name fd} {
    if {![eof $fd]} {
        puts "$name: [gets $fd]"
    } else {
        incr ::done
    }
}

program::run db readcrate1 ReadCrate1OutputHandler
set fd [program::run db unimportant]

vwait done
if {[program::isActive unimportant]} {
    program::kill db unimportant
    catch {close $fd}
}
db close
If the file descriptor has not been closed by the program, a line is read and output, preceded by the name of the program. If an end of file is detected, we increment the done variable. More about this later.
Note that we don't close the file descriptor. The program API wraps our output handler in its own output handler. That output handler wrapper will take care of closing the file descriptor.
vwait blocks until the done global variable is modified. As we've seen, this happens when readcrate1 exits (or at least closes its output and error pipe). Entering the event loop allows Tcl to dispatch the output handlers attached to file descriptors (see the Tcl documentation of fileevent).
It's a good idea to wrap the close of file descriptors open on pipelines in a catch. This is because anything that looks like a failure of the process connected to the pipe will result in an error from the close command. This includes not only a non-zero exit status but any output on the program's stderr as well.
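The pattern, including how to inspect the reported error, looks like this:

```tcl
# Close a pipeline file descriptor defensively: close raises an error
# if the child exited non-zero or wrote anything to stderr.  catch
# captures that error; msg receives the error text, if any.
if {[catch {close $fd} msg]} {
    puts "pipeline close reported: $msg"
}
```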
A note about program::kill. This operates by locating the program image in the remote system and using ssh to issue a kill command. Sadly, there are cases where this is insufficient and I don't know how to help that. Specifically, if the image is a script that runs a program, even though the script gets killed, the program may continue to run.
The sequence package provides support for two concepts within the manager:
The manager implements a state machine. While there is a default state machine loaded into the manager, it is possible to define any state machine as long as its initial state is called SHUTDOWN and any state can transition to SHUTDOWN.
The manager associates sequences to transitions into a specific state. A sequence is a list of programs (see previous section) that are run to properly transition to that state.
Reference information on the sequence package is at sequence
The remainder of this section will describe:
What a state machine is, and the default state machine that's implemented when a configuration database is created. This is described in State Machines, and the Default State Machine
Parts of the sequence package that can be used to create other state machines than the default state machine. Note that in most cases it is not necessary to create a custom state machine. This is described in The State machine API
The Sequence API describes sequences and the parts of the sequence package that manage them.
The Transition API describes transitions and the parts of the API that manage them.
Miscellaneous API Entries. describes miscellaneous parts of the API that don't neatly fit into any of these categories.
This section describes, in a general way, what a state machine is and the default state machine that is loaded into the database by mg_mkconfig.
A state machine is a system that can live in any of a set of named states. State machines define, as well, the legal transitions to successor states given the system is in a specific state.
State machines have at most two special states. The initial state defines the state the system is in when it is initially instantiated. A system may also have a final state. The final state is one that has no legal successors. If a state machine enters the final state it has completed execution.
The table below describes the default state machine as the set of state names and their legal successor states. The initial state is also described.
Table 46-1. Default Manager State Machine
State | Successors |
---|---|
SHUTDOWN (initial) | BOOT, SHUTDOWN |
BOOT | SHUTDOWN, HWINIT, BEGIN |
HWINIT | SHUTDOWN, BEGIN |
BEGIN | SHUTDOWN, END |
END | SHUTDOWN, HWINIT, BEGIN |
This state machine has no final state.
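The successor relationships in the table above can be modeled as a simple mapping from each state to its legal successors. The sketch below is illustrative only, not part of the sequence package:

```tcl
# Illustrative only: model the default state machine as a dict mapping
# each state to the list of its legal successor states.
set machine [dict create \
    SHUTDOWN {BOOT SHUTDOWN} \
    BOOT     {SHUTDOWN HWINIT BEGIN} \
    HWINIT   {SHUTDOWN BEGIN} \
    BEGIN    {SHUTDOWN END} \
    END      {SHUTDOWN HWINIT BEGIN} \
]

# A transition from $from to $to is legal if $to is a successor of $from.
proc legal {machine from to} {
    expr {$to in [dict get $machine $from]}
}

puts [legal $machine SHUTDOWN BOOT]   ;# prints 1
puts [legal $machine BEGIN HWINIT]    ;# prints 0
```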
The state machine API allows you to configure an arbitrary state machine. The only requirement the manager has for state machines is that there is an initial state named SHUTDOWN and that this state is directly reachable from every other state. Note that various other GUIs, such as rdo_RunControl, have additional requirements.
The three procs that support modifying the state machine are newState, which creates a new state; newTransition, which defines a new transition; and rmvState, which removes a state and all transitions to/from it, as well as all sequences it triggers. See The Sequence API for more on sequences and triggers.
In the example below we extend the state machine to support a PAUSED state. The paused state can be reached from the existing BEGIN state and can reach BEGIN, END and, of course SHUTDOWN.
Example 46-10. Adding a PAUSED State
lappend auto_path $env(DAQTCLLIBS)
package require sqlite3
package require sequence

sqlite3 db myconfig.db
sequence::newState db PAUSED
sequence::newTransition db BEGIN PAUSED
sequence::newTransition db PAUSED END
sequence::newTransition db PAUSED BEGIN
sequence::newTransition db PAUSED SHUTDOWN
db close
sequence::newState adds PAUSED to the state machine. At this point there are no states that can transition into PAUSED, and PAUSED cannot transition to any successor state.
Using the rmvState operation it's possible to make a completely new state machine by removing all states in the existing state machine and starting from scratch.
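For example, the default machine could be torn down like this before defining a replacement (a sketch; it assumes the db command is open on a configuration database, and SHUTDOWN is kept because the manager requires it):

```tcl
# Remove every state of the default machine except SHUTDOWN.  Removing
# a state also removes all transitions to/from it and any sequences it
# triggers.
foreach state {BOOT HWINIT BEGIN END} {
    ::sequence::rmvState db $state
}
# New states and transitions can now be defined with
# ::sequence::newState and ::sequence::newTransition.
```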
States and transitions by themselves do nothing. The power of this package is the ability to add sequences of actions to transitions into a state. In this version (NSCLDAQ 12.0), the actions that can be added are simply running programs that have been defined using e.g. the programs API.
A sequence is a series of ordered steps. Each step runs a program with an optional pre and post delay. Sequences are triggered by the entry into a specific state. Any number of sequences can be defined and a transition into a state can trigger as many sequences as desired.
Each sequence step runs a program that was defined using the programs API. Recall that there are three types of programs: Transitory, Persistent, and Critical. Transitory programs are expected to exit quickly, while Persistent and Critical programs are expected to endure indefinitely.
Therefore, when a Transitory program runs, the step stalls until that program exits. Persistent and Critical programs, however, are simply started. If a Critical program exit is detected, the state machine initiates a transition to the SHUTDOWN state.
Suppose we want to start readout1 and readout2, the event builder (eventbuilder), and the data sources for readout1 and readout2 (feeder1 and feeder2) on a transition to BOOT. These programs have already been defined in the database. The following code snippet performs this task:
Example 46-11. Defining A Boot Sequence
lappend auto_path $env(DAQTCLLIBS)
package require sqlite3
package require sequence

sqlite3 db myconfig.db
sequence::add db InitiateDataFlow BOOT
::sequence::addStep db InitiateDataFlow readout1
::sequence::addStep db InitiateDataFlow readout2
::sequence::addStep db InitiateDataFlow eventbuilder 0 2
::sequence::addStep db InitiateDataFlow feeder1
::sequence::addStep db InitiateDataFlow feeder2
db close
Transitions are the act of placing the system in a new state that can be legally reached from the current state. Doing so requires that all sequences triggered on the new state run successfully.
There are several API members that allow you to determine the current state and the legal transitions from that state, to test if a transition is legal and, most importantly, to initiate a transition.
Transitions require scheduling and this requires the event loop. Thus normally, you initiate a transition and enter the event loop at least until the transition completes.
Note that if a sequence triggered by the transition runs either Persistent or Critical programs, capturing output from those programs is also done via the event loop. Thus users of this package that drive the state machine through transitions must be sure to enter the event loop in a manner that makes the system responsive to events.
The example below tests to see if we are in SHUTDOWN state and, if so, initiates a transition to BOOT the system.
Example 46-12. Booting the System.
lappend auto_path $env(DAQTCLLIBS)
package require sqlite3
package require sequence

sqlite3 db myconfig.db
set status ""

proc completion {db manager completionStatus} {
    set ::status $completionStatus
}

if {[::sequence::currentState db] eq "SHUTDOWN"} {
    ::sequence::transition db BOOT completion
    vwait status
    puts "Transition to BOOT completed in $status"
} else {
    puts "Not in SHUTDOWN state."
}
db close
See the reference page for more information about the Transition manager and its public methods.
The completion proc sets the status global variable with the status of the transition, which ends the vwait and makes the completion status available.
Several miscellaneous API interfaces allow you to gain information about the system. It is possible, for example, for applications to:
Get information about the currently active state transition.
Capture the output of specific sequence steps.
Send text to the output server so that it is relayed to all output clients.
Note, again, that transitions and output handling are triggered by the event loop, and that therefore the program in which these are used must enter the event loop in a timely manner.
The fragment below determines which transition, if any, is currently in progress, and attaches an output monitor to each step of the sequences triggered by that transition. This makes use of the services of a TransitionManager object that is documented fully in the reference material.
Example 46-13. Using Miscellaneous Sequence Facilities
...
package require sequence
package require snit
...
# Define the monitor type first so that it can be instantiated below.
snit::type MyMonitor {
    option -step
    option -database
    option -sequence
    constructor args {
        $self configurelist $args
    }
    method onOutput {db program fd} {
        set line [gets $fd]
        set name [dict get $program name]
        ::sequence::relayOutput "Step $options(-step); $name: $line"
    }
    method onExit {program fd} {
        ::sequence::relayOutput "[dict get $program name] exited!!"
        ::sequence::addMonitor $options(-database) $options(-sequence) \
            $options(-step) ""
        after 0 [mymethod destroy]
    }
}

set manager [::sequence::getCurrentTransition]
if {$manager ne ""} {
    set db [$manager cget -database]
    set t  [$manager cget -type]
    set allSequences [::sequence::listSequences $db]
    foreach seq $allSequences {
        set name [dict get $seq name]
        if {[dict get $seq transition_name] eq $t} {
            set steps [::sequence::listSteps $db $name]
            foreach step $steps {
                set stepno   [dict get $step step]
                set aMonitor [MyMonitor %AUTO% \
                    -database $db -sequence $name -step $stepno \
                ]
                ::sequence::addMonitor $db $name $stepno $aMonitor
            }
        }
    }
}
...
# More code that eventually enters the event loop.
It's important to emphasize that this code is a program fragment, not a complete program. The ellipses show where additional code would be needed.
We're going to use snit types to define our output monitors. There are other options like namespace ensembles, TclOO, and [incr Tcl] (in fact, Snit is a recursive acronym that stands for Snit's Not Incr Tcl).
The key point, as we'll see, is that in order to make an output monitor, you need to have a command ensemble that accepts the two subcommands onOutput and onExit.
Since snit is also used as a megawidget framework, in addition to encapsulating variables a snit type can define options. Options are like the options of a Tk widget. They can be supplied at construction time, or the built-in methods configure and cget can set or query them. Options appear to method code as an array named options indexed by option name.
Finally, a snit type method defines a subcommand of the instance command ensemble.
This sample implementation simply informs clients of the output relay server, removes itself as a monitor (by passing an empty command to ::sequence::addMonitor), and finally schedules itself for destruction. This is done from the event loop to prevent potentially awkward problems with the object being destroyed while one of its methods is still executing. While I have successfully done $self destroy in similar situations, it's always left me feeling uneasy.
A key value store can be thought of as an array whose keys are strings. The manager's key value store is string valued: associated with each key is a string value. The kvstore package provides the ability to directly access the key value store in the configuration SQLite3 database file.
Using the API, you can create a new key/value pair. Your application can read existing keys and modify their values. Finally, you can also remove an existing key/value pair from the store.
The key value store can be used by you as you wish. However, note that the readout control packages create and use a pair of keys: run and title to hold the run number and title of the next run. As the manager evolves it is possible that additional facilities will create and use other standard key/value pairs.
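For instance, the next run's number and title can be fetched with the documented listAll entry (a sketch; it assumes the db command is open and that the readout control packages have already created the run and title keys):

```tcl
# listAll returns a dict of key -> value; pull out the standard
# run/title keys maintained by the readout control packages.
set kv [::kvstore::listAll db]
puts "Next run: [dict get $kv run] - [dict get $kv title]"
```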
See kvstore for reference information on the key value store API.
The example below shows a few of the features of the kvstore package. It assumes that the DAQ environment for a version 12.0 or higher NSCLDAQ has been setup or, alternatively that the TCLLIBPATH environment variable includes the TclLibs subdirectory of such a version.
Example 46-14. Using the kvstore package
if {[array names env DAQTCLLIBS] ne ""} {
    lappend auto_path $env(DAQTCLLIBS)
}
package require kvstore
package require sqlite3

sqlite3 db myconfig.db
kvstore::create db mykey myvalue
kvstore::modify db mykey {A new value}

puts "Contents of kvstore:"
dict for {key value} [::kvstore::listAll db] {
    puts "$key => $value"
}
db close
While it is not yet used as of NSCLDAQ-12.0, the manager supports an authorization database. In the future, this will be used to limit the things individual users can request of the manager. Understanding this part of the API and database requires understanding the following basic terms:
Users represent holders of computer accounts. Users band together to collaborate on experiments.
A role represents a bundle of capabilities. Usually role names are designed to represent the things that individual users on an experiment might do. For example, manager might be the people that can configure the system and start/stop the DAQ manager.
Roles can be granted to and revoked from users. When a role has been granted, the user gains the bundle of capabilities represented by the role. For example, if a user is granted the ShiftOperator role, they might gain the ability to start/stop runs.
Full reference material for the authorization package (auth) is provided in auth.
The example below adds a new user to the experiment, creates a few roles and grants a few of those roles to the new user.
Example 46-15. Program Using the auth Package
if {[array names env DAQTCLLIBS] ne ""} {
    lappend auto_path $env(DAQTCLLIBS)
}
package require auth
package require sqlite3

sqlite3 db myconfig.db
auth::adduser db fox
auth::addrole db manager
auth::addrole db analyzer
auth::addrole db operator
auth::grant db fox operator

puts "People and the roles they have:"
dict for {person roles} [auth::listAll db] {
    puts "$person has been granted [join $roles {, }]"
}
db close
The eventloggers package provides the ability to define and control event loggers. Event logger definitions in the manager configuration database subsume all of the capabilities of the NSCLDAQ-11 ReadoutShell's multilogger and primary event logger.
An arbitrary number of event loggers of both types can be defined, enabled, disabled, and marked as critical. If a critical logger fails, a SHUTDOWN state transition is forced. In this sense, this package is dependent on the sequence package.
As you build the set of event loggers you will use, be careful to consider the bandwidth required to run them.
Event loggers have the following attributes:
A unique integer, or id, is assigned to each logger as it is created. This id can be used to refer to that logger in the future.
Top level directory of the NSCLDAQ installation that contains the event logger that will be run. This determines the version of NSCLDAQ from which the event logger comes.
If the event logger is containerized (see options below), this path must be the correct path inside the running container.
The URL which defines the ring buffer from which the logger logs data.
The host in which the event logger must run. It is assumed that the root is valid in that host and that the host is running the NSCLDAQ services.
The top level directory in which the logger stores data. See TYPES OF LOGGERS below for more information about this.
Boolean value. If true, this logger is a partial logger. See TYPES OF LOGGERS below.
Boolean. If true, the logger is a critical component of the running DAQ system. If this logger fails, the manager will therefore force a SHUTDOWN transition.
Boolean that, if true, indicates the logger is enabled. If not enabled, a logger won't run to take data during a data taking run. If enabled it will.
Note that it is legal for critical loggers to be disabled.
The name of the container the event logger will run in.
Destination directories and data source URIs must be unique.
TYPES OF LOGGERS. The event logging subsystem recognizes two types of loggers, partial and complete. Partial loggers, like loggers in the NSCLDAQ multilogger package, just log event files into a directory as a 'soup of files'. Event file names are prefixed with the date/time at which the logger started to ensure they are unique. For partial loggers, the destination is simply the directory in which these files are stored.
Full loggers, on the other hand, behave like the primary event logger in the NSCLDAQ ReadoutShell. The destination directory is the top level directory of the directory tree managed by the logger and its wrapper. The structure of this tree is shown below. The intent is to provide an experiment view and a per run view. The experiment view provides access to all event files while the per run view also provides access to associated run metadata stored in the experiment/current subdirectory when the run ended.
Figure 46-1. Full logger directory tree
(destination) +
+----> experiment+
| +---> current
| +---> run1
| +---> run2
... ...
+----> complete
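The tree above can be navigated programmatically. The helpers below are a sketch of building the per-run and experiment-view paths for a full logger; the procedure names are ours for illustration and are not part of any NSCLDAQ package:

```tcl
# Sketch: compute paths within a full logger's directory tree.
# These procs are illustrative helpers, not part of NSCLDAQ.
proc runDirectory {destination run} {
    # Per-run view: (destination)/experiment/run<N>
    file join $destination experiment run$run
}
proc completeDirectory {destination} {
    # Experiment view: (destination)/complete collects all event files.
    file join $destination complete
}

puts [runDirectory ~/stagearea 2]      ;# ~/stagearea/experiment/run2
puts [completeDirectory ~/stagearea]   ;# ~/stagearea/complete
```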
The example below illustrates some of the simple capabilities of the eventloggers package. Full reference documentation can be found at eventloggers.
Example 46-16. Sample Eventloggers Program
if {[array names env DAQTCLLIBS] ne ""} {
    lappend auto_path $env(DAQTCLLIBS)
}
package require eventloggers
package require sqlite3

sqlite3 db myconfig.db
set dest [file normalize ~/stagearea/evlog1]
file mkdir $dest
set id [::eventlog::add db /usr/opt/daq/12.0-000 \
    tcp://spdaq99/fox $dest \
    [dict create host evloghost critical 1 enabled 1 container buster] \
]

set killdest [file normalize ~/stagearea/evlog2]
foreach logger [eventlog::listLoggers db] {
    if {[file normalize [dict get $logger destination]] eq $killdest} {
        eventlog::rm db [dict get $logger id]
    }
}
db close
The final parameter is a dict whose keys define overrides for optional configuration values. We illustrate some of the more common options here:
The system which will run the event logger.
Flag that indicates whether or not the logger is critical; in the example it is marked critical.
Flag that indicates whether or not the logger is enabled.
Name of a container that was established via the containers package. Note that when supplying this any filesystem paths provided (in this case the destination and root), must be valid within the running container.
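The overrides dict can also be assembled incrementally, which is convenient when some options are conditional. The sketch below uses the option key names described above; the values and the containerName variable are illustrative assumptions:

```tcl
# Sketch: build the option dict for ::eventlog::add incrementally.
# Key names (host, critical, enabled, container) are the options
# described above; the values here are illustrative only.
set opts [dict create host evloghost enabled 1]
set containerName buster        ;# empty string would mean "no container"
if {$containerName ne ""} {
    dict set opts container $containerName
}
dict set opts critical 1
puts $opts
```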
This loop iterates over the defined loggers, searching for one whose normalized destination path is the same as the path in killdest, and destroys it using eventlog::rm.
When comparing file system paths, it is important
to compare normalized paths as several path designations
can point to the same place.
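A quick illustration of that point, using only core Tcl (the specific relative paths are arbitrary):

```tcl
# Two different designations of the same location normalize to the
# same absolute path: file normalize collapses . and .. components
# (and resolves symlinks in existing path components).
set a [file normalize stagearea/./evlog2]
set b [file normalize stagearea/sub/../evlog2]
puts [expr {$a eq $b}]    ;# 1 -- both name the same place
```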