Now that the data look right, we're going to tailor SpecTcl to do online analysis of that data. In this example we will write our code as if the module is the only item to decode, unpacking the data into parameters named t.00 through t.31. In a larger setup you might want to encapsulate the data from the TDC in a packet and have your code search for that packet in the event, unpacking only that part of the data.
SpecTcl analyzes data using the model of an analysis pipeline. Each stage of the pipeline has access to the prior stage's data, as well as the raw event. We'll illustrate this by writing a second stage of the pipeline that produces time differences between adjacent channels of the TDC. The interesting thing is that we can write that stage without having any knowledge of the format of the raw event.
This suggests that your SpecTcl software should consist of analysis stages that first decode the raw data and then produce physically useful parameters once those data are decoded. This protects what is usually the hard part of your analysis software from changes to the structure of the raw event.
SpecTcl's data analysis pipeline is expected to take a stream
of raw events and unpack it into parameters.
Spectra can then be defined on those parameters. Gates can also be
defined on parameters and used to conditionalize when a spectrum
is incremented. Each stage of the event analysis pipeline is an
object of a class derived from CEventProcessor.
In this section we'll write an event processor class that will decode the data our readout program is producing. To do this we will:
Obtain a copy of the SpecTcl skeleton.
Write our event processor class.
Add an object of our event processor to the SpecTcl analysis pipeline.
Create a SpecTcl startup script that creates raw time spectra.
Build and test our modified SpecTcl.
In this section we are going to create a working directory and copy the SpecTcl-v3.4 skeleton into that directory.
Here is the header for our raw unpacker (event processor) pipeline stage:
Example 10-9. Raw TDC unpacker (RawUnpacker.h)
#ifndef _RAWUNPACKER_H
#define _RAWUNPACKER_H

#include <config.h>
#include <EventProcessor.h>

class CTreeParameterArray;

class CRawUnpacker : public CEventProcessor
{
public:
    CRawUnpacker();
    virtual ~CRawUnpacker();

    virtual Bool_t operator()(const Address_t pEvent,
                              CEvent&         rEvent,
                              CAnalyzer&      rAnalyzer,
                              CBufferDecoder& rDecoder);
private:
    CTreeParameterArray&  m_times;
};

#endif
The config.h header must be included before any other SpecTcl headers.
EventProcessor.h defines the CEventProcessor abstract base class. Our class
will derive from that base class, which means the compiler will need to know
the shape of that class when compiling our header.
CRawUnpacker will be our class for unpacking the raw data. Note that it is
declared with CEventProcessor as a base class.
Let's fill in the class implementation. We'll do this in sections so that no single sample chunk of code is very large. Note that some sections may be presented out of order for the sake of clarity.
Example 10-10. V775 raw unpacker implementation includes and defs (RawUnpacker.cpp)
#include "RawUnpacker.h" #include <TreeParameter.h> #include <TranslatorPointer.h> #include <BufferDecoder.h> #include <TCLAnalyzer.h> #include <assert.h> #include <stdint.h> static const uint32_t TYPE_MASK (0x07000000); static const uint32_t TYPE_HDR (0x02000000); static const uint32_t TYPE_DATA (0x00000000); static const uint32_t TYPE_TRAIL(0x04000000); static const unsigned HDR_COUNT_SHIFT(8); static const uint32_t HDR_COUNT_MASK (0x00003f00); static const unsigned GEO_SHIFT(27); static const uint32_t GEO_MASK(0xf8000000); static const unsigned DATA_CHANSHIFT(16); static const uint32_t DATA_CHANMASK(0x001f0000); static const uint32_t DATA_CONVMASK(0x00000fff);
TreeParameter.h provides the full definition of CTreeParameterArray, which our header only forward-declared.
Note that byte order signatures will typically be those of the system writing the event data. If those data, in turn, come from a device with different endian-ness than the host system, the user will have to know this and perform additional conversions.
Example 10-11. V775 raw unpacker - getLong utility (RawUnpacker.cpp)
static inline uint32_t
getLong(TranslatorPointer<uint16_t>& p)
{
    uint32_t l = *p++ << 16;
    l         |= *p++;

    return l;
}
While our readout program runs on a little endian computer, the VME bus is big-endian. This utility takes a translator pointer object that points to a specific 32 bit item in big endian format and converts it to native format.
Note that the translating pointer object is passed by reference. The pointer is incremented to point beyond the uint32_t that was converted.
Example 10-12. V775 raw unpacker object constructor/destructor (RawUnpacker.cpp)
CRawUnpacker::CRawUnpacker() :
    m_times(*(new CTreeParameterArray("t", 4096, 0.0, 4095.0, "channels", 32, 0)))
{}

CRawUnpacker::~CRawUnpacker()
{
    delete &m_times;
}
Example 10-13. V775 raw unpacker unpacking events (RawUnpacker.cpp)
Bool_t
CRawUnpacker::operator()(const Address_t pEvent,
                         CEvent&         rEvent,
                         CAnalyzer&      rAnalyzer,
                         CBufferDecoder& rDecoder)
{
    TranslatorPointer<uint16_t> p(*rDecoder.getBufferTranslator(), pEvent);
    CTclAnalyzer&               a(dynamic_cast<CTclAnalyzer&>(rAnalyzer));

    TranslatorPointer<uint32_t> p32 = p;
    uint32_t size = *p32++;
    p = p32;
    a.SetEventSize(size*sizeof(uint16_t));

    uint32_t header = getLong(p);
    assert((header & TYPE_MASK) == TYPE_HDR);
    assert(((header & GEO_MASK) >> GEO_SHIFT) == 0xa);

    int nchans = (header & HDR_COUNT_MASK) >> HDR_COUNT_SHIFT;

    for (int i = 0; i < nchans; i++) {
        uint32_t datum = getLong(p);
        assert((datum & TYPE_MASK) == TYPE_DATA);

        int      channel    = (datum & DATA_CHANMASK) >> DATA_CHANSHIFT;
        uint16_t conversion = datum & DATA_CONVMASK;

        m_times[channel] = conversion;
    }

    uint32_t trailer = getLong(p);
    assert((trailer & TYPE_MASK) == TYPE_TRAIL);

    return kfTRUE;
}
The operator() method of a registered event processor is called for each
event. This function has access to the raw event via the pEvent
parameter and access to the unpacked parameters
via the rEvent
parameter
array, although binding tree parameters and
tree parameter arrays to rEvent
is usually simpler.
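To make that concrete, here is a minimal sketch (two lines that would live inside operator(); the parameter id 0 is purely an assumption for illustration) contrasting direct access to rEvent with the tree parameter array used in this chapter:
    // Direct access: you must know the id SpecTcl assigned to the parameter.
    rEvent[0] = conversion;    // parameter with id 0 gets the raw time.

    // Tree parameter access: m_times[0] was bound to the corresponding
    // rEvent slot before the pipeline started, so no id bookkeeping is needed.
    m_times[0] = conversion;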
SpecTcl has two other objects that event processors
need. The rAnalyzer
parameter
is a reference to an object that oversees the flow
of control through the analysis of the data. It is
actually the object that invoked
operator()
. Knowledge
of the top level structure of the event data is
held in rDecoder
which, for historic reasons, is called a CBufferDecoder. It is responsible
for picking apart the outer structure of the data and passing the decoded
data to the analyzer for dispatch.
The expectation is that the analysis pipeline will:
Decode the raw event turning it into parameters
that are in rEvent
(tree parameters get automatically bound to
elements of rEvent
before the analysis pipeline starts).
Inform the analyzer about the number of bytes that are in the event.
Detect and inform the analyzer about failures in the pipeline that should abort its execution and discard the parameters prior to the histogramming pass over the data.
All of the work done by operator()
is directed at one of these three tasks. Note that
in a larger pipeline, the second of these tasks,
telling the analyzer the event size, only needs to
be performed by one of the pipeline elements.
There is also no need for all elements of the
analysis pipeline to touch the raw event data and,
in complex analysis, usually only a few will.
TranslatorPointer
objects behave somewhat like pointers but automatically
translate data from the readout system's byte ordering
to the host system's byte ordering. Therefore,
to be fully portable, we encourage all access to
raw event data to be done via a translating pointer.
This line of code creates a translating pointer for uint16_t data that points to the raw event.
By default, SpecTcl's analyzer is a CTclAnalyzer object. While the analyzer
object can be configured by the user, normally this is not done.
We need to know the analyzer type because the method used to pass the size
of the event back to the analyzer is, unfortunately, analyzer dependent.
In this line we initialize the variable a to be a reference to the analyzer.
The dynamic cast will throw an exception if the analyzer is not, in fact, a
CTclAnalyzer or an object of a type derived from CTclAnalyzer.
This line invokes CTclAnalyzer's SetEventSize method to inform the analyzer
of the event size. The code does not care about the byte ordering of the
data in the buffer because it creates a TranslatorPointer<uint32_t> to
extract this size. Note that translator pointers of various simple data
types can be assigned to each other.
Using our utility function getLong
to extract the next 32 bit item from the buffer
in host order.
Ensuring that the type field of that item is that of a header (the assert macros will make the program exit with an error message unless the program is compiled with -DNDEBUG).
Ensuring the geographical address field of the item matches the geographical address we programmed into the module.
Once the item is validated as a header, the number of
channels of data is extracted from it. Note that
in production code the use of assert is probably not
appropriate; some alternatives (sketched after this list) are:
Throw an std::string
exception. The analyzer catches those exceptions,
and outputs the message to stdout. If
the analyzer catches an exception, it aborts
the event processing pipeline and does not
histogram any parameters that were extracted
at that time.
Output an error message and return kfFALSE. This return value causes the analyzer to abort the event processing pipeline and not to run the histogrammer for the parameters extracted so far.
In some cases it may even be appropriate to output a message and return kfTRUE. That stops processing the event data but lets the analyzer continue with the next stage of the pipeline (or with histogramming the parameters unpacked so far if this is the last stage).
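For example, here is a sketch of how the header check in operator() could be rewritten without assert. This is only an illustration, assuming you keep the constants defined at the top of RawUnpacker.cpp and add #include <string> (and #include <iostream> for the second alternative) to that file:
    uint32_t header = getLong(p);
    if ((header & TYPE_MASK) != TYPE_HDR) {
        // Alternative 1: throw. The analyzer reports the message and aborts
        // the rest of the pipeline for this event.
        throw std::string("CRawUnpacker: expected a V775 header longword");

        // Alternative 2 (instead of throwing): report the problem ourselves
        // and abort the pipeline for this event by returning kfFALSE.
        // std::cerr << "CRawUnpacker: expected a V775 header longword\n";
        // return kfFALSE;
    }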
Note that the data also contains a Valid bit. The default programming of the TDC suppresses data for which this bit is not set. If you turn that suppression off, you will need to decide what to do with data that are not valid.
We have code to unpack the TDC. SpecTcl needs to be told
to use that code. This is done in the method
CreateAnalysisPipeline
in the
skeleton file MySpecTclApp.cpp.
That file contains an example event processing pipeline which needs to be deleted. In this section we'll look at the modifications you need to make to MySpecTclApp.cpp for our simple setup.
First locate the section of that file that contains #include directives. Add the following line after the last #include:
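#include "RawUnpacker.h"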
That makes our raw event unpacking class
CRawUnpacker
known to the compiler
in this file.
Next modify the CreateAnalysisPipeline
method body to look like this:
void
CMySpecTclApp::CreateAnalysisPipeline(CAnalyzer& rAnalyzer)
{
    RegisterEventProcessor(*(new CRawUnpacker), "Raw-TDC");
}
RegisterEventProcessor
adds a new
event processor to the end of the analysis pipeline. The
first parameter is a reference to the event processor object
(an instance of a CRawUnpacker
).
The second parameter is a name to associate with the pipeline
element.
The name is an optional parameter, but there is the capability to introspect and to modify the analysis pipeline at run time (adding and removing pipeline elements at specific points in the pipe). The methods that locate a specific pipeline element require a name for that pipeline element.
We have our unpacking code and SpecTcl has an instance of our unpacker as its only analysis pipeline element. If we ran SpecTcl now it could unpack the data just fine but nothing would be done with the unpacked parameters. We also need to define a set of spectra. We are going to write a startup script for SpecTcl that does this and ensure that this script is run by SpecTcl when it starts up.
Before we do this, I want to point out that the Tcl in the name SpecTcl is there because SpecTcl uses an enhanced Tcl interpreter to implement its command language. Tcl is a powerful scripting language with a very simple and regular syntax.
For information about Tcl, and its graphical user interface language Tk, see http://www.tcl.tk/doc/. http://www.tcl.tk/man/tcl8.5/tutorial/tcltutorial.html is a good online tutorial that can get you up and running with the simple stuff quickly. http://www.tkdocs.com/tutorial/index.html is a tutorial for Tk if you are interested in building GUIs on top of SpecTcl.
Basing SpecTcl's command language on Tcl and Tk allows you to automate tasks SpecTcl performs as well as to tailor application-specific graphical user interfaces (GUIs) on top of the program. Most experimental groups have their own GUIs. In our script we're going to look at two approaches to defining our spectra. One is simple but verbose; the other takes better advantage of Tcl's capabilities and is much more concise but still clear.
Example 10-14. Defining raw Time spectra the hard way.
spectrum t.00 1 t.00 {{0 4095 4096}}
spectrum t.01 1 t.01 {{0 4095 4096}}
spectrum t.02 1 t.02 {{0 4095 4096}}
spectrum t.03 1 t.03 {{0 4095 4096}}
spectrum t.04 1 t.04 {{0 4095 4096}}
spectrum t.05 1 t.05 {{0 4095 4096}}
spectrum t.06 1 t.06 {{0 4095 4096}}
spectrum t.07 1 t.07 {{0 4095 4096}}
...
spectrum t.31 1 t.31 {{0 4095 4096}}
Typing all that was pretty traumatic and error prone, wasn't it? Let's first look at the spectrum command, which is used to define, delete and list information about spectra. It is used to create spectra with a general form:
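spectrum name type parameters axes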
Where
name
Is the name of the spectrum you are creating and must be unique.
type
Is the type of spectrum being created. SpecTcl supports a rich set of spectrum types. Type 1 is a one dimensional spectrum.
parameters
Is a list of parameters that are used to increment the spectrum. The actual meaning of this will vary from spectrum type to spectrum type. A one dimensional spectrum needs only one parameter.
axes
Is a list of three element lists that define the range and number of bins on each axis. These spectra each have a single axis definition with a range of 0..4095 and 4096 bins along that axis. The number of axis definitions depends on the type of the spectrum (e.g. 2-d spectra, type 2, have two axis definitions).
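For example, a 2-d spectrum correlating two of the raw times could be defined like this (the spectrum name t0_vs_t1 and the choice of 512 bins per axis are just illustrative assumptions):
spectrum t0_vs_t1 2 {t.00 t.01} {{0 4095 512} {0 4095 512}}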
When you were typing this in I hope you were thinking "If Tcl is a scripting language there must be a better way to do this, right?" (Well, actually, I'm hoping you didn't bother to type this all in and were waiting for this next version.)
Have a look at this:
for {set i 0} {$i < 32} {incr i} {
    set name [format t.%02d $i]
    spectrum $name 1 $name {{0 4095 4096}}
}
Key things to know when decoding this:
If a $ precedes a variable name, the value of that variable is substituted at that place in the command prior to executing the command. (e.g. $i < 32).
If a string is enclosed in square brackets, it is considered to be a command and the result of executing that command is substituted right there in the original command prior to execution.
(e.g. [format t.%02d $i]).
The format command is like the C function sprintf: the first command
parameter is a format string that is, essentially, a sprintf format
string.
The remaining command parameters are values for the
placeholders in this string. The command result
is what sprintf
would have
stored in its str
buffer.
For example in format t.%02d $i
if i
has the value 3, the
format command result would be
t.03
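Putting this together, on the loop pass where i is 3 the command that actually executes is:
spectrum t.03 1 t.03 {{0 4095 4096}}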
Now that's something that's much more type-able. Create a file spectra.tcl and copy/paste that text into it.
Having created spectra.tcl we want to ensure that our SpecTcl will execute the commands in that file when it starts. SpecTcl automatically executes a script named SpecTclRC.tcl in the current working directory when it starts running. This script is executed towards the end of initialization, after the analysis pipeline has been created (and therefore after the tree parameters have been created and bound to actual SpecTcl parameters).
A sample SpecTclRC.tcl is provided with the skeleton you copied. Locate the line:
splash::progress $splash {Loading SpecTcl Tree Gui} 1
Insert the following lines above that line:
set here [file dirname [info script]]
source [file join $here spectra.tcl]
sbind -all
The first line defines the variable here
to be the directory in which the SpecTclRC.tcl
file lives.
The second line sources the spectra.tcl
file we created from that directory. The third line binds
all spectra into the shared memory region SpecTcl uses to
provide its displayer the spectra.
Let's see if what we have so far actually works. To do this we need to:
Modify the skeleton Makefile so that our code will be built and linked to SpecTcl
Build our tailored SpecTcl
Run our tailored SpecTcl and attach it to the online data stream.
Start a run so that we're taking data
View the spectra SpecTcl creates.
As with the Makefile for the SBS readout program, an
OBJECTS
variable lists the names of the
objects we want to build. Edit the Makefile that came with the
skeleton you copied and change the definition of
OBJECTS
to look like this:
OBJECTS=MySpecTclApp.o RawUnpacker.o
To build your tailored SpecTcl you can then type:
make
Run SpecTcl via the command:
./SpecTcl
A number of windows will pop up. We're going to use two of them. The window titled treegui will be used to connect to the online data. The window titled Xamine will be used to look at plots of our spectra.
To attach SpecTcl to the online system, use the treegui window and select the
menu item for attaching to an online data source from the menu at the top of that window. In the dialog that pops up, change the radio buttons at the bottom of the dialog to select ring11 and accept the dialog. You are now connected to the online data coming from the system on which you are logged in (so be sure that system is the one physically connected to your VME crate). You can also acquire data taken on a remote host, as long as the system you are logged into is running NSCLDAQ; simply type the name of that system in the box labeled Host: before accepting the dialog. Now, using another terminal window, log in and run the Readout program you already created, and begin a run. You should see the statistics at the bottom of the SpecTcl treegui window changing, showing that SpecTcl is analyzing data. SpecTcl should not exit (an exit would most likely show that an assertion failed).
To view a spectrum, click the Display button at the bottom of the Xamine window and select the desired spectrum from the list, either by double clicking it or by selecting it and confirming the dialog. If you are using the sample electronics setup, the spectra that have signals should show sharp peaks that correspond to the delay you have set in your gate and delay generator.
In this section we are going to write a second event processor. This event processor will be positioned after the raw unpacker we just wrote in the event processing pipeline. It will produce parameters that are the differences between the times recorded in different channels of the TDC. To test this event processor you will need to fan out your delayed start so that at least two channels have data. For more fun, delay the fanned signals so that there is a time difference between those channels.
The purpose of this section is to teach the following concepts:
How event processors can obtain data from previous stages of the event processing pipeline.
How to determine if a parameter has been assigned a value by a previous stage of the pipeline.
We will produce parameters with names like tdiff.00.01, which will be the time difference between channels 0 and 1. For simplicity we will produce parameters like tdiff.00.00 even though these will always have the value 0. We just won't produce spectra for those parameters.
Let's see what the header for an event processor like this might look like:
Example 10-15. Header for time difference event processor (Tdiff.h)
#ifndef _TDIF_H
#define _TDIF_H

#include <config.h>
#include <EventProcessor.h>

class CTreeParameterArray;

class CTdiff : public CEventProcessor
{
public:
    CTdiff();
    virtual ~CTdiff();

    virtual Bool_t operator()(const Address_t pEvent,
                              CEvent&         rEvent,
                              CAnalyzer&      rAnalyzer,
                              CBufferDecoder& rDecoder);
private:
    CTreeParameterArray&  m_times;
    CTreeParameterArray*  m_diffs[32];
};

#endif
All this should look very familiar. The notable difference (besides the
change in the class name) is that, in addition to a tree parameter array
reference for the raw times, we have m_diffs, an array of 32 pointers to
CTreeParameterArray objects. We use pointers because, without creating a
new class to encapsulate an array of CTreeParameterArray objects, we don't
have a good way to initialize an array of references.
The idea of this data structure is that
m_diffs[i]
will be an array of differences
between channel i
and the other
channels of the TDC.
We will make our life simple by not considering the problems inherent in allowing copy construction and assignment for a class like this.
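If you did want to forbid copying, one conventional sketch (an illustration on our part, not something the skeleton requires) is to declare, but not implement, the copy constructor and assignment operator in the private section of CTdiff in Tdiff.h:
private:
    CTdiff(const CTdiff&);             // copy construction forbidden (not implemented)
    CTdiff& operator=(const CTdiff&);  // assignment forbidden (not implemented)
With a C++11 or later compiler you could instead declare them with = delete.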
Let's look at the implementation of the CTdiff
class:
Example 10-16. CTdiff implementation (Tdiff.cpp)
#include "Tdiff.h" #include <TreeParameter.h> #include <BufferDecoder.h> #include <TCLAnalyzer.h> #include <stdio.h> CTdiff::CTdiff() : m_times(*(new CTreeParameterArray("t", 8192, -4095, 4095, "channels", 32, 0))) { char baseName[100]; for (int i =0; i < 32; i++) { sprintf(baseName, "tdiff.%02d", i); m_diffs[i] = new CTreeParameterArray(baseName, 8192, -4095, 4095, "channels", 32, 0); } } CTdiff::~CTdiff() { for (int i =0; i <32; i++) { delete m_diffs[i]; } } Bool_t CTdiff::operator()(const Address_t pEvent, CEvent& rEvent, CAnalyzer& rAnalyzer, CBufferDecoder& rDecoder) { for (int i = 0; i < 32; i++) { if (m_times[i].isValid()) { for (int j = 0; j < 32; j++) { if (m_times[j].isValid()) { (*m_diffs[i])[j] = m_times[i] - m_times[j]; } } } } return kfTRUE; }
Parameters (elements of the rEvent vector, and the tree parameters bound to
them) have a method called isValid, which returns true if the parameter has
been given a value for the current event. This line ensures that the first
time has been assigned a value.
Note how all of this is done without needing to know the structure of the raw event data. Should the experiment need to change the hardware in a way that changes that structure this code still works properly. A well structured SpecTcl tailoring should consist of several event processors working together to produce the needed parameters.
Don't forget to add an instance of this class to the analysis pipeline.
We'll leave it as an exercise to create a script that makes spectra and to modify SpecTclRC.tcl to source that script into SpecTcl at startup. The axis specification of these spectra should be something like {{-4095 4095 8192}}.