This section describes a framework for analysis that does much of the nitty-gritty decoding work for you, passing pre-digested data to callback methods of classes you write. This software was introduced in SpecTcl 5.10-008 and, at this time, can handle data from all versions of NSCLDAQ up through 11.4.
Note that this software may not be suitable for use with event-built data, since it is blind to body headers when used with NSCLDAQ-11.
For detailed reference information, see CAnalysisEventProcessor
and CAnalysisBase
in the programming
reference manual.
The callout analysis software consists of an event processor that decodes what it can from items that are not event data. For each item it decodes, it calls a method of an analysis object that is passed to the event processor when it is constructed.
The code fragment below gives a more concrete example of what I mean. It would be placed in your MySpecTclApp.cpp file:
Example 3-13. Idea of the callback analysis framework
#include <CAnalysisEventProcessor.h>
#include <CAnalysisBase.h>
...

class MyCallbackClass : public CAnalysisBase
{
    ...
};
...

void
CMySpecTclApp::CreateAnalysisPipeline(CAnalyzer& rAnalyzer)
{
    ...
    auto callbacks = new MyCallbackClass;
    auto processor = new CAnalysisEventProcessor(callbacks);
    RegisterEventProcessor(*processor, "callback-processor");
    ...
}
Let's go through this code fragment step-by-step.
The CAnalysisEventProcessor.h header defines the event processor class that runs the callback framework. Note that while this example only shows a single instantiation of this event processor, there's nothing to stop you from defining several callback classes and instantiating one event processor for each of them.
CAnalysisBase.h defines CAnalysisBase, the base class from which all callback classes must be derived. Callback classes contain the code that actually analyzes the data that CAnalysisEventProcessor pulls out of each ring item.
MyCallbackClass is derived from the CAnalysisBase base class. This class could (and should) be defined in header files and external program modules. Here we show it in MySpecTclApp.cpp to simplify the example.
In CreateAnalysisPipeline, you'll create a callback object, pass it to a newly created CAnalysisEventProcessor, and register that event processor in the analysis pipeline.
To conclude this section, let's look at the callback methods you can implement, their parameterization, and the sorts of error conditions you can signal from your callbacks. Note that the base class, CAnalysisBase, implements all of these callback methods to do nothing. You therefore only need to override the methods you actually need.
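For instance, a callback class that only cares about state changes might look like the sketch below. The class name and its body are purely illustrative; only onStateChange is overridden, so the other callbacks keep the do-nothing implementations inherited from CAnalysisBase.

#include <CAnalysisBase.h>
#include <ctime>
#include <iostream>

class RunLogger : public CAnalysisBase
{
public:
    // Only this callback is overridden; all the others remain inherited no-ops.
    virtual void onStateChange(
        StateChangeType type, int runNumber, time_t absoluteTime,
        float runTime, std::string title, void* clientData
    ) {
        std::cerr << "Run " << runNumber << " (" << title << ") changed state "
                  << runTime << " seconds into the run\n";
    }
};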
State change items document the start, stop, pause and resumption of a run. When one is encountered, the event processor will decode it and call the onStateChange method.
The signature of this method is:
Example 3-14. onStateChange method signature
virtual void onStateChange(
    StateChangeType type, int runNumber, time_t absoluteTime,
    float runTime, std::string title, void* clientData
);
type
Is defined in the CAnalysisBase
class and has one of the values
Begin, End,
Pause or Resume
indicating the type of state change that took place.
runNumber
Is the run number of the run whose state is changing.
absoluteTime
Is the Linux time of day at which the state change was logged.
runTime
Is the number of seconds into the run at which the state change occurred.
title
Is the title of the run that underwent the state change.
clientData
Is additional data that is passed by the event processor. This actually points to a CAnalysisEventProcessor::ClientData struct, which contains the members s_pUserData, optional user data passed to the analysis event processor when it was constructed, and s_pCaller, a pointer to the CAnalysisEventProcessor calling the callback.
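As a sketch of how a callback can use this parameter, the fragment below casts clientData back to the ClientData struct. MyConfig, and the idea that a MyConfig* was supplied as the user data when the event processor was constructed, are assumptions made for illustration only.

void MyCallbackClass::onStateChange(
    StateChangeType type, int runNumber, time_t absoluteTime,
    float runTime, std::string title, void* clientData
)
{
    auto pClient =
        reinterpret_cast<CAnalysisEventProcessor::ClientData*>(clientData);

    // Optional user data supplied when the event processor was constructed
    // (assumed here to be a MyConfig*):
    auto pConfig = reinterpret_cast<MyConfig*>(pClient->s_pUserData);

    // The event processor that invoked this callback:
    auto pCaller = pClient->s_pCaller;
    ...
}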
When a scaler item is encountered, the event processor calls
onScalers
which has the following
signature:
Example 3-15. onScalers callback signature
virtual void onScalers(
    time_t absoluteTime, float startOffset, float endOffset,
    std::vector<unsigned> scalers, bool incremental, void* clientData
);
The parameters have the following meaning.
absoluteTime
The absolute time at which the data were generated. The various Unix time functions can be used to manipulate this value.
startOffset
Number of seconds into the run at which this counting interval started.
endOffset
Number of seconds into the run at which this counting interval ended.
scalers
The scalers extracted from the item.
incremental
Flag that, if true, indicates the scalers are incremental rather than cumulative. Most NSCLDAQ systems use incremental scalers to reduce the chances of having to deal with scaler overflows.
clientData
Additional data. See onStateChange
for more about what this points to.
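As an example of what a callback might do with these parameters, the sketch below computes rates for incremental scalers. The class name and the use of std::cout are illustrative only.

#include <iostream>

void MyCallbackClass::onScalers(
    time_t absoluteTime, float startOffset, float endOffset,
    std::vector<unsigned> scalers, bool incremental, void* clientData
)
{
    float dt = endOffset - startOffset;     // Length of the counting interval (seconds).
    if (incremental && dt > 0) {
        for (size_t i = 0; i < scalers.size(); i++) {
            std::cout << "Channel " << i << ": "
                      << scalers[i] / dt << " counts/sec\n";
        }
    }
}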
Some item types contain lists of string data. CAnalysisBase defines an enumerated type that describes the kinds of items that are string lists: PacketTypes, the documented data packet types in the SBS readout; MonitoredVariables, the set of Tcl variables being monitored; and RunVariables, the set of variables that are read-only during a run.
When a string list item type is encountered, onStringLists
is invoked. It has the following method signature.
Example 3-16. The onStringLists method signature
virtual void onStringLists(
    StringListType type, time_t absoluteTime, float runTime,
    std::vector<std::string> strings, void* clientData
);
Where:
type
The type of the item. See above.
absoluteTime
The wall clock time and date at which the entry was made. The Unix time manipulation functions can be used with this type.
runTime
Number of seconds into the run at which this item was created.
strings
The strings in the item.
clientData
Additional data. See the description of onStateChange for a description of this parameter.
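As an illustration, the sketch below prints the monitored variable strings and ignores the other string list types. The class name is illustrative, and it assumes the enumerator can be referred to as CAnalysisBase::MonitoredVariables.

#include <iostream>

void MyCallbackClass::onStringLists(
    StringListType type, time_t absoluteTime, float runTime,
    std::vector<std::string> strings, void* clientData
)
{
    if (type == CAnalysisBase::MonitoredVariables) {   // Assumed enumerator name.
        for (const auto& s : strings) {
            std::cout << runTime << "s: " << s << "\n";
        }
    }
}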
onEvent is called when a physics event is encountered. The signature of that method is:
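(The declaration below is reconstructed from the parameter descriptions that follow; consult CAnalysisBase in the programming reference manual for the authoritative form.)

virtual void onEvent(void* pEvent, void* clientData);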
pEvent
Is the pointer to the event as it was passed to the event processor's operator().
clientData
Is additional data described in onStateChange.
The base class CAnalysisBase provides two exception types that can be thrown from callbacks and are handled by the event processor. These are derived from std::runtime_error and are constructed with a reference to a std::string message.
If CAnalysisBase::NonFatalException is thrown, the event processing pipeline is aborted for that item and the message provided is displayed on stderr. Processing continues with the next item.
As the name implies, if CAnalysisBase::FatalException is thrown, the message is displayed on stderr and SpecTcl exits.
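As a sketch of how a callback might use these, the fragment below throws a NonFatalException to skip an item it cannot interpret and a FatalException when the data are too damaged to continue. The class name and the checks themselves are purely illustrative.

#include <string>

void MyCallbackClass::onScalers(
    time_t absoluteTime, float startOffset, float endOffset,
    std::vector<unsigned> scalers, bool incremental, void* clientData
)
{
    if (scalers.empty()) {
        // Reported on stderr; SpecTcl then continues with the next item.
        std::string msg("Scaler item contained no scaler values");
        throw CAnalysisBase::NonFatalException(msg);
    }
    if (endOffset < startOffset) {
        // Reported on stderr; SpecTcl then exits.
        std::string msg("Scaler counting interval ends before it starts");
        throw CAnalysisBase::FatalException(msg);
    }
    // ... normal scaler handling ...
}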
A pre-packaged callout class for the callout analysis framework is provided in SpecTcl 5.10-008 and later. This class maintains a set of SpecTcl Tcl interpreter variables that reflect scaler values for the run being analyzed. It was initially developed for a group that wanted to integrate the scaler display program with SpecTcl.
This section describes:
How to set up your SpecTcl to use this scaler callout class.
The Tcl variables maintained by the scaler callout processor.
The limitations of the scaler callout processor.
The procedure for using the scaler callout class is identical to the procedure for using any other callout analysis class. The code fragments below, from MySpecTclApp.cpp, show how to do this:
Example 3-18. Using the Scaler callback analysis class
#include <CAnalysisEventProcessor.h>
#include <CScalerProcessor.h>
#include <SpecTcl.h>
...

void
CMySpecTclApp::CreateAnalysisPipeline(CAnalyzer& rAnalyzer)
{
    ...
    auto pApi = SpecTcl::getInstance();
    auto pInterp = pApi->getInterpreter();
    auto pScalerProcessor = new ScalerProcessor(*pInterp);
    auto processor = new CAnalysisEventProcessor(pScalerProcessor);
    RegisterEventProcessor(*processor, "Scaler-analyzer");
    ...
}
While this example is relatively simple, let's go through it step-by-step.
CScalerProcessor.h defines ScalerProcessor, which is the callout processor that we want to use with the callout event processor.
ScalerProcessor will need the SpecTcl interpreter. This can be obtained via the SpecTcl API; therefore we include the API's header here.
ScalerProcessor is derived from CAnalysisBase, so it can be used directly with the callout event processor.
When the scaler callout processor has been registered as shown in the previous example, the following Tcl global variables will be maintained as data are analyzed. Furthermore, a set of Tcl procs must be defined; they will be called at specific points in analysis.
Tcl variables maintained
RunNumber
State transition items will extract the run number and store it in this variable.
ElapsedRunTime
When processing items that provide the elapsed run time, this variable is updated with the number of seconds into the run specified by that time. Note that this is a floating point value as NSCLDAQ-11 provides support for sub-second precision.
RunTitle
The titles in state transition items are extracted and stored in this variable.
ScalerRunState
State transition items update the value of this variable to reflect the state of the run. This variable will have a value that is one of Active, Halted, Paused.
I would expect that Paused will be the case for only a very short time in properly ended runs, as its item should be immediately followed by a resume item, which switches the value back to Active.
ScalerDeltaTime
Processing of scaler items will compute the length of time over which the scalers counted to produce that item. This will be stored (in seconds) in this variable.
Scaler_Increments(i)
When a scaler item is processed, this variable will be updated with the counts in scaler i stored in that item.
Scaler_Totals(i)
These variables are zeroed on begin-run transitions and store running sums of the scaler values over all scaler items seen so far.
The scaler callout processor expects several Tcl procs to be defined and calls them without any parameters at specific points in processing. All procs are called after the Tcl variable updates for the item have been completed. There is no requirement that these procs do anything, but they must be defined. These procs are:
Tcl procs called.
Called when a begin run state change item has been processed.
Called when an end run state change item has been processed.
Called when a pause run state change item has been processed.
Called when a resume run state change item has been processed.
Called when a scaler item has been processed.