A quick guide – for full details see the DR cookbook (or the PDF version).
Data
JCMT continuum data obtained with SCUBA-2 are written to disk and available at the JCMT, in Hilo, or via CADC.
Software
For information on downloading and installing the Starlink suite of data reduction and analysis software, see the Starlink website.
Raw data
Each SCUBA-2 observation is made up of a number of raw files. For a single observation taken on a specific date there are separate files for each 30 seconds of data, for each wavelength and for each sub-array.
SCUBA-2 raw data files follow a fixed naming convention. For instance, within the file name "s8a20141024_00033_0004.sdf", the sub-sections have the following meaning:
- "s8a": Indicates the data is from the 850 µm "a" array (there are four arrays of bolometers, labelled "a", "b", "c" and "d", for each of the two wavelengths – 450 µm and 850 µm)
- "20141024": Indicates the UT date on which the observation was taken – in this case the 24th October 2014.
- "00033": An index that uniquely identifies the observation within the night in question. So, for instance, this file holds data for the 33rd observation taken on the 24th October 2014.
- "0004": The sub-scan index that identifies each specific block of 30 seconds of data within a single observation.
- ".sdf": Indicates the file holds data in the Starlink NDF format.
Each of the main sub-scan files contains a 3-dimensional cube of data values in which each plane holds a “time-slice” – a snap-shot of the raw data counts from each of the 32×40 bolometers in a single SCUBA-2 array. Roughly 200 of these time-slices are taken each second as the telescope scans across the sky, and are stacked into a cube. Thus each 30 second sub-scan file typically contains 30×200 = 6000 time slices, resulting in the cube having dimensions of (32,40,6000).
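As a quick sanity check, the KAPPA command ndftrace (described further below) will report these dimensions for any raw file – here run on the file from the naming example above:
>> ndftrace s8a20141024_00033_0004.sdf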
Quick Pipeline Reduction
For a quick SCUBA-2 reduction we recommend the following steps to run the data reduction pipeline (ORAC-DR). To invoke the ORAC-DR software for 850 µm data:
>> oracdr_scuba2_850
This sets up ORAC-DR to run in your current working directory (the equivalent command for 450 µm data is oracdr_scuba2_450). You will also need to specify where your raw data are stored. For tcsh users:
>> setenv ORAC_DATA_IN folder/
or for bash:
>> export ORAC_DATA_IN=folder/
For help with ORAC-DR simply do:
>> oracdr -help
To reduce your raw SCUBA-2 data using the ORAC-DR software run:
>> oracdr -loop file -files mylist.lis
where mylist.lis lists the raw files to reduce (all at a single wavelength, but covering one or more observations and sub-arrays); these files should be located in the directory specified by ORAC_DATA_IN.
As an example, observers at the JCMT might do:
>> ls /jcmtdata/raw/scuba2/s8a/20150616/00016/s*.sdf > mylist.lis
to get the data from the "a" sub-array alone, or to get all four 850 µm sub-arrays:
>> ls /jcmtdata/raw/scuba2/s8?/20150616/00016/s*.sdf > mylist.lis
or if it is older data and not accessible at the telescope:
>> ls /net/mtserver/export/data/jcmtdata/raw/scuba2/s4?/20150616/00016/s*.sdf > mylist.lis
The raw data will be automatically reduced using a pre-defined reduction recipe. If you are reducing data at the EAO offices the raw data can be found under the /net/mtserver path shown above.
>> oracdr -loop file -files mylist.lis -nodisplay -log sf
The above is another way of running the ORAC-DR pipeline. Here the output is written to the screen and logged in a .oracdr_* file (as specified by -log sf, where s = screen and f = file), and no xwindow display is launched (as specified by -nodisplay).
To really understand what is happening to your data it is advised (at least for your first few reductions, and if you have read further into the cookbook) to also run with -verbose. This will print messages from the underlying Starlink engines rather than just the ORAC-DR messages:
>> oracdr -loop file -files mylist.lis -nodisplay -log sf -verbose
Data Products
Data reduced by the pipeline have already had a standard peak Flux Conversion Factor (FCF) applied. The pipeline produces several data products:
- log.group – file listing all the raw data files included in the reduction
- s20141024_00033_850_reduced.sdf – reduced file from single observation
- s20141024_00033_850_reduced_*.png – image files of individual reductions
- log.mapstats – file containing information on the individually reduced data
- log.nefd – file containing NEFD information from raw data
- log.noise – file containing noise information from the reduced data
- gs20141024_00033_850_reduced.sdf – group file – i.e. all reduced files co-added
- gs20141024_00033_850_reduced_*.png – image files of co-added reductions
- s20141024_00033_850_reduced.sdf.FIT – FITS file containing a catalogue of the emission sources detected within the map
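For a quick sanity check of the final co-added map you can run the KAPPA stats command (described in the noise section below) on the group file, using the example file name above:
>> stats gs20141024_00033_850_reduced.sdf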
Visualising Data
Images can be viewed using GAIA – an interactive image and cube visualisation tool created by the Starlink project for viewing NDF and FITS files.
>> gaia filename.sdf
For greater versatility, the Starlink KAPPA package contains many commands for visualising images in many different ways, including displaying multiple pictures in a grid, overlaying masks and contours, and producing scatter plots and histograms.
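For example, a minimal KAPPA recipe for displaying a reduced map, scaled between its 2nd and 98th percentiles (the file name follows the earlier examples, and the backslashes protect the brackets from the shell):
>> kappa
>> display s20141024_00033_850_reduced.sdf mode=perc percentiles=\[2,98\]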
Combining Images
If you have reduced several SCUBA-2 files separately then they can be combined into a single co-added map with the PICARD recipe MOSAIC_JCMT_IMAGES:
>> picard -recpars mypar.lis MOSAIC_JCMT_IMAGES *files.sdf
where the parameter file mypar.lis looks like the following, specifying the mosaicking task (here makemos) and the combination method (here mean):
[MOSAIC_JCMT_IMAGES]
MOSAIC_TASK = makemos
MAKEMOS_METHOD = mean
REGISTER_IMAGES = 0
Configuration files
ORAC-DR is a pipeline that calls several Starlink commands in order to reduce your data. The precise behaviour of the pipeline is governed by the recipe file. You can find out which recipe is set via the RECIPE keyword in the FITS header of any of your raw files. For example, both of these options will return the same result:
>> kappa
>> fitsval s8a20120725_00045_0003 RECIPE
REDUCE_SCAN
>> fitslist s8a20120725_00045_0003 | grep RECIPE
REDUCE_SCAN
A configuration file (set in the parameter file by MAKEMAP_CONFIG) contains a set of parameters which guide the data reduction process. The JCMT supplies several standard configuration files. Each of these files contains a set of configuration parameter values that have been found to work well with a specific type of astronomical source:
- dimmconfig_blank_field.lis: ideal for faint cosmological sources and blank fields.
- dimmconfig_bright_compact.lis: ideal for bright compact sources such as planets or bright compact galaxies.
- dimmconfig_bright_extended.lis: ideal for bright galactic sources and other significantly extended sources.
- dimmconfig_jsa_generic.lis: a general purpose configuration file that is designed to minimise the chances of artificial large scale structures appearing in the map, at the expense of suppressing some real large scale structure. This is a good default configuration if you do not have a specific reason for using one of the others.
The specific configuration file parameters can be inspected directly:
>> ls $STARLINK_DIR/share/smurf/dimmconfig*
>> less $STARLINK_DIR/share/smurf/dimmconfig_blank_field.lis
>> less $STARLINK_DIR/share/smurf/dimmconfig_jsa_generic.lis
The configuration used for a specific reduction can be determined using:
>> hislist s8a20120725_00045_0003.sdf | grep CONFIG
Alternatively, it is possible to use the KAPPA configecho command.
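For example, to report the value of a single configuration parameter (here numiter, the number of map-maker iterations) recorded in the history of a reduced map – the file name below is hypothetical:
>> configecho ndf=map.sdf config=! name=numiter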
It is possible to run with a different configuration file by entering the following:
>> oracdr -loop file -files mylist.lis -recpars mypars.ini REDUCE_SCAN
where mypars.ini is a parameter file you create containing the following information:
[REDUCE_SCAN]
MAKEMAP_CONFIG = dimmconfig_bright_extended.lis
or:
[REDUCE_SCAN]
MAKEMAP_CONFIG = dimmconfig_blank_field.lis
Classically the REDUCE_SCAN recipe calls the dimmconfig_jsa_generic.lis configuration file, however this is not always the case. It is possible to supply your own configuration file by specifying it in the recipe parameter file, as above. The parameter file can also be set up to produce data in mJy/beam units:
[REDUCE_SCAN]
MAKEMAP_CONFIG = dimmconfig_bright_extended.lis
CALUNITS = beam
or in mJy/arcsec²:
[REDUCE_SCAN]
MAKEMAP_CONFIG = dimmconfig_bright_extended.lis
CALUNITS = arcsec
You might also want to change the pixel size of a reduction. This is possible, again by updating the recipe parameter file, e.g.:
[REDUCE_SCAN]
MAKEMAP_PIXSIZE = 2
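After re-reducing, the new pixel size can be confirmed by running ndftrace (see below) on the output map – the file name here follows the earlier example products:
>> ndftrace s20141024_00033_850_reduced.sdf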
Estimating the noise
You can estimate the RMS noise within an observation/file in one of three ways:
- Open your map with GAIA. Select a region of noise using the 'Image region' tool under the 'Image-Analysis' menu and choose 'Selected Stats' to return the standard deviation of your selected area.
- Use the KAPPA command stats. You may choose to use the comp=err option, which will report the statistics of the error component of the map and thus not be contaminated by any strong sources.
>> stats map.sdf comp=err
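The stats command also accepts a standard NDF section if you would rather measure the noise in a source-free region of the data itself; the box below (120×120 pixels centred on pixel coordinates (0,0)) is purely illustrative:
>> stats 'map.sdf(0~120,0~120)'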
- Use the PICARD recipe named SCUBA2_MAPSTATS, which will calculate various properties of the observation, its noise and its average NEFD given a single reduced SCUBA-2 observation. This command produces an output file named 'log.mapstats' in the directory specified by the environment variable ORAC_DATA_OUT (if set), or in the current directory otherwise. You can run the command as follows:
>> picard -log sf SCUBA2_MAPSTATS map.sdf
The following parameters can be specified when running Picard’s SCUBA2_MAPSTATS:
- KEEPFILES – if set to 1, the _crop files created during processing will be retained on disk, otherwise all intermediate files will be deleted.
- MAP_RADIUS – radius (in arcsec) of the region used for calculating the map noise and NEFD. Default is 90 arcsec. Note that poor results can be derived if the map radius is too small and it may be more useful to use a larger radius for larger maps.
- STATS_ESTIMATOR – Statistical estimator for the NEFD and RMS calculated from the input map. May be mean or median – default is median.
To include any of these options when running PICARD's SCUBA2_MAPSTATS, simply provide a parameter file such as params.ini containing, for example:
[SCUBA2_MAPSTATS]
MAP_RADIUS = 180
KEEPFILES = 1
and run using:
>> picard -log sf -recpars params.ini SCUBA2_MAPSTATS map.sdf
One way to view your results (especially if you have reduced multiple observations) is in TOPCAT:
>> topcat -f ascii log.mapstats
Other useful commands
The following are additional useful KAPPA commands. The command ndftrace displays general information about the contents of an NDF, including pixel size and the quantity represented by each axis:
>> ndftrace file.sdf
Display the individual raw data files that went into your map:
>> provshow file.sdf show=roots
Find out what commands have previously been run on your data:
>> hislist file.sdf
To inspect the FITS-like header information of your file:
>> fitslist file.sdf
Use fitsval if you know the FITS keyword you want to extract. For instance:
>> fitsval file.sdf OBJECT
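Other keywords commonly found in SCUBA-2 headers (assuming standard JCMT raw data) include the UT date and the observation number:
>> fitsval file.sdf UTDATE
>> fitsval file.sdf OBSNUM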
To get the status of the telescope and the focal-plane array during an observation or sub-scan, use the SMURF command jcmtstate2cat. The catalogue produced includes positions and temperatures, and is a good place to start when diagnosing instrument problems that may have fed through into the data.
>> jcmtstate2cat /jcmtdata/raw/scuba2/s8a/20111224/00022/* > s850_20111224_24_state.tst
The resulting catalogue can then be opened with TOPCAT:
>> topcat -f tst s850_20111224_24_state.tst
Herschel Data
It is often useful to utilise data from other wavelengths (either for comparison or as an external mask). The following outlines how to convert Herschel FITS data, downloaded from the Herschel Science Archive, to NDF format:
>> convert
The above invokes the Starlink format-conversion package CONVERT. The following steps will then convert your data:
>> fits2ndf file.fits file.sdf
>> ndfcopy in=file.sdf.more.fits_ext_1 out=file_data.sdf
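If the Herschel map is to be used as an external mask, it will generally need to be resampled onto the pixel grid of your SCUBA-2 map first. A minimal sketch using the KAPPA wcsalign command – both file names here are hypothetical:
>> wcsalign in=file_data.sdf out=file_aligned.sdf ref=scuba2_map.sdf method=bilin accept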
Quick makemap Reduction
It is also possible to reduce your data by running the SMURF command makemap directly, outside the pipeline – we recommend you first read the SCUBA-2 cookbook. Essentially you run the map maker using the following command (the backslash simply continues the command onto the next line):
>> makemap in=^mylist.lis out=file.sdf \
   config=^$STARLINK_DIR/share/smurf/dimmconfig_bright_compact.lis
It may also prove useful to record the output of the map-making process. This can easily be done by redirecting the output to a file using ">", or to both file and screen using the "tee" command:
>> unbuffer $STARLINK_DIR/bin/smurf/makemap in= out= \
   method=iterate config= |& tee output.log