IPAC 2MASS Working Group Meeting #75 Minutes 10/03/95

Attendees: T. Chester, R. Cutri, T. Evans, J. Fowler, L. Fullmer, G. Kopan, B. Light, C. Lonsdale, H. McCallon, S. Terebey


  1. Status Reports
  2. Aperture Magnitudes and Uncertainties
  3. Analysis Tasks
  4. PSF Handling Revisited


  1. Status Reports -- The status reports normally provided by all attendees at the first meeting of the month were skipped this time, because they are all the same: everyone is working on their SDSs.

  2. Aperture Magnitudes and Uncertainties -- T. Evans reported that since aperture-photometry magnitudes are computed for each source at about a dozen aperture sizes, it might be desirable to designate the value at one particular size as a standard aperture magnitude for each point source and carry it through to the products (these computations are done in PROPHOT and are used by MAPCOR). If this is done, then an uncertainty is needed to accompany the aperture magnitude, which raises the question of how that uncertainty should be computed.

    The group agreed that such a standard-aperture magnitude should be carried forward for each point source. It was felt that the dominant effect on the uncertainty was the photon noise in the aperture, which is computed by PROPHOT. Unless it becomes clear that other effects contribute significantly, this will be the sole basis for computing the uncertainty carried with the aperture magnitude.
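The photon-noise-only uncertainty agreed on above can be propagated into magnitude space in the standard way. The following sketch is illustrative only (the function name and the gain parameter are assumptions, not 2MAPPS code):

```python
import math

def aperture_mag_uncertainty(flux_counts, gain=1.0):
    """Photon-noise-only uncertainty for an aperture magnitude.

    flux_counts: background-subtracted source counts in the aperture (ADU)
    gain: detector gain in electrons per ADU (illustrative value)
    """
    electrons = flux_counts * gain
    sigma_electrons = math.sqrt(electrons)      # Poisson (photon) noise
    sigma_flux = sigma_electrons / gain         # back to ADU
    # Propagate into magnitudes: m = -2.5 log10(F)  =>  sigma_m ~ 1.0857 * sigma_F/F
    return 2.5 / math.log(10) * (sigma_flux / flux_counts)
```

For example, a source with 10,000 detected photons has a ~1% flux error and hence a ~0.011 mag photon-noise uncertainty.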

  3. Analysis Tasks -- C. Lonsdale requested that any analysis issues needing attention be discussed. J. Fowler reported that in a meeting with R. Cutri the previous day, it became clear that some analysis of sky offset stability was needed. "Sky offset" refers to the additive pixel response correction computed from survey data frames by CsFlat in the pipeline and planned for DFLAT in 2MAPPS.

    The stability issue arose because of the following question. Before the sky-offset method of frame flattening was adopted as the baseline approach, the frame-flattening program was to compute corrections to the pixel response scale factors from the survey data frames (this method worked fairly well with data from the previous protocamera). Concern over the effect of the high source densities in galactic-plane scans led to adopting the approach of processing all calibration scans before processing any survey scans, so that for any given survey scan, time-bracketing flats from the (supposedly) cleaner calibration fields could be used for comparison to or replacement of the flats derived from the survey scans. The reasoning was that the pixel response scale factors would be very slowly varying in time, making this approach valid. (There was also at least one other reason for processing all calibration scans before any survey scans; it involved obtaining information on the PSF).

    The latest protocamera revealed the need for a change to the flattening method: now pixel response scale factors are to be obtained from twilight or dome flats, and survey data are to be used to compute additive corrections (apparently needed because of illumination-dependent bias phenomena). This method seems to work well with the April '95 data. But the new question is whether these additive corrections, "sky offsets", are stable in the same way, justifying the use of time-bracketing calibration results for galactic-plane scans if needed. Implementing such code in DFLAT is nontrivial, so we need to be sure resources wouldn't be wasted.
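The two-step flattening described above (multiplicative pixel-response correction from twilight/dome flats, additive sky-offset correction from survey frames) can be sketched as follows. The function name and the ordering of the two corrections are assumptions for illustration, not the DFLAT design:

```python
import numpy as np

def flatten_frame(raw, dome_flat, sky_offset):
    """Illustrative two-step frame flattening:
    1. divide by the pixel-response scale factors (dome/twilight flat);
    2. subtract the additive 'sky offset' derived from survey frames,
       intended to remove illumination-dependent bias effects.
    All inputs are same-shaped pixel arrays.
    """
    return raw / dome_flat - sky_offset
```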

    The behavior of the sky offset images should be studied as a function of time and zenith angle, to name two independent variables. There is no open slot at the moment in the analysis task assignments, however, so this may not be done very soon (certainly not in time for the October 15 deadline for the DFLAT SDS). This aspect of DFLAT will be left visibly TBD for now, with tentative adoption of an approach suggested by R. Cutri for handling galactic-plane scans diagnosed as troublesome, where the diagnosis is based on standard deviations about trimmed-average pixel offsets and on offset deviations from linearity over the scan. The corrective action will be to keep track of the chronologically nearest available survey scan whose sky offsets were within limits in the diagnostic parameters, and to use those offsets in place of the out-of-limits results.
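The diagnostic and fallback just described can be sketched as follows. The trimming fraction, the scatter limit, and all names here are hypothetical; the actual diagnostic parameters are TBD as noted above:

```python
import numpy as np

def trimmed_offset_stats(offsets, trim_frac=0.1):
    """Trimmed average of per-pixel sky offsets plus the full scatter.
    trim_frac (illustrative) is the fraction clipped from each tail."""
    vals = np.sort(np.ravel(offsets))
    k = int(len(vals) * trim_frac)
    core = vals[k:len(vals) - k] if k > 0 else vals
    return core.mean(), np.std(offsets)

def nearest_in_limits(scans, bad_time, sigma_limit):
    """Pick the chronologically nearest survey scan whose sky-offset
    scatter is within limits.  scans: list of (time, offsets) pairs;
    sigma_limit is a hypothetical diagnostic threshold."""
    ok = [(t, o) for t, o in scans if np.std(o) <= sigma_limit]
    if not ok:
        return None
    return min(ok, key=lambda s: abs(s[0] - bad_time))
```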

  4. PSF Handling Revisited -- Last week considerable discussion took place regarding handling of the PSF. Since then additional work has been conducted, and again, modifications to the baseline approach have resulted. Studies by T. Chester and T. Jarrett indicate that their method of deducing the PSF/seeing from coadded images is capable of supplying such information in PICMAN for PROPHOT to use, with no additional input from FREXAS (i.e., the "pfrac" parameter). FREXAS will still compute pfrac on a once-per-scan basis for its own use in single-frame point source extraction and for PICMAN to use in initializing its point source detection algorithm. GALWORKS will supply a subroutine (named SEEMAN) for PICMAN to call before calling PROPHOT. SEEMAN will digest some number of coadded images before it emits its first seeing estimate; PROPHOT will buffer the frames and detections passed to it by PICMAN until the first PSF is available via correlation to the seeing estimate, after which it will simply use each new PSF as it becomes available.

    This will allow the tracking of variability in the PSF/seeing, which will be used by PROPHOT, MAPCOR, and GALWORKS. If the first seeing estimate to emerge from SEEMAN is significantly different from what PICMAN used in the initial point source detection stage, PICMAN will signal PROPHOT to flush its detections list, free the frame-buffer memory, and start over with SEEMAN's seeing estimate.
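The buffer-then-process behavior described for PROPHOT, including the flush-and-restart signal from PICMAN, amounts to a small state machine. The sketch below is purely illustrative; the class and method names are not 2MAPPS identifiers, and the photometry step is stubbed out:

```python
class PsfBuffer:
    """Sketch of PROPHOT's detection buffering: hold detections until
    the first SEEMAN seeing estimate arrives, then drain the buffer;
    support a flush-and-restart signal from PICMAN."""

    def __init__(self):
        self.pending = []    # frames/detections awaiting the first PSF
        self.seeing = None   # current seeing estimate from SEEMAN

    def add_detection(self, det):
        if self.seeing is None:
            self.pending.append(det)     # no PSF yet: buffer it
        else:
            self.process(det, self.seeing)

    def first_seeing(self, seeing):
        """First (or updated) seeing estimate: drain the buffer."""
        self.seeing = seeing
        for det in self.pending:
            self.process(det, seeing)
        self.pending.clear()

    def flush(self):
        """PICMAN signal: initial seeing assumption was significantly
        off, so discard buffered work and start over."""
        self.pending.clear()
        self.seeing = None

    def process(self, det, seeing):
        pass  # stub: photometry with the PSF matched to this seeing
```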

    This will result in changes to the FDD, but these changes will probably not be made before the November 8 review (unless J. Fowler is able to do this during dead time on jury duty).