How Do Satellites Actually Capture Images? Pushbroom, Whiskbroom, and SAR
Quick Answer: Modern optical satellites primarily use pushbroom sensors — a linear array of detectors that captures one line of pixels at a time as the satellite moves forward. Older systems (Landsat 1-7) used whiskbroom sensors with a scanning mirror. SAR satellites use a fundamentally different approach: they emit microwave pulses and synthesize a long antenna through forward motion to achieve high resolution. Each mechanism has implications for image geometry, noise, and processing requirements.
Most people picture satellite imaging as a camera snapping a photo — click, you've got an image. The reality is quite different. There's no shutter, no single exposure. Satellite images are built line by line, over seconds to minutes, as the platform hurtles through space at 7.5 kilometers per second.
Understanding how this works isn't just academic trivia. It explains why satellite images have the geometric distortions they do, why some sensors produce cleaner data than others, and why SAR images look nothing like photographs.
Pushbroom Sensors: The Modern Standard
Most current optical satellites — Sentinel-2, Landsat 8/9, WorldView, SPOT — use pushbroom sensors. The concept is straightforward:
A linear array of thousands of detectors is arranged perpendicular to the satellite's flight direction. As the satellite moves forward, this line of detectors sweeps across the surface like a broom being pushed across a floor. Each detector records one pixel, and the forward motion provides the along-track dimension.
Sentinel-2's MSI (MultiSpectral Instrument) builds its 290 km swath from 12 staggered detector modules, totaling roughly 29,000 ten-meter samples per cross-track line. Each array captures one cross-track line per exposure. At 786 km altitude and roughly 7.5 km/s orbital velocity, the 10-meter bands record on the order of 700 lines per second.
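As a sanity check on those figures, the line rate falls straight out of ground speed and pixel size. A back-of-envelope sketch (the ~7 km/s ground-track speed is an approximation, not a mission spec):

```python
# Back-of-envelope pushbroom timing (illustrative, Sentinel-2-like values)
GROUND_SPEED_M_S = 7_000   # approximate ground-track speed, m/s (assumption)
GSD_M = 10                 # ground sample distance of the 10 m bands

line_rate_hz = GROUND_SPEED_M_S / GSD_M   # lines captured per second
dwell_time_s = GSD_M / GROUND_SPEED_M_S   # max integration time per line

print(f"line rate: {line_rate_hz:.0f} lines/s")                  # 700 lines/s
print(f"max integration time: {dwell_time_s * 1e3:.2f} ms/line")  # 1.43 ms
```

The dwell time here is the upper bound on integration time per line, which is exactly the quantity that gives pushbroom its signal-to-noise advantage over whiskbroom.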
Advantages of pushbroom:
- Long integration time per pixel (each detector stares at one ground position longer), producing better signal-to-noise ratio
- No moving parts in the optical path, increasing reliability
- Uniform geometry within each cross-track line
Disadvantages:
- Requires very precise calibration of thousands of individual detectors. If detector #4,712 drifts, you get a stripe in the image
- Different detectors may have slightly different spectral responses, causing subtle across-track artifacts
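The striping failure mode in that last point can be sketched numerically. Below is a toy relative-normalization destripe that scales each image column (one column per detector) to the image-wide mean; it assumes the scene is statistically homogeneous across columns, whereas a real calibration would use onboard or vicarious references:

```python
import numpy as np

# Toy destriping: each pushbroom detector maps to one image column, so a
# drifted detector appears as a column whose mean differs from its neighbours.
rng = np.random.default_rng(0)
scene = rng.normal(100.0, 5.0, size=(400, 300))  # "true" radiance field
gains = np.ones(300)
gains[47] = 1.08                                 # detector 47 has drifted by 8%
striped = scene * gains

# Relative radiometric normalization: rescale each column so its mean
# matches the global mean (valid only for statistically uniform scenes).
col_means = striped.mean(axis=0)
corrected = striped * (striped.mean() / col_means)

print(abs(striped[:, 47].mean() - striped.mean()))      # large: visible stripe
print(abs(corrected[:, 47].mean() - corrected.mean()))  # ~0 after correction
```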
Whiskbroom Sensors: The Classic Approach
Landsat satellites before Landsat 8 used whiskbroom (or "across-track scanner") sensors. Instead of a line of detectors, a single detector (or small group) looked at the ground through a rotating or oscillating mirror that swept the field of view from side to side.
As the mirror rotated, the detector traced a line across the swath; the satellite's forward motion then advanced the scan to the next line. It's like reading a page: your eye (the detector) sweeps left to right, and at the end of each line, you move down.
Landsat's Thematic Mapper used an oscillating mirror scanning a 185 km swath with just 16 detectors per band. This was mechanically complex but avoided the calibration challenges of thousands of individual pushbroom detectors.
The trade-off: Whiskbroom sensors have shorter dwell time per pixel (the mirror is constantly moving), resulting in lower signal-to-noise ratios. They also introduce geometric distortions because pixels at the edge of the scan are viewed at a different angle than pixels at the center.
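The dwell-time penalty is easy to quantify. A rough sketch with Landsat-TM-like geometry (185 km swath, 30 m pixels, 16 detectors per band; the ground speed is an assumption):

```python
# Dwell-time comparison, illustrative Landsat-TM-like numbers
GROUND_SPEED = 7_000.0  # m/s, approximate ground-track speed (assumption)
PIXEL = 30.0            # m
SWATH = 185_000.0       # m
DETECTORS = 16          # TM detectors per band: each sweep images 16 rows

pixels_across = SWATH / PIXEL                  # ~6,167 pixels per line
sweep_time = DETECTORS * PIXEL / GROUND_SPEED  # time budget for one mirror sweep
whisk_dwell = sweep_time / pixels_across       # per-pixel dwell, whiskbroom
push_dwell = PIXEL / GROUND_SPEED              # per-pixel dwell, pushbroom

print(f"whiskbroom dwell: {whisk_dwell * 1e6:.1f} us")
print(f"pushbroom dwell:  {push_dwell * 1e3:.1f} ms")
print(f"pushbroom advantage: ~{push_dwell / whisk_dwell:.0f}x")
```

The ratio reduces to pixels-per-line divided by detectors-per-band: a few hundred times more integration time per pixel, which is why the pushbroom transition improved signal-to-noise so visibly.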
The transition from whiskbroom to pushbroom happened with Landsat 8's OLI sensor in 2013, and the improvement in data quality was immediately noticeable — fewer striping artifacts, better signal-to-noise, and more consistent geometry.
SAR: A Completely Different Approach
Synthetic Aperture Radar doesn't record reflected sunlight at all. It carries its own microwave transmitter, sends pulses toward the ground, and records the echoes.
The "synthetic aperture" part is the clever trick. A real radar antenna's along-track resolution is inversely proportional to its length: the longer the antenna, the finer the resolution. To achieve 10-meter resolution from 700 km altitude, you'd need an antenna several kilometers long. Obviously impractical.
Instead, SAR exploits the satellite's motion. As the satellite moves forward, it transmits a pulse, receives the echo, moves a bit, transmits again, receives again. By coherently combining all these echo signals over a stretch of the orbit, the processor synthesizes the effect of a very long antenna. The "synthetic aperture" is this virtual antenna created by combining signals received at many positions along the flight path.
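The numbers make the trick concrete. With hedged C-band values (the ~12 m physical antenna length and 800 km slant range are assumptions, loosely Sentinel-1-like), a real aperture would resolve only kilometers, while the classic stripmap result says the synthetic aperture resolves half the physical antenna length:

```python
# Real vs synthetic aperture azimuth resolution (illustrative C-band numbers)
WAVELENGTH = 0.056       # m, C-band
SLANT_RANGE = 800_000.0  # m, rough slant range to the scene (assumption)
ANTENNA_L = 12.0         # m, physical antenna length (assumption)

real_res = WAVELENGTH * SLANT_RANGE / ANTENNA_L  # real-aperture azimuth res.
synth_len = real_res   # the beam footprint sets the usable synthetic aperture
synth_res = ANTENNA_L / 2   # classic stripmap result: azimuth res. = L/2

print(f"real-aperture azimuth resolution: {real_res / 1000:.1f} km")  # ~3.7 km
print(f"synthetic aperture length:        {synth_len / 1000:.1f} km")
print(f"stripmap azimuth resolution:      {synth_res:.0f} m")         # 6 m
```

Note the inversion: for a real aperture a longer antenna is better, but for stripmap SAR a *shorter* antenna illuminates a longer footprint, yielding a longer synthetic aperture and finer azimuth resolution.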
Key differences from optical imaging:
- Active illumination: SAR works day and night, through clouds
- Coherent radiation: SAR records both amplitude and phase of the returned signal. Phase information enables interferometry (measuring ground displacement) and coherence analysis
- Side-looking geometry: SAR views the ground at an angle, not straight down. This creates unique geometric effects — layover, foreshortening, and radar shadow in mountainous terrain
- Speckle noise: The coherent nature of radar produces a granular noise pattern called speckle, absent in optical imagery
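The speckle point can be demonstrated statistically. Single-look intensity over a uniform target is exponentially distributed, so the noise is fully multiplicative (coefficient of variation = 1), and averaging N looks reduces it by sqrt(N). A minimal simulation:

```python
import numpy as np

# Multiplicative speckle sketch: single-look SAR intensity over a uniform
# target follows an exponential distribution whose mean is the true
# backscatter, so the noise scales with the signal instead of adding to it.
rng = np.random.default_rng(1)
sigma0 = 0.2  # "true" backscatter of a uniform field (arbitrary value)
intensity = sigma0 * rng.exponential(1.0, size=100_000)

print(intensity.mean())                    # ~0.2: speckle is unbiased...
print(intensity.std() / intensity.mean())  # ~1.0: ...but fully developed

# Multilooking: averaging 4 looks halves the coefficient of variation
looks4 = intensity.reshape(-1, 4).mean(axis=1)
print(looks4.std() / looks4.mean())        # ~0.5 for 4 looks
```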
Range vs. Azimuth Resolution
SAR resolution has two independent components:
Range resolution (across-track) depends on the bandwidth of the transmitted pulse — wider bandwidth means finer range resolution. Sentinel-1's IW mode achieves about 5 meters in range.
Azimuth resolution (along-track) depends on the synthetic aperture length, which in turn depends on how long the processor integrates signals. Sentinel-1 achieves about 20 meters in azimuth for IW mode.
These are independent of altitude, which is a remarkable advantage. A SAR in orbit can achieve the same resolution as one on an aircraft at a fraction of the altitude — you just need more processing.
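To illustrate the range side, here is the bandwidth-to-resolution calculation with Sentinel-1 IW-like values (the ~56.5 MHz chirp bandwidth and 35° incidence angle are assumptions). Altitude never enters the formula:

```python
import math

# Range resolution from pulse bandwidth: slant-range res. = c / (2 * B),
# projected to the ground via the incidence angle.
C = 3.0e8            # m/s, speed of light
BANDWIDTH = 56.5e6   # Hz, chirp bandwidth (assumed IW-like value)
INCIDENCE_DEG = 35.0  # deg, mid-swath incidence angle (assumption)

slant_range_res = C / (2 * BANDWIDTH)
ground_range_res = slant_range_res / math.sin(math.radians(INCIDENCE_DEG))

print(f"slant-range resolution:  {slant_range_res:.1f} m")   # ~2.7 m
print(f"ground-range resolution: {ground_range_res:.1f} m")  # ~4.6 m
```

The ground-range figure lands close to the ~5 meters quoted above; the projection through the incidence angle is also why ground-range resolution varies across a SAR swath.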
Frame vs. Strip Imaging
Another distinction worth understanding:
Strip mode (or "stripmap"): The sensor continuously records as the satellite flies over an area, producing a long strip of imagery. Sentinel-2 acquires this way, as does Sentinel-1 in its Stripmap mode (though Sentinel-1's default over land is the wide-swath TOPS mode described below).
Frame mode (or "spotlight" for SAR): The sensor focuses on a specific area, either by pointing the optical system or (for SAR) by steering the beam to dwell on one location longer. This produces higher resolution over a smaller area.
ScanSAR / TOPS mode: The SAR beam is steered to cover a wider swath at the cost of azimuth resolution. Sentinel-1's primary mode (Interferometric Wide Swath, IW) uses TOPS — a variant of ScanSAR — achieving 250 km swath width.
Why This Matters for Your Analysis
Understanding the imaging mechanism explains several practical observations:
Striping in older Landsat data: Detector calibration issues in whiskbroom or early pushbroom sensors caused periodic stripes. Knowing this prevents misinterpreting stripes as real surface features.
SAR geometric distortions: Mountains facing the SAR sensor appear compressed (foreshortening) or folded over (layover). The far side of mountains falls in radar shadow. These aren't processing errors — they're inherent to the side-looking geometry.
Temporal offset within a scene: A pushbroom image is built over several seconds. The top and bottom of a large scene are acquired at slightly different times. For most applications this is irrelevant, but for very fast-moving phenomena (rocket plumes, some atmospheric effects), it can be visible.
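The size of that offset is a one-line calculation (the 110 km scene length is Sentinel-2-granule-like; the ground speed is approximate):

```python
# Time to build one pushbroom scene, line by line
SCENE_LENGTH_M = 110_000  # m, e.g. a ~110 km granule (illustrative)
GROUND_SPEED = 7_000      # m/s, approximate ground-track speed (assumption)

offset_s = SCENE_LENGTH_M / GROUND_SPEED
print(f"top-to-bottom acquisition offset: ~{offset_s:.0f} s")  # ~16 s
```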
Different noise characteristics: Optical pushbroom imagery has approximately additive, near-Gaussian noise. SAR imagery has multiplicative speckle noise that requires specific filtering approaches (Lee, Frost, Gamma MAP). Using a standard Gaussian filter on SAR data will produce suboptimal results.
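As a concrete contrast to Gaussian smoothing, here is a simplified Lee-style filter sketch: it blends each pixel between its local mean and its raw value according to how much the local variance exceeds the variance expected from speckle alone. This is a textbook-style simplification, not any library's implementation:

```python
import numpy as np

# Simplified Lee speckle filter (single-look intensity, square window).
# Weight w = signal_var / local_var: homogeneous areas (w -> 0) get replaced
# by the local mean; edges and texture (w -> 1) are preserved.
def lee_filter(img, win=7, looks=1):
    cu2 = 1.0 / looks                # squared speckle coefficient of variation
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    local_mean = windows.mean(axis=(-2, -1))
    local_var = windows.var(axis=(-2, -1))
    noise_var = cu2 * local_mean**2  # multiplicative noise variance estimate
    signal_var = np.maximum(local_var - noise_var, 0.0)
    w = signal_var / (local_var + 1e-12)
    return local_mean + w * (img - local_mean)

rng = np.random.default_rng(2)
truth = np.full((64, 64), 0.3)
truth[:, 32:] = 0.9                  # a step edge between two uniform fields
speckled = truth * rng.exponential(1.0, truth.shape)
filtered = lee_filter(speckled)
print(filtered[:, :28].std() < speckled[:, :28].std())  # flats get smoother
```

Running a plain Gaussian blur on the same scene would smooth the flats too, but it would also blur the edge and leave the signal-dependent noise structure uncorrected, which is the practical point of the paragraph above.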
The sensor is the starting point of the entire remote sensing data chain. What it measures, how it measures it, and the artifacts it introduces — these all propagate downstream into every analysis you perform.
