Shape from Shading - Photoclinometry
- Mars Crater Simulations - The shape of the craters is taken from measurements of individual fresh craters on Mars using a technique called photoclinometry, which uses shading variations on an obliquely lighted surface to infer topography (also called "shape from shading") (Craddock, R., Maxwell, T., and Howard, A., 1997).
- The Arctic and Antarctic Research Center (AARC) Home Page - AARC_biblio.txt - Scambos, T. A., and M. A. Fahnestock, 1996: Improving digital elevation models over ice sheets using AVHRR-based photoclinometry. IGS Abstract for Symposium on Representation of the Cryosphere in Climate and Hydrological Models, Victoria, B.C.
- Publications 1989-1990 -
- Muinonen, K., Lumme, K., Peltoniemi, J. I., and Irvine, W. M.: Statistical photoclinometry and surface topography of atmosphereless bodies. Asteroids, Comets, Meteors III (Uppsala, Sweden), 1989, p. 95.
- Muinonen, K., Lumme, K., Zhukov, B. S., and Irvine, W. M.: Statistical photoclinometry of Phobos' surface topography. First Results of the Phobos-Mars Mission and Future Exploration of Mars (Paris, France), 1989, p. 103.
- Muinonen, K., Lumme, K., and Irvine, W. M.: Statistical photoclinometry and surface topography of atmosphereless bodies. 20th Lunar and Planetary Science Conference, 1989, p. 729-730.
- Muinonen, K., Lumme, K., Peltoniemi, J. I., and Irvine, W. M.: Statistical
photoclinometry and surface topography of atmosphereless bodies. Asteroids,
Comets, Meteors III, Proceedings, 1990, p. 155-158.
- Northern Arizona University Physics and Astronomy Faculty and Staff - Robert Wildey, PhD - Astrophysics and astronomy, stellar evolution, infrared astronomy, ground-based and spacecraft investigations of the Moon and Mars, synthetic aperture radar, photoclinometry, cosmology. (California Institute of Technology, 1962)
- Production of Digital Image Models - McEwen, A. S., 1991b, Photometric Functions for Photoclinometry and Other Applications, Icarus, Vol. 92, pp. 298-311.
- NEAR
Imaging Team Member Proposal
- Shape and Relief Determinations - A variety of techniques
exist for determining the shape of an irregular body. These include several
types of photogrammetry, photoclinometry, limb reconstructions, and fused
techniques (those that incorporate one or more of the other approaches). In
addition, at least some of these techniques can be employed in either fully
manual or fully automated modes, and in modes that are partially automated or
partially manual. In this section, Viking observations of Phobos are used to
illustrate some of these techniques.
- Limb Curves - Limb curves (and their close relatives,
terminator lines) provide significant, though not necessarily complete,
information about the shape of a body. With sufficient axial sampling, most of
the features of the body can be reconstructed (the principal limitation being
that the interiors of concave features such as craters never appear on the
limb). Limb curve reconstruction may suffer from operational limitations if it is not possible to obtain a sufficient number of equally spaced, high-resolution views around multiple known axes to eliminate all possible ambiguity (i.e., in cases where only one axis of rotation can be used, a small hill located between two larger hills might never appear on the limb).
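- The carving idea behind limb reconstruction can be illustrated with a short sketch (a simplified 2-D stand-in, not the software discussed in this excerpt): keep only the grid cells whose projection falls inside the object's limb profile in every available view. Because a crater interior is always hidden behind surrounding terrain, it survives the carving, which is exactly the limitation noted above.

```python
# Simplified 2-D illustration of limb-curve (silhouette) reconstruction.
# For each viewing direction, a cell is kept only if its projection onto
# the axis perpendicular to the view falls inside the object's limb
# profile; cells rejected by any view are carved away.
import numpy as np

def carve_from_limbs(mask, n_views=72, n_bins=128):
    yy, xx = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    ys, xs = np.nonzero(mask)
    hull = np.ones_like(mask, dtype=bool)
    for theta in np.linspace(0.0, np.pi, n_views, endpoint=False):
        # Coordinate of every cell along the axis perpendicular to the view.
        proj_obj = xs * np.cos(theta) + ys * np.sin(theta)
        proj_all = xx * np.cos(theta) + yy * np.sin(theta)
        bins = np.linspace(proj_obj.min(), proj_obj.max(), n_bins + 1)
        occupied = np.histogram(proj_obj, bins=bins)[0] > 0   # the limb profile
        idx = np.clip(np.digitize(proj_all, bins) - 1, 0, n_bins - 1)
        inside = occupied[idx] & (proj_all >= bins[0]) & (proj_all <= bins[-1])
        hull &= inside
    return hull

# A disc with a crater-like bite: the bite never shows on the limb, so the
# carved model stays filled there and has more cells than the true object.
y, x = np.mgrid[-32:32, -32:32]
obj = ((x**2 + y**2) < 25**2) & ~(((x - 20)**2 + y**2) < 10**2)
print(obj.sum(), carve_from_limbs(obj).sum())
```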
- Figure 1 shows a technique for manual definition of limb and
terminator positions in multiple images. The technique permits fitting
an initial figure (generally a triaxial ellipsoid) to the object as observed in
several images, and then deforming that figure by incorporation of features of
known geometry (such as craters with circular planform and parabolic
cross-section) or by manually moving grid points to specific locations. The
particular software shown in Figure 1 can be used without prior knowledge of pointing and other relevant parameters or, more effectively, used with the
Navigation Ancillary Information Facility (NAIF) toolkit and SPICE
(Spacecraft-Planet-Instrument-"Camera"-Event) navigation/attitude kernels for
precise manipulation. In this latter case, limb/terminator fitting and other
forms of photogrammetry become quite similar. Using this software, a dozen
images can be fit in a few hours.
- Photogrammetry - Control Points In control point
methods, features common to multiple images are identified and their coordinates
on the image plane of each image measured. These measurements have traditionally been performed manually, although automated systems, based on pattern
matching, have been developed over the past 5-8 years. A least-squares procedure
is then applied to the ensemble of measurements, along with information about
the target/spacecraft position and camera orientation. Some or all of the
information can be held fixed if it is known with adequate precision; with a
sufficient number of control points, all of the parameters can be left to be
adjusted by the procedure. The result is an estimate of the position in
three-space of each control point, and updated values of the camera orientation
and spacecraft position. To produce a surface from the scattered points, an
interpolation procedure must be used. A variety of spline-based and
minimum-energy surface fitting routines have been implemented in existing
software. As might be expected, some routines work well under some
circumstances, while performing poorly in others. Also as might be expected, the
quality of the surface is generally dependent on the number of points, as is the
time needed for processing.
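- As a minimal sketch of the interpolation step (a generic scattered-data interpolator, not the spline or minimum-energy routines referred to above), the scattered control-point solutions can be gridded as follows; the control-point coordinates are hypothetical values for illustration.

```python
# Minimal sketch: interpolate a height grid from scattered control-point
# solutions.  Uses SciPy's generic scattered-data interpolator as a
# stand-in for the spline/minimum-energy routines mentioned in the text.
import numpy as np
from scipy.interpolate import griddata

# (x, y, z) positions of control points recovered by the least-squares
# adjustment (hypothetical values for illustration).
control_points = np.array([
    [0.0, 0.0, 10.0],
    [9.0, 0.0, 12.5],
    [0.0, 9.0,  8.0],
    [9.0, 9.0, 11.0],
    [4.5, 4.5, 15.0],   # a local high between the corners
])

# Regular grid covering the mapped area.
xi, yi = np.meshgrid(np.linspace(0, 9, 50), np.linspace(0, 9, 50))

# Smooth (cubic) interpolation where possible, nearest-neighbour fill elsewhere.
zi = griddata(control_points[:, :2], control_points[:, 2], (xi, yi), method="cubic")
fill = griddata(control_points[:, :2], control_points[:, 2], (xi, yi), method="nearest")
zi = np.where(np.isnan(zi), fill, zi)

print(zi.shape)      # (50, 50) height grid
print(zi[25, 25])    # interpolated height near the central control point
```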
- Stereophotogrammetry - Stereophotogrammetric
techniques are, as a class, a special case of the more general control point
methodology. In stereoscopy, the position and orientation of the camera are generally known, a limited number of images (often two) are used to find feature
correspondences, and there is usually considerable overlap of features. One
strength of stereo is that autocorrelation or other automated image-matching
techniques can be used to build a much larger number of corresponding points
than would typically be determined for control points. Using the camera
coordinates, feature displacement owing to parallax maps directly to surface
relief. Figure 2 shows the "front end" of a set of programs used in
stereogrammetry. Left and right images (in side-by-side format or portrayed as
anaglyphs) are examined stereoscopically, and a movable cursor is used to establish points "on" the virtual surface. Points are selected manually, one at a time. This
tool can be used to quickly generate topographic profiles (as shown in Figure
2), but its primary use is to edit points
generated automatically by a combined edge/area correlation program.
- The point positions, derived either manually or automatically, are usually
expressed as heights above a nominal ground plane, and a surface is interpolated
by the methods mentioned above to produce a height grid or digital terrain
model.
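- A minimal sketch of the parallax-to-relief step follows for an idealized, rectified stereo pair (the camera geometry values are hypothetical and this is not the Viking or IMP processing chain): feature displacement is converted to depth with the standard baseline-times-focal-length-over-disparity relation, and heights are then referenced to a nominal ground plane.

```python
# Minimal sketch: convert measured parallax (disparity) to relief above a
# nominal ground plane for an idealized, rectified stereo pair.
# All geometry values below are hypothetical, for illustration only.
import numpy as np

baseline_m = 30.0        # separation of the two camera stations
focal_px = 1500.0        # focal length expressed in pixels
ground_range_m = 1000.0  # distance from the cameras to the nominal ground plane

def relief_from_disparity(disparity_px):
    """Depth = baseline * focal / disparity; relief is measured upward
    from the nominal ground plane toward the cameras."""
    depth_m = baseline_m * focal_px / disparity_px
    return ground_range_m - depth_m

# Disparities of three matched features (e.g. from manual selection or
# automated correlation, as described above).
disparities = np.array([45.2, 46.8, 44.1])
print(relief_from_disparity(disparities))
```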
- Photoclinometry, also known as shape-from-shading, attempts to recover the orientation of the surface at each pixel by inverting a shading model, given knowledge of the illumination conditions. Traditional applications of this
technique in planetary science have used line-based, integrative methods, which
are highly sensitive to errors caused by mismatches between the shading model
and the actual surface, imperfections in the imaging system, and albedo
variations. At least some of these limitations can be overcome by the use of
area-based techniques developed over the past fifteen years by the computer
vision community. These techniques attempt to distribute error across the image
by globally minimizing criteria like departure from integrability. Although the
line-based methods usually require manual input, the area-based techniques
are typically fully-automated (the implementation used to create the relief
shown in Figure 3 was fully automated). Once the orientation of each surface
patch is known, a height map can be built up by simple integration or more
sophisticated iterative methods.
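- As an illustration of that final integration step, the sketch below implements one well-known area-based approach (the Frankot-Chellappa integrability projection), not the specific implementation used for Figure 3: once per-pixel gradients are available, a height map is recovered by projecting the gradient field onto the nearest integrable surface in the Fourier domain.

```python
# Minimal sketch: recover a height map from per-pixel surface gradients
# (p = dz/dx, q = dz/dy) by projecting onto the nearest integrable surface
# in the Fourier domain, one of the area-based, integrability-minimizing
# methods alluded to above.
import numpy as np

def integrate_gradients(p, q):
    rows, cols = p.shape
    wx = np.fft.fftfreq(cols) * 2 * np.pi   # spatial frequencies, x
    wy = np.fft.fftfreq(rows) * 2 * np.pi   # spatial frequencies, y
    WX, WY = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                       # avoid division by zero at DC
    Z = (-1j * WX * P - 1j * WY * Q) / denom
    Z[0, 0] = 0.0                           # height is known only up to a constant
    return np.real(np.fft.ifft2(Z))

# Synthetic test: take the gradients of a smooth bump, then reintegrate them.
y, x = np.mgrid[0:64, 0:64]
z_true = np.exp(-((x - 32)**2 + (y - 32)**2) / 100.0)
q_true, p_true = np.gradient(z_true)        # np.gradient returns (d/dy, d/dx)
z_est = integrate_gradients(p_true, q_true)
print(np.abs((z_est - z_est.mean()) - (z_true - z_true.mean())).max())
```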
- The fact that these techniques have different strengths and weaknesses
suggests that the best approach is to combine the results from all of them -- an
approach called "sensor fusion" by the computer vision
community. A coarse surface approximation will be derived from control points
supplemented with a finer model supplied by manually-edited automated limb- and
terminator-matching. A technique much like surface rendering of Computer-Aided
Tomography (CAT) scans will be used to synthesize many limb models into a single
model. This model will be compared with photometric models of comparable
resolution, and the process iterated. High spatial resolution photometrically
derived relief will be extracted after the shape models converge, with local
constraints provided by high resolution stereogrammetry.
- NASA Video Catalog
1995 - Photoclinometry produced the topography of Triton. Three images are
used to create a sequence of Neptune's rings. The globe of Neptune and 2 views
of the south pole are shown as well as Neptune rotating. The rotation of a
scooter is frozen in images showing differential motion. There is a view of
rotation of the Great Dark Spot about its own axis. Photoclinometry provides a
3-dimensional perspective using a color mosaic of Triton images.
- dps97prg.txt - 04.11-P Herkenhoff, K. E., Fenton, L. K., and Murray, B. C.: Photoclinometry and Stereogrammetry of the Northern Martian Polar Layered Terrain
- JPL
Mars Pathfinder Quick Facts - Geomorphology, Photoclinometry, and Topography: Panoramas of the landing site will be taken both before and after the camera mast deployment. These images will map the landing site for rover operations, as well as study the large- and small-scale structure of the landing site, rock and dune features, and any erosional features. Stereo ranging will determine the topography of the landing site and support rover operations. Images of the same areas taken at different solar elevation angles will permit topographic analysis by shadow length and photoclinometry. Additional images
will be taken to study the nature of the martian soil. This will include imaging
the rover wheel tracks to determine soil strength and compaction properties.
Observations of the calibration targets and the lander surfaces will measure the
rate of dust outfall.
- International TransAntarctic Scientific Expedition (ITASE) Meeting - Three corridors have been selected in a general sense. It remains to determine where within these broad (two-dimensional) corridors the best (one-dimensional) tractor routes lie, and where the best locations (one-dimensional points) are for the 200-year cores. Meteorological Modelling: Meteorological data and modelling studies are needed very early to identify areas best able to preserve an ENSO record and a Little Ice Age record. Satellite Images: Satellite imagery should be used to derive surface slopes by photoclinometry, since slope may control wind and accumulation rate. Satellite passive microwave data (e.g. Nimbus-7 ESMR) can be used to estimate the current accumulation rate patterns, and satellite infrared images (AVHRR) can be used to estimate surface temperature patterns prior to selecting the core sites. SOAR Radar: Knowledge of the ice thickness and bedrock can help to select appropriate core sites for ITASE. The SOAR Twin Otter Aerogeophysical survey aircraft has already covered some blocks in the ITASE region using ice-penetrating radar. Wherever available, these data should be used to plan routes. High Resolution Ice-Penetrating Radar: Airborne ice-penetrating radar surveys should be carried out using a system that can record high quality internal layering in the upper 50 to 100 meters. These data will identify regions with good preservation of stratigraphy in the past 200 years, and will be used to link the accumulation rate histories derived from the individual core sites. At least several flight lines should be flown in each corridor, with cross-tie lines where possible. Even if such a high-resolution radar system does not detect bedrock, maps of the upper internal layering will be invaluable to the traverse planning and data interpretation.
- Eruption Cloud
Products -
- Description: This algorithm will produce maps of plume top topography
from photoclinometry, plume top temperature, and plume top altitude and
topography from plume top temperatures using MODIS (and ASTER, when available).
The spatial resolution will be 1 km for the maps made from MODIS data, and 90 m
for maps from ASTER data. Approximately 5 eruptions will be studied per year.
- Input: MODIS Level 1B radiance (MOD02), Channels 12, 17, and 31.
About 3 scenes/eruption = 15 scenes/yr. ASTER Level 1B radiance (AST03), three
infrared channels, 60 km x 60 km scene, on an as-available basis (probably less
than 1 scene/yr). MODIS (MOD30) or AIRS (AIR07) temperature profiles, 3 profiles
per MODIS or ASTER scene. Whenever simultaneous observations by MISR are
available, cloud top elevation (MIS04, parameter 1433), will be used for
comparison with products 3293a and 3293c.
- Output from Goddard SCF (HDF File Format, No Browse Images
Available): 8-bit raster images.
- Lori Fenton's Homepage - Photoclinometry and
Stereogrammetry of the Northern Martian Polar Layered Terrain
- Pacific-Sierra Research Image Shape from Shading - A method for determining the shape of a surface from its image
- "Shape from shading" (also known as photoclinometry) is a method for
determining the shape of a surface from its image. For a surface of constant
albedo, the brightness at a point (x,y) in the image is related to the gradients
(p,q) by the following expression:
- i(x,y) = a R[p(x,y),q(x,y)]
- where R is the reflectance map, p = dz/dx and q = dz/dy are the partial
derivatives of the surface in the x- and y-directions, and a is a constant that
depends on the albedo, the gain of the imaging system and other factors. The
above expression also assumes that any additive offsets, for example, because of
atmospheric scattering, have been removed.
- A variety of methods have been developed for inverting the above equation
(see Horn 1990). The next section describes a simple method that provides
satisfactory results in many planetary imaging scenarios. It is based on some
early ideas described by Horn (1977).
- Row Integration - The reflectance map depends on the position of the light
source, the observer, and the type of surface material (Horn 1981). It can be
thought of as a lookup table that gives the brightness as a function of the
gradients. For Lambertian surfaces the brightness is proportional to the cosine
of the angle between the vector that is normal (perpendicular) to the surface
and the vector in the direction of the light source. As noted by Pentland
(1988), if the angle between the vector in the direction of the light source and the vector in the direction of the observer is more than 30 degrees and the surface is not too rough, the reflectance map can be approximated by a linear relationship. If the image is rotated so that the vector that points to the sun
is in the x-z plane, it can be shown that
- i(x,y) ~ a [sin(s) p(x,y) + cos(s)]
- where s is the zenith angle of the sun. The constant scale factor a is
difficult to determine directly without ground truth (that is, ground targets
with known albedo and slope). However, because in most images the gradients are
more-or-less uniformly distributed in all directions, the expected value of the
gradient in the x-direction E[p] ~ 0 and so the average image brightness
- E[i] ~ a cos(s).
- This then allows us to estimate the scale factor
- a = E[i] / cos(s).
- The elevation map z(x,y) can be obtained iteratively, row-by-row as
z(x,y) = z(x-1,y) + [i(x,y) - a cos(s)] / [a sin(s)]
- where z(0,y) are the boundary values. If the boundary values z(0,y) are
unknown, we can minimize the mean-squared elevation difference between rows by
subtracting the average row elevation from the elevations in the row.
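- The scheme above translates almost line for line into code. The sketch below assumes the image has already been rotated so the sun direction lies along +x and additive offsets have been removed, and it tests the recursion on a synthetic ridge rather than real imagery.

```python
# Minimal transcription of the row-integration scheme above: estimate the
# scale factor a from the mean brightness, convert brightness to slope
# p = [i - a cos(s)] / [a sin(s)], and accumulate heights along each row.
import numpy as np

def photoclinometry_rows(image, sun_zenith_deg):
    s = np.radians(sun_zenith_deg)
    a = image.mean() / np.cos(s)                     # a = E[i] / cos(s)
    p = (image - a * np.cos(s)) / (a * np.sin(s))    # per-pixel slope dz/dx
    z = np.cumsum(p, axis=1)                         # z(x,y) = z(x-1,y) + p(x,y)
    # Unknown boundary values: remove each row's mean so rows line up in a
    # least-squares sense, as suggested above.
    return z - z.mean(axis=1, keepdims=True)

# Synthetic Lambertian-like image of a ridge lit from the +x direction.
y, x = np.mgrid[0:64, 0:64]
z_true = 5.0 * np.exp(-((x - 32) ** 2) / 80.0)
p_true = np.gradient(z_true, axis=1)
s = np.radians(60.0)
image = 0.8 * (np.sin(s) * p_true + np.cos(s))       # i ~ a [sin(s) p + cos(s)]
z_est = photoclinometry_rows(image, 60.0)
print(np.abs(z_est - (z_true - z_true.mean(axis=1, keepdims=True))).max())
```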
- When stereo imagery does exist, shape from shading provides an alternative
method for extracting terrain data. Although its use has been limited primarily
to constant-albedo planetary mapping applications (that is, where the surface is
covered more-or-less by the same material), a new algorithm under development by PSR will extend it to the general case in terrestrial imaging applications where
the albedo is not constant.
- "Shape from shading" (also known as photoclinometry) is a method for
determining the shape of a surface from its image. For a surface of constant
albedo, the brightness at a point (x,y) in the image is related to the gradients
(p,q) by the following expression:
- Extracting
Topographic Information from a Single Multispectral Image - Mark J. Carlotto, PSR Corp., 1400 Key Blvd. Suite 700, Arlington VA
22209 (markc@psrw.com)
- USGS Flagstaff PICS DOCUMENTATION - NABISCO (NAsty calculations for BISCOpic photoclinometry) - $FROM=list $LINE=list $SAMPLE=list $DISTORTD=list $FIT=list CFILE - Programmer: Randolph Kirk, U.S.G.S., Flagstaff
- Moile,
J - Image Processing Techniques Used to Explore Uncharted Territory by Brook
Votaw
- STEREO IMAGING WITH THE IMP
- The IMP had two "eyes" which enabled the creation of stereo (3D) images.
These images made it possible to gauge distance with range-finding which
assisted in the rover’s navigation and helped to create a topographical map of
the area. The image from the right eye and that from the left eye were
superimposed onto one another, and the distance between a rock seen by the right
eye and the same rock seen by the left eye revealed the distance to the object
[8]. The IMP makes color images in one eye from red, green, and blue filters.
Stereo color images are made using the RGB information from one eye in
combination with RB information in the other eye. This method frees one of the
RGB stereo pair filters in one eye to create a stereo pair at 965 nm for reduced
aliasing in stereo ranging measurements [6]. The quantitative photogrammetric
processing of stereo data led to several useful data products. The U.S.
Geological Survey performed this processing on a commercial digital
photogrammetric workstation (DPWS) after some software modification. For
example, half of every rock in the landing site remained unobservable and
unmodellable based on IMP data alone. Integration of the Sojourner Rover camera
images with the IMP data into a three-dimensional model was used to address this
problem [6]. Also, stereo images were mosaiced to produce a "virtual reality"
scene of the landing site to support rover operations [8].
- http://mars.jpl.nasa.gov/nasa "Imager for Mars Pathfinder (IMP)." Principal Investigator, Peter H. Smith,
University of Arizona. July 20, 1998.
- http://www.lpl.arizona.edu/imp "Imager for Mars Pathfinder". Web page maintained by Sara Smith, Department of
Planetary Sciences, Lunar and Planetary Laboratory. The University of Arizona,
Tucson AZ 85721. July 20, 1998.
- STEREO IMAGING WITH THE IMP
- Imager for Mars Pathfinder -
The Imager for Mars Pathfinder, also known as the IMP, is a multi-spectral
stereo imaging system, a camera, which landed on Mars aboard the Mars Pathfinder
on July 4th, 1997. The Pathfinder lander, renamed the Carl Sagan Memorial
Station, is in an old flood channel, Ares Vallis. Although the landed mission is
officially over, scientists are still at work analyzing the first new pictures
of the Martian surface since the Viking landers in 1976. Because the IMP is a
"multi-spectral" imaging system, it take different kinds of pictures, and the
data returned from the IMP is helping scientists learn about the atmosphere,
geology, and weather of Mars. The IMP was designed at the University of Arizona
Lunar and Planetary Lab, by a team under the direction of Principal Investigator
Peter Smith.
- Pacific-Sierra Research - Nonlinear Mean-Square Estimation - Mark J. Carlotto (markc@psrw.com), Pacific-Sierra Research Corporation, 1400 Key Blvd., Suite 700, Arlington, VA 22209
- In order to reduce the complexity of the algorithm and the size of the
lookup table, we use the first three Landsat TM principal components to predict
the topographic component. Figure 2a shows the first principal component image
over a 200 x 200 pixel training area in the Sandia Mountains. Figure 2b is the
topographic component computed from the DEM using a Lambertian reflectance map.
Within the training area, 14,538 unique combinations of the three principal
component values were found. Figure 2c is the topographic component estimated
from the three principal component images over the training area. Within the
training area, the method is able to spatially enhance the topographic
component, revealing subtle detail not visible in the lower resolution DEM.
- We then used the lookup table to estimate the topographic component from the Landsat imagery outside of the training area.
- A limitation of the basic method described in Section 2 is that the table
only contains entries for input values in the training area. Values encountered
that are not in the training area must be interpolated. The simplest
interpolation technique is to assign the output value associated with the
nearest entry in the table.
- Figure 3a shows the topographic component derived from the DEM over the full
study area. Figure 3b is the topographic component estimated from the imagery
using the lookup table derived over the training area. The full study area
contains 133,213 unique combinations of the three Landsat TM principal component
values. Within the training area only about 10% of the spectral diversity of the
full image is represented. Almost 90% of the values shown in Figure 3b have thus
been interpolated.
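- A minimal sketch of this table-based estimator follows. The arrays are random stand-ins for the Landsat TM principal components and the DEM-derived topographic component, not PSR's software: a table is built over a training area, and combinations not seen in training fall back to the nearest table entry.

```python
# Minimal sketch of the lookup-table estimator: map triples of
# principal-component values (from a training area) to the DEM-derived
# topographic component, then apply the table elsewhere, falling back to
# the nearest entry for unseen combinations.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
train_pcs = rng.integers(0, 256, size=(200 * 200, 3)).astype(float)  # PC1..PC3
train_topo = rng.random(200 * 200)                                   # shaded DEM

# Table: mean topographic value for each unique PC combination.
keys, inverse = np.unique(train_pcs, axis=0, return_inverse=True)
inverse = inverse.reshape(-1)
table = np.zeros(len(keys))
np.add.at(table, inverse, train_topo)
table /= np.bincount(inverse, minlength=len(keys))

# Outside the training area: nearest-entry interpolation for combinations
# that never occurred in training.
tree = cKDTree(keys)
test_pcs = rng.integers(0, 256, size=(1000, 3)).astype(float)
_, nearest = tree.query(test_pcs)
estimated_topo = table[nearest]
print(estimated_topo.shape)   # one estimated topographic value per test pixel
```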
- Resume for Keith H. Stewart - Final image mosaics were spliced together using the Martian coordinate system and converted into .PSD graphic format for geologic mapping. Processed datasets were then analyzed at the Smithsonian Institution (Center for Earth and Planetary Studies) using a Solaris workstation and proprietary photoclinometry software.