The normal human vision system has the remarkable ability to
classify objects based on the distribution in space of
the red, green, and blue color components of light emanating from the objects.
This ability can be exploited to classify based on the distribution of
any three variables by creating a "false color" image in which
each variable is assigned to a different color component.
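The idea of mapping three arbitrary variables onto the red, green, and blue channels can be sketched in a few lines. This is an illustrative example, not WAY-2C code; the function name and scaling choices are assumptions:

```python
import numpy as np

def false_color(var1, var2, var3):
    """Combine three 2-D variable maps into one RGB false-color image.

    Each variable is scaled independently to the 0-255 range and
    assigned to the red, green, and blue channels respectively.
    """
    channels = []
    for v in (var1, var2, var3):
        v = np.asarray(v, dtype=float)
        lo, hi = v.min(), v.max()
        scaled = (v - lo) / (hi - lo) if hi > lo else np.zeros_like(v)
        channels.append((scaled * 255).astype(np.uint8))
    return np.dstack(channels)  # shape (H, W, 3)
```

Any three co-registered maps (for example, two sensor images plus a density map) can be passed in; the result is an ordinary RGB image that a human or a color classifier can examine.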
For example, this false color image results from combining images from two different sensors in a simulated baggage inspection system. Suspicious organic materials appear reddish brown, a signature WAY-2C can detect automatically.
The interpreted image highlights such regions in red, and the system can also trigger an alarm signal when a specified
color class is detected.
WAY-2C's proven maximum likelihood classification method makes it easy to
automate image classification based on the spatial distribution of up to
three variables. The variables can represent virtually any properties,
whether from multi-spectral or multi-sensor imaging devices, or even maps
of such properties as topography, temperature, strain, and/or density.
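WAY-2C's implementation is proprietary, but the general idea of train-by-example maximum likelihood classification over three variables can be sketched with per-class 3-D histograms. The class structure, bin count, and API below are hypothetical, chosen only to illustrate the technique:

```python
import numpy as np

class HistogramMLClassifier:
    """Illustrative train-by-example maximum likelihood classifier.

    Each class is modeled by a 3-D histogram of its training samples'
    three variable values (0-255), quantized to `bins` levels per axis.
    A sample is assigned to the class whose histogram gives it the
    highest estimated probability.
    """

    def __init__(self, bins=16):
        self.bins = bins
        self.hists = {}

    def _index(self, values):
        # Quantize 0-255 values to histogram bin indices.
        return np.clip(values * self.bins // 256, 0, self.bins - 1)

    def train(self, label, samples):
        # samples: (N, 3) array of training values for one class.
        idx = self._index(np.asarray(samples, dtype=int))
        h = np.full((self.bins,) * 3, 1e-9)  # floor avoids zero probabilities
        for r, g, b in idx:
            h[r, g, b] += 1
        self.hists[label] = h / h.sum()

    def classify(self, sample):
        r, g, b = self._index(np.asarray(sample, dtype=int))
        return max(self.hists, key=lambda c: self.hists[c][r, g, b])
```

Because the model is just an empirical histogram, it handles the non-Gaussian, multimodal distributions mentioned later in this document without any parametric assumptions.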
We have recently been awarded US Patent 8,918,347 for an automated method of rapidly
determining the optimum combination
of available sensor variables to differentiate any particular set of target classes.
This method, which we refer to as relevance spectroscopy, has been demonstrated
by experimental studies on a variety of multispectral and hyperspectral aircraft
and satellite images. The resulting combination is suitable for analysis by traditional
and/or our maximum likelihood methods.
The upper image was created from a set of three monochrome images automatically chosen
from a set of over two hundred images, each representing a different spectral band.
In this case the band combination was chosen to optimize differentiation of certain
water and shoreline classes while suppressing differentiation of land vegetation classes.
Training regions for each class are outlined by the red, yellow, purple, and blue rectangles
with each color representing a different class. Note how land vegetation regions are combined
into a single class whose training regions are outlined by the green rectangles.
The resulting WAY-2C interpretation is shown in the lower image.
If the objective had been to distinguish a variety of land-based vegetation classes
the system would have automatically selected an entirely different set of spectral bands.
WAY-2C can be retrained almost instantly to meet new conditions, and it can recognize anomalous color distributions in all or selected parts of an image. Together, these abilities make it ideal for automatic anomaly detection in video surveillance systems.
Best Focus, Time-Series Monitoring and Other Pattern Recognition Applications
The technology now contained in WAY-2C was originally inspired by the need
to analyze noisy, complicated geophysical and environmental monitoring data.
The first demonstrations were on data sets ranging from automatic classification
of sea-floor topography based on sonar data to language identification from text files.
We even developed a method for automatically determining best focus in an image.
However, business opportunities quickly led us to concentrate on color and
multispectral-based recognition for machine-vision, sorting and process control.
Now that WAY-2C has become a mature software product, we've begun to find new
non-imaging applications where it offers significant advantages.
Accounting and transaction records often show characteristic, organization-specific
statistical patterns. WAY-2C's anomaly recognition capability can be applied to
such records to search for transactions that fall outside the expected range.
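The underlying idea of anomaly recognition can be sketched simply: build an empirical probability model from normal examples, then flag values whose estimated probability falls below a threshold. The functions, bin counts, and threshold below are hypothetical, not WAY-2C's actual implementation:

```python
import numpy as np

def train_hist(samples, bins=32, lo=0.0, hi=1.0):
    """Histogram of normal training samples as an empirical probability model."""
    counts, edges = np.histogram(samples, bins=bins, range=(lo, hi))
    return counts / counts.sum(), edges

def is_anomaly(x, probs, edges, threshold=0.01):
    """Flag a value whose estimated probability falls below the threshold."""
    i = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(probs) - 1)
    return probs[i] < threshold
```

A transaction amount (or any scalar feature) that lands in a bin rarely or never seen during training is reported as falling outside the expected range.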
Physiological, motion-monitoring, and similar signals can often provide information
on a subject's current activity. WAY-2C's powerful relevance spectroscopy
and classification tools have been successfully applied to determining
optimum sensor combinations for automated activity classification.
Like color-based recognition, these applications are characterized by
data distributions that are unpredictable, usually not Gaussian,
and most often multimodal. We've found that we can apply the same rapid
train-by-example recognition methods to the analysis of a wide variety of
both quantitative and qualitative data; all we have to do for a new data
format is to add an interface routine to cast the incoming data into the form
of an image in computer memory.
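Such an interface routine can be very small. The sketch below casts a one-dimensional data stream into a 2-D array that image-based tools can treat like a picture; the function name, scaling, and padding scheme are illustrative assumptions:

```python
import numpy as np

def series_to_image(samples, width):
    """Cast a 1-D data stream into a 2-D 'image' in memory.

    Values are scaled to 0-255, the stream is zero-padded to a whole
    number of rows, and the result is reshaped into a `width`-column
    8-bit image.
    """
    v = np.asarray(samples, dtype=float)
    lo, hi = v.min(), v.max()
    scaled = (v - lo) / (hi - lo) * 255 if hi > lo else np.zeros_like(v)
    pad = (-len(scaled)) % width
    padded = np.concatenate([scaled, np.zeros(pad)])
    return padded.reshape(-1, width).astype(np.uint8)
```

Once the data is in image form, the same histogram-based training and classification machinery applies unchanged.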
Non-image data sets which WAY-2C has successfully classified range from
acoustic spectrograms to activity states derived from 3-axis accelerometer data.
If you have data analysis applications that might benefit from robust
automated statistical pattern recognition methods, contact Robert McConnell
for a free confidential evaluation.
Thermal Transient Analysis
By assigning thermal images from a single scene obtained at different times to different image planes, WAY-2C can classify regions of the resulting "color" images on the basis of their thermal properties.
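The time-to-color-plane assignment can be sketched as follows. The simulated cooling curves, scene layout, and function name are assumptions made for illustration:

```python
import numpy as np

def thermal_stack(frames):
    """Assign three thermal frames of the same scene, captured at
    different times, to the R, G, B planes of one 'color' image.
    Regions with different cooling behavior then show up as
    different colors, ready for color-based classification.
    """
    r, g, b = (np.asarray(f, dtype=float) for f in frames)
    return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)

# Simulated scene: left column cools quickly, right column slowly.
t = np.array([0.0, 1.0, 2.0])        # capture times
fast = 200 * np.exp(-t / 1.0)        # fast-cooling temperatures
slow = 200 * np.exp(-t / 5.0)        # slow-cooling temperatures
mask = np.zeros((2, 2), dtype=bool)
mask[:, 0] = True                    # left column = fast-cooling region
frames = [np.where(mask, fast[k], slow[k]) for k in range(3)]
img = thermal_stack(frames)
```

Both regions start at the same temperature (identical red channel), but by the last frame their values diverge, so the two regions take on visibly different colors.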
Live Clams and Mud Clams
One of these clams is alive; the other is a "mudder," dead and partially filled with mud.
Both have about the same density.
Can you tell them apart? Not from this image! However, we have invented a proprietary
method for differentiating them. If you process millions of clams or similar shellfish
per day, and are interested in automatically detecting and rejecting the dead ones,
please contact us.