Color Classification of Non-Uniform Baked and Roasted Foods

Robert K. McConnell, Jr., Henry H. Blau, Jr.

Based on paper presented at: FPAC IV Conference 3-5 November 1995 Chicago, Illinois
Sponsored by: Food and Process Engineering Institute A Unit of ASAE
2950 Niles Road St. Joseph, MI 49085

Abstract

Although the color of baked and roasted materials is used as an indication of quality or degree of cooking, the nature of many of these materials renders them ill-suited for traditional color-based automated classification. Here we identify some of the problems and show how off-the-shelf software using minimum description based color classification can produce results generally in good agreement with "subjective" human classification.

Introduction

As long as food has been baked or roasted, color has presumably been an important guide to whether it has been properly cooked. Even if cooking is adequate, if the color does not conform with the customer's preconceptions the chances of a purchase are greatly diminished.

The advantages of having reliable, automated, full color inspection systems whose decisions are consistent with human perception have long been recognized. Now that inexpensive color cameras and color frame grabber boards are readily available, the major remaining obstacle to widespread use of color is the lack of suitable general classification methods.

To mimic human color recognition a machine vision system must be able to operate in a single three-dimensional color space, learn quickly, and handle multi-modal, overlapping color distributions easily. At the same time it must remain sensitive to anomalous characteristics, even under non-uniform lighting conditions.

Typical Applications in Baking and Roasting

[Figure 1a: Reference muffins, doneness = 3, 4, 5 and 6]

Figure 1a shows a group of four blueberry muffin images which serve as references for different degrees of doneness from a single batch. The numbers increase with cooking time and therefore darkness. Muffin #3 is undercooked, muffin #4 is optimum, muffin #5 is slightly overcooked and muffin #6 is substantially overcooked.

[Figure 1b: Muffin whose doneness is to be measured]

The fifth muffin (Figure 1b), to be graded by comparison with the references, is from a different batch.

[Figure 2a: Peanut reference #53. Figure 2b: Peanut reference #65. Figure 2c: Peanut test sample to be classified]

Figure 2 represents a similar situation. The first two images (samples #53 and #65) show reference samples of peanuts. The third image is to be classified as to which of the reference samples it most closely resembles.

[Figure 3a: Pizza to be mapped]

Figure 3 illustrates a slightly different class of problem. It shows part of a tomato, cheese and mushroom pizza. Except for a small amount of tomato sauce exposed on the extreme right of the image, most of the tomato sauce is overlain by cheese which is in turn topped by scattered mushrooms. The objective was to map the pizza surface to determine which of the three ingredients was exposed at each location and the total amount of each appearing in the image.

Inspection Issues

All of the required classifications are easily performed by humans. The objective of this study was to see how the results from WAY-2C, our color machine vision system based on the minimum description approach, would compare.

[Figure 4: Hardware for a typical color machine vision system]

The hardware for a typical vision system (Fig. 4) is relatively simple: lighting, camera, and computer. The computer contains a frame grabber board with dual-ported memory which captures the signal from the camera and makes it available to the classification software. A digital I/O capability accepts a trigger signal to tell the system when to begin inspection and outputs the results for process control purposes.
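
The corresponding software structure is straightforward. The sketch below shows the general shape of such an inspection loop; the grab_frame, read_trigger and write_result calls are hypothetical placeholders for whatever board-specific frame grabber and digital I/O interface is actually used.

    def grab_frame():
        # Hypothetical placeholder: return the latest RGB image from the frame
        # grabber's dual-ported memory as an (H, W, 3) array.
        raise NotImplementedError

    def read_trigger():
        # Hypothetical placeholder: return True when the digital input line
        # indicates that a sample is in position for inspection.
        raise NotImplementedError

    def write_result(class_index):
        # Hypothetical placeholder: put the classification result on the
        # digital output lines for the process controller.
        raise NotImplementedError

    def inspection_loop(classify_image):
        # classify_image: any function mapping an RGB image to a class index.
        while True:
            if read_trigger():
                write_result(classify_image(grab_frame()))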

The challenge is in the color classification software. To classify objects of relatively pure color a number of traditional methods work quite well, including table lookup, thresholding, and nearest neighbor classification. Unfortunately, baked and roasted foods, like the muffins and peanuts shown here, may be far from uniform in color. In such cases the underlying assumptions of the traditional methods may be violated, causing them to fail (McConnell, Massa, and Blau, 1995).

[Figure 5: Muffin, doneness = 4]

Figure 5 shows muffin #4 from Fig. 1. The scene contains several clusters of pixels in color space: the cake cluster (mostly light browns), the blueberry cluster (dark), the foil clusters (primarily grays and whites) and the shadow cluster (generally dark). The total number of pixels in the cake cluster is irrelevant, but the relative number in each position in the color space is important. As the muffin is cooked, the position of the pixels in the cake cluster tends to shift in color space. Meanwhile the colors of the blueberries, foil and shadows remain essentially constant, providing no information on the degree of doneness.

[Figure 6: Typical peanuts]

Figure 6 shows a closeup of one of the peanut images. Here again there are several basic color clusters: peanuts without skins, peanuts with skins, shadows and glints. For this application only the color distribution of the skinless peanuts was of interest.

The muffin and peanut images, like most images, are multimodal: they have several distinct color clusters. In both sets of images one of the clusters varies in quality with the property of interest (doneness) but remains relatively constant in quantity. The other color clusters, which do not reflect the property of interest, are relatively constant in quality but may vary in quantity.

Since the quantity of the irrelevant colors may vary from sample to sample, any method sensitive to changes in the quantity of these components, such as those based on mean values or thresholds, must isolate the irrelevant colors before performing the classification. Historically this has involved specialized algorithm development for each classification task, effectively limiting use to a few high-volume applications which can justify the development cost.
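
A toy numerical illustration (synthetic numbers, not data from this study) of the point: two simulated scenes contain identical cake pixels but different amounts of dark shadow, and the mean color shifts with the shadow area alone.

    import numpy as np

    rng = np.random.default_rng(0)

    def scene(n_cake, n_shadow):
        # Cake pixels: light brown with a little spread; shadow pixels: near black.
        cake = rng.normal([180, 140, 90], 10, size=(n_cake, 3))
        shadow = rng.normal([30, 30, 30], 5, size=(n_shadow, 3))
        return np.vstack([cake, shadow])

    a = scene(9000, 1000)   # 10% shadow
    b = scene(9000, 3000)   # 25% shadow, same cake color distribution

    print(a.mean(axis=0))   # roughly [165, 129, 84]
    print(b.mean(axis=0))   # roughly [143, 113, 75]
    # The mean darkens although the cake itself is unchanged, so a grader based
    # on mean color would report a different "doneness" purely because more
    # shadow is visible.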

Over the past several years we have developed a simple, general classification method, which we refer to as minimum description, that effectively handles the above problems without the need for any special algorithm development.

Minimum Description Analysis

The theoretical basis for the use of minimum description analysis for color-based classification is discussed extensively elsewhere (see, for example, McConnell and Blau, 1992, 1994; McConnell, Massa and Blau, 1995). This simple, general approach provides particularly efficient color classification using only a reference histogram, based on the color distribution of each class of interest, and a test histogram, representing the color distribution of the object to be classified. We measure the dissimilarity of the test distribution from each of the likely reference color distributions and then choose the reference class with the smallest measured dissimilarity.

The approach can be shown to be closely related to a Bayesian maximum likelihood classification and usually produces results in good agreement with those of human inspectors.
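
The paper does not spell out the dissimilarity measure itself, but one formulation consistent with the description-length and maximum-likelihood interpretation above is the cross-entropy of the test distribution with respect to each reference histogram: the number of bits needed to encode the test pixels with a code derived from the reference class. A minimal sketch, assuming reference and test color histograms have already been accumulated (a sketch of that step follows the next paragraph); the smoothing constant is an illustrative assumption, not a detail of WAY-2C:

    import numpy as np

    def description_length_bits(test_hist, ref_hist, eps=1e-9):
        # Bits needed to encode the test pixels with a code built from the
        # reference color distribution (a cross-entropy).  The small eps
        # smooths bins the reference never saw, so unseen colors are merely
        # expensive to encode rather than impossible.
        p_ref = (ref_hist + eps) / (ref_hist + eps).sum()
        return -np.sum(test_hist * np.log2(p_ref))

    def classify(test_hist, ref_hists):
        # Choose the reference class whose histogram gives the shortest
        # description of the test histogram.
        lengths = {name: description_length_bits(test_hist, h)
                   for name, h in ref_hists.items()}
        return min(lengths, key=lengths.get)

Under a measure of this kind, colors that appear with similar probability in every reference (the blueberries, foil and shadows of Fig. 5) cost roughly the same number of bits under each alternative, so the decision is driven mainly by the cluster that actually shifts with doneness.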

The system is trained much like a human by simply showing it examples of the various classes of interest from which it builds the reference histograms. Training time is typically of the order of a few seconds. The system can then be placed in classification mode where it performs the inspection and reports the results with little or no human intervention.
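
As a companion sketch of the training step, reference histograms can be built by quantizing the RGB values of each example region into a coarse three-dimensional color histogram and accumulating counts. The bin count and layout below are illustrative assumptions; the paper does not describe WAY-2C's internal representation.

    import numpy as np

    BINS = 16  # illustrative choice: 16 x 16 x 16 = 4096 color bins

    def color_histogram(rgb_pixels):
        # Accumulate an (N, 3) array of 8-bit RGB pixels into a flattened
        # 3-D color histogram.
        q = (np.asarray(rgb_pixels, dtype=np.uint32) * BINS) // 256
        idx = (q[:, 0] * BINS + q[:, 1]) * BINS + q[:, 2]
        return np.bincount(idx.astype(np.intp), minlength=BINS ** 3).astype(float)

    def train(examples):
        # examples: {class_name: list of (N, 3) pixel arrays taken from
        # training regions of that class}.  Returns one reference histogram
        # per class, summed over all of its example regions.
        return {name: sum(color_histogram(p) for p in pixel_sets)
                for name, pixel_sets in examples.items()}

In this form, training on the four reference muffins of Fig. 1a amounts to a single call to train() with the pixels from each region of interest, which is consistent with the training times of a few seconds quoted above.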

This classification method is now being used successfully in a wide variety of inspection and process control applications in the electronics, automotive, food products and recycling industries. The results generally coincide well with those made by a human inspector.

Muffin Doneness Classification

We trained WAY-2C, our minimum description based color classification system, on each of the reference muffins in Fig. 1a using a square region of interest approximately equal in area to the muffin top. The training region included examples of cake, blueberries, shadow and foil. When shown the test muffin in Fig. 1b the system classified it as being most similar to reference muffin #4.

Peanut Color Matching

Two separate sets of tests were performed on the peanuts.

For the first tests we poured approximately 200g of each sample into a 10x10cm rectangular plastic box. We then trained WAY-2C using four different images of each of samples #65 and #53. Between acquisition of each image the sample was rotated. At least once during the process it was mixed and then gently shaken to level it. Training time for the complete set was under 3 minutes.

When the training was complete we repeatedly classified each of the three samples, rotating, shaking, and stirring them from time to time to determine the effect of these actions.

The second set of tests used moving samples. For this set we used a rotating turntable containing approximately 500g of peanuts in a 22cm diameter pie plate. The training and inspection area was the same size as that used in the stationary tests.

Because the samples show color variations at scales of several inches, and even though each of our measurements sampled an area of about 60 square centimeters, the fluctuations were still large enough for occasional regions in sample #53 to most resemble sample #65 and vice versa.

In spite of the occasional fluctuations, the results of all the tests consistently indicated that the surface color distribution of the unknown sample more closely resembled that of sample #53 than that of sample #65.

After hearing that laboratory experiments by the sample provider, based on colorimetry, had found the "unknown" sample more closely resembled sample #65, we repeated the original tests, did some careful quantitative interpolation, and even had seven different unbiased human observers visually rank the samples in order of color under a variety of artificial and natural lighting conditions. All of these agreed with our original results: the "unknown" sample most resembles sample #53. In other words, the minimum description approach was in agreement with the human classification and in disagreement with a method which utilized the mean color of the peanuts.

Pizza Mapping

The third test demonstrates the use of the minimum description method for image interpretation.

[Figure 3a: Pizza to be mapped. Figure 3b: Pizza map]

To map the pizza, the system was first trained by building reference histograms from representative regions of cheese, mushroom and tomato sauce. It was then instructed to classify the image at a 4x4 pixel resolution; the resulting map is shown in Figure 3b. The derived statistics for surface exposure of the constituents are 67 percent cheese, 21 percent mushroom and 12 percent tomato sauce.
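
In terms of the sketches given earlier, such a map can be produced by classifying each 4x4 tile of the image independently against the three reference histograms and tallying the winners. The tile size and the helper names (color_histogram, classify, train) are carried over from those sketches and are assumptions, not WAY-2C's actual interface.

    import numpy as np

    # Reuses color_histogram() and classify() from the earlier sketches.

    def map_image(rgb, ref_hists, block=4):
        # Label each block x block tile of an (H, W, 3) image with the
        # reference class whose histogram best describes it, and report the
        # fraction of tiles assigned to each class.
        h = rgb.shape[0] // block * block
        w = rgb.shape[1] // block * block
        label_map = np.empty((h // block, w // block), dtype=object)
        counts = {name: 0 for name in ref_hists}
        for i in range(0, h, block):
            for j in range(0, w, block):
                tile = rgb[i:i + block, j:j + block].reshape(-1, 3)
                name = classify(color_histogram(tile), ref_hists)
                label_map[i // block, j // block] = name
                counts[name] += 1
        coverage = {name: counts[name] / label_map.size for name in ref_hists}
        # The paper's example reports 67% cheese, 21% mushroom, 12% tomato sauce.
        return label_map, coverage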

Conclusions

Based on the above results, and on the performance in similar applications where the system is in routine operation, one can conclude that minimum description color classification is well suited to the classification of baked and roasted foods similar to those tested here.

References

1. McConnell, R.K. and H.H. Blau. 1992. A powerful, inexpensive approach to real-time color classification. Proceedings Soc. Mfg. Engs. Applied Machine Vision Conference '92, June 1-4, 1992, Atlanta, SME Technical Paper MS92-164, Society of Manufacturing Engineers, Dearborn, Michigan.
2. McConnell, R.K. and H.H. Blau. 1994. Minimum description classification: a new tool for machine vision color inspection, Proc. FPAC III Conference, February 9-12, 1994, Orlando, Amer. Soc. Agr. Engs., St. Joseph, Michigan.
3. McConnell, R.K., R.A. Massa and H.H. Blau. 1995. Color machine vision. Proceedings Sensors Expo Boston, May 16-18, 1995.

