
# HOPFIELD NETS AND BINARIZATION TECHNIQUES OF GREYSCALE IMAGES

Ute Matecki
Universität Osnabrück
FB Mathematik /Informatik
Albrechtstr. 28
49069 Osnabrück
ute@informatik.Uni-Osnabrueck.de

Silke Seehusen
Fachhochschule Lübeck
FB Elektrotechnik
Stephensonstr. 3
23562 Lübeck
silke@acm.org

Keywords: Image reconstruction, Hopfield nets, greyscale images, binarization techniques

The general idea behind the research presented here is to investigate how the capabilities of conventional techniques can be combined with those of Hopfield nets. By conventional techniques we mean, in this context, techniques not based on the neural approach. As different conventional techniques have proven to be most beneficial in different application areas, we suppose the same holds for their combination with Hopfield nets. Therefore experiments have to be conducted in each application area to find well-suited combinations of conventional and neural techniques.

The application area we focus on is the reconstruction of faces of people. Photographs of different people were taken and converted to 8-bit greyscale images (in TIFF format) by means of a scanner. Because Hopfield nets can process only bipolar input data, the idea is not only to convert the greylevel images to a bilevel format but also to reduce the information they contain to its essential parts. Three conventional binarization techniques were selected:

• binarization with a fixed threshold
• binarization with a threshold dependent on the greylevel distribution

Consider 8-bit greylevel images consisting of N pixels, in which the greylevel values 0 to 255 occur with frequencies p(0), ..., p(255), where

$$\sum_{i=0}^{255} p(i) = N$$

The destination images contain only the two greylevel values 0 and 255, with relative frequencies $g(0) \le G$ and $g(255) = 1 - g(0)$. The threshold $\theta$ for binarization is chosen so that the following equations hold:

$$g(0) = \frac{1}{N}\sum_{i=0}^{\theta - 1} p(i) \le G, \qquad g(255) = \frac{1}{N}\sum_{i=\theta}^{255} p(i)$$
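The threshold selection above can be sketched as follows. This is a minimal illustration in Python/NumPy, not the authors' C++ implementation; the function name and the parameterisation of G (the maximum admissible black fraction) are our choices:

```python
import numpy as np

def histogram_threshold(image, G=0.5):
    """Choose the largest threshold theta such that the fraction of
    pixels mapped to black, g(0) = (1/N) * sum_{i < theta} p(i),
    still satisfies g(0) <= G.  `image` is a 2-D array of 8-bit
    greylevels; G = 0.5 is an assumed default, not from the paper."""
    counts = np.bincount(image.ravel(), minlength=256)  # p(0), ..., p(255)
    n = counts.sum()                                    # N pixels in total
    cumulative = np.cumsum(counts) / n                  # g(0) for theta = i + 1
    # largest theta whose black fraction still satisfies g(0) <= G
    theta = int(np.searchsorted(cumulative, G, side="right"))
    binary = np.where(image >= theta, 255, 0).astype(np.uint8)
    return theta, binary
```

Pixels below the threshold map to 0, the rest to 255, so the black fraction never exceeds G.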

Figure 1:   Original greylevel image without preprocessing

Figure 2:   Binarization with threshold dependent on greylevel distribution

Figure 3:   Binarization with locally dependent threshold

• binarization with a locally dependent threshold with a 3×3 window

As Fig. 2 shows, the binarization technique described above detects closed areas. The technique with a locally dependent threshold, by contrast, is an edge-detecting technique (see Fig. 3). For each pixel f(x,y) of the source image, the mean greylevel value of an M×M operator window is computed and used as threshold $\theta$:

$$\theta(x,y) = \frac{1}{D} \sum_{i=Y_a}^{Y_b} \sum_{j=X_a}^{X_b} f(x-i, y-j)$$

where

$$D = (X_b - X_a)(Y_b - Y_a)$$

At the image border, only the part of the operator window that lies inside the image is used.
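The locally dependent threshold, including the border handling just described, can be sketched as follows. Again this is an illustrative Python/NumPy reimplementation under our own naming, not the authors' C++ code:

```python
import numpy as np

def local_threshold_binarize(image, window=3):
    """Edge-detecting binarization: each pixel is compared against the
    mean greylevel of an MxM window centred on it (M = `window`).
    At the image border only the part of the window that lies inside
    the image contributes to the mean."""
    h, w = image.shape
    r = window // 2
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            ya, yb = max(0, y - r), min(h, y + r + 1)   # clip window rows
            xa, xb = max(0, x - r), min(w, x + r + 1)   # clip window cols
            theta = image[ya:yb, xa:xb].mean()          # local threshold theta(x, y)
            out[y, x] = 255 if image[y, x] >= theta else 0
    return out
```

Pixels at or above their local mean become white, the rest black, which is what makes the result edge-like rather than region-like.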

For the Hopfield net, three different learning rules were selected for the experiments, because no single best learning rule is known so far. In particular, the combination with other techniques has not been investigated to a sufficient extent. The selected learning rules are:

• the Perceptron learning rule
• a simplified Boltzmann learning rule
• Boltzmann learning followed by Perceptron learning
The idea of this combination is to "expand" the basins of attraction of the learnt attractors.
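As a rough illustration of the first rule, the Perceptron learning rule for a Hopfield net can be sketched as below: starting from Hebbian weights (the initialization step described below), each weight row is corrected whenever a neuron, driven by a stored pattern, would not reproduce that pattern's own bit. The names, the learning rate `eta`, and the `epochs` bound are our assumptions, not the authors' implementation:

```python
import numpy as np

def perceptron_train(patterns, epochs=100, eta=1.0):
    """Perceptron learning for a Hopfield net.  `patterns` is a list of
    bipolar (+1/-1) vectors.  `epochs` plays the role of the bounded
    number of learning cycles mentioned in the text."""
    n = len(patterns[0])
    w = np.zeros((n, n))
    for x in patterns:                       # Hebbian initialisation
        w += np.outer(x, x)
    np.fill_diagonal(w, 0.0)
    for _ in range(epochs):
        stable = True
        for x in patterns:
            fields = w @ x                   # local field of each neuron
            for i in range(n):
                if x[i] * fields[i] <= 0:    # wrong (or undecided) response
                    w[i] += eta * x[i] * x   # perceptron correction of row i
                    w[i, i] = 0.0            # keep zero self-coupling
                    stable = False
        if stable:                           # all patterns are fixed points
            break
    return w
```

Training stops early once every stored pattern is a fixed point of the net.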

In the first two cases the Hebb learning rule was applied as an initialization step. The learning phase with Perceptron and Boltzmann learning was bounded by a maximum number of learning cycles due to CPU-time limitations. The Hopfield net was run in asynchronous mode, i.e. the neurons were tested and, where necessary, updated sequentially. During the learning phase, the consumed CPU time was recorded.
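The asynchronous mode can be sketched as follows; this is an illustrative Python sketch under our own naming, and the flip count is only a rough stand-in for the firing statistics measured in the experiments:

```python
import numpy as np

def relax_async(w, state, max_sweeps=100):
    """Asynchronous relaxation of a Hopfield net: neurons are visited
    one at a time and flipped whenever the sign of their local field
    disagrees with their current state.  Relaxation stops as soon as a
    full sweep changes nothing, i.e. at a fixed point (an attractor)."""
    state = state.copy()
    flips = 0
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(state)):
            s = 1 if w[i] @ state >= 0 else -1   # sign of the local field
            if s != state[i]:
                state[i] = s                     # sequential update
                flips += 1
                changed = True
        if not changed:
            break
    return state, flips
```

Presenting a distorted version of a learnt pattern should then drive the net back to the stored attractor, provided the distortion stays inside its basin of attraction.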

After the learning phase, the learnt images and distorted versions of them are presented to the Hopfield net for recognition. In this phase the consumed CPU time, the Hamming distance between the expected and the obtained result, and the number of firing and non-firing neurons per relaxation are measured. The latter numbers give some hints about the (remaining) activity of the net. The binarization techniques and the simulation of the Hopfield net with all learning rules are implemented in C++. The experiments are run on Sun Sparc 10 and Sun Sparc IPX workstations. As expected, the consumed CPU time during the learning phase is very high (up to two days), while the CPU time during recognition is negligibly low (up to three seconds response time).
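The Hamming distance used as the quality measure is simply the number of neurons whose relaxed state differs from the stored pattern; as a sketch (illustrative naming, not the authors' code):

```python
import numpy as np

def hamming_distance(expected, obtained):
    """Count the positions in which the obtained bipolar pattern
    differs from the expected one."""
    return int(np.sum(np.asarray(expected) != np.asarray(obtained)))
```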

The following results can already be stated:

• The experiments with the Perceptron learning rule showed remarkable differences between the selected preprocessing techniques already in the learning phase. The consumed CPU time per pattern differed by about a factor of 100 between patterns encoded with the locally dependent threshold and the other two techniques.
The recognition phase showed different results depending on the number of learnt patterns:
• With a low number of patterns, the results did not show serious differences between the binarization techniques.
• With a higher number of patterns, the recognition phase showed, as we expected, that the recognition rate of patterns preprocessed with the locally dependent threshold was the worst (recognition rate in this context means the mean Hamming distance between the expected and the obtained result). Patterns encoded with the threshold dependent on the greylevel distribution, however, already showed a high recognition rate at a low number of learning cycles (ca. 40).

| number of learning cycles | binarization method | mean Hamming distance |
|---|---|---|
| 40 | locally dependent threshold | 221.25 |
| 40 | histogram-dependent threshold | 77.75 |

Table 1:   Hamming distances between expected and obtained results

• The experiments with the Boltzmann learning rule showed that even when a high number of learning cycles was applied, the recognition rate was worse than that obtained with the Perceptron learning rule.

• The experiments with Boltzmann learning followed by a second, Perceptron learning phase delivered interesting results but are not finished yet. After finishing these experiments and evaluating the results, further experiments are planned to investigate the most promising combination of techniques in more detail.

### Conclusions:

The experiments done so far show the following results:
With respect to the learning rules, the Perceptron learning rule turned out to be the best: a much lower number of learning cycles is needed to reach a good result. With respect to the combination of neural and conventional techniques, the combination of the Perceptron learning rule with binarization by a threshold dependent on the greylevel distribution turned out to be the best.
