HOPFIELD NETS AND BINARIZATION TECHNIQUES OF GREYSCALE IMAGES

Ute Matecki
Universität Osnabrück
FB Mathematik /Informatik
Albrechtstr. 28
49069 Osnabrück
ute@informatik.Uni-Osnabrueck.de

Silke Seehusen
Fachhochschule Lübeck
FB Elektrotechnik
Stephensonstr. 3
23562 Lübeck
silke@acm.org

Keywords: Image reconstruction, Hopfield nets, greyscale images, binarization techniques

The general idea behind the research presented here is to investigate how the capabilities of conventional techniques can be combined with the capabilities of Hopfield nets. By conventional techniques we mean, in this context, techniques that are not based on the neural approach. Since different conventional techniques have proven most beneficial in different application areas, we assume that the same holds for their combination with Hopfield nets. Experiments therefore have to be conducted in each application area to find well-suited combinations of conventional and neural techniques.

The application area we focus on is the reconstruction of faces. Photographs of different people were taken and converted to 8-bit greyscale images (in TIFF format) by means of a scanner. Because Hopfield nets can only process bipolar input data, the idea is not only to convert the greylevel images to a bilevel format, but also to reduce the information they contain to its essential parts. Three conventional binarization techniques were selected, among them binarization by a threshold dependent on the greylevel distribution.

At the border of the image, only the part of the operator window that lies inside the image is evaluated.
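To make the binarization step concrete, the following sketch converts an 8-bit greyscale image into the bipolar (+1/-1) representation a Hopfield net expects, using a global threshold derived from the greylevel distribution. The mean-based threshold and the function name binarizeToBipolar are illustrative assumptions; the exact threshold rules and window operators used in the experiments are not reproduced here.

    #include <cstdint>
    #include <numeric>
    #include <vector>

    // Derive a global threshold from the greylevel distribution (here: the
    // mean greylevel) and map every pixel to the bipolar values +1/-1 that
    // the Hopfield net expects. The mean-based rule is only an assumed
    // variant; the paper does not spell out the exact threshold formula.
    std::vector<int> binarizeToBipolar(const std::vector<std::uint8_t>& grey) {
        const unsigned long long sum =
            std::accumulate(grey.begin(), grey.end(), 0ULL);
        const std::uint8_t threshold =
            static_cast<std::uint8_t>(sum / grey.size());
        std::vector<int> bipolar(grey.size());
        for (std::size_t i = 0; i < grey.size(); ++i)
            bipolar[i] = (grey[i] >= threshold) ? +1 : -1;
        return bipolar;
    }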

For the Hopfield net, three different learning rules were selected for the experiments, because no single best learning rule is known so far; in particular, the combination with other techniques has not yet been investigated to a sufficient extent. The selected learning rules are the Perceptron rule, Boltzmann learning, and the Hebb rule.

For Perceptron and Boltzmann learning, the Hebb rule was applied as an initialization step. The learning phase with Perceptron and Boltzmann learning was restricted to a fixed number of learning cycles due to CPU time limitations. The Hopfield net was run in asynchronous mode, i.e. the neurons were tested and, if necessary, updated sequentially. During the learning phase, data on the consumed CPU time is collected.
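As a minimal sketch of the Hebbian initialization, assuming the bipolar training patterns are stored in STL vectors, the weight between two neurons is the normalized correlation of their pattern values with a zero diagonal; the function name and the 1/N normalization are illustrative conventions, not taken from the original implementation.

    #include <vector>

    // Hebbian initialization of the Hopfield weight matrix from a set of
    // bipolar patterns (+1/-1). The diagonal is kept at zero; the 1/N
    // normalization is a common convention, not necessarily the paper's.
    std::vector<std::vector<double>> hebbInit(
            const std::vector<std::vector<int>>& patterns) {
        const std::size_t n = patterns.front().size();
        std::vector<std::vector<double>> w(n, std::vector<double>(n, 0.0));
        for (const auto& p : patterns)
            for (std::size_t i = 0; i < n; ++i)
                for (std::size_t j = 0; j < n; ++j)
                    if (i != j)
                        w[i][j] += static_cast<double>(p[i] * p[j]) / patterns.size();
        return w;
    }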

After the learning phase, the learnt images and distorted versions of them are presented to the Hopfield net for recognition. In this phase the consumed CPU time, the Hamming distance between the expected and the obtained result, and the number of firing and non-firing neurons per relaxation are measured. The latter numbers give some hints about the (remaining) activity of the net. The binarization techniques and the simulation of the Hopfield net with all learning rules are implemented in C++. The experiments are run on Sun Sparc 10 and Sun Sparc IPX workstations. As expected, the CPU time consumed during the learning phase is very high (up to two days), while the CPU time consumed during recognition is negligibly low (up to three seconds response time).
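The recognition phase under the stated asynchronous mode can be sketched as follows: neurons are visited sequentially, updated only when their sign changes, and sweeps are repeated until the state is stable; the Hamming distance then counts the neurons that differ from the expected pattern. The function names and the tie-breaking at a net input of zero are assumptions.

    #include <cstddef>
    #include <vector>

    // Asynchronous relaxation: neurons are tested sequentially and updated
    // only if their state changes; sweeps are repeated until a full sweep
    // leaves the state unchanged.
    void relaxAsynchronously(const std::vector<std::vector<double>>& w,
                             std::vector<int>& state) {
        bool changed = true;
        while (changed) {
            changed = false;
            for (std::size_t i = 0; i < state.size(); ++i) {
                double net = 0.0;
                for (std::size_t j = 0; j < state.size(); ++j)
                    net += w[i][j] * state[j];
                const int next = (net >= 0.0) ? +1 : -1;
                if (next != state[i]) { state[i] = next; changed = true; }
            }
        }
    }

    // Hamming distance between the obtained and the expected pattern:
    // the number of neurons whose states differ.
    std::size_t hammingDistance(const std::vector<int>& a,
                                const std::vector<int>& b) {
        std::size_t d = 0;
        for (std::size_t i = 0; i < a.size(); ++i)
            if (a[i] != b[i]) ++d;
        return d;
    }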

Conclusions:

The experiments done so far show the following results. With respect to the learning rules, the Perceptron rule turned out to be the best: it needs a much lower number of learning cycles to reach a successful recognition. With respect to the combination of neural and conventional techniques, the combination of the Perceptron learning rule with binarization by a threshold dependent on the greylevel distribution turned out to be the best.
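For illustration, one common formulation of a perceptron-type learning cycle for a Hopfield net, applied to Hebb-initialized weights, is sketched below; the learning rate, the correction rule, and the function name are assumptions and may differ from the variant used in the experiments.

    #include <cstddef>
    #include <vector>

    // One perceptron-type learning cycle: for each stored pattern and each
    // neuron whose output disagrees with the pattern, the incoming weights
    // are corrected towards the pattern. Learning rate and details are
    // illustrative assumptions.
    void perceptronCycle(std::vector<std::vector<double>>& w,
                         const std::vector<std::vector<int>>& patterns,
                         double eta = 0.1) {
        const std::size_t n = w.size();
        for (const auto& p : patterns) {
            for (std::size_t i = 0; i < n; ++i) {
                double net = 0.0;
                for (std::size_t j = 0; j < n; ++j)
                    net += w[i][j] * p[j];
                const int out = (net >= 0.0) ? +1 : -1;
                if (out != p[i])
                    for (std::size_t j = 0; j < n; ++j)
                        if (i != j)
                            w[i][j] += eta * p[i] * p[j];
            }
        }
    }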




