In all cases, the units of a Hopfield network have states, which are described by numbers belonging to the set of possible pattern values. In the binary case the states are either 0 and 1, or +1 and -1. In the continuous case, the states are members of an interval [a, b]. Patterns are input to the network by setting the states of the units to the appropriate values, according to some mapping from the components of the input vector to the units.
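For concreteness, here is a minimal sketch of presenting a pattern, assuming bipolar (+1/-1) states and the simplest mapping, where component i of the input vector sets the state of unit i. The function name is illustrative, not taken from the text.

```python
import numpy as np

def present(input_vector):
    """Set unit states from an input vector (identity mapping: component i -> unit i)."""
    states = np.asarray(input_vector, dtype=int)
    if not np.all(np.isin(states, (-1, +1))):
        raise ValueError("expected bipolar components (+1 or -1)")
    return states                                # one state per unit

states = present([+1, -1, -1, +1, -1, +1])       # states of a six-unit network
```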
The Hopfield network is not trained in the way a Backpropagation network is. Instead, it resembles the Probabilistic Neural Network of Specht in that a set of exemplar patterns is chosen and used to initialize the weights of the network. Once this is done, any pattern can be presented to the network, which will respond by displaying the exemplar pattern that is, in some sense, most similar to the input pattern. The output pattern can be read off from the network by reading the states of the units in the order determined by the mapping of the components of the input vector to the units.
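As a concrete illustration, the following is a minimal sketch of the outer-product (Hebbian) rule commonly used to set the weights from exemplar patterns, together with a simple recall loop. The bipolar (+1/-1) encoding, the synchronous update, and the function names are assumptions made for brevity rather than details taken from the text.

```python
import numpy as np

def store_exemplars(exemplars):
    """Build the weight matrix from a list of bipolar exemplar vectors."""
    n = len(exemplars[0])
    W = np.zeros((n, n))
    for p in exemplars:
        p = np.asarray(p, dtype=float)
        W += np.outer(p, p)            # Hebbian outer product
    np.fill_diagonal(W, 0.0)           # no self-connections
    return W / len(exemplars)

def recall(W, pattern, steps=10):
    """Repeatedly update the states until they settle on a stored exemplar."""
    s = np.asarray(pattern, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)             # synchronous update, for brevity
        s[s == 0] = 1.0                # break ties toward +1
    return s

exemplars = [[+1, -1, +1, -1, +1, -1],
             [+1, +1, +1, -1, -1, -1]]
W = store_exemplars(exemplars)
print(recall(W, [+1, -1, +1, -1, +1, +1]))   # settles on the first exemplar
```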
The topology of the Hopfield network differs from those of the two networks mentioned in the previous paragraph. There are no distinct layers; every unit is connected to every other unit. In addition, the connections are bidirectional (information flows in both directions along them) and symmetric: a single weight is assigned to each connection, and it is applied to data moving in either direction.
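A short sketch of that connectivity, assuming the bipolar outer-product weights used in the previous example; the variable names are illustrative only.

```python
import numpy as np

p = np.array([+1, -1, +1, -1, +1, -1])    # one bipolar exemplar, six units
W = np.outer(p, p).astype(float)          # Hebbian weights for this exemplar
np.fill_diagonal(W, 0.0)                  # no unit is connected to itself

assert np.allclose(W, W.T)                # symmetric: one weight per connection, used both ways
assert np.count_nonzero(W) == 6 * 5       # fully connected: every unit linked to every other
```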
The figure shows a Hopfield network with six units.