Pruning algorithms aim to make neural networks smaller by removing
unnecessary links or units. This can serve several purposes:
- A fitting architecture can be found this way.
- The cost of a net can be reduced (in terms of runtime, memory, and
hardware implementation).
- Generalization can (but need not) be improved.
- Unnecessary input units can be pruned, which gives evidence of the
relevance of the individual input values.
Pruning algorithms can be classified according to two criteria:
- What is pruned? We distinguish weight pruning and
node pruning. Special types of node pruning are input pruning and
hidden unit pruning.
- How is pruning performed? The most common approaches are
penalty term algorithms (like Backpropagation with Weight Decay,
see section
) and sensitivity algorithms, which are
described in this chapter; a sketch of the penalty term idea
follows this list.
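As a brief illustration of the penalty term approach, weight decay adds
a term to the usual error function that penalizes large weights. The
symbols below are generic (E is the training error, w_i the weights,
and the decay parameter lambda is chosen by the user), not values taken
from this manual:

    E'(w) = E(w) + \lambda \sum_i w_i^2

Gradient descent on E' shrinks every weight toward zero in proportion
to its magnitude, so weights that do not contribute to reducing E decay
away and can subsequently be pruned.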
Sensitivity algorithms alternately train and prune a neural net,
following the algorithm in figure
.
Figure: Algorithm for sensitivity algorithms
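A minimal sketch of this alternating loop is given below. The interface
(a flat weight vector with a boolean mask, the train and error
callables) and the stopping criterion error_budget are illustrative
assumptions; the sensitivity measure used here, the error increase when
a single weight is clamped to zero, is one common choice, not
necessarily the one used by SNNS.

    import numpy as np

    def sensitivity_prune(net, data, train, error, error_budget=1.05):
        # Alternately train and prune a net (generic sketch, not the
        # SNNS code).
        # net   -- assumed to expose a flat weight vector net.weights
        #          and a boolean mask net.mask of weights still in use
        # train -- retrains the net on data until its error stops
        #          improving
        # error -- returns the net's error on data
        train(net, data)                   # train to a (local) minimum
        base_error = error(net, data)
        while net.mask.any():
            backup = net.weights.copy(), net.mask.copy()
            # Estimate each active weight's sensitivity: the error
            # observed when that weight alone is clamped to zero.
            sens = {}
            for i in np.flatnonzero(net.mask):
                saved = net.weights[i]
                net.weights[i] = 0.0
                sens[i] = error(net, data)
                net.weights[i] = saved
            victim = min(sens, key=sens.get)  # least sensitive weight
            net.mask[victim] = False
            net.weights[victim] = 0.0
            train(net, data)               # retrain the smaller net
            if error(net, data) > error_budget * base_error:
                # Undo the last pruning step and stop.
                net.weights, net.mask = backup
                break

Pruning one element at a time and retraining after every step is
expensive but robust; many variants instead remove several
low-sensitivity weights per pass, or prune whole units by treating all
of a node's weights as one group.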