- 1.
- Generate a square grid such as that in the
background of Fig.2.6, if necessary
using pen and paper, and mark in some pixels.
Repeat with another grid and mark the origin
in both
grids. Now compute carefully the dilation and
erosion of one by the other. Write a program
to
dilate and/or erode one pixel array by another
and display all three arrays.
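As a starting point for the programming part, here is a minimal sketch in Python (plain lists of 0s and 1s; the display step is left to you), with the origin of the structuring element assumed to be at its centre:

```python
def dilate(a, b):
    """Dilate binary array a by structuring element b.
    The origin of b is assumed to be at its centre."""
    h, w = len(a), len(a[0])
    bh, bw = len(b), len(b[0])
    oy, ox = bh // 2, bw // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if a[y][x]:
                # stamp a copy of b, centred on (y, x)
                for j in range(bh):
                    for i in range(bw):
                        if b[j][i]:
                            yy, xx = y + j - oy, x + i - ox
                            if 0 <= yy < h and 0 <= xx < w:
                                out[yy][xx] = 1
    return out

def erode(a, b):
    """Erode a by b: a pixel survives iff b, centred there, fits inside a."""
    h, w = len(a), len(a[0])
    bh, bw = len(b), len(b[0])
    oy, ox = bh // 2, bw // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            fits = all(0 <= y + j - oy < h and 0 <= x + i - ox < w
                       and a[y + j - oy][x + i - ox]
                       for j in range(bh) for i in range(bw) if b[j][i])
            out[y][x] = 1 if fits else 0
    return out
```

Strictly, dilation stamps the reflection of b; for the symmetric elements you are likely to draw on a small grid, the difference is invisible.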
- 2.
- With the program of the last exercise, explore
alternating erosions and dilations on some .tif
files obtained from a scanner or from disk.
- 3.
- Use erosion techniques to count the objects
in Fig.2.10 or Fig.2.12.
- 4.
- Write a program to find the lines of text
and the spaces between them in some scanned image
of text such as Fig.2.4. You may use any
method you choose.
- 5.
- Extend the previous program so as to separate
out each line of text into words.
- 6.
- Extend the previous program so as to isolate
single characters. What does it do to an image
such as Fig.2.2?
- 7.
- Write a boundary tracing program and try
it out on a .tif file of your choice. Alternatively,
you might like to start on a bitmap file if you
have access to a unix workstation and some
familiarity with the beast. See the .c files
on disk for a helping hand.
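One possible shape for the tracing program, quite separate from the .c files on disk, is a clockwise sweep around the current boundary pixel (a Moore-neighbour style trace on a plain binary list-of-lists):

```python
def trace(a):
    """Trace the outer boundary of the first object met in a row-major
    scan of binary array a. Returns the boundary as a list of (y, x)."""
    h, w = len(a), len(a[0])
    # the 8-neighbourhood in clockwise order, starting at West
    dirs = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
            (0, 1), (1, 1), (1, 0), (1, -1)]
    def fg(y, x):
        return 0 <= y < h and 0 <= x < w and a[y][x]
    start = next(((y, x) for y in range(h) for x in range(w) if a[y][x]),
                 None)
    if start is None:
        return []
    contour, cur, back = [start], start, 0   # we "entered" start from the West
    while True:
        for k in range(1, 9):                # sweep clockwise past the entry
            nd = (back + k) % 8
            y, x = cur[0] + dirs[nd][0], cur[1] + dirs[nd][1]
            if fg(y, x):
                cur, back = (y, x), (nd + 4) % 8
                break
        else:
            return contour                   # isolated single pixel
        if cur == start:
            return contour
        contour.append(cur)
```

The simple stopping rule used here (return to the start pixel) can cut the trace short on one-pixel-wide spurs; if that bites, look up Jacob's stopping criterion.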
- 8.
- By hook or by crook put a box around each
of some characters in a printed word, either
from
Fig.2.4 or somewhere else. Normalise the
boxes to a standard size, having first worked
out
what a good standard size is. Print your normalised
images and ask whether you could recognise them
by eye! Recall that a 2 by 2 matrix applied to
each pixel will give the effect of a linear
transformation. It is recommended that you expand
or shrink each axis separately to a standard
size
as your first move.
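For that first move, scaling each axis separately can be done by nearest-neighbour sampling; a sketch (the choice of standard size is yours):

```python
def resize(a, nh, nw):
    """Nearest-neighbour resize of a 2-D list to nh rows by nw columns,
    stretching or shrinking each axis independently."""
    h, w = len(a), len(a[0])
    return [[a[y * h // nh][x * w // nw] for x in range(nw)]
            for y in range(nh)]
```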
- 9.
- How many scan lines would you think necessary
to distinguish the characters in your sample?
Turn each character into a vector by scanline
methods. Can you tell by looking at the vectors
what
the characters are? (It might be useful to start
off with a set of digits only, and then to
investigate how things change as the alphabet
size increases.)
- 10.
- Are there any alternative mask shapes you
feel tempted to use in order to convert each
character into a vector?
- 11.
- Segment by hand an image such as Fig.2.12
and normalise to occupy the unit disk
with the centroid of the function representing
the region at the origin. Divide the disk up
into
sectors and sum the quantity of grey in each
sector. Can the resulting vectors distinguish
shapes
by eye? Can you devise any modifications of this
simple scheme to improve the recognition?
- 12.
- Obtain images of shapes which have some
kind of complicated structure; orient them
in some canonical way and look at them hard. Can
you devise a suitable collection of masks to
discriminate the shapes? Characters under grey
levels present an interesting problem set. Try
getting camera images of stamped digits (easily
made with modelling clay and a set of house
numbers) under different illuminations.
- 13.
- Compute the central moments of one of your
characters up to order two by hand. Write a
program for doing it automatically and confirm
it gives the right answer. Extend your program
to
compute central moments up to any order. Compute
them up to order n for all your characters, and
check by eye to see if you can tell them apart
by looking at the vectors, for different values
of
n.
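A sketch of the automatic version, for checking the hand computation against (grey-level array as a list of rows):

```python
def central_moments(img, max_order):
    """Central moments mu(p, q) of a grey-level array, for p + q <= max_order."""
    h, w = len(img), len(img[0])
    m00 = sum(sum(row) for row in img)
    xbar = sum(x * img[y][x] for y in range(h) for x in range(w)) / m00
    ybar = sum(y * img[y][x] for y in range(h) for x in range(w)) / m00
    return {(p, q): sum((x - xbar) ** p * (y - ybar) ** q * img[y][x]
                        for y in range(h) for x in range(w))
            for p in range(max_order + 1)
            for q in range(max_order + 1 - p)}
```

A quick sanity test when confirming the right answer: mu(0,0) is the total grey mass, and mu(1,0) and mu(0,1) are always zero.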
- 14.
- Write a program to normalise any array of
characters into the unit disk with the centroid
as
origin.
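A sketch of one reasonable convention: represent the normalised result as (x, y, grey) triples rather than resampling onto a new grid, translating the centroid to the origin and scaling so the farthest pixel lands on the unit circle:

```python
def to_unit_disk(img):
    """Translate the centroid of a grey-level array to the origin and
    scale so the farthest non-zero pixel lies on the unit circle.
    Returns a list of (x, y, value) triples."""
    h, w = len(img), len(img[0])
    m00 = sum(sum(row) for row in img)
    xbar = sum(x * img[y][x] for y in range(h) for x in range(w)) / m00
    ybar = sum(y * img[y][x] for y in range(h) for x in range(w)) / m00
    pts = [(x - xbar, y - ybar, img[y][x])
           for y in range(h) for x in range(w) if img[y][x]]
    r = max((x * x + y * y) ** 0.5 for x, y, _ in pts) or 1.0
    return [(x / r, y / r, v) for x, y, v in pts]
```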
- 15.
- Write a program to compute the first 12
Zernike moments for a pixel array which has been
normalised into the unit disk.
- 16.
- Use a border following algorithm to extract
external borders of your characters and then
apply the last program to see if you can still
recognise the characters by looking at the vectors
obtained from taking moments of the boundaries.
- 17.
- Find an algorithm for finding edges in greyscale
images and use it to differentiate an image,
bringing the borders of objects into relief. The
disk programs might be places to start looking,
but you may find other books of value.
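The Sobel operator is one standard answer, and cheap to try; a sketch (border pixels are simply left at zero):

```python
def sobel(img):
    """Gradient magnitude by the Sobel masks; border pixels left at zero."""
    gx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sx = sum(gx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            sy = sum(gy[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (sx * sx + sy * sy) ** 0.5
    return out
```

Thresholding the result brings the borders of objects into relief.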
- 18.
- Write a program which smooths a time series
of real numbers by applying a moving average
filter. The program should ask for the filter
coefficients from -n to n after finding out what
n
is. To be more explicit, let x(n) be a time
series of real numbers; to get a good one, take
the
daily exchange rate for the Dollar in terms of
the Yen for the last few weeks from the
past issues of any good newspaper. Let g(n)
be defined to be zero for |n| > 5 and assign
positive values summing to 1 for |n| ≤ 5.
You could make them all equal, or have a
hump in the middle, or try some negative values
if you feel brave.
Your program should generate a new time series
y(n) defined by y(n) = Σ g(k)x(n−k), the sum
running over k from −5 to 5, and plot both x and y.
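A sketch of the smoothing step (the series is treated as zero outside its range; reading in the exchange rates and plotting are left to taste):

```python
def smooth(x, g):
    """Moving-average filter: y(i) = sum over k of g(k) x(i - k), with
    the taps g(-n)..g(n) supplied as a list of length 2n + 1 and the
    series treated as zero outside its range."""
    n = (len(g) - 1) // 2
    return [sum(g[k + n] * (x[i - k] if 0 <= i - k < len(x) else 0.0)
                for k in range(-n, n + 1))
            for i in range(len(x))]
```

With all taps equal to 1/(2n+1) this is the plain moving average; the hump-in-the-middle and negative-value variants just change the list you pass in.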
- 19.
- Modify your program to deal with a two dimensional
filter and experiment with it; get it to
remove `salt and pepper' noise introduced into
a grey scale image by hand.
- 20.
- Beef up the above program even further,
and use it for
differentiating images. Try to segment images
like Fig.2.12 by these means.
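One general 2-D mask pass serves for both the smoothing and the differentiating versions (this is correlation rather than convolution; for symmetric masks the distinction does not matter):

```python
def filter2d(img, mask):
    """Apply a 2-D mask (correlation) to a grey-level array; pixels
    falling outside the image are treated as zero."""
    h, w = len(img), len(img[0])
    mh, mw = len(mask), len(mask[0])
    oy, ox = mh // 2, mw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for j in range(mh):
                for i in range(mw):
                    yy, xx = y + j - oy, x + i - ox
                    if 0 <= yy < h and 0 <= xx < w:
                        s += mask[j][i] * img[yy][xx]
            out[y][x] = s
    return out
```

An all-1/9 3 × 3 mask smooths; difference masks such as the Sobel pair differentiate.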
- 21.
- Generate a binary waveform x(n) as follows:
if the preceding value is black, make the
current value white, and if the preceding value
is white, make the current one black. Initialise
at
random. This gives a trivial alternating sequence,
so scrap that one and go back to the preceding two
values. If the two preceding values are both black,
make the current value white; if the last two
values were black then white, make the current
value white; if the last two values were white
make
the current value black, and if the last two values
were white then black, make the current value
black. Well, this is still pretty trivial, so
to jazz it up, make it probabilistic. Given the
two
previous values, make a random choice of integer
between 0 and 9. If both the preceding values
were black and the integer is less than 8, make
the current value white; if it is 8 or 9, make it
black. Make up similar rules for the other three
cases. Now run the sequence from some initial
value and see what you get.
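A sketch of the probabilistic run, with the rules collected into a table (black is coded 1 and white 0, an assumed convention; only the black-black entry comes from the text, the other three are example choices):

```python
import random

def generate(n, table, seed=0):
    """Binary sequence whose next value depends probabilistically on the
    two preceding ones. table maps (older, newer) to the chance that the
    next value is black; black is coded 1, white 0."""
    rng = random.Random(seed)
    seq = [rng.randint(0, 1), rng.randint(0, 1)]   # random initialisation
    for _ in range(n - 2):
        p = table[(seq[-2], seq[-1])]
        seq.append(1 if rng.random() < p else 0)
    return seq

# black-black -> black with probability 2/10, as in the text;
# the other three entries are just example choices
table = {(1, 1): 0.2, (1, 0): 0.7, (0, 1): 0.5, (0, 0): 0.9}
seq = generate(200, table)
```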
- 22.
- Could you, given a sequence and knowing
that it was generated according to probabilities
based on the preceding k values as in the last
problem, work out what the probabilities are?
- 23.
- Instead of having k = 2, does it get more
or less interesting if k is increased?
- 24.
- Instead of generating a sequence of binary
values, generate a sequence of grey scale values
between 0 and 1 by the same idea.
- 25.
- Can you generalise this to the case of two
dimensional functions? On a rectangular pixel
array, make up some rules which fix the value
of a pixel at 0 or 1 depending on the values
of
pixels in the three cells to the North, West and
North-West. Put a border around the array and
see
if you can find rules for generating a chess-board
pattern. Make it probabilistic and see if you
can generate an approximate texture. Do it with
grey levels instead of just black and white.
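A sketch of the growing scheme: fix a border, then sweep the interior row by row, each pixel a function of its North, West and North-West neighbours. With the rule "flip the northern neighbour" and an alternating border, the whole array comes out chequered:

```python
def grow(h, w, rule, border):
    """Fill an h-by-w array row by row: border(y, x) fixes row 0 and
    column 0, then rule(n, w_, nw) sets each remaining pixel from its
    North, West and North-West neighbours."""
    a = [[border(y, x) if y == 0 or x == 0 else 0
          for x in range(w)] for y in range(h)]
    for y in range(1, h):
        for x in range(1, w):
            a[y][x] = rule(a[y - 1][x], a[y][x - 1], a[y - 1][x - 1])
    return a

# chess-board: flip the northern neighbour, with an alternating border
board = grow(8, 8, lambda n, w_, nw: 1 - n, lambda y, x: (y + x) % 2)
```

Replacing the deterministic rule with one that draws from a probability depending on (n, w_, nw), or returns grey values in [0, 1], gives the probabilistic and grey-level variants.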
- 26.
- Make up a probabilistic rule which decides
what a cell is going to have as its value given
all the neighbouring pixel values. Generate a
pixel array at random. Now mutate this initial
array
by putting a three by three mask on, and using
the surrounding cells to recompute the middle
one.
Put your mask down at random repeatedly. Do the
following: always make the centre pixel the average
of the surrounding pixels. Now make the top and
left edge black, the bottom and right edge white,
and randomise the original array. Can you see
what must happen if you repeat the rewrite operation
indefinitely? Run this case on your program if
you can't see it.
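A sketch of the averaging case (the border rows and columns are never rewritten, so they act as a fixed boundary condition; with black and white coded as 0.0 and 1.0, repeated rewriting drives the interior towards a smooth ramp between the two edges):

```python
import random

def relax(a, steps, seed=0):
    """Repeatedly drop a 3x3 mask at a random interior position and
    replace the centre pixel by the average of its eight neighbours."""
    rng = random.Random(seed)
    h, w = len(a), len(a[0])
    for _ in range(steps):
        y, x = rng.randrange(1, h - 1), rng.randrange(1, w - 1)
        a[y][x] = sum(a[y + j][x + i]
                      for j in (-1, 0, 1) for i in (-1, 0, 1)
                      if (j, i) != (0, 0)) / 8.0
    return a
```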
- 27.
- Can you, in the last set of problems, go
back and infer the probabilistic rules from looking
at the cell values? Suppose one part of an array
was obtained by using one set of values and
another by using a different set, can you tell?
Can you find a way of segmenting regions by texture?
- 28.
- Take an image of some gravel on white paper,
then sieve the mixture and take another image
of
what gets through the sieve and another of what
gets left in the sieve. Repeat six times.
Can you sort the resulting images by any kind
of texture analysis? Try both high and low angle
illumination in collecting your data.