
242 Evaluating information systems in SMEs J Ballantine et al
Research method
The IS/IT evaluation practices of four SMEs are dis-
cussed in this paper. Evaluation practices were ascer-
tained by conducting interviews with relevant individ-
uals, including top management and IS personnel.
Interviews were semi-structured, addressing inter alia
the following: the extent to which investments in IS/IT
were subject to evaluation; the criteria used to make
investment decisions; the extent to which an IS/IT and
business strategy existed; the influence of major cus-
tomers; and the IS skills available within the organiza-
tions.
The four SMEs discussed here are manufacturers:
three supply the motor industry, and the fourth
assembles light fittings. Those supplying the motor
industry are a precision tool manufacturer, a manufac-
turer of automotive springs and a company which makes
a variety of tube or wire-based products (for example,
clutch shafts and pedal assemblies). The companies have
between 24 and 285 employees, and turnover ranges
from £1.6 million to £12 million. They are all family-
owned businesses which have either been taken over by
a larger group (in both cases overseas companies) or
which have introduced general managers to widen the
decision-making base. All the firms have introduced
information systems to aid production activities and use
IBM AS/400 hardware, with a variety of materials
requirements planning (MRP) software.
The next section of the paper synthesises the literature
on IS/IT evaluation to establish expectations for the
SME evidence which follows. The section considers the
background to the evaluation problem; approaches avail-
able to deal with this problem; and the use of such
approaches in practice.
Evaluation of information
systems/technology
IS/IT evaluation is a major organizational issue, with
many studies highlighting concerns over IS effectiveness
measurement, cost justification and cost containment
(Dickson & Nechis, 1984; Niederman et al, 1991). The
issue has, to some extent, come to the forefront of man-
agement attention due to the level of capital expenditure
involved. Investment in IS/IT is currently estimated at
between 1% and 3% of turnover (Willcocks, 1992), putting it
on a par with spending on research and development.
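The 1–3%-of-turnover estimate can be made concrete against the turnover range of the four SMEs (£1.6 million to £12 million). The following sketch is illustrative arithmetic only; the resulting spend figures are not reported in the paper.

```python
# Illustrative only: applying the 1-3% of turnover estimate (Willcocks, 1992)
# to the turnover range of the four SMEs (not figures from the paper itself).
def is_it_spend_range(turnover, low=0.01, high=0.03):
    """Return the estimated annual IS/IT spend band for a given turnover."""
    return turnover * low, turnover * high

for turnover in (1_600_000, 12_000_000):
    lo, hi = is_it_spend_range(turnover)
    print(f"Turnover £{turnover:,}: estimated IS/IT spend £{lo:,.0f} to £{hi:,.0f}")
```

Even at the bottom of the range this implies an annual IS/IT outlay in the tens of thousands of pounds, which helps explain why the expenditure attracts management attention.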
IS/IT evaluation is problematic, principally as a result
of the difficulties inherent in measuring the benefits and
costs associated with such investments (Ballantine et al,
1995a). The IS literature has begun to acknowledge the
extent of these problems, and has recognised the need to
consider the wider organizational context within which
evaluation takes place. As a result, a number of evalu-
ation techniques which attempt to take a broader per-
spective than the traditional financially-oriented tech-
niques have emerged. Despite these developments, there
is widespread agreement that there is a lack of evaluation
tools which effectively address the problems.
Symons and Walsham (1988) argue that IS/IT evalu-
ation is difficult because of the multi-dimensionality of
cause and effect and multiple, often divergent, evaluator
perspectives. Evaluation, they argue, should not simply
be viewed as a set of tools and techniques, but as a process
which must be understood fully in order to be effective.
Angell and Smithson (1991) echo this by arguing that
‘evaluation of an IS cannot be an objective deterministic
process based on a positivist “scientific” paradigm, such
a “hard” quantitative approach to social systems is mis-
conceived’.
Approaches to evaluation
In an IS context, evaluation has been defined as the pro-
cess of ‘establishing by quantitative and/or qualitative
means the worth of IT to the organization’ (Willcocks,
1992). However, multiple purposes of evaluation are
recognised. Farbey et al (1993), for example, identify
that evaluation serves a number of objectives: as a means
of justifying investments; to enable organizations to
decide between competing projects, particularly if capi-
tal rationing is an issue; as a control mechanism,
enabling authority to be exercised over expenditure,
benefits, and the development and implementation of
projects; and as a learning device enabling improved
evaluation and systems development to take place in the
future. Others (Dawes, 1987; Ginzberg & Zmud, 1988)
identify similar rationales for IS/IT evaluation: to gain
information for project planning; to determine the rela-
tive merits of alternative projects; to ensure that systems
continue to perform well; and to enable decisions con-
cerning expansion, improvement, or postponement of
projects to be taken.
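One of the rationales above, determining the relative merits of alternative projects under capital rationing, is often operationalised as a weighted scoring of criteria. The sketch below is a hypothetical illustration; the criteria, weights and scores are invented and do not come from any of the cited studies.

```python
# Hypothetical weighted-scoring sketch for ranking competing IS projects.
# Criteria, weights and scores are invented for illustration only.
weights = {"cost": 0.4, "strategic_fit": 0.35, "risk": 0.25}

projects = {
    "new MRP module": {"cost": 3, "strategic_fit": 5, "risk": 4},
    "EDI link to customer": {"cost": 4, "strategic_fit": 4, "risk": 3},
}

def weighted_score(scores):
    """Sum of criterion scores, each multiplied by its weight."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank projects by descending weighted score.
ranked = sorted(projects, key=lambda p: weighted_score(projects[p]), reverse=True)
print(ranked)
```

Such a scheme makes the trade-offs between projects explicit, though, as the literature discussed below notes, the choice of criteria and weights is itself a subjective judgement.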
There are many documented approaches to IS/IT
evaluation. Powell (1992), for instance, develops a
classification which recognises objective and subjective
evaluation methods. Objective methods attempt to attach
values to system inputs and outputs, while subjective
methods consider wider aspects of evaluation such as
user attitudes. Objective methods (cost–benefit analysis
(CBA), value analysis, multiple criteria approaches,
simulation techniques, etc), he argues, endeavour to cat-
egorise the costs associated with information systems.
On the other hand, subjective methods (user attitude sur-
veys, event logging, delphi evidence, etc) ‘try to quantify
in order to differentiate between systems, but the quanti-
fication is of feelings, attitudes and perceptions’.
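As an illustration of the financially-oriented "objective" methods in Powell's classification, a discounted cost–benefit analysis reduces a system's estimated costs and benefits to a single net present value. The figures below are invented for illustration; the paper does not present a worked example.

```python
# Hypothetical illustration of an 'objective' method: simple discounted
# cost-benefit analysis (CBA) of an IS investment. All figures are invented.
def npv(initial_cost, annual_net_benefits, rate):
    """Net present value: discounted future net benefits minus upfront cost."""
    return -initial_cost + sum(
        b / (1 + rate) ** t for t, b in enumerate(annual_net_benefits, start=1)
    )

# e.g. an MRP system costing 50,000 with net benefits of 20,000 a year
# for four years, discounted at 10%
result = npv(50_000, [20_000] * 4, 0.10)
print(round(result, 2))
```

The apparent precision of such a figure is exactly what the critics cited below question: the inputs (benefit estimates in particular) are rarely measurable with the certainty the calculation implies.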
While a wealth of evaluation techniques is to be
found in the IS literature, many of these have focused
historically on quantitative, as opposed to qualitative,
approaches. Hirschheim and Smithson (1988) spell out
the dangers of such approaches which, they argue, can