Evaluating PR effects
UK public relations practice, and the need for new effectiveness models
Tom Watson
This article describes the sometimes primitive evaluation methods that have set PR activity in Britain apart from other (and more lavishly researched) elements in the communications mix. While a substantial corpus of respectable theory and new practice has evolved in the US, the UK has clung to ideas like counting column inches, and expressing these in terms of 'free advertising'. Noting the profound obstacles to a new objectives-based professionalism - eg short-termism, no client money for research - the author identifies practical ways in which PR practice could now briskly move ahead.
As public relations practice develops around the world, the issue of evaluation is becoming more important. In May, the International Public Relations Association devoted a sizeable part of its Professional Development Conference in Hong Kong to the issue and in the US it is widely discussed. Yet in the UK, evaluation is still in the realm of the huckster and the column inch measurer.
In a recent edition of Marketing (April 23, 1992), two measures offered by consultancies for proving the effectiveness of campaigns were that 'it reached 110 million people last year' (in the UK with its 60 million population) and '...received the equivalent of £75,000 worth of advertising, nearly three times the budget. A further £9,000 of free advertising was achieved.' Both measures are utterly worthless: the first because it is a very loose approximation of the potential readership of the media in which the campaign appeared, and is thus an output yardstick and not evaluation; the second because it too is an output measure (worked out on the back of a fag packet) and gives no indication of whether the campaign messages got through to the target audience.
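To make the arithmetic concrete, here is a minimal sketch of how such 'reach' and 'advertising value equivalence' figures are typically derived; all titles, circulations and rates below are hypothetical, and nothing in either calculation touches the target audience or the message:

```python
# Illustrative only: how 'reach' and 'advertising value equivalence' (AVE)
# figures of the kind quoted above are typically derived.
# All numbers are hypothetical.

# Each clipping: (circulation, assumed readers per copy,
#                 column inches of coverage, rate-card price per column inch)
clippings = [
    (400_000, 2.5, 12, 95.0),    # a trade monthly
    (1_200_000, 3.0, 6, 180.0),  # a national daily
]

# 'Reach' simply sums potential readership; the same reader seen in several
# titles is counted several times, which is how a campaign in a country of
# 60 million people can claim to have reached 110 million.
reach = sum(circ * readers for circ, readers, _, _ in clippings)

# AVE prices the editorial space at the advertising rate card - an output
# measure that says nothing about whether any message got through.
ave = sum(inches * rate for _, _, inches, rate in clippings)

print(f"claimed reach: {reach:,.0f}")        # potential exposures, not people
print(f"'free advertising': £{ave:,.0f}")    # cost of space, not effect
```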
In the US a substantial amount of research has been undertaken into methods of analysing effectiveness of public relations practices. In the UK, the main tests offered by consultancies and corporate PR practitioners are still simplistic and often downright ridiculous. By raising this issue, it is hoped that the UK PR sector may become rather more honest with itself and its clients, and develop more acceptable practices.
WHAT SORT OF EVALUATION?
To begin at the beginning, PR practitioners (like others in the communications industry) must define the parameters of evaluation research at the outset: starting points, objectives, strategy and tactics.
Historically, the measurement of column inches of press cuttings or 'mentions' on electronic media was seen as a simple evaluation technique. For example, the grand old man of PR education, Frank Jefkins (1), has propounded this case through years of courses and several books which are widely used in CAM courses and other training arenas. These measures were supplemented by ad hoc measures such as experience and observation.
Experience is judging 'what changes have affected the situation which was assessed before planning the campaign', while observation is seeing that 'some changes will be physically apparent or visible'.
(These methods do not include any of the approaches developed since the 1940s by behavioural psychologists, such as those at Yale, which are familiar to many marketeers and planners in advertising.)
Jefkins also points to 'scientific' methods of evaluation such as media coverage rating charts, judgements on the perceived value of 'PR media', volume of enquiries or reader response, and the 'tone' of the media coverage.
While these types of assessment are widely practised, they are not methods of evaluation undertaken with any consistency or objectivity.
They fail because they are subjective and can be skewed by the personalities undertaking the judgement, and because they cannot be replicated. Some are little more than measures of sales leads. Others, which consider 'tone', opportunities to see or media ratings, are judgements made to suit the client/employer rather than to measure the effectiveness of reaching target markets. Objectivity, while difficult to achieve, is not sought as a first principle.
Too often, evaluation is devised only after the campaign has been set in motion. Weiss (2) says that the '... purpose (of evaluation) should be clearly stated and measurable goals must be formulated before the questions can be devised and the evaluation design chosen.'
This is an argument put forcefully by almost all writers on evaluation. It is the counter to the old Irishman's advice on how to get to Killarney - 'not to start from here'.
THE PROBLEM OF PRECISE AND MEASURABLE PR OBJECTIVES
If the start point is defined, and the objectives are set, as part of the programme design, waypoints can be measured and the effectiveness or impact assessed. White and colleagues in an unpublished paper (3) argue that 'setting precise and measurable objectives at the outset of a programme is a prerequisite for later evaluation.'
This is often easier said than done, but Swinehart (4) writes that the objectives of a campaign or programme should be closely related to the research design and data collection as well as to the campaign methods and strategy used. He proposes that five areas of questioning should be applied to objectives; a sketch of how the answers might be recorded follows the list.
What is the content of the objective?
What is the target population?
When should the intended change occur?
Are the intended changes unitary or multiple?
How much effect is desired?
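A measurable objective has to answer all five of these questions before an evaluation can be designed. As a sketch of how this might be captured in practice - the field names and example values are illustrative, not Swinehart's own notation:

```python
from dataclasses import dataclass

# A hypothetical rendering of Swinehart's five questions as a record;
# an objective that cannot fill every field is not yet measurable.
@dataclass
class Objective:
    content: str            # what is the content of the objective?
    target_population: str  # who, precisely, must change?
    deadline: str           # when should the intended change occur?
    changes: list[str]      # are the intended changes unitary or multiple?
    desired_effect: str     # how much effect is desired, stated measurably?

obj = Objective(
    content="raise awareness of the product recall procedure",
    target_population="registered owners of model X in the UK",
    deadline="end of the third quarter",
    changes=["awareness", "returning the registration card"],
    desired_effect="60% awareness among owners; 25% of cards returned",
)
```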
By posing these questions, it may be seen that the simplistic media measurement or reader response analyses advocated by Jefkins consider only output - volume of mentions - and not effects. Objectives of, say, more mentions in the Financial Times, which may all too often be sought by a quoted industrial company, are little more than a stick to beat the public relations (more correctly, press relations) practitioner.
Dozier (5) refers to this approach as 'pseudo-planning' and 'pseudo-evaluation'. Pseudo-planning is the allocation of resources to communications activities, where the goal is communication itself. Pseudo-evaluation is '... simply counting news release placements, and other communications'.
Swinehart divides evaluation into four categories - process, quality, intermediate objectives and ultimate objectives. He suggests that there is more to evaluation than impact. He also paves the way for effects-based planning theories:
Process is '... the nature of the activities involved in the preparation and dissemination of material'.
Quality is '... the assessment of materials or programs in terms of accuracy, clarity, design, production values'.
Intermediate objectives are 'sub-objectives necessary for a goal to be achieved', eg placement of news.
Ultimate objectives comprise 'changes in the target audience's knowledge, attitudes and behaviour'.
This analysis is still rather mechanistic and lacks the flow that campaign planning and evaluation need. However, it also highlights the need for planning and evaluation to be linked. The simpler approaches outlined above - Jefkins and the 'media mentions calculators' - separated planning from the campaign itself and bolted evaluation on afterwards.
Swinehart's questions about the objectives and his four areas of evaluation are valuable for practitioners in assembling a checklist. They do not promote a unified approach to campaign development and continuous monitoring. They do, however, separate processes such as news placement from objectives and effects.
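Such a checklist might, for example, be assembled along the following lines; the prompts are my illustrative paraphrases of Swinehart's four categories, not his wording:

```python
# A hypothetical practitioner's checklist built from Swinehart's four
# categories of evaluation. The prompts are illustrative paraphrases.
evaluation_checklist = {
    "process": "Were materials prepared and disseminated as planned?",
    "quality": "Were they accurate, clear and well produced?",
    "intermediate objectives": "Were sub-objectives (eg news placement) met?",
    "ultimate objectives": "Did the target audience's knowledge, "
                           "attitudes or behaviour change as intended?",
}

for category, question in evaluation_checklist.items():
    print(f"{category}: {question}")
```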
EFFECTS-BASED PLANNING
A more complete approach to planning and subsequent evaluation is found in the 'effects-based planning' theories put forward by James VanLeuven and his colleagues (6). Underlying this approach is the premise that a campaign's intended communication and behavioural effects serve as the basis from which all other campaign decisions can be made. It is, to return to the Irish analogy, using a map and compass to get to Killarney.
The process involves setting separate campaign objectives and sub-objectives for each public. The authors argue that campaign planning becomes more consistent when programme and creative decisions have to be justified on the basis of their intended communication and behavioural effects.
It also serves as a continuing evaluation process because the search for consistency means that monitoring is continuous and the process of discussion demands evidence on which to reach decisions. This means that a constant loop can be created between planning, action and evaluation which is always feeding back to further planning and action.
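Very schematically, that loop might be sketched as follows; the figures, units and function names are invented placeholders for illustration, not VanLeuven's terminology:

```python
# A schematic, hypothetical sketch of the continuous loop described above:
# planning, action and evaluation feed evidence back into further planning.

def act(plan: float) -> float:
    """Carry out one period of activity; return the observed effect level."""
    return plan * 0.8  # stub: activity delivers some fraction of what was planned

def evaluate(observed: float, target: float) -> float:
    """Continuous monitoring: how far short of the intended effect are we?"""
    return target - observed

def revise(plan: float, shortfall: float) -> float:
    """Feed the evidence back into planning rather than waiting for the end."""
    return plan + 0.5 * shortfall

target_effect = 100.0   # intended communication/behavioural effect (arbitrary units)
plan = target_effect    # the initial plan aims straight at the target

for period in range(6):  # eg six periodic reviews across the campaign
    observed = act(plan)
    shortfall = evaluate(observed, target_effect)
    plan = revise(plan, shortfall)
    print(f"period {period + 1}: observed {observed:.1f}, shortfall {shortfall:.1f}")
```

Each pass through the loop narrows the gap between intended and observed effects, which is the practical point: monitoring informs the next period's planning instead of arriving as a post-mortem.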
Experience of developing and managing long-term campaigns on contentious issues leads me to the belief that consistency is one of the most difficult practical issues faced by the public relations professional. A more disciplined approach will allow the parameters of the campaign to be more closely defined and for continuous monitoring to replace after-the-event evaluation.
BARRIERS
Although evaluation is a hot topic, it is a practice about which far more is said than done. The barriers to evaluation being carried out as an unquestioned element of every public relations programme are formidable.
The barriers include low levels of professional training among practitioners, client ignorance or a desire simply to 'count the cuttings', experiential judgements by clients and practitioners, and lack of budget for evaluation.
Lack of budget is probably the largest single factor, because public relations expenditure is still seen as a junior partner to advertising and other marketing services. Clients and in-house budget controllers won't pay for the research and tracking studies. They prefer to rely on cuttings and 'gut feeling'.
Thus it may be necessary for practitioners to adopt new approaches to programme design which include the evaluation procedures within the programmes themselves. Apart from the attractions of the total coverage of effects-based planning, it also builds evaluation in from the outset, as continuous monitoring, rather than bolting it on after the event.