The control group is likely to have remained unchanged, while any change noted in the exposed group presumably can be linked to exposure to the public relations tactic, the key difference between the two groups. For example, the transit system noted above might also compare before-and-after ridership figures with those of a transit system in a similar city in another state (the control group), where riders were not exposed to the promotional campaign.
Remember that research design is always a trade-off. Strategic planners must make choices that consider the importance of the program, the accuracy and reliability of the information to be received, and the needed resources (time, personnel, financial and the like). They also should look at the whole picture, focusing not on each tactic in isolation but on how the various tactics together have achieved their objectives.
Also be aware of extraneous factors that can mask your evaluation efforts. Not every change in a public's awareness, acceptance or action may be caused by your public relations programming. Try to account for other activities and influences that the publics have been exposed to.
Let's return to the example of the transit system. If, a few days after the ridership campaign begins, an international political crisis sends oil prices up 30 percent, you probably would notice a lot more riders on the trains and buses. But you shouldn't attribute this to your public relations campaign. It's more likely that motorists are reacting to the higher cost of gasoline at the pumps, and your research report must note this.