UWES Design
Initial Scale Development. The initial scale was developed by designing questions corresponding to each domain and factor in the model that guided the scale development, along with participant demographic characteristics. The content of the individual survey items was guided by the theoretical constructs of the HSM, empirical evidence from the Coloma and Henderson (2011) study, and the university website content studies discussed above. The wording of the items was based on principles of sound question design, guided by recommendations from The Science of Asking Questions by Schaeffer and Presser (2003) and Scale Development: Theory and Applications by DeVellis (2003). At least three items were constructed per factor, and at least three factors were constructed per subdomain. Each domain, or element of the HSM, consisted of two or three subdomains. Domains were deconstructed in this way in order to examine which items best represented each theme. Once domains, subdomains, and item labels were determined, several alternative wordings were written for each item, developed with
the intention of later statistically testing the alternate wordings for sound data properties (e.g., normal distribution) and for the strongest contribution to item consistency within factors, subdomains, and domains. See Table 2 for an example and Appendix C for the complete domain breakdown.
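As a minimal sketch of how this kind of item screening could be carried out, the fragment below computes descriptive statistics and a normality test for each candidate wording of the same item. The file name and column names (e.g., nav_ease_v1) are hypothetical and purely illustrative; they do not come from the actual analysis.

    import pandas as pd
    from scipy import stats

    # Hypothetical pilot data: one column of 1-5 Likert responses
    # per alternate wording of the same item.
    responses = pd.read_csv("pilot_responses.csv")

    for item in ["nav_ease_v1", "nav_ease_v2", "nav_ease_v3"]:
        scores = responses[item].dropna()
        w, p = stats.shapiro(scores)  # Shapiro-Wilk normality test
        print(f"{item}: M={scores.mean():.2f}, SD={scores.std():.2f}, "
              f"skew={scores.skew():.2f}, Shapiro-Wilk p={p:.3f}")

A wording whose responses cluster at one end of the scale (high skew, low variance) contributes little information and would be a candidate for removal.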
Expert Consultation. Once the initial draft was completed, I received IRB approval to consult with design experts and consultants on the survey design. Two rounds of expert consultation took place. First, I asked two web design experts to review the items by domain, subdomain, and item label and to provide feedback via a talk-aloud method (Austin & Delaney, 1998), discussing their impressions as they read each survey item. I took notes on their verbal feedback and gave them movie tickets to thank them for their time and effort. A second draft was created, with each item edited based on their feedback. Reverse-scored items were added to reduce agreement bias, as recommended by DeVellis (2003) and Schaeffer and Presser (2003).
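For a 5-point Likert item, reverse scoring simply mirrors the response around the scale midpoint: (minimum + maximum) - raw, i.e., 6 - raw on a 1-5 scale. A minimal illustration follows; the item name is hypothetical.

    import pandas as pd

    # Hypothetical responses to a negatively worded item (1-5 scale).
    df = pd.DataFrame({"site_confusing": [1, 2, 5, 4, 3]})

    # Mirror around the midpoint (6 - raw), so agreement with the
    # negative wording scores low on the underlying construct.
    df["site_confusing_rev"] = 6 - df["site_confusing"]
    print(df)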
The second draft of the scale, in MS Word format, was sent via email to four new consultants (one web designer, two students currently applying to graduate programs, and one international student applying to undergraduate programs). Each consultant was asked to consider each survey item, revise it for clarity and word choice, and provide feedback. The consultants were given the option of providing feedback in MS Word or by phone. The draft survey was then edited to reflect the consultants' feedback.
The result was a 106-item scale with a Likert response format (1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree, 5 = Strongly agree) and 19 additional demographic questions inquiring about participants' state of residence, employment status, occupation, level of education, gender, race, ethnicity, disability
status, age, affiliation with the University, and whether they had used the University website in the last two years (see Appendix A).
This particular University website, which will remain undisclosed, was selected for its offerings across multiple disciplines of study, multiple degree levels (such as master's, doctoral, and continuing education), and multiple campuses (U.S. and international), characteristics of the current strategic direction of many universities competing for prospective students. To gauge how relevant the University website was to our sample, participants were asked about their interest in graduate school, professional school, and the programs they saw offered on the University website. Once approved by the IRB, the scale was ready for pilot testing.
UWES Pilot Testing
The purpose of the pilot test was to gather enough data to statistically analyze the individual scale items: the distributional properties of each item, item consistency within the scales, and differentiation between factors. Table 5 displays the final decisions on items retained in the scale, sorted by domain, subdomain, and item label (or factor), with means, standard deviations, and Cronbach's alpha coefficients.
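For reference, Cronbach's alpha for a scale of k items is

    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_{i}^{2}}{\sigma_{X}^{2}}\right)

where \sigma_{i}^{2} is the variance of item i and \sigma_{X}^{2} is the variance of the total scale score; values approaching 1 indicate greater internal consistency among the items within a factor.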
Participants. Once IRB-SF approval was secured, 96 adult participants were recruited via word of mouth through personal contacts and colleagues both inside and outside of the University, and through Facebook and Twitter network contacts via recruitment emails (see Appendix D), which offered potential participants the opportunity to enter a monthly raffle for $100 Visa gift cards, redeemable online, once the survey was completed.