Many online retailers and other product-oriented websites allow people to post product reviews for use by
shoppers. While research indicates that these reviews influence consumers' shopping attitudes and behaviors,
questions remain about how consumers evaluate the product reviews themselves. With the current research,
we introduce a new methodology for identifying the review factors that shoppers use to evaluate review helpfulness,
and we integrate prior literature to provide a framework that explains how these factors reflect readers'
general concerns about the diagnosticity (uncertainty and equivocality) and credibility (trust and expertise) of
electronic word-of-mouth. Based on this framework, we offer predictions about how the relative importance
of diagnosticity and credibility should vary systematically across search and experience product types. By analyzing
secondary data consisting of over 8000 helpfulness ratings from product reviews posted by shoppers on
Amazon.com, we find that, while review content affects helpfulness in complex ways, these effects are well
explained by the proposed framework. Interestingly, the data suggest that review writers who explicitly attempt
to enhance review diagnosticity or credibility are often ineffective or systematically unhelpful. Our findings have
implications for both IS developers and retailers in designing online decision support systems that optimize
communication practices and better manage consumer-generated content and interactions among consumers.