1. INTRODUCTION
Trust is commonly defined as an individual’s willingness to depend on another party because of the characteristics of the other party [Rousseau et al. 1998]. This study concentrates on the latter half of this definition, the characteristics or attributes of the trustee, usually termed “trust” or “trusting beliefs.” Research has found trust not only useful, but also central [Golembiewski and McConkie 1975] to understanding individual behavior in domains as diverse as work group interaction [Jarvenpaa and Leidner 1998; Mayer et al. 1995] and commercial relationships [Arrow 1974]. For example, Jarvenpaa and Leidner [1998] report that swift trust influences how “virtual peers” interact in globally distributed teams. Trust is crucial to almost any situation in which uncertainty exists or undesirable outcomes are possible [Fukuyama 1995; Luhmann 1979].
Within the Information Systems (IS) domain, as in other fields, trust is usually examined and defined in terms of trust in people, without regard for trust in the technology itself. IS trust research primarily examines how trust in people affects IT acceptance. For example, trust in specific Internet vendors [Gefen et al. 2003; Kim 2008; Lim et al. 2006; McKnight et al. 2002; Stewart 2003] has been found to influence Web consumers’
beliefs and behavior [Clarke 1999]. Additionally, research has used a subset of trust-in-people attributes, that is, ability, benevolence, and integrity, to study trust in Web sites [Vance et al. 2008] and trust in online recommendation agents [Wang and Benbasat 2005]. In general, Internet research provides evidence that trust in another actor (i.e., a Web vendor or person) and/or trust in an agent of another actor (i.e., a recommendation agent) influences individual decisions to use technology. Comparatively little research directly examines trust in a technology, that is, in an IT artifact.
To an extent, the research on trust in Recommendation Agents (RAs) answers the call to focus on the IT artifact [Orlikowski and Iacono 2001]. RAs qualify as IT artifacts because they are automated online assistants that help users decide among products. Thus, to study an RA is to study an IT artifact. However, RAs tend to imitate human characteristics and interact with users in human-like ways. They may even look human-like. Because of this, RA trust studies have measured trust in RAs using trust-in-people scales. Thus, the RA has not actually been studied in terms of its technological trust attributes, but rather in terms of its human trust attributes (i.e., the RA is treated as a human surrogate).
The primary difference between this study and prior studies is that we focus on trust in the technology itself rather than trust in people, organizations, or human surrogates. The purpose of this study is to develop definitions and measures of trust in technology and to test how they work within a nomological network. This addresses the problem that IT trust research centered on trust in people has not profited from also considering trust in the technology itself. Just as the Technology Acceptance Model’s (TAM) perceived usefulness and ease-of-use concepts focus directly on the attributes of the technology itself, so our focus is on the trust-related attributes of the technology itself. This study examines the IT artifact more directly than past studies, answering Orlikowski and Iacono’s [2001] call. We believe that by focusing on trust in the technology, we can better determine what it is about technology that makes the technology itself trustworthy, irrespective of the people and human structures that surround the technology. This focus should yield new insights into how trust works in a technological context.
To gain a more nuanced view of trust’s implications for IT use, MIS research needs to examine how users’ trust in the technology itself relates to value-added postadoption use of IT. In this study, technology is defined as the IT software artifact, with whatever functionality is programmed into it. By focusing on the technology itself, trust researchers can evaluate how trusting beliefs regarding specific attributes of the technology relate to individual IT acceptance and postadoption behavior. In so doing, research will help extend understanding of individuals’ value-added technology use after an IT “has been installed, made accessible to the user, and applied by the user in accomplishing his/her work activities” [Jasperson et al. 2005].
To link trust to value-added applications of existing workplace IT, this article advances a conceptual definition and operationalization of trust in technology. In doing so, we explain how trust in technology differs from trust in people. We also develop a model that explains how trust in technology predicts the extent to which individuals continue using that technology. This is important because scant research has examined how technology-oriented trusting beliefs relate to the behavioral beliefs that shape postadoption technology use [Thatcher et al. 2011]. Thus, to further understanding of trust and individual technology use, this study addresses the following research questions: What is the nomological network surrounding trust in technology? What is the influence of trust in technology on individuals’ postadoptive technology use behaviors?
In answering these questions, this study draws on the IS trust literature to develop a taxonomy of trust in technology constructs that extends research on trust in the context of IT use. By distinguishing between trust in technology and trust in people, our work affords researchers an opportunity to tease apart how beliefs about a vendor, such as Microsoft or Google, relate to cognitions about the features of its products. By providing a literature-based conceptual and operational definition of trust in technology, our work offers research and practice a framework for examining the interrelationships among different forms of trust and postadoption technology use.