computers do? For example, will we trust driverless transportation in the future to move our children to and from school? Likewise, will we trust computers to undertake medical intervention? Computers already play a major role in safety-critical systems such as air traffic control and nuclear power plants, but do we feel it is acceptable that they also begin to take on more social roles in society? In Japan, some are now proposing that robots be developed as companions for the elderly. If this is acceptable, how should we design them so that we do not completely abdicate responsibility? We need to decide.

We also need to consider the consequences of a world inhabited by independent computers over which we have less control. A sense of control over our own environment is a key human value. Will clever computer systems undermine or enhance it? Part of this sense of control is related to how we account for our activities. We treat being responsible for what we do as a measure of sophistication and knowledge; this is why children and adolescents are not subject to criminal proceedings in the same way as adults. Such systems of accountability are not confined to matters of criminality, of course, but also suffuse our professional and personal actions. This, in turn, drives many broader societal relations and understandings. As computing takes on more roles in our activities, and as our environment comes to be constructed and controlled by computers we might not even be aware of, these systems of etiquette, accountability and responsibility will be affected. How will we know that this is happening? Who will judge what the consequences might be?

Questions for interaction and design

What will be an appropriate style of interaction with clever computers?

What kinds of tasks will be appropriate for computers, and when should humans be in charge?

How can clever computers be designed to be trustworthy and reliable, and to act in the interests of their owners?
Questions of broader impact

To what extent will society allow clever computers the trust we currently give to trained and qualified professionals?

Is it proper to assign what used to be human roles to computers? For example, is it acceptable to allow robots to be companions for the elderly or infirm?

Who will we hold accountable when things go wrong with autonomous systems?

What are the implications for society of having clever computers reasoning and acting on our behalf?