Robot transparency, trust and utility


Wortham, R. H. and Theodorou, A., 2017. Robot transparency, trust and utility. Connection Science, 29 (3), pp. 242-248.


    As robot reasoning becomes more complex, debugging based solely on observable behaviour becomes increasingly hard, even for robot designers and technical specialists. Similarly, non-specialist users have difficulty forming useful mental models of robot reasoning from observations of robot behaviour. The EPSRC Principles of Robotics mandate that our artefacts should be transparent, but what does this mean in practice, and how does transparency affect both trust and utility? We investigate this relationship in the literature and find it to be complex, particularly in non-industrial environments where, depending on the application and purpose of the robot, transparency may have a wider range of effects on trust and utility. We outline our programme of research to support our assertion that it is nevertheless possible to create transparent agents that are emotionally engaging despite having a transparent machine nature.


    Item Type: Articles
    Creators: Wortham, R. H. and Theodorou, A.
    Uncontrolled Keywords: epor, transparency, agents, ethics, roboethics, robotics
    Departments: Faculty of Science > Computer Science
    Publisher Statement: transparency.pdf: “This is an Accepted Manuscript of an article published by Taylor & Francis in Connection Science on 30th May 2017, available online:”
    ID Code: 55027
    Additional Information: Connection Science Special Issue: Ethical Principles of Robotics (Part 2 of 2)

