Robot Transparency, Trust and Utility


Wortham, R. H., Theodorou, A. and Bryson, J. J., 2016. Robot Transparency, Trust and Utility. In: AISB Workshop on Principles of Robotics, 2016-04-04 - 2016-04-04, University of Sheffield.



    As robot reasoning becomes more complex, debugging based solely on observable behaviour becomes increasingly hard, even for robot designers and technical specialists. Similarly, non-specialist users find it hard to build useful mental models of robot reasoning from observed behaviour alone. The EPSRC Principles of Robotics mandate that our artefacts should be transparent, but what does this mean in practice, and how does transparency affect both trust and utility? We investigate this relationship in the literature and find it to be complex, particularly in non-industrial environments, where transparency may have a wider range of effects on trust and utility depending on the application and purpose of the robot. We outline our programme of research to support our assertion that it is nevertheless possible to create agents that are emotionally engaging despite having a transparent machine nature.


    Item Type: Conference or Workshop Items (Paper)
    Creators: Wortham, R. H., Theodorou, A. and Bryson, J. J.
    Uncontrolled Keywords: artificial intelligence, ai, robot, robotics, roboethics, ethics, transparency
    Departments: Faculty of Science > Computer Science
    Research Centres: Centre for Mathematical Biology
    ID Code: 49714
