Robot Transparency, Trust and Utility


Reference:

Wortham, R. H., Theodorou, A. and Bryson, J. J., 2016. Robot Transparency, Trust and Utility. In: AISB Workshop on Principles of Robotics, 4 April 2016, University of Sheffield. (Forthcoming)

Related documents:

PDF (WorthamTheodorouTransparencyTrustUtilityAISB)

    Abstract

    As robot reasoning becomes more complex, debugging becomes increasingly hard based solely on observable behaviour, even for robot designers and technical specialists. Similarly, non-specialist users find it hard to create useful mental models of robot reasoning solely from observed behaviour. The EPSRC Principles of Robotics mandate that our artefacts should be transparent, but what does this mean in practice, and how does transparency affect both trust and utility? We investigate this relationship in the literature and find it to be complex, particularly in non-industrial environments, where transparency may have a wider range of effects on trust and utility depending on the application and purpose of the robot. We outline our programme of research to support our assertion that it is nevertheless possible to create agents that are emotionally engaging despite having a transparent machine nature.

    Details

    Item Type: Conference or Workshop Items (Paper)
    Creators: Wortham, R. H., Theodorou, A. and Bryson, J. J.
    Uncontrolled Keywords: artificial intelligence, AI, robot, robotics, roboethics, ethics, transparency
    Departments: Faculty of Science > Computer Science
    Research Centres: Centre for Mathematical Biology
    Refereed: No
    Status: In Press
    ID Code: 49714
