Title: Calibrating Trust Towards An Autonomous Image Classifier, 2019
Keywords: TRUST
AUTOMATION
AUTONOMOUS IMAGE CLASSIFIER
2021
Description: Successful adoption of autonomous systems requires appropriate trust from human users, with trust calibrated to reflect true system performance. Autonomous image classifiers are one such example and can be used in a variety of settings to independently identify the contents of image data. We investigated users' trust when collaborating with an autonomous image classifier system that we created using the AlexNet model (Krizhevsky et al., 2012). Participants collaborated with the classifier during an image classification task in which the classifier provided labels that either correctly or incorrectly described the contents of images. The task was complicated by the quality of the images processed by the human-classifier team: 50% of the trials featured images that were cropped and blurred, partially obscuring their contents. Across 160 single-image trials, we measured trust towards the classifier and examined how participants complied with it by accepting or rejecting the labels it provided. We also investigated whether trust towards the classifier could be improved by increasing the transparency of its interface: system confidence information was displayed in three different ways, each compared against a control interface without confidence information. Results showed that trust towards the classifier was based primarily on system performance, though it was also influenced by image quality and by individual differences amongst participants. While participants typically preferred classifier interfaces that presented confidence information, this did not appear to improve their trust towards the classifier.

The project investigates which parameters influence trust between artificial intelligences and human users. Our partner for this project, Qumodo, is a company dedicated to helping people interface with artificial intelligence; we will examine their Intelligent Iris system. Intelligent Iris is a modular data analysis system designed to help human users extract meaningful results from large sets of data, including images (such as photos, medical scans, and military sensor data). The visual nature of this task makes it challenging, as humans bring a wealth of social expectancies and uniquely human visual processes to understanding an image. Fostering trust within human-machine teams is expected to improve both mental health and productivity. Guided by recent research into trust from domains such as autonomous vehicles and social robotics, we will perform experiments to examine which parameters influence the calibration of trust when interacting with image understanding software. We hope to advance a conceptual understanding of trust between human and machine and to identify effective strategies for adjusting system parameters to properly calibrate trust. These results will be valuable in advancing product development at Qumodo and will inform the wider debate over how to design intelligent systems.
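The record archives no code or stimuli, so the following Python sketch is purely illustrative of the setup described above: a pretrained AlexNet (via torchvision) producing a top-1 label and a softmax confidence for an image that has been centre-cropped and blurred. The file name ("example.jpg"), the crop fraction, and the blur radius are hypothetical; the study's actual stimuli and degradation parameters are not part of this record.

    import torch
    from PIL import Image, ImageFilter
    from torchvision.models import alexnet, AlexNet_Weights

    # Load a pretrained AlexNet and its matching ImageNet preprocessing.
    weights = AlexNet_Weights.IMAGENET1K_V1
    model = alexnet(weights=weights).eval()
    preprocess = weights.transforms()

    def degrade(img, crop_frac=0.6, blur_radius=4):
        # Centre-crop to a fraction of the frame, then Gaussian-blur
        # (hypothetical parameters; the study's values are not archived here).
        w, h = img.size
        cw, ch = int(w * crop_frac), int(h * crop_frac)
        left, top = (w - cw) // 2, (h - ch) // 2
        cropped = img.crop((left, top, left + cw, top + ch))
        return cropped.filter(ImageFilter.GaussianBlur(blur_radius))

    def classify(img):
        # Return the top-1 ImageNet label and its softmax confidence.
        x = preprocess(img).unsqueeze(0)  # 1 x 3 x 224 x 224 tensor
        with torch.no_grad():
            probs = torch.softmax(model(x)[0], dim=0)
        conf, idx = probs.max(dim=0)
        return weights.meta["categories"][int(idx)], conf.item()

    img = Image.open("example.jpg").convert("RGB")  # placeholder image path
    label, confidence = classify(degrade(img))
    print(f"Classifier label: {label} (confidence {confidence:.0%})")

In an interface like the ones compared in the study, the confidence value printed here is the kind of information that was either shown to participants (in one of three display formats) or withheld (control condition).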
URI: https://t2-4.bsc.es/jspui/handle/123456789/59884
Other Identifiers: 854151
10.5255/UKDA-SN-854151
https://doi.org/10.5255/UKDA-SN-854151
Appears in Collections: Cessda

Files in This Item:
There are no files associated with this item.

