Robust Intelligence and Trust in Autonomous Systems

By: Ranjeev Mittu, Donald Sofge, Alan Wagner, W.F. Lawless

Springer-Verlag, 2016

ISBN: 9781489976680, 277 pages

Format: PDF, online reading

Copy protection: watermark

Suitable for: Windows PC, Mac OS X, all DRM-capable eReaders, Apple iPad, Android tablet PCs. Online reading: Windows PC, Mac OS X, Linux

Price: 96.29 EUR

More about the content

Preface  6
AAAI-2014 Spring Symposium Organizers  7
AAAI-2014 Spring Symposium: Keynote Speakers  7
Symposium Program Committee  8
Contents  12

1 Introduction  14
1.1 The Intersection of Robust Intelligence (RI) and Trust in Autonomous Systems  14
1.2 Background of the 2014 Symposium  15
1.3 Contributed Chapters  17
References  22

2 Towards Modeling the Behavior of Autonomous Systems and Humans for Trusted Operations  24
2.1 Introduction  24
2.2 Understanding the Value of Context  26
2.3 Context and the Complexity of Anomaly Detection  26
2.3.1 Manifolds for Anomaly Detection  27
2.4 Reinforcement Learning for Anomaly Detection  28
2.4.1 Reinforcement Learning  29
2.4.2 Supervised Autonomy  30
2.4.3 Feature Identification and Selection  31
2.4.4 Approximation Error for Alarming and Analysis  32
2.4.5 Illustration  33
2.4.5.1 Synthetic Domain  33
2.4.5.2 Real-World Domain  35
2.5 Predictive and Prescriptive Analytics  39
2.6 Capturing User Interactions and Inference  39
2.7 Challenges and Opportunities  41
2.8 Summary  42
References  43

3 Learning Trustworthy Behaviors Using an Inverse Trust Metric  45
3.1 Introduction  45
3.2 Related Work  47
3.2.1 Human-Robot Trust  47
3.2.2 Behavior Adaptation  47
3.3 Agent Behavior  49
3.4 Inverse Trust Estimate  49
3.5 Trust-Guided Behavior Adaptation  51
3.5.1 Evaluated Behaviors  52
3.5.2 Behavior Adaptation  53
3.6 Evaluation  54
3.6.1 eBotworks Simulator  55
3.6.2 Experimental Conditions  55
3.6.3 Evaluation Scenarios  56
3.6.3.1 Movement Scenario  56
3.6.3.2 Patrolling Scenario  58
3.6.4 Trustworthy Behaviors  59
3.6.5 Efficiency  62
3.6.6 Discussion  63
3.7 Conclusions  63
References  64

4 The “Trust V”: Building and Measuring Trust in Autonomous Systems  66
4.1 Introduction  66
4.2 Autonomy, Automation, and Trust  68
4.3 Dimensions of Trust  73
4.3.1 Trust Dimensions Arising from Automated Systems Attributes  73
4.3.2 Trust Dimensions Arising from Autonomous Systems Attributes  74
4.3.3 Another Trust Dimension: SoS  74
4.4 Creating Trust  75
4.4.1 Building Trust In  76
4.5 The Systems Engineering V-Model  77
4.6 The Trust V-Model  78
4.6.1 The Trust V Representation: Graphic  79
4.6.2 The Trust V Representation: Array  80
4.6.3 Trust V “Toolbox”  81
4.7 Specific Trust Example: Chatter  83
4.8 Measures of Effectiveness  84
4.9 Conclusions and Next Steps  86
A.1 Appendix  87
References  87

5 Big Data Analytic Paradigms: From Principal Component Analysis to Deep Learning  89
5.1 Introduction  89
5.2 Wind Data Description  90
5.3 Wind Power Forecasting Via Nonparametric Models  90
5.3.1 Advanced Neural Network Architectures Application  91
5.3.2 Wind Speed Results  93
5.4 Introduction to Deep Architectures  94
5.4.1 Training Deep Architectures  100
5.4.2 Training Restricted Boltzmann Machines  100
5.4.3 Training Autoencoders  102
5.5 Conclusions  104
References  105

6 Artificial Brain Systems Based on Neural Network Discrete Chaotic Dynamics: Toward the Development of Conscious and Rational Robots  106
6.1 Introduction  106
6.2 Background  108
6.3 Numerical Simulations  114
6.4 Conclusion  121
References  122

7 Modeling and Control of Trust in Human-Robot Collaborative Manufacturing  123
7.1 Introduction  123
7.2 Trust Model  126
7.2.1 Time-Series Trust Model for Dynamic HRC Manufacturing  126
7.2.2 Robot Performance Model  127
7.2.3 Human Performance Model  127
7.3 Neural Network Based Robust Intelligent Controller  129
7.4 Control Approaches: Intersection of Trust and Robust Intelligence  130
7.4.1 Manual Mode  131
7.4.2 Autonomous Mode  131
7.4.3 Collaborative Mode  132
7.5 Simulation  132
7.5.1 Manual Mode  133
7.5.2 Autonomous Mode  135
7.5.3 Collaborative Mode  135
7.5.4 Comparison of Control Schemes  135
7.6 Experimental Validation  136
7.6.1 Experimental Test Bed  136
7.6.2 Experimental Design  136
7.6.2.1 Experiment Scenario  137
7.6.2.2 Controlled Behavioral Study  139
7.6.2.3 Imposing Fatigue  139
7.6.2.4 Experiment Procedure  141
7.6.2.5 Measurements and Scales  141
7.6.3 Experimental Results  142
7.6.3.1 Trust Model Identification Procedure  142
7.6.3.2 Manual Mode  142
7.6.3.3 Autonomous Mode  143
7.6.3.4 Collaborative Mode  144
7.6.4 Comparison and Conclusion  145
7.7 Conclusion  147
References  147

8 Investigating Human-Robot Trust in Emergency Scenarios: Methodological Lessons Learned  150
8.1 Introduction  150
8.2 Conceptualizing Trust  151
8.2.1 Conditions for Situational Trust  153
8.3 Related Work on Trust and Robots  155
8.4 Crowdsourced Narratives in Trust Research  155
8.4.1 Iterative Development of Narrative Phrasing  157
8.5 Crowdsourced Robot Evacuation  162
8.5.1 Single Round Experimental Setup  162
8.5.2 Multi-Round Experimental Setup  163
8.5.3 Asking About Trust  164
8.5.4 Measuring Trust  165
8.5.5 Incentives to Participants  165
8.5.6 Communicating Failed Robot Behavior  168
8.6 Conclusion  170
References  171

9 Designing for Robust and Effective Teamwork in Human-Agent Teams  174
9.1 Introduction  174
9.2 Related Work  175
9.2.1 Team Structure  175
9.2.2 Shared Mental Model and Team Situation Awareness  176
9.2.3 Communication  177
9.3 Experiment 1: Team Structure and Robustness  178
9.3.1 Testbed  178
9.3.2 Experiment Design  180
9.3.3 Results  181
9.3.3.1 Duplicated Work  181
9.3.3.2 Underutilization of Vehicles  183
9.3.3.3 Infrequent Communication  184
9.4 Experiment 2: Information-Sharing  185
9.4.1 Independent Variables  185
9.4.2 Dependent Variables  187
9.4.3 Participants  187
9.4.4 Procedure  188
9.4.5 Results  188
9.4.5.1 Team Performance  188
9.4.5.2 Team Coordination  190
9.4.5.3 Workload  193
9.4.5.4 User Preference and Comments  194
9.5 Discussion  195
9.6 Conclusion  195
References  196

10 Measuring Trust in Human Robot Interactions: Development of the “Trust Perception Scale-HRI”  198
10.1 Introduction  198
10.2 Creation of an Item Pool  200
10.3 Initial Item Pool Reduction  202
10.3.1 Experimental Method  203
10.3.2 Experimental Results  204
10.3.3 Key Findings and Changes  205
10.4 Content Validation  205
10.4.1 Experimental Method  206
10.4.2 Experimental Results  207
10.5 Task-Based Validity Testing: Does the Score Change Over Time with an Intervention?  210
10.5.1 Experimental Method  211
10.5.2 Experimental Results  212
10.5.2.1 Individual Item Analysis  212
10.5.2.2 Trust Score Validation  212
10.5.2.3 40 Items Versus 14 Items  214
10.6 Task-Based Validity Testing: Does the Scale Measure Trust?  215
10.6.1 Experimental Method  215
10.6.2 Experimental Results  216
10.6.2.1 Correlation Analysis of the Three Scales  216
10.6.2.2 Pre-post Interaction Analysis  217
10.6.2.3 Differences Across Scales and Conditions  218
10.6.3 Experimental Discussion  219
10.7 Conclusion  219
10.7.1 The Trust Perception Scale-HRI  219
10.7.2 Instruction for Use  221
10.7.3 Current and Future Applications  222
References  223

11 Methods for Developing Trust Models for Intelligent Systems  226
11.1 Introduction  226
11.2 Prior Work in the Development of Trust Models  228
11.2.1 Trust Models  230
11.2.2 Trust in Human-Robot Interaction (HRI)  231
11.3 The Use of Surveys as a Method for Developing Trust Models  233
11.3.1 Methodology  234
11.3.2 Results and Discussion  235
11.3.3 Modeling Trust  242
11.4 Robot Studies as a Method for Developing Trust Models  243
11.4.1 Methodology  243
11.4.2 Results and Discussion  250
11.4.2.1 Reducing Situation Awareness (SA)  250
11.4.2.2 Providing Feedback  251
11.4.2.3 Reducing Task Difficulty  253
11.4.2.4 Long-Term Interaction  254
11.4.2.5 Impact of Timing of Periods of Low Reliability  256
11.4.2.6 Impact of Age  256
11.4.3 Modeling Trust  257
11.5 Conclusions and Future Work  258
References  259

12 The Intersection of Robust Intelligence and Trust: Hybrid Teams, Firms and Systems  262
12.1 Introduction  262
12.1.1 Background  263
12.2 Theory  265
12.3 Outline of the Mathematics  267
12.3.1 Field Model  267
12.3.2 Interdependence  269
12.3.3 Incompleteness and Uncertainty  269
12.4 Evidence of Incompleteness for Groups  270
12.4.1 The Evidence from Studies of Organizations  271
12.4.2 Modeling Competing Groups with Limit Cycles  271
12.5 Gaps  273
12.6 Conclusions  274
References  275