In Age of Robotics, Can Humans Be Trusted?

Northwestern researchers design devices that learn from users

August 10, 2017

Whether it’s the confidence in knowing a computer will start up when asked, or faith in an autopilot system to safely guide a jet from Boston to Berlin, society frequently places its trust in automation.

And robots often reciprocate that faith, ceding latitude to humans when circumstances change: a car on auto-drive, for instance, may hand control back to the driver in rainy conditions where a human’s ability to see and steer might be superior.

But how do robots know when to trust us, and are humans all that trustworthy to begin with? Northwestern researchers are exploring these kinds of questions as part of the University’s Trust Project, a cross-disciplinary initiative launched in 2016 and designed to strengthen research, practice, and understanding of trust in society.

“For any shared-control system, it’s critical for the trust to be bidirectional,” says Brenna Argall, electrical engineering and computer science, physical medicine and rehabilitation, and mechanical engineering. “The idea that humans can be unclear — often unintentionally — in the signals they send is a salient consideration.”

Argall is working to bridge the machine-human divide at the Shirley Ryan AbilityLab. One of her lab’s goals is to selectively add automation, developing assistive technologies that operate across a range of control: from fully self-directed by the user to requiring only minimal human input.

In the rehabilitation setting, Argall says, individuals with sensorimotor impairments often experience deficits of varying degree.

“That’s why we want machines to learn to adapt, to account for say a stronger signal in the morning but one that is maybe influenced by pain or fatigue later on,” she says. “And if a person is recovering, getting stronger along the way, the system needs to adjust and optimize shared control, or even cede full control to the patient.”
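In rough terms, that kind of adaptation can be pictured as a weight that divides authority between the user and the automation and drifts as the system’s estimate of the user’s signal reliability changes. The sketch below is purely illustrative, a minimal example under assumed names and a simple update rule, not the lab’s actual algorithm:

    # Illustrative shared-control blend: authority shifts toward whichever
    # source (user or automation) currently looks more reliable.
    def blend_command(user_cmd, auto_cmd, user_reliability):
        # user_reliability: an estimate in [0, 1] of how trustworthy the
        # user's input signal is right now (lower under pain or fatigue).
        w = max(0.0, min(1.0, user_reliability))
        return w * user_cmd + (1.0 - w) * auto_cmd

    def update_reliability(reliability, observed_quality, rate=0.1):
        # Nudge the estimate toward the latest observed signal quality, so
        # control is gradually handed back to a user who is getting stronger.
        return (1.0 - rate) * reliability + rate * observed_quality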

That’s, in part, where Todd Murphey, mechanical engineering, comes in. Argall and Murphey are frequent collaborators, and both are part of the Trust Project, a Kellogg School of Management-led initiative.

Murphey’s research group develops algorithms for human-robot physical interaction, a consideration critical for assistive devices like wheelchairs or exoskeletons used in physical therapy settings. His team also uses a combination of mathematics, biological inspiration, and experimentation to study robotics in unstructured environments — robots that have to learn how to physically interact with their environment without knowing about it ahead of time. 

“When robots interact with people, they are interacting with something that may not want to follow rules and that has a very rich set of possible behaviors,” Murphey says.

Using the car analogy: a vehicle with an automatic braking system needs to know that, even though a driver has a foot on the gas pedal, the car must be slowed to avoid a collision.

“At some point, machines shouldn't trust people. That is, machines should ignore people if they provide instructions that the automation knows to be unsafe,” Murphey says.

Regulating that tension between what the automation wants to do and what the person wants to do is the basis of his work with Argall.
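That kind of veto can be pictured as a simple safety check layered over the driver’s input. The following sketch is a toy illustration; the names, thresholds, and logic are assumptions made for the sake of the example, not the workings of any real braking system or of Murphey’s algorithms:

    # Toy safety override: the automation ignores the throttle command
    # whenever its own prediction says a collision is imminent.
    def arbitrate(driver_throttle, gap_m, closing_speed_mps,
                  min_gap_m=5.0, min_time_s=2.0):
        # Positive output accelerates, negative output brakes.
        closing = max(closing_speed_mps, 1e-6)   # avoid divide-by-zero
        time_to_collision = gap_m / closing
        if gap_m < min_gap_m or time_to_collision < min_time_s:
            return -1.0            # brake hard: the machine overrides the person
        return driver_throttle     # otherwise defer to the driver

    # Example: a driver pressing the gas 3 meters from an obstacle is overridden.
    # arbitrate(driver_throttle=0.8, gap_m=3.0, closing_speed_mps=6.0)  # -1.0

Real systems, of course, reason over far richer models of the vehicle, the environment, and the person.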

“We do not view trust as something that the person can have independently of a machine,” Murphey says. “Instead, trust has to be negotiated between the person and machine based on experience on both parts. Trust has to be earned over time.”

The ultimate beneficiaries of these advances, humans, seem well primed for shared-control systems, Argall believes. She points out that just five or 10 years ago, the idea of driverless cars or automatic parking was scary for most people; now they’re seen as desirable features.

Among other projects, Argall’s lab is developing a “smart wheelchair,” which relies on a set of sensors to help steer. The innovation could dramatically improve mobility for patients with paralysis, including those affected by amyotrophic lateral sclerosis, multiple sclerosis, Parkinson’s disease, and traumatic brain or spinal cord injuries.

By using robotic autonomy and intelligence to shift the burden of control from patient to machine, Argall hopes to restore mobility to individuals in new ways.

But first, each side of that equation needs to be certain it can trust the other.