The Cybernetic Revolution: The Influence of Metrology on the User Experience in Human-Robot Interaction

NIST engineer Jeremy Marvel adjusts a robotic arm used to study human-robot interactions. According to a NIST economic study on advanced robotics and automation—one of four reports on advanced manufacturing—Marvel’s work is the type of research needed to fortify and facilitate this emerging field. Credit: F. Webber/NIST

I was told there would be robots.

We are living in a world in which we are surrounded by technology tailored to our needs. Our clothes are treated with nanoparticles to resist wrinkles and stains. We have sent probes beyond the farthest reaches of our solar system, and we have selfie-taking machines exploring celestial neighbors and conducting revolutionary experiments that alter our understanding of the universe. Artificial intelligence is omnipresent and impacts the way we drive, entertain ourselves, read the news and make dinner. We can even carry out complex social relationships through online video games without ever having to physically meet another human being. The world’s knowledge can be accessed in mere seconds on computers we carry in our pockets. In our pockets! Clearly, we are living in the future so frequently and fancifully predicted in popular culture.

But … where is the plastic pal who’s fun to be with that I was told would be waiting for me?

Robots are becoming increasingly prevalent in the manufacturing, medical and service fields. They are purposefully designed to work around and with people, and they are even marketed as “collaborative,” meaning they are supposedly safer and easier to use than ever. In every case, robots are custom-tailored for their users’ needs. Such trends imply that robots are becoming consumer products.

An industrial robot arm collaborates with a human operator in this test evaluating the performance of vision systems for human-robot interaction. Credit: M. Zimmerman/NIST

In the home, however, robots are largely limited to hobbyist projects, STEM toys and single-purpose cleaning appliances. Revolutionary and sociable robots are being introduced to an eager market, only to fall short of the capabilities of the simpler, task-built devices that merely sit on a shelf. So why the discrepancy?

In reality, there is no discrepancy. It’s the task and the utility of a given robot that allow it to be custom-designed for the end user. Specific tasks get specific robots that are built to be user-friendly. General tasks get … something else. When the task is unknown or ill-defined, the manufacturer must anticipate all possible—or at least all supported—applications and design around that.

All robots are purpose-built, principally because there is a trade-off between simplicity and functionality. To be usable and useful, the interfaces connecting people and machines must carefully traverse the path flanked by “too complex” and “too simple.” The real challenge lies in the realization that experts in the field don’t know precisely where that path is, how wide it is or where it leads. The purpose of the interface is to facilitate communication and drive interaction. It relays important information to the person working with the machine, and it provides a mechanism for expressing the user’s desired actions. The difficulty, however, is in balancing usability for a broad spectrum of users while simultaneously providing useful products. Finding that Goldilocks “just right” mix of comfort and functionality often requires a lot of trial and error, especially if the ultimate application of the machine is unknown.

And that’s if all is working as it should be.

When things start to go wrong, it can be extremely difficult to diagnose the problem or predict how bad things will get. More intelligence is needed to assess the situation and provide a good prognosis. Assuming that such a prognosis is found and that it’s accurate, how is the robot supposed to share this information such that an untimely fate is avoided? That’s the interface’s job.

A good interface can enhance a user’s experience, while a bad interface can render a machine completely unusable. Thus, the interface drives the experience. Similarly, the means by which we interact with machines dictate their utility. By changing the interface, one can effectively change how a given robot is used … or whether it’s used at all.

A test evaluating the performance of an augmented reality interface for interacting with an industrial robot arm in a collaborative assembly task. Credit: S. Bagchi/NIST

As such, an interface that can efficiently adapt to a user or a task is theoretically more useful for that task than an interface that attempts to accommodate all possible tasks or behaviors. To accomplish this, however, the robot needs to be aware of both its environment and the user.

While we have some basic tenets to help us differentiate good graphical interfaces from bad ones, there are no metrics by which vendors can measure the effectiveness and efficiency of the interaction between people and robots before the robots are sold and used. Nor are there any standardized means by which we can measure how much better one interface or interaction will be than another. Currently, the effectiveness of human-machine interactions is best gauged through subjective, qualitative, user-volunteered reports. There are few objective, quantitative measures by which a given interaction can be assessed.
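To make the idea of an objective measure concrete, here is a minimal sketch of the kind of quantity that could be computed from timestamped activity logs of a human-robot team. The interval-based “fluency” measures below (human idle time, robot idle time, concurrent activity) are drawn from the broader HRI research literature, not from any NIST standard, and the function names and data are purely illustrative:

```python
def overlap(a, b):
    """Total time (seconds) during which any interval in `a`
    overlaps any interval in `b`; intervals are (start, end) pairs
    and are assumed non-overlapping within each list."""
    total = 0.0
    for h0, h1 in a:
        for r0, r1 in b:
            total += max(0.0, min(h1, r1) - max(h0, r0))
    return total

def fluency_metrics(human, robot, task_duration):
    """Simple objective teaming measures from per-agent activity logs."""
    busy = lambda intervals: sum(end - start for start, end in intervals)
    return {
        "human_idle_ratio": 1.0 - busy(human) / task_duration,
        "robot_idle_ratio": 1.0 - busy(robot) / task_duration,
        "concurrent_activity": overlap(human, robot) / task_duration,
    }

# Hypothetical activity intervals (seconds) from one 10-second trial
human = [(0.0, 4.0), (6.0, 10.0)]  # human active 8 s total
robot = [(3.0, 8.0)]               # robot active 5 s total
print(fluency_metrics(human, robot, task_duration=10.0))
```

Unlike a post-task survey, numbers like these can be logged automatically and compared across interfaces, users and trials, which is exactly the kind of repeatable benchmarking the field currently lacks.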

If such quantitative metrics existed, however, a robot could adjust its behaviors to match the user and the application, working as a collaborative tool to enable the efficient completion of a task. Similarly, if the people working with the robot perceive such adjustments to be both intentional and appropriate, then their confidence in the performance of the robot is strengthened and they can, in turn, respond accordingly. This mutual situational awareness is critical for effective teaming, regardless of whether it’s on the factory floor or in your kitchen at home. If the interaction breaks down, so too does the team’s performance.

This is the basis for a new research project at NIST, the Performance of Human-Robot Interaction, which seeks to establish test methods and metrics for assessing and assuring the effective teaming of humans and machines. Providing these metrics and test methods enables the benchmarking and advancement of the technology and establishes a baseline for maintaining trust in the capabilities of the robot. Part of this project’s efforts includes reaching out to the world’s experts in human-robot interaction to develop a standardized measurement methodology.

At the recent workshop Test Methods and Metrics for Effective HRI in Collaborative Human-Robot Teams, NIST researchers and world experts established both the need for, and the means by which, human-robot interaction can be objectively measured and replicated. These needs take into account the applications and intercultural issues that drive the user experience and the mechanisms for interaction. Ultimately, this workshop kick-started a concerted effort to advance collaborative robot technologies into the future.

So, perhaps someday soon, we’ll get those robots.