[Image: A human figure with glasses on the left faces a humanoid robot on the right against a bright, technology-themed background, illustrating the interaction between humans and artificial intelligence.]

In Tech We Trust?

Why “Human and Machine” is first and foremost a question of task sharing

Automation makes our lives easier and our work increasingly efficient. In some cases more, in others less. As is so often the case, the right level of trust is key when it comes to human-machine interaction, says our author Prof. Dr. Christoph Bartneck. That is why he is calling on manufacturers to clearly label who is in control in any given situation, whether it is human or machine.

WE INTERACT WITH TECHNOLOGY EVERY DAY, AND ALTHOUGH I WISH IT WERE EASY, FLUENT AND INTUITIVE, IT BARELY EVER IS.

In particular, interacting with autonomous systems can be confusing. But I am getting ahead of myself here. Let’s start with a simple example. During winter, my beloved partner starts her car and immediately sets the electric heating system to the maximum. Over the years, I have learned not to comment on this, a lesson many engineers must learn the hard way.

In my head, I occasionally try to come up with a new way to explain the functioning of the thermostat to her. She treats the dial as a heat valve, but it is much more: a target temperature setting. She assumes that to heat the car quickly, it is best to blast hot air into the cabin. Once it is nice and cozy, she turns the dial to a lower setting. This works. But it is not optimal. And nothing gets an engineer thinking like encountering a situation where there could be a better solution.

“Both TOO LITTLE AND TOO
MUCH TRUST in an autonomous
system CAN CAUSE PROBLEMS.”

The electric heating system is a relatively simple autonomous system. It senses the temperature in the car, and when it is below the desired setting, it switches on the heater. I am simplifying the situation slightly, but the heating system senses, computes and acts. The heater element itself only has two settings: on and off. The thermostat will blast at maximum heat until it approaches the target. It then starts to alternate between on and off so that an intermediate temperature emerges. Relying on the thermostat takes exactly the same time to heat the car as trying to regulate it manually. However, you never have to change the temperature setting when using the thermostat as intended. The vehicle will automatically heat the cabin as quickly as possible and will then maintain the desired temperature. We can learn from this that users of technology do not always have the correct mental model of its function. They often believe they can do better than the machine when they actually cannot. I have also learned that mansplaining is far less desirable than explainable AI.
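The on/off behavior described above is classic bang-bang control with hysteresis. A minimal sketch of the idea (the function name and the 0.5-degree band are illustrative, not taken from any real vehicle):

```python
def thermostat_step(cabin_temp, target_temp, heater_on, hysteresis=0.5):
    """Bang-bang thermostat: the heater element is only ever fully on
    or fully off. Near the target it alternates between the two, so an
    intermediate average temperature emerges without any manual fiddling."""
    if cabin_temp < target_temp - hysteresis:
        return True    # well below target: heat at full blast
    if cabin_temp > target_temp + hysteresis:
        return False   # above target: switch the element off
    return heater_on   # inside the band: keep the current state
```

Turning the dial to maximum changes only `target_temp`, not how hard the heater works below it, which is exactly why the impatient-driver strategy saves no time.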

One way to ensure optimal operation is to remove the users’ ability to control the system. I had the pleasure of working in a climate-controlled office building for many years. The autonomous building had many types of sensors. It could also open windows, blow cold air and operate the blinds. The building even won some architectural awards. As a result, I had no control over my office. The system would decide when to open the windows or blow cold air at my feet. This would be fantastic if it worked, but as so often, the engineers did not understand their users.

Problems occurred during meetings. The engineers had not considered how much noise the motors make when operating the windows. The system would frequently make adjustments that were so loud that we could not hear each other. After our frequent complaints to facility management remained unanswered, a competent programmer in our team hacked into the system and gave us back manual control. Eventually, the university opened the official control system to its inhabitants.

“What can we learn from all this?
Well, having two adaptive systems
trying to adapt to each OTHER
OFTEN RESULTS IN CHAOS.”

This can be two machines adapting to each other or a human and a machine engaging in mutually corrupting behavior. The engineers cannot predict the complexities of the environments and the ingenuity of users. People will start to work around autonomous systems if they are dissatisfied with them. Withholding control only results in potentially dangerous workarounds. I will soon come to another example of the ingenuity of humans overcoming autonomous systems.

However, this does not mean that the users are always right. My desire for fresh air in my office might conflict with my colleague’s desire for a warm and cozy environment. Moreover, during my absence, the autonomous building could and should switch off the lights and reduce the temperature. If nobody is in the office on the weekend, it makes no sense to heat it. What we need is a competent system that fully understands the task it is designed for in all of its complexity. It needs to be able to communicate with its users so that they can monitor its operation and can intervene when necessary. After all, the world is full of surprises.

Expensive mistakes

My examples so far have focused on autonomous systems in which the users do not have enough trust in the system or simply do not agree with its operation. This may cause discomfort and occasional quarrels between partners and colleagues. When larger machines are involved, and the users trust the machines too much, the consequences can be more severe. In my home country of New Zealand, we recently had two accidents involving the autopilots of two large ships. On June 21, 2024, the Interislander ferry Aratere ran aground only a few minutes after leaving port. Nobody was hurt, and the ferry could eventually be refloated. The Royal New Zealand Navy’s Manawanui was less lucky. It grounded on a reef on the southern side of Samoa on Saturday, October 5, 2024. Everybody was evacuated, but the vessel sank. While the Aratere was repaired, the Manawanui, worth 147 million New Zealand dollars, was lost. Salvaging the wreck will add another 40 million New Zealand dollars to the cost of the incident.

In both cases, the autopilot was engaged and the crew struggled to regain control. These were well-trained experts: captains and masters. Both times they observed their ships entering dangerous waters, and both times their attempts to turn the ship around using the helm failed. It would be easy to simply blame the helmsman, but this would not capture the complexity of the interaction between the autonomous machine and its operator.

The crew on the Aratere were aware that the autopilot was engaged and even pressed the disengage button to switch it off. They did not remember that they had to hold the button for five seconds. Preventing an accidental trigger of this important button is perhaps a good idea, but there would have been other ways to warn the crew. A prominent visual display could indicate its state. Auditory feedback could signal a change in its state. All this could have made the five-second press unnecessary. In a stressful situation, users don’t always remember all the details. It only took the Aratere one minute to deviate from its path and run aground. There was very little time for the crew to intervene.
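A hold-to-confirm guard like the one described is, in code, little more than a timestamp comparison, which is precisely why it needs explicit feedback while the button is held. A hypothetical sketch of the pattern (this is not the ship’s actual software; only the five-second threshold comes from the incident, every name here is invented for illustration):

```python
HOLD_SECONDS = 5.0  # required hold time, as on the Aratere's disengage button

class DisengageButton:
    """Hold-to-confirm autopilot disengage with explicit feedback.

    Without the on_feedback channel, the operator cannot tell a
    too-short press from a successful disengage -- the design gap
    discussed above.
    """
    def __init__(self, on_disengage, on_feedback):
        self.pressed_at = None
        self.on_disengage = on_disengage
        self.on_feedback = on_feedback

    def press(self, now):
        self.pressed_at = now
        self.on_feedback("HOLD to disengage autopilot...")

    def release(self, now):
        held = now - self.pressed_at if self.pressed_at is not None else 0.0
        self.pressed_at = None
        if held >= HOLD_SECONDS:
            self.on_disengage()
            self.on_feedback("Autopilot DISENGAGED")
            return True
        # The crucial part: a short press must loudly report failure.
        self.on_feedback("Autopilot still ENGAGED (held %.1fs of %.0fs)"
                         % (held, HOLD_SECONDS))
        return False
```

The guard against accidental presses and the guard against silent failure are two separate design decisions; the accident suggests only the first was made.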

The crew of the Manawanui was unaware that the autopilot was engaged, and when the ship did not respond to the helm, they concluded that thruster control had failed. I cannot imagine how stressful it must be when you turn the helm and nothing happens. Should ships, therefore, refrain from using autopilots? Certainly not. People make mistakes as well, and autopilots can be much better at steering a vessel. It can even be argued that a ship should automatically activate the autopilot if the crew becomes incapacitated.

What we have here is a multidimensional problem. The crews trusted the autopilot too much and attempted to switch it off too late. Once they struggled to regain control, the poorly designed systems prevented them from drawing the correct conclusions and taking back control. Both too little and too much trust in an autonomous system can cause problems. Handing control to the system and taking it back is essential and needs to be carefully designed so that it works in stressful situations.

Where the ferry captains placed too much trust in their autonomous systems, operators of tunnel boring machines sometimes lack this trust. Decades of experience can prevent operators from trusting autonomous systems, which can lead to efficiency losses. Building trust between the operators and the autonomous machines will take time.

Hidden autonomy

But what happens if the crew operating a vessel is unaware of the presence of an autonomous system to start with? This occurred when Boeing implemented the Maneuvering Characteristics Augmentation System (MCAS) in its 737 Max airplanes. Its presence was omitted from the flight manual. The MCAS system played an important role in the Lion Air accident (October 29, 2018) and the Ethiopian Airlines accident (March 10, 2019). Many lives were lost. Again, the pilots were fighting for control over the aircraft with an autonomous system. At times, the pilots were not even fully aware of the specific system they were fighting. As a result, the fleet of 737 Max airplanes was grounded, and a criminal investigation followed. The costs are estimated to run into the billions.

Where autonomy works on a large scale

You might think this would be the most dangerous autonomous system, but the aircraft industry is a shining light regarding safety, engineering and quality control. What keeps me awake at night is the fleet of autonomous vehicles (AVs) that are entering our roads. We typically distinguish between five different levels of autonomy in cars. Mercedes was the first manufacturer in 2024 to obtain permission to sell level three vehicles in the U.S. This means that the driver does not always have to watch the road. Despite several robotic taxi trials, we are still far from a large-scale deployment of fully autonomous vehicles.
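The levels mentioned above broadly follow the SAE J3016 classification. A sketch as a lookup table (descriptions abbreviated; SAE also defines a level 0 with no automation, omitted here to match the article’s count of five; the flag marks the handover the article highlights, namely whether the driver must still watch the road):

```python
# Driving-automation levels, loosely after SAE J3016. The second field
# records whether the human driver must continuously monitor the road;
# from level 3 upward the system may take over monitoring, under
# conditions, which is exactly the trust handover discussed above.
AUTONOMY_LEVELS = {
    1: ("Driver assistance", True),
    2: ("Partial automation", True),
    3: ("Conditional automation", False),  # must still take over on request
    4: ("High automation", False),
    5: ("Full automation", False),
}

def must_watch_road(level):
    """Return whether the driver must keep their eyes on the road."""
    _name, watches = AUTONOMY_LEVELS[level]
    return watches
```

The jump from level 2 to level 3 is the legally and psychologically hard one: responsibility for monitoring shifts from the human to the machine.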

This does not stop careless drivers from attaching weights to their steering wheel to trick the torque sensor into believing the driver has a hand on the wheel. After several drivers were caught fully asleep behind the wheel, some AV manufacturers also decided to use internal cameras to monitor the driver. We are in a situation in which the car cannot trust the driver, and the driver cannot trust the vehicle. What makes this problem so much more important is its scale. More than a billion cars populate our roads worldwide, and we lose more than a million lives on the road every year. It would be wrong to assume that AVs would simply add to the road toll. They also have the potential to prevent many accidents. Alcohol and drugs are a significant factor in fatal crashes. AVs can easily abstain from alcohol, and they can be set to drive within the speed limit.

We will have to balance the lives we save against the ones we lose. Death by robot will become an official cause of death more than 45 years after its first occurrence (editor’s note: when U.S. factory worker Robert Williams was killed by a robot arm).

When the machine looks human

A less dramatic but still fascinating aspect of trust’s role in the interaction between humans and machines is when the machines take on the human form. We are currently experiencing a sharp increase in the development of humanoid robots. They still look like machines, but Tesla and others are making great efforts to make people believe their robots are fully autonomous. They are not. Remote operators often have to control them, particularly in the messy and uncertain environments we call a party. During the “We Robot” event, several Optimus robots mingled with the guests and engaged them in conversations. Allowing such a robot to operate autonomously in this environment would be irresponsible. A robot could accidentally hurt a guest. Not just with a cruel joke, but physically. The dynamic and chaotic nature of human gatherings is a nightmare for engineers and their robots. Still, because they take on the human form, users anthropomorphize them. We tend to treat them as if they were social actors capable of fluent interaction with humans. We all share this vision. It would be great to have a robotic butler that does all the housework for you. But we are still far away from this.

Make clear who’s in control!

What we need right now is honesty and transparency. Calling a system “Autopilot” or “Full Self Driving” is misleading at best. Manufacturers of autonomous machines must tell us precisely what their machines can and cannot do in all their communications. Otherwise, there is a good chance we trust them too much or too little. Mixed control approaches are notoriously tricky, and under stress, humans make mistakes. We have to design our autonomous systems so that it is always clear who is in control. I am looking forward to the time when neither I nor my partner must steer the car, and we can both enjoy the incredible landscapes in New Zealand.

Author

Professor Dr. Christoph Bartneck

Dr. Christoph Bartneck is a professor in the Department of Computer Science and Software Engineering at the University of Canterbury in New Zealand. He has a background in Industrial Design and Human-Computer Interaction, and his projects and studies have been published in leading journals, newspapers, and conferences. His interests lie in the fields of Human-Computer Interaction, Science and Technology Studies, and Visual Design.

Your contact person

Steffen Dubé President and General Manager Herrenknecht Tunnelling Systems USA Inc.
Gerhard Goisser Commercial Manager Herrenknecht Tunnelling Systems USA, Inc.