The Boeing Crashes And Managing Algorithms So They Don't Manage Us

A Boeing 737 MAX 8 being built for Oman Air taxis past a Boeing hangar after landing at Boeing Field, Friday, March 22, 2019, in Seattle. In a blow for Boeing, Indonesia's flag carrier is seeking the cancellation of a multibillion-dollar order for 49 of the manufacturer's 737 MAX 8 jets, citing a loss of confidence after two crashes within five months. (Ted S. Warren/AP)

The deaths of 346 people in the crashes of two Boeing 737 MAX aircraft in the last six months are painful. The investigation of these tragedies is ongoing, and now, as investigators examine whether a new automated system in the planes is partly responsible for the crashes, isn't the time to assign blame.

Countless jobs outside aviation are intricately intertwined with algorithms, putting both lives and livelihoods at stake. We must make sure that algorithms — the step-by-step procedures computers use to solve problems — ultimately enhance our work, rather than impair it.

We know that when Boeing updated its workhorse aircraft, the 737, it also added software called MCAS, the Maneuvering Characteristics Augmentation System. The software compensated for the 737 MAX’s tendency to fly with the nose too high during maneuvers like takeoff by automatically moving a control surface on the back of the plane to bring the nose down. Boeing did not tell all pilots about the MCAS software when the 737 MAX went into service.

Few workplaces carry the same risk of mass casualties as commercial airliners, but decisions made by algorithms are common. Parole boards rely on algorithms that predict the likelihood a convicted person will reoffend to help them decide whether to grant release. Police departments use algorithms to predict crime “hot spots” and decide where to deploy patrols. Colleges use algorithms to check student papers for plagiarism. Uber drivers get paid based on one algorithm, and riders are charged based on a different one. All of these systems carry risks of error: the algorithm going awry, incorrect information being fed into it, or erroneous outputs influencing human decision-making.

Legal scholars like Frank Pasquale have called for algorithmic transparency precisely so that we can know how these algorithms, which affect our lives and choices, operate. Otherwise, they remain a “black box”: a system where the inputs and outputs are visible, but what transforms one into the other is not. In the parole algorithm, this is like knowing that the type of crime committed (an input) is one factor used to calculate the likelihood of reoffending (the output), but not knowing the weight given to that factor, or to other factors.
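To make the black box concrete, here is a minimal sketch, in Python, of how a weighted risk score of this kind might work. The factor names and weights are invented for illustration and are not taken from any real parole tool; the point is simply that the inputs and the output are visible while the weighting is not.

```python
# A minimal sketch of a "black box" risk score.
# The factor names and weights below are hypothetical; they are not
# drawn from any real parole tool. In practice, this weighting is the
# part of the system that remains hidden from the people it affects.

# Hidden inside the box: how much each factor counts.
WEIGHTS = {
    "crime_severity":    0.5,    # e.g., scored 0 (minor) to 10 (severe)
    "prior_convictions": 0.3,
    "age_at_release":   -0.05,   # older age slightly lowers the score
}

def risk_score(inputs: dict) -> float:
    """Combine the visible inputs into a single visible output."""
    return sum(WEIGHTS[factor] * value for factor, value in inputs.items())

# Visible to everyone: the inputs and the resulting number.
person = {"crime_severity": 4, "prior_convictions": 2, "age_at_release": 35}
print(risk_score(person))  # one number that may steer a parole decision
```

Transparency, in this framing, would mean disclosing not just which factors are used, but how heavily each one counts.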

No computer operates 100 percent accurately 100 percent of the time. And even a fully accurate computer exists in a human context. For example, a parole algorithm uses crime statistics generated by the disproportionate policing of poor people of color. Its computations aren’t wrong, but basing decisions on them may entrench and amplify racial bias in the criminal justice system.

When human work and machine work are combined, we should follow two best practices.

*User input at the design stage. Software doesn’t operate in a vacuum but becomes part of a network in which humans and machines influence each other. People who work alongside algorithms might not be experts in computer code, but they are experts in their own situations, which coders are seeking to influence. User input at the design stage can expand coders’ awareness of unintended consequences their interventions may cause, helping to mitigate them before they occur. Boeing is just now taking this step, inviting pilots from five different airlines to test its software update to the 737 MAX before rolling it out.

*Algorithmic transparency. This means opening up the black box of computer code by translating it into statements that people who are not software engineers can understand.

The absence of these best practices is why pilots and pilots' unions expressed outrage over the addition of a software system to the 737 MAX that could control important components of the plane, but was not described to them when it was put into service. This meant pilots were working beside an algorithm they didn’t understand. One hypothesis about the two crashes is that an erroneous input (a faulty angle-of-attack sensor reading) caused an undesirable output (the nose being pushed down). If pilots didn’t know, however, that the MCAS was transforming the input into the output, they might first attribute the effect to aerodynamics, not software. This might explain the graphs of vertical speed before both crashes in Indonesia and Ethiopia, which show the planes oscillating up and down at intervals of around 20 seconds.

If incorrect data led the MCAS to conclude the plane's nose was too high, risking a stall, it would have directed the nose down. If the pilots responded by pulling the nose up, but didn't realize they had to cut electrical power to the component the MCAS controlled, the software would have continued to counteract them at the intervals it was programmed to operate. This would explain the up-and-down movement of the planes, and why a jackscrew recovered in Ethiopia showed the component the MCAS controlled set to a nose-down angle.
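The feedback loop described above can be illustrated with a simplified simulation. This is emphatically not Boeing's software: the thresholds, timing and pitch values are invented, and the sketch exists only to show how a hidden, repeating automated correction driven by a faulty sensor could produce the oscillating pattern seen in the flight data.

```python
# A simplified, hypothetical illustration of the feedback loop described
# above. This is NOT Boeing's MCAS code: the thresholds, timing and pitch
# values are invented. It only shows how a repeating automated correction,
# fed by a bad sensor, can fight pilots who don't know it is running.

FAULTY_SENSOR_ANGLE = 25.0   # erroneously high angle-of-attack reading, in degrees
STALL_THRESHOLD = 15.0       # above this, the software pushes the nose down
CYCLE_SECONDS = 20           # how often the automated correction repeats

def automated_trim(pitch: float, sensor_angle: float, power_cut: bool) -> float:
    """Push the nose down whenever the sensor claims the angle is too high."""
    if not power_cut and sensor_angle > STALL_THRESHOLD:
        return pitch - 5.0   # automated nose-down command
    return pitch

def simulate(seconds: int, pilots_cut_power: bool) -> list:
    pitch, trace = 0.0, []
    for _ in range(0, seconds, CYCLE_SECONDS):
        pitch = automated_trim(pitch, FAULTY_SENSOR_ANGLE, pilots_cut_power)
        trace.append(pitch)              # nose pushed down by the software
        pitch = min(pitch + 5.0, 0.0)    # pilots pull the nose back up
        trace.append(pitch)              # recovered, until the next cycle
    return trace

print(simulate(120, pilots_cut_power=False))  # pitch sawing down and up each cycle
print(simulate(120, pilots_cut_power=True))   # level flight once power is cut
```

In this sketch, pilots who know the loop exists can break it by cutting power; pilots who don't see the same behavior as an aerodynamic problem they cannot correct.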

We cannot know if the MCAS software played any role in the 737 MAX tragedies until the investigations are completed. But we know that as humans and algorithms increasingly work together, the nature of our accidents will change. We are surrounded by software algorithms that determine what we see, what options we have and how the technology we use responds to us. We should be talking about and implementing — through collective bargaining, through legislation or regulation — best practices for algorithmic workplaces, because more and more of us are in them.

H. C. Robinson, Cognoscenti contributor
H. C. Robinson is associate professor of law and sociology at Northeastern University, where she studies interactions between technology, workers and the law.
