Category: Ethics and Law

Man With Motor Neuron Disease Presents His Case for Euthanasia

At a hearing, a man whose condition is deteriorating from motor neuron disease (MND) laid out why he wants the right to choose patient-assisted suicide.

Diethelm Harck, 71, is seeking a change in the laws surrounding euthanasia in South Africa. He is presenting his case alongside Dr Suzanne Walter, a palliative care specialist who has multiple myeloma. Neither may live long enough to see the outcome of their application, and both have set up trust funds to support the effort to overturn South Africa's laws on euthanasia and patient-assisted suicide.

In his evidence, Mr Harck said that he loves life “but my biggest fear is that when my love of life reaches the stage of fearing life, I will not be able to die.”

In the hearing, held via Zoom, Mr Harck said that he used to exercise daily but now takes three hours to complete a simple routine like getting up and making breakfast. Mr Harck said that his deterioration would be progressive: his muscles are weakening, and eventually he will be unable to breathe as his diaphragm stops working.

“From what I have seen and witnessed, MND death is not peaceful,” he said. “I have seen a number of colleagues and [support] group members pass away. They had no way to communicate. And they could not breathe easily.”

Mr Harck continued, “We once visited a young girl suffering from MND, who was totally paralysed. She could only speak with the help of an eye gaze machine. When Lynn [Mr Harck’s life partner] asked her what she feared the most, she said not being able to die.”

Their application is being opposed by the Health Professions Council of South Africa, the Ministers of Health and Justice, and the National Director of Public Prosecutions. They claimed that palliative care is available to most South Africans and that the right to life must be protected by the ban on euthanasia.

Source: Eyewitness News

Social Cues Impact Human Decision-making in Emergencies

Man at the wheel of a car. Photo by why kei on Unsplash.

A study showed that, when participants in a simulated crash of an autonomous vehicle were told that others had chosen to crash into a wall to save pedestrians, their own willingness to do so went up by two-thirds.

As autonomous vehicles become more commonplace, and the need to program them for safety emerges, a better understanding of how humans react in such situations is needed. Study author Jonathan Gratch, the principal investigator for the project and a computer scientist at the USC Institute for Creative Technologies, said that current models of how humans behave in life-or-death situations differ from how humans actually think. There are no moral absolutes; rather, “it is more nuanced”.

Seeking to understand how humans make decisions in life-or-death situations, and how to apply those findings to the programming of autonomous vehicles and robots, the researchers presented participants with a modified version of the ‘trolley problem’.

The trolley problem is a classic hypothetical scenario psychologists use to investigate human decision-making. Essentially, it involves the decision to divert a tram to hit one person or to leave it on its track and hit five, and it has a number of variations. In one medical variation of the trolley problem, one person could be killed and their organs harvested to save five terminally ill patients — a choice that is overwhelmingly rejected.  

In three of the four simulations presented to them, the participants had to choose whether to tell their autonomous vehicle to hit a wall, risking harm to themselves, or to hit five pedestrians. The higher the likelihood of injury to the pedestrians, the more likely the participants were to choose hitting the wall and risking self-harm. The authors showed that, in making this choice, people weigh the risk of injury to themselves against the potential injury to others.

In the fourth scenario, a social element was added: participants were told that their peers had chosen to save the pedestrians. In this case, the proportion of participants electing to save the pedestrians went up from 30% to 50%.
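The two figures reported here are consistent with the “two-thirds” increase mentioned at the start of the piece; a minimal sketch of the arithmetic, assuming the 30% and 50% shares quoted above:

```python
# Relative increase in willingness to save the pedestrians,
# using the proportions reported in the study summary.
before = 0.30  # share choosing to save pedestrians without social cue
after = 0.50   # share choosing to save pedestrians after the social cue

relative_increase = (after - before) / before
print(round(relative_increase, 3))  # → 0.667, i.e. roughly two-thirds
```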

However, Gratch said there is a reverse effect as well: “Technically there are two forces at work. When people realize their peers don’t care, this pulls people down to selfishness. When they realize they care, this pulls them up.”

The researchers showed that using the trolley problem as a basis for decision-making is insufficient, as it fails to capture the complexity of human decision-making. They also concluded that transparency in the programming of autonomous machines is important for the public, and that human operators should be able to assume control in the event of an emergency.

Source: News-Medical.Net

Journal information: de Melo, C. M., et al. (2021) Risk of Injury in Moral Dilemmas With Autonomous Vehicles. Frontiers in Robotics and AI. doi.org/10.3389/frobt.2020.572529.