The Double-Edged Sword of Neuroscience Advances
The emerging ethical dilemmas we're facing.
Posted Aug 10, 2018
Original link: https://www.psychologytoday.com/us/blog/the-social-brain/201808/the-double-edged-sword-neuroscience-advances
By The Ohio State University Wexner Medical Center’s Neuroscience Research Institute and The Stanley D. and Joan H. Ross Center for Brain Health and Performance
New research into the brain is fueling breakthroughs in fields as diverse as healthcare and computer science. At the same time, these advances may lead to ethical dilemmas in the coming decades—or, in some cases, much sooner. Neuroethics was the subject of a panel discussion at the recent Brain Health and Performance Summit, presented by The Ohio State University Wexner Medical Center’s Neuroscience Research Institute and The Stanley D. and Joan H. Ross Center for Brain Health and Performance.
John Banja, Ph.D., Professor in the Department of Rehabilitation Medicine and a medical ethicist at the Center for Ethics at Emory University, explained how insights from neuroscience could make it possible to develop hyper-intelligent computer programs. Simultaneously, our deepening understanding of the brain exposes the inherent shortcomings of even the most advanced artificial intelligence (AI).
“How will we ever program a computer to have the kind of learning experiences and navigational knowledge that people have in life itself?” Banja asked. He questioned whether it would ever be possible to create AI capable of human-level imagination or moral reasoning, and whether a computer program could ever reproduce the processes the human brain applies to complex situations. As an example, he posed an ethical dilemma to the audience: Should a hospital respect a wife’s desire to preserve her dead husband’s sperm even if the husband never consented to such a procedure? By show of hands, the question split an audience full of scientists and medical professionals. Banja doubted whether a computer could be trusted to resolve issues that divide even the most qualified human beings. “How are we ever going to program a computer to think like that?” Banja said, referring to the process of working through his hypothetical. “They’re good at image recognition, but they’re not very good at tying a shoelace.”
The moral shortcomings of AI raise a number of worrying possibilities, especially since the technology needed to create high-functioning computers will soon be a reality. “Artificial super-intelligence might be the last invention that humans ever make,” warned Banja. Hyper-intelligent computers could begin to see human life as a threat and then acquire the means of exterminating it—without ever being checked by human feelings of doubt or remorse.
According to Eran Klein, M.D., Ph.D., a neurologist and ethicist at Oregon Health & Science University and the University of Washington’s Center for Sensorimotor Neural Engineering, far less abstract questions already confront neuroscientists and other brain health professionals. He believes the AI apocalypse is still a far-off, worst-case scenario. But patients are already receiving non-pharmaceutical therapies that can alter their mood and outlook, such as brain implants meant to combat depression. These treatments could be life-changing, as well as safer and more effective than traditional drugs. However, they could also skew a patient’s sense of identity. “Patients felt these devices allowed them to be more authentic,” Klein explained. “It allowed them to be the person they always wanted to be or didn’t realize they could be.”
Still, the treatments distorted some patients’ conception of their own selfhood, leaving them unsure of the boundary between the brain implant and their own free will. “There were concerns about agency,” Klein said. “Patients are not sure if what they’re feeling is because of themselves or because of the device.” For example, Klein described one patient attending a funeral and being unable to cry. “He didn’t know if it was because the device was working or because he didn’t love this person as much as he thought he did,” Klein explained. As the technology improves, Klein anticipates that patients and doctors will have to balance the benefits of certain techniques against their possible effects on the sense of self.
That is not where the big questions will end. For James Giordano, Ph.D., Chief of the Neuroethics Studies Program at the Pellegrino Center for Clinical Bioethics at Georgetown University Medical Center, neuroscience could change how society approaches crucial questions of human nature, with major implications for law, privacy, and other areas that would not appear to have a direct connection to brain health. Giordano predicted that a new field of “neuro-law” could emerge, with scientists and legal scholars helping to determine the proper status of neuroscience in the legal system.
When, for instance, should neurological explanations of human behavior be admissible as evidence of a defendant’s innocence? Neuroscience allows for a granular understanding of how individual brains work, creating a wealth of information that the medical field could conceivably abuse. “Are the brain sciences prepared to protect us, or in some way is our privacy being impugned?” Giordano asked. Echoing Klein, Giordano wondered whether brain science could make it perilously easy to shape a person’s personality and sense of self, potentially against a patient’s will or absent an understanding of the implications of a given therapy. “Can we ‘abolish’ pain, sadness, and suffering, and expand cognitive, emotional, or moral capability?” Giordano asked. Neuroscience could create new baselines of medical or behavioral normalcy, shifting our idea of what is and is not acceptable. “What will the new culture be when we use neuroscience to define what is normal and abnormal, who is functional and dysfunctional?”
Giordano warned that with technology rapidly improving, the need for answers will become ever more urgent. “Reality check,” Giordano said. “This stuff is coming.”