Sunday, July 22, 2018

Parallel universes, the Matrix, and superintelligence

June 26, 2003 by Michio Kaku
Original link: http://www.kurzweilai.net/parallel-universes-the-matrix-and-superintelligence
Published on KurzweilAI.net June 26, 2003.

Physicists are converging on a “theory of everything,” probing the 11th dimension, developing computers for the next generation of robots, and speculating about civilizations millions of years ahead of ours, says Dr. Michio Kaku, author of the best-sellers Hyperspace and Visions and co-founder of String Field Theory, in this interview by KurzweilAI.net Editor Amara D. Angelica.


What are the burning issues for you currently?

Well, several things. Professionally, I work on something called Superstring theory, now called M-theory, and the goal is to find an equation, perhaps no more than one inch long, which will allow us to "read the mind of God," as Einstein used to say.

In other words, we want a single theory that gives us an elegant, beautiful representation of the forces that govern the Universe. Now, after two thousand years of investigation into the nature of matter, we physicists believe that there are four fundamental forces that govern the Universe.

Some physicists have speculated about the existence of a fifth force, which may be some kind of paranormal or psychic force, but so far we find no reproducible evidence of a fifth force.

Now, each time a force has been mastered, human history has undergone a significant change. In the 1600s, when Isaac Newton first unraveled the secret of gravity, he also created a mechanics. And from Newton’s Laws and his mechanics, the foundation was laid for the steam engine, and eventually the Industrial Revolution.

So, in other words, in some sense, a byproduct of the mastery of the first force, gravity, helped to spur the creation of the Industrial Revolution, which in turn is perhaps one of the greatest revolutions in human history.

The second great force is the electromagnetic force; that is, the force of light, electricity, magnetism, the Internet, computers, transistors, lasers, microwaves, x-rays, etc.

And then in the 1860s, it was James Clerk Maxwell, the Scottish physicist at Cambridge University, who finally wrote down Maxwell’s equations, which allow us to summarize the dynamics of light.

That helped to unleash the Electric Age, and the Information Age, which have changed all of human history. Now it’s hard to believe, but Newton’s equations and Einstein’s equations are no more than about half an inch long.

Maxwell’s equations are also about half an inch long. For example, Maxwell’s equations say that the “four-dimensional divergence of an antisymmetric, second-rank tensor equals zero.” That’s Maxwell’s equations, the equations for light. And in fact, at Berkeley, you can buy a T-shirt which says, "In the beginning, God said the four-dimensional divergence of an antisymmetric, second rank tensor equals zero, and there was Light, and it was good."
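In modern tensor notation (not part of the interview itself), the statement on the T-shirt corresponds to the covariant form of Maxwell's equations, where F is the antisymmetric electromagnetic field tensor and J the four-current; with no sources, the first equation is exactly the "four-dimensional divergence equals zero" statement:

```latex
% Covariant Maxwell equations; the source-free case \partial_\mu F^{\mu\nu} = 0
% is the "divergence of an antisymmetric second-rank tensor" quoted above.
\partial_\mu F^{\mu\nu} = \mu_0 J^\nu , \qquad \partial_{[\lambda} F_{\mu\nu]} = 0
```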

So, the mastery of the first two forces helped to unleash, respectively, the Industrial Revolution and the Information Revolution.

The last two forces are the weak nuclear force and the strong nuclear force, and they in turn have helped us to unlock the secret of the stars, via Einstein's equation E = mc². Many people think that far in the future, the human race may ultimately derive its energy not only from solar power, which is the power of fusion, but also from fusion power on the Earth, in terms of fusion reactors, which operate on seawater and do not create copious quantities of radioactive waste.

So, in summary, the mastery of each force helped to unleash a new revolution in human history.

Today, we physicists are embarking upon the greatest quest of all, which is to unify all four of these forces into a single comprehensive theory. The first force, gravity, is now represented by Einstein's General Theory of Relativity, which gives us the Big Bang, black holes, and the expanding universe. It's a theory of the very large; it's a theory of smooth space-time manifolds, like bedsheets and trampoline nets.

The second theory, the quantum theory, is the exact opposite. The quantum theory allows us to unify the electromagnetic, weak and strong forces. However, it is based on discrete, tiny packets of energy called quanta, rather than smooth bedsheets, and it is based on probabilities, rather than the certainty of Einstein's equations. Together, these two theories summarize the sum total of our knowledge of the physical universe.

Any equation describing the physical universe ultimately is derived from one of these two theories. The problem is these two theories are diametrically opposed. They are based on different assumptions, different principles, and different mathematics. Our job as physicists is to unify the two into a single, comprehensive theory. Now, over the last decades, the giants of the twentieth century have tried to do this and have failed.

For example, Niels Bohr, the founder of atomic physics and the quantum theory, was very skeptical about many attempts over the decades to create a Unified Field Theory. One day, Wolfgang Pauli, Nobel laureate, was giving a talk about his version of the Unified Field Theory, and in a very famous story, Bohr stood up in the back of the room and said, "Mr. Pauli, we in the back are convinced that your theory is crazy. What divides us is whether your theory is crazy enough."

So today, we realize that a true Unified Field Theory must be bizarre, must be fantastic, incredible, mind-boggling, crazy, because all the sane alternatives have been studied and discarded.

Today we have string theory, which is based on the idea that the subatomic particles we see in nature are nothing but notes on a tiny, vibrating string. If you kick the string, then an electron will turn into a neutrino. If you kick it again, the vibrating string will turn from a neutrino into a photon or a graviton. And if you kick it enough times, the vibrating string will then mutate into all the subatomic particles.

Therefore, in some sense, we no longer have to deal with the thousands of subatomic particles coming from our atom smashers; we just have to realize that what makes them, what drives them, is a vibrating string. Now when these strings collide, they form atoms and nuclei, and so in some sense, the melodies that you can write on the string correspond to the laws of chemistry. Physics is then reduced to the laws of harmony that we can write on a string. The Universe is a symphony of strings. And what is the mind of God that Einstein used to write about? According to this picture, the mind of God is music resonating through ten- or eleven-dimensional hyperspace, which of course raises the question, "If the universe is a symphony, then is there a composer to the symphony?" But that's another question.

Parallel worlds

What do you think of Sir Martin Rees’ concerns about the risk of creating black holes on Earth in his book, Our Final Hour?

I haven’t read his book, but perhaps Sir Martin Rees is referring to many press reports that claim that the Earth may be swallowed up by a black hole created by our machines. This started with a letter to the editor in Scientific American asking whether the RHIC accelerator in Brookhaven, Long Island, will create a black hole which will swallow up the earth. This was then picked up by the Sunday London Times who then splashed it on the international wire services, and all of a sudden, we physicists were deluged with hundreds of emails and telegrams asking whether or not we are going to destroy the world when we create a black hole in Long Island.

However, you can calculate that in outer space, cosmic rays have more energy than the particles produced in our most powerful atom smashers, and those collisions do not form black holes. Not to mention the fact that to create a black hole, you would need the mass of a giant star. In fact, an object ten to fifty times the mass of our Sun may in fact form a black hole. So the probability of a black hole forming in Long Island is zero.

However, Sir Martin Rees has also written a book about the Multiverse. And that is also the subject of my next book, coming out late next year, called Parallel Worlds. We physicists no longer believe in a single Universe. We physicists believe in a Multiverse that resembles the boiling of water. Water boils when tiny particles, or bubbles, form, which then begin to rapidly expand. If our Universe is a bubble in boiling water, then perhaps Big Bangs happen all the time.

Now, the Multiverse idea is consistent with Superstring theory, in the sense that Superstring theory has millions of solutions, each of which seems to correspond to a self-consistent Universe. So in some sense, Superstring theory is drowning in its own riches. Instead of predicting a unique Universe, it seems to allow the possibility of a Multiverse of Universes.

This may also help to answer the question raised by the Anthropic Principle. Our Universe seems to have known that we were coming. The conditions for life are extremely stringent. Life and consciousness can only exist in a very narrow band of physical parameters. For example, if the proton were not stable, then the Universe would collapse into a useless heap of electrons and neutrinos. If the proton were a little bit different in mass, it would decay, and all our DNA molecules would decay along with it.

In fact, there are hundreds, perhaps thousands, of coincidences, happy coincidences, that make life possible. Life, and especially consciousness, is quite fragile. It depends on stable matter, like protons, that exists for billions of years in a stable environment, sufficient to create autocatalytic molecules that can reproduce themselves, and thereby create Life. In physics, it is extremely hard to create this kind of Universe. You have to play with the parameters, you have to juggle the numbers, cook the books, in order to create a Universe which is consistent with Life.

However, the Multiverse idea explains this problem, because it simply means we coexist with dead Universes. In other Universes, the proton is not stable. In other Universes, the Big Bang took place, and then it collapsed rapidly into a Big Crunch, or these Universes had a Big Bang, and immediately went into a Big Freeze, where temperatures were so low, that Life could never get started.

So, in the Multiverse of Universes, many of these Universes are in fact dead, and our Universe in this sense is special, in that Life is possible in this Universe. Now, in religion, we have the Judeo-Christian idea of an instant of time, a genesis, when God said, "Let there be light." But in Buddhism, we have a contrasting philosophy, which says that the Universe is timeless: it had no beginning and it has no end, it just is, eternal.

The Multiverse idea allows us to combine these two pictures into a coherent, pleasing picture. It says that in the beginning, there was nothing, nothing but hyperspace, perhaps ten- or eleven-dimensional hyperspace. But hyperspace was unstable, because of the quantum principle. And because of the quantum principle, there were fluctuations, fluctuations in nothing. This means that bubbles began to form in nothing, and these bubbles began to expand rapidly, giving us the Universe. So, in other words, the Judeo-Christian genesis takes place within the Buddhist nirvana, all the time, and our Multiverse percolates universes.

Now this also raises the possibility of Universes that look just like ours, except there’s one quantum difference. Let’s say for example, that a cosmic ray went through Churchill’s mother, and Churchill was never born, as a consequence. In that Universe, which is only one quantum event away from our Universe, England never had a dynamic leader to lead its forces against Hitler, and Hitler was able to overcome England, and in fact conquer the world.

So, we are one quantum event away from Universes that look quite different from ours, and it's still not clear how we physicists resolve this question. This paradox revolves around the Schrödinger's Cat problem, which is still largely unsolved. In any quantum theory, we have the possibility that atoms can exist in two places at the same time, in two states at the same time. And then Erwin Schrödinger, one of the founders of quantum mechanics, asked the question: let's say we put a cat in a box, and the cat is connected to a jar of poison gas, which is connected to a hammer, which is connected to a Geiger counter, which is connected to uranium. Everyone believes that uranium has to be described by the quantum theory. That's why we have atomic bombs, in fact. No one disputes this.

But if the uranium decays, triggering the Geiger counter, setting off the hammer and smashing the jar of poison gas, then the gas might kill the cat. And so, is the cat dead or alive? Believe it or not, we physicists have to superimpose, or add together, the wave function of a dead cat with the wave function of a live cat. So the cat is neither dead nor alive.

This is perhaps one of the deepest questions in all the quantum theory, with Nobel laureates arguing with other Nobel laureates about the meaning of reality itself.

Now, in philosophy, solipsists like Bishop Berkeley used to believe that if a tree fell in the forest and there was no one there to listen to the tree fall, then perhaps the tree did not fall at all. However, Newtonians believe that if a tree falls in the forest, that you don’t have to have a human there to witness the event.

The quantum theory puts a whole new spin on this. The quantum theory says that before you look at the tree, the tree could be in any possible state. It could be a sapling, it could be firewood, it could be burnt to the ground. It could be in any of an infinite number of possible states. Now, when you look at it, it suddenly springs into existence and becomes a tree.

Einstein never liked this. When people used to come to his house, he used to ask them, "Look at the moon. Does the moon exist because a mouse looks at the moon?" Well, in some sense, yes. According to the Copenhagen school of Niels Bohr, observation determines existence.

Now, there are at least two ways to resolve this. The first is the Wigner school. Eugene Wigner was one of the creators of the atomic bomb and a Nobel laureate. And he believed that observation creates the Universe. An infinite sequence of observations is necessary to create the Universe, and in fact, maybe there’s a cosmic observer, a God of some sort, that makes the Universe spring into existence.

There's another theory, however, called decoherence, or many worlds, which holds that the Universe simply splits each time, so that we live in a world where the cat is alive, but there's an equally real world where the cat is dead. In that world, they have people, they react normally, they think that their world is the only world, but in that world, the cat is dead. And, in fact, we exist simultaneously with that world.

This means that there's probably a Universe where you were never born, but everything else is the same. Or perhaps your mother had extra brothers and sisters for you, in which case your family is much larger. Now, this can be compared to sitting in a room, listening to the radio. When you listen to the radio, you hear many frequencies. They exist simultaneously all around you in the room. However, your radio is only tuned to one frequency. In the same way, in your living room, there is the wave function of dinosaurs. There is the wave function of aliens from outer space. There is the wave function of a Roman Empire that never fell, 1,500 years ago.

All of this coexists inside your living room. However, just like you can only tune into one radio channel, you can only tune into one reality channel, and that is the channel that you exist in. So, in some sense it is true that we coexist with all possible universes. The catch is, we cannot communicate with them, we cannot enter these universes.

However, I personally believe that at some point in the future, that may be our only salvation. The latest cosmological data indicates that the Universe is accelerating, not slowing down, which means the Universe will eventually hit a Big Freeze, trillions of years from now, when temperatures are so low that it will be impossible for any intelligent being to survive.

When the Universe dies, there’s one and only one way to survive in a freezing Universe, and that is to leave the Universe. In evolution, there is a law of biology that says if the environment becomes hostile, either you adapt, you leave, or you die.

When the Universe freezes and temperatures reach near absolute zero, you cannot adapt. The laws of thermodynamics are quite rigid on this question. Either you will die, or you will leave. This means, of course, that we have to create machines that will allow us to enter eleven-dimensional hyperspace. This is still quite speculative, but String theory, in some sense, may be our only salvation. For any advanced civilization, ours included, the choice is the same: either we leave or we die.

That brings up a question. Matrix Reloaded seems to be based on parallel universes. What do you think of the film in terms of its metaphors?

Well, the technology found in the Matrix would correspond to that of an advanced Type I or Type II civilization. We physicists, when we scan outer space, do not look for little green men in flying saucers. We look for the total energy outputs of a civilization in outer space, with a characteristic frequency. Even if intelligent beings tried to hide their existence, by the second law of thermodynamics, they create entropy, which should be visible with our detectors.

So we classify civilizations on the basis of energy outputs. A Type I civilization is planetary. They control all planetary forms of energy. They would control, for example, the weather, volcanoes, earthquakes; they would mine the oceans, any planetary form of energy they would control. Type II would be stellar. They play with solar flares. They can move stars, ignite stars, play with white dwarfs. Type III is galactic, in the sense that they have now conquered whole star systems, and are able to use black holes and star clusters for their energy supplies.

Each type is separated from the previous one by a factor of ten billion in energy consumption. Therefore, you can calculate numerically at what point civilizations may begin to harness certain kinds of technologies. In order to access wormholes and parallel universes, you would probably have to be a Type III civilization, because by definition, a Type III civilization has enough energy to play with the Planck energy.
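To make the factor-of-ten-billion spacing concrete, here is a tiny sketch; the absolute numbers are illustrative assumptions rather than figures from the interview (a commonly quoted planetary power budget is on the order of 10^17 watts):

```python
# Illustrative Kardashev-style energy ladder: each type is assumed to command
# ten billion (1e10) times the power of the previous one, per the interview.
TYPE_I_WATTS = 1e17   # assumed planetary power budget (order of sunlight striking Earth)
STEP = 1e10           # separation factor between successive types

for k, label in enumerate(["Type I (planetary)", "Type II (stellar)", "Type III (galactic)"]):
    print(f"{label}: ~{TYPE_I_WATTS * STEP ** k:.0e} W")
# Type I (planetary):  ~1e+17 W
# Type II (stellar):   ~1e+27 W
# Type III (galactic): ~1e+37 W
```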

The Planck energy, or 10^19 billion electron volts, is the energy at which space-time becomes unstable. If you were to heat up, in your microwave oven, a piece of space-time to that energy, then bubbles would form inside your microwave oven, and each bubble in turn would correspond to a baby Universe.
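For reference (this is not in the interview itself), the Planck energy follows directly from the fundamental constants; the standard textbook value is:

```latex
% Planck energy in terms of hbar, c and Newton's constant G.
E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 1.22 \times 10^{19}\ \mathrm{GeV} \approx 2 \times 10^{9}\ \mathrm{J}
```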

Now, in the Matrix, several metaphors are raised. One metaphor is whether computing machines can create artificial realities. That would require a civilization centuries or millennia ahead of ours, which would place it squarely as a Type I or Type II civilization.

However, we also have to ask a practical question: is it possible to create implants that could access our memory banks to create this artificial reality, and are machines dangerous? My answer is the following. First of all, cyborgs with neural implants: the technology does not exist, and probably won’t exist for at least a century, for us to access the central nervous system. At present, we can only do primitive experiments on the brain.

For example, at Emory University in Atlanta, Georgia, it’s possible to put a glass implant into the brain of a stroke victim, and the paralyzed stroke victim is able to, by looking at the cursor of a laptop, eventually control the motion of the cursor. It’s very slow and tedious; it’s like learning to ride a bicycle for the first time. But the brain grows into the glass bead, which is placed into the brain. The glass bead is connected to a laptop computer, and over many hours, the person is able to, by pure thought, manipulate the cursor on the screen.

So, the central nervous system is basically a black box. Except for some primitive hookups to the visual system of the brain, we scientists have not been able to access most bodily functions, because we simply don't know the code for the spinal cord and for the brain. So, neural implant technology, I believe, is a hundred years, maybe centuries, away.

Will robots take over?

On the other hand, we have to consider yet another metaphor raised by the Matrix, and that is: are machines dangerous? And the answer is, potentially, yes. However, at present, our robots have the intelligence of a cockroach, in the sense that pattern recognition and common sense are the two most difficult, unsolved problems in artificial intelligence theory. Pattern recognition means the ability to see, hear, and to understand what you are seeing and hearing. Common sense means your ability to make sense out of the world, which even children can perform.

Those two problems are at the present time largely unsolved. Now, I think, however, that within a few decades, we should be able to create robots as smart as mice, maybe dogs and cats. However, when machines start to become as smart as monkeys, I think we should put a chip in their brain, to shut them off when they start to have murderous thoughts.

By the time you have monkey intelligence, you begin to have self-awareness, and with self-awareness, you begin to have an agenda created by a monkey for its own purposes. And at that point, a mechanical monkey may decide that its agenda is different from our agenda, and at that point they may become dangerous to humans. I think we have several decades before that happens, and Moore’s Law will probably collapse in 20 years anyway, so I think there’s plenty of time before we come to the point where we have to deal with murderous robots, like in the movie 2001.

So you differ with Ray Kurzweil’s concept of using nanobots to reverse-engineer and upload the brain, possibly within the coming decades?

Not necessarily. I'm just laying out a linear course, the trajectory of where artificial intelligence theory is going today. And that is, one, trying to build machines which can navigate and roam in our world, and two, robots which can make sense out of the world. However, there's another divergent path one might take, and that's to harness the power of nanotechnology. However, nanotechnology is still very primitive. At the present time, we can barely build arrays of atoms. We cannot yet build the first atomic gear, for example. No one has created an atomic wheel with ball bearings. So simple machines, which even children can play with in their toy sets, don't yet exist at the atomic level. However, on a scale of decades, we may be able to create atomic devices that begin to mimic our own devices.

Molecular transistors can already be made. Nanotubes allow us to create strands of material that are super-strong. However, nanotechnology is still in its infancy and therefore, it’s still premature to say where nanotechnology will go. However, one place where technology may go is inside our body. Already, it’s possible to create a pill the size of an aspirin pill that has a television camera that can photograph our insides as it goes down our gullet, which means that one day surgery may become relatively obsolete.

In the future, it’s conceivable we may have atomic machines that enter the blood. And these atomic machines will be the size of blood cells and perhaps they would be able to perform useful functions like regulating and sensing our health, and perhaps zapping cancer cells and viruses in the process. However, this is still science fiction, because at the present time, we can’t even build simple atomic machines yet.

Are we living in a simulation?

Is there any possibility, similar to the premise of The Matrix, that we are living in a simulation?

Well, philosophically speaking, it's always possible that the universe is a dream, and it's always possible that our conversation with our friends is a by-product of the pickle that we had last night that upset our stomach. However, science is based upon reproducible evidence. When we go to sleep and we wake up the next day, we usually wind up in the same universe. It is reproducible. No matter how we try to avoid certain unpleasant situations, they come back to us. That is reproducible. So reality, as we commonly believe it to exist, is a reproducible experiment, it's a reproducible sensation. Therefore, in principle, you could never rule out the possibility that the world is a dream, but the fact of the matter is, the universe as it exists is a reproducible universe.

Now, in the Matrix, a computer simulation was run so that virtual reality became reproducible. Every time you woke up, you woke up in that same virtual reality. That technology, of course, does not violate the laws of physics. There’s nothing in relativity or the quantum theory that says that the Matrix is not possible. However, the amount of computer power necessary to drive the universe and the technology necessary for a neural implant is centuries to millennia beyond anything that we can conceive of, and therefore this is something for an advanced Type I or II civilization.

Why is a Type I required to run this kind of simulation? Is number crunching the problem?

Yes, it’s simply a matter of number crunching. At the present time, we scientists simply do not know how to interface with the brain. You see, one of the problems is, the brain, strictly speaking, is not a digital computer at all. The brain is not a Turing machine. A Turing machine is a black box with an input tape and an output tape and a central processing unit. That is the essential element of a Turing machine: information processing is localized in one point. However, our brain is actually a learning machine; it’s a neural network.

Many people find this hard to believe, but there's no software, there is no operating system, there is no Windows programming for the brain. The brain is a vast collection of perhaps a hundred billion neurons, each with 10,000 connections, which slowly and painfully interacts with the environment. Some neural pathways are genetically programmed to give us instinct. However, for the most part, our cerebral cortex has to be reprogrammed every time we bump into reality.

As a consequence, we cannot simply put a chip in our brain that augments our memory and enhances our intelligence. Memory and thinking, we now realize, is distributed throughout the entire brain. For example, it’s possible to have people with only half a brain. There was a documented case recently where a young girl had half her brain removed and she’s still fully functional.

So, the brain can operate with half of its mass removed. However, you remove one transistor in your Pentium computer and the whole computer dies. So, there’s a fundamental difference between digital computers–which are easily programmed, which are modular, and you can insert different kinds of subroutines in them–and neural networks, where learning is distributed throughout the entire device, making it extremely difficult to reprogram. That is the reason why, even if we could create an advanced PlayStation that would run simulations on a PC screen, that software cannot simply be injected into the human brain, because the brain has no operating system.
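As a toy illustration of what "learning distributed throughout the entire device" means (my own sketch, not an example from the interview), here is a minimal perceptron: everything it "knows" is smeared across its weights, and every training example nudges all of them, so there is no single instruction you could patch the way you would edit a program:

```python
# Minimal perceptron learning the OR function. Its "knowledge" lives entirely
# in three weights that are all adjusted together; there is no separate
# program or operating system to edit.
weights = [0.0, 0.0, 0.0]   # bias, w1, w2

def predict(x1, x2):
    s = weights[0] + weights[1] * x1 + weights[2] * x2
    return 1 if s > 0 else 0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
for _ in range(20):                       # a few passes over the data
    for (x1, x2), target in data:
        error = target - predict(x1, x2)  # 0 if correct, +/-1 if wrong
        weights[0] += 0.1 * error         # every mistake nudges *all* weights
        weights[1] += 0.1 * error * x1
        weights[2] += 0.1 * error * x2

print([predict(x1, x2) for (x1, x2), _ in data])   # [0, 1, 1, 1]
```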

Ray Kurzweil’s next book, The Singularity is Near, predicts that possibly within the coming decades, there will be super-intelligence emerging on the planet that will surpass that of humans. What do you think of that idea?

Yes, that sounds interesting. But Moore’s Law will have collapsed by then, so we’ll have a little breather. In 20 years time, the quantum theory takes over, so Moore’s Law collapses and we’ll probably stagnate for a few decades after that. Moore’s Law, which states that computer power doubles every 18 months, will not last forever. The quantum theory giveth, the quantum theory taketh away. The quantum theory makes possible transistors, which can be etched by ultraviolet rays onto smaller and smaller chips of silicon. This process will end in about 15 to 20 years. The senior engineers at Intel now admit for the first time that, yes, they are facing the end.

The thinnest layer on a Pentium chip consists of about 20 atoms. When we start to hit five atoms in the thinnest layer of a Pentium chip, the quantum theory takes over, electrons can now tunnel outside the layer, and the Pentium chip short-circuits. Therefore, within a 15 to 20 year time frame, Moore’s Law could collapse, and Silicon Valley could become a Rust Belt.
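A rough back-of-the-envelope (my own, with the halving rate of the thinnest layer as an explicit assumption rather than anything stated in the interview) shows how quickly a 20-atom layer reaches the 5-atom tunneling regime:

```python
# How long until the thinnest chip layer shrinks from ~20 atoms to ~5 atoms,
# assuming (hypothetically) that the layer thickness halves every `halving_years`?
import math

START_ATOMS = 20   # thinnest Pentium layer cited in the interview
LIMIT_ATOMS = 5    # scale at which electrons tunnel out, per the interview

def years_to_limit(halving_years: float) -> float:
    halvings = math.log2(START_ATOMS / LIMIT_ATOMS)   # 2 halvings
    return halvings * halving_years

for hy in (6, 8, 10):
    print(f"thickness halving every {hy:2d} yr -> ~{years_to_limit(hy):.0f} years to reach 5 atoms")
# With halving times of 8-10 years this lands in the 15-20 year window quoted above.
```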

This means that we physicists are desperately trying to create the architecture for the post-silicon era. This means using quantum computers, quantum dot computers, optical computers, DNA computers, atomic computers, molecular computers, in order to bridge the gap when Moore’s Law collapses in 15 to 20 years. The wealth of nations depends upon the technology that will replace the power of silicon.

This also means that you cannot project artificial intelligence exponentially into the future. Some people think that Moore's Law will extend forever, in which case humans will be reduced to zoo animals and our robot creations will throw peanuts at us and make us dance behind bars. Now, that may eventually happen. It is certainly consistent with the laws of physics.

However, the laws of the quantum theory say that we’re going to face a massive problem 15 to 20 years from now. Now, some remedial methods have been proposed; for example, building cubical chips, chips that are stacked on chips to create a 3-dimensional array. However, the problem there is heat production. Tremendous quantities of heat are produced by cubical chips, such that you can fry an egg on top of a cubical chip. Therefore, I firmly believe that we may be able to squeeze a few more years out of Moore’s Law, perhaps designing clever cubical chips that are super-cooled, perhaps using x-rays to etch our chips instead of ultraviolet rays. However, that only delays the inevitable. Sooner or later, the quantum theory kills you. Sooner or later, when we hit five atoms, we don’t know where the electron is anymore, and we have to go to the next generation, which relies on the quantum theory and atoms and molecules.

Therefore, I say that all bets are off in terms of projecting machine intelligence beyond a 20-year time frame. There’s nothing in the laws of physics that says that computers cannot exceed human intelligence. All I raise is that we physicists are desperately trying to patch up Moore’s Law, and at the present time we have to admit that we have no successor to silicon, which means that Moore’s Law will collapse in 15 to 20 years.

So are you saying that quantum computing and nanocomputing are not likely to be available by then?

No, no, I'm just saying it's very difficult. At the present time, we physicists have only been able to compute on seven atoms. That is the world's record for a quantum computer. And that quantum computer was able to factor 15 into 3 × 5. Now, being able to factor 15 does not equal the convenience of a laptop computer that can crunch potentially millions of calculations per second. The problem with quantum computers is that any contamination, any atomic disturbance, disturbs the alignment of the atoms, and the atoms then collapse into randomness. This is extremely difficult, because any cosmic ray, any air molecule, any disturbance can conceivably destroy the coherence of our atomic computer, making it useless.
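For context (my own sketch, not something from the interview): the quantum part of that experiment was the period-finding step of Shor's algorithm; everything else is ordinary classical arithmetic. Here is the classical skeleton for N = 15, with the period found by brute force instead of by a quantum register:

```python
# Classical skeleton of Shor's algorithm for N = 15. The 7-qubit experiment
# performed the period-finding step quantum-mechanically; here it is done by
# brute force, and the factors are then recovered with two gcd's.
from math import gcd

def find_order(a, n):
    """Smallest r > 0 with a**r congruent to 1 (mod n)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_period(n, a):
    assert gcd(a, n) == 1, "base must be coprime to n"
    r = find_order(a, n)          # for a = 7, n = 15 this gives r = 4
    assert r % 2 == 0, "need an even period; try another base"
    return gcd(pow(a, r // 2) - 1, n), gcd(pow(a, r // 2) + 1, n)

print(factor_via_period(15, 7))   # (3, 5): 15 = 3 x 5
```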

Unless you have redundant parallel computing?

Even if you have parallel computing, you still have to have each parallel computer component free of any disturbance. So, no matter how you cut it, the practical problems of building quantum computers, although within the laws of physics, are extremely difficult, because it requires that we remove all contact with the environment at the atomic level. In practice, we've only been able to do this with a handful of atoms, meaning that quantum computers are still a gleam in the eye of most physicists.

Now, if a quantum computer can be successfully built, it would, of course, scare the CIA and all the governments of the world, because it would be able to crack any code created by a Turing machine. A quantum computer would be able to perform calculations that are inconceivable for a Turing machine. Calculations that would take an astronomical amount of time on a Turing machine could be done in a few seconds by a quantum computer. For example, if you shine laser beams on a collection of coherent atoms, the laser beam scatters, and in some sense performs a quantum calculation which exceeds the memory capability of any Turing machine.

However, as I mentioned, the problem is that these atoms have to be in perfect coherence, and the problems of doing this are staggering in the sense that even a random collision with a subatomic particle could in fact destroy the coherence and make the quantum computer impractical.

So, I’m not saying that it’s impossible to build a quantum computer; I’m just saying that it’s awfully difficult.

SETI: looking in the wrong direction

When do you think we might expect SETI [Search for Extraterrestrial Intelligence] to be successful?

I personally think that SETI is looking in the wrong direction. If, for example, we’re walking down a country road and we see an anthill, do we go down to the ant and say, "I bring you trinkets, I bring you beads, I bring you knowledge, I bring you medicine, I bring you nuclear technology, take me to your leader"? Or, do we simply step on them? Any civilization capable of reaching the planet Earth would be perhaps a Type III civilization. And the difference between you and the ant is comparable to the distance between you and a Type III civilization. Therefore, for the most part, a Type III civilization would operate with a completely different agenda and message than our civilization.

Let’s say that a ten-lane superhighway is being built next to the anthill. The question is: would the ants even know what a ten-lane superhighway is, or what it’s used for, or how to communicate with the workers who are just feet away? And the answer is no. One question that we sometimes ask is if there is a Type III civilization in our backyard, in the Milky Way galaxy, would we even know its presence? And if you think about it, you realize that there’s a good chance that we, like ants in an anthill, would not understand or be able to make sense of a ten-lane superhighway next door.

So this means that there could very well be a Type III civilization in our galaxy; we may simply not be smart enough to detect one. Now, a Type III civilization is not going to make contact by sending Captain Kirk on the Enterprise to meet our leader. A Type III civilization would send self-replicating Von Neumann probes to colonize the galaxy with robots. For example, consider a virus. A virus consists of only thousands of atoms. It's a molecule in some sense. But in about one week, it can colonize an entire human being made of trillions of cells. How is that possible?

Well, a Von Neumann probe would be a self-replicating robot that lands on a moon; a moon, because moons are stable, with no erosion, and they last for billions of years. The probe would then make carbon copies of itself by the millions. It would create a factory to build copies of itself. And then these probes would rocket to other nearby star systems, land on moons, and create a million more copies by building a factory on each moon. Eventually, there would be a sphere surrounding the mother planet, expanding at near-light velocity, containing trillions of these Von Neumann probes, and that is perhaps the most efficient way to colonize the galaxy. This means that perhaps, on our own moon, there is a Von Neumann probe, left over from a visitation that took place millions of years ago, and the probe is simply waiting for us to make the transition from Type 0 to Type I.

The Sentinel.

Yes. This, of course, is the basis of the movie 2001, because before making the movie, Kubrick interviewed many prominent scientists and asked them the question, "What is the most likely way that an advanced civilization would probe the universe?" And that is, of course, through self-replicating Von Neumann probes, which create moon bases. Hence the premise of 2001, where the probe simply waits for us to become interesting. If we're Type 0, we're not very interesting. We have all the savagery and all the suicidal tendencies of fundamentalism, nationalism, sectarianism, that are sufficient to rip apart our world.

By the time we’ve become Type I, we’ve become interesting, we’ve become planetary, we begin to resolve our differences. We have centuries in which to exist on a single planet to create a paradise on Earth, a paradise of knowledge and prosperity.

Evolution of the brain

From Wikipedia, the free encyclopedia
 
The principles that govern the evolution of brain structure are not well understood. Brain size does not scale isometrically (in a linear fashion) with body size, but rather allometrically. Small-bodied mammals have relatively large brains compared to their bodies, while large mammals (such as whales) have relatively small brains, a pattern similar to that seen in allometric growth.
 
If brain weight is plotted against body weight for primates, the regression line through the sample points indicates the expected brain size for a primate of a given body size. Lemurs, for example, fall below this line, which means that a lemur has a smaller brain than we would expect for a primate of equivalent size. Humans lie well above the line, indicating that humans are more encephalized than lemurs. In fact, humans are more encephalized than all other primates.
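One standard way to put a number on how far above or below the regression line a species sits is Jerison's encephalization quotient (EQ), the ratio of observed to expected brain mass; the 2/3 exponent and the constant 0.12 below are Jerison's classic mammalian fit, and the human figures are round illustrative values rather than numbers from this article:

```python
# Encephalization quotient: observed brain mass divided by the brain mass
# expected from an allometric fit E = k * P**(2/3) (masses in grams).
def encephalization_quotient(brain_g, body_g, k=0.12):
    expected = k * body_g ** (2 / 3)
    return brain_g / expected

# A ~1350 g brain in a ~65 kg body (illustrative modern-human figures):
print(round(encephalization_quotient(1350, 65_000), 1))   # ~7.0
```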

Early history of brain development

Scientists can infer that the first brain structure appeared at least 550 million years ago, with fossil brain tissue present in sites of exceptional preservation.[1]
A study of brain evolution in mice, chickens, monkeys and apes concluded that more recently evolved species tend to preserve the structures responsible for basic behaviors. A long-term human study comparing the modern human brain to the primitive brain found that the modern human brain contains the primitive hindbrain region, what most neuroscientists call the protoreptilian brain. The purpose of this part of the brain is to sustain fundamental homeostatic functions. The pons and medulla are the major structures found there. A new region of the brain developed in mammals about 250 million years after the appearance of the hindbrain. This region is known as the paleomammalian brain, the major parts of which are the hippocampi and amygdalas, often referred to as the limbic system. The limbic system deals with more complex functions, including emotional, sexual and fighting behaviors. Of course, animals that are not vertebrates also have brains, and their brains have undergone separate evolutionary histories.[2]

The brainstem and limbic system are largely based on nuclei, which are essentially balled-up clusters of tightly packed neurons and the axon fibers that connect them to each other, as well as to neurons in other locations. The other two major brain areas (the cerebrum and cerebellum) are based on a cortical architecture. At the outer periphery of the cortex, the neurons are arranged into layers (the number of which varies according to species and function) a few millimeters thick. There are axons that travel between the layers, but the majority of axon mass lies below the neurons themselves. Since cortical neurons and most of their axon fiber tracts don't have to compete for space, cortical structures can scale more easily than nuclear ones. A key feature of the cortex is that, because it scales with surface area, "more" of it can fit inside a skull by introducing convolutions, in much the same way that a dinner napkin can be stuffed into a glass by wadding it up. The degree of convolution is generally greater in more evolved species, which benefit from the increased surface area.

The cerebellum, or "little brain," is behind the brainstem and below the occipital lobe of the cerebrum in humans. Its purposes include the coordination of fine sensorimotor tasks, and it may be involved in some cognitive functions, such as language. Human cerebellar cortex is finely convoluted, much more so than cerebral cortex. Its interior axon fiber tracts are called the arbor vitae, or Tree of Life.

The area of the brain with the greatest amount of recent evolutionary change is called the cerebrum, or neocortex. In reptiles and fish, this area is called the pallium, and is smaller and simpler relative to body mass than what is found in mammals. According to research, the cerebrum first developed about 200 million years ago. It's responsible for higher cognitive functions - for example, language, thinking, and related forms of information processing.[3] It's also responsible for processing sensory input (together with the thalamus, a part of the limbic system that acts as an information router). Most of its function is subconscious, that is, not available for inspection or intervention by the conscious mind. Neocortex is an elaboration, or outgrowth, of structures in the limbic system, with which it is tightly integrated.

Randomizing access and scaling brains up

Some animal groups have undergone major brain enlargement through evolution (e.g., vertebrates and cephalopods both contain many lineages in which brains have grown), but most animal groups are composed only of species with extremely small brains. Some scientists argue that this difference arises because vertebrate and cephalopod neurons have evolved ways of communicating that overcome the scalability problem of neural networks, while those of most other animal groups have not. They argue that traditional neural networks fail to improve their function as they scale up because filtering based on previously known probabilities causes self-fulfilling-prophecy-like biases, creating false statistical evidence and a distorted worldview, and that randomized access can overcome this problem, allowing brains to be scaled up to more discriminating conditioned reflexes and, at certain thresholds, to new worldview-forming abilities. Randomization allows the entire brain to eventually gain access to all information over the course of many shifts, even though instantaneous privileged access is physically impossible. They cite evidence that vertebrate neurons transmit virus-like capsules containing RNA, which are sometimes read in the receiving neuron and sometimes passed on unread, creating randomized access, and that cephalopod neurons make different proteins from the same gene, suggesting another mechanism for randomizing concentrated information in neurons; both would make it evolutionarily worthwhile to scale up brains.[4][5][6]

Brain re-arrangement

With the use of in vivo magnetic resonance imaging (MRI) and tissue sampling, different cortical samples from members of each hominoid species were analyzed. In each species, specific areas were either relatively enlarged or shrunken, which can reveal details of neural organization. Different sizes of cortical areas can reflect specific adaptations, functional specializations and evolutionary events that changed how the hominoid brain is organized. It was initially predicted that the frontal lobe, a large part of the brain that is generally devoted to behavior and social interaction, accounted for the differences in behavior between hominoids and humans. Discrediting this theory was evidence that damage to the frontal lobe in both humans and hominoids produces atypical social and emotional behavior; this similarity means that the frontal lobe was not very likely to have been selected for reorganization. Instead, it is now believed that evolution occurred in other parts of the brain that are strictly associated with certain behaviors. The reorganization that took place is thought to have been more organizational than volumetric: brain volumes stayed relatively the same, but the positions of landmark surface anatomical features, for example the lunate sulcus, suggest that the brains had been through a neurological reorganization.[7] There is also evidence that the early hominin lineage underwent a quiescent period, which supports the idea of neural reorganization.

Dental fossil records for early humans and hominins show that immature hominins, including australopithecines and members of Homo, had a quiescent period (Bown et al. 1987). A quiescent period is a period in which there are no eruptions of adult teeth; during this time the child becomes more accustomed to social structure and the development of culture. This period gives the child an extra advantage over other hominoids, allowing several years to be devoted to developing speech and learning to cooperate within a community.[8] This period is also discussed in relation to encephalization. It was discovered that chimpanzees do not have this quiescent dental period, which suggests that a quiescent period arose very early in hominin evolution. Using the models for neurological reorganization, it can be suggested that the cause of this period, dubbed middle childhood, was most likely enhanced foraging ability in varying seasonal environments. To understand the development of human dentition, one must look at both behavior and biology.[9]

Genetic factors contributing to modern evolution

Bruce Lahn, the senior author of the study and a Howard Hughes Medical Institute investigator at the University of Chicago, and his colleagues have suggested that there are specific genes that control the size of the human brain. These genes continue to play a role in brain evolution, implying that the brain is continuing to evolve. The study began with the researchers assessing 214 genes that are involved in brain development. These genes were obtained from humans, macaques, rats and mice. Lahn and the other researchers noted points in the DNA sequences that caused protein alterations. These DNA changes were then scaled to the evolutionary time that it took for those changes to occur. The data showed that the genes in the human brain evolved much faster than those of the other species. Once this genomic evidence was acquired, Lahn and his team decided to find the specific gene or genes that allowed for, or even controlled, this rapid evolution. Two genes were found to control the size of the human brain as it develops: Microcephalin and Abnormal Spindle-like Microcephaly (ASPM). The researchers at the University of Chicago were able to determine that under the pressures of selection, both of these genes showed significant DNA sequence changes. Lahn's earlier studies showed that Microcephalin experienced rapid evolution along the primate lineage that eventually led to the emergence of Homo sapiens. After the emergence of humans, Microcephalin seems to have shown a slower evolution rate. In contrast, ASPM showed its most rapid evolution in the later stages of human evolution, once the divergence between chimpanzees and humans had already occurred.[10]

Each of the gene sequences went through specific changes that led to the evolution of humans from ancestral relatives. In order to determine these alterations, Lahn and his colleagues used DNA sequences from multiple primates and then compared and contrasted those sequences with the human sequence. Following this step, the researchers statistically analyzed the key differences between the primate and human DNA and concluded that the differences were due to natural selection. The changes in the DNA sequences of these genes accumulated to bring about the competitive advantage and higher fitness that humans possess in relation to other primates. This comparative advantage is coupled with a larger brain size, which ultimately allows the human mind to have a higher cognitive awareness.[11]

Human brain size in the fossil record

The evolutionary history of the human brain shows primarily a gradual increase in brain size relative to body size along the evolutionary path from early primates to hominids and finally to Homo sapiens. Human brain size has been trending upwards for the past 2 million years, increasing roughly by a factor of three. Early australopithecine brains were little larger than chimpanzee brains. Brain volume grew along the human evolutionary timeline (see Homininae), from about 600 cm³ in Homo habilis up to about 1,600 cm³ in Homo neanderthalensis (male averages). The increase in brain size peaked with Neanderthals; brain size in Homo sapiens varies considerably between populations, with male averages ranging from about 1,200 to 1,450 cm³.

Evolutionary neuroscience

From Wikipedia, the free encyclopedia

Evolutionary neuroscience is the scientific study of the evolution of nervous systems. Evolutionary neuroscientists investigate the evolution and natural history of nervous system structure, functions and emergent properties. The field draws on concepts and findings from both neuroscience and evolutionary biology. Historically, most empirical work has been in the area of comparative neuroanatomy, and modern studies often make use of phylogenetic comparative methods. Selective breeding and experimental evolution approaches are also being used more frequently.

Conceptually and theoretically, the field is related to fields as diverse as cognitive genomics, neurogenetics, developmental neuroscience, neuroethology, comparative psychology, evo-devo, behavioral neuroscience, cognitive neuroscience, behavioral ecology, biological anthropology and sociobiology.

Evolutionary neuroscientists examine changes in genes, anatomy, physiology, and behavior to study the evolution of changes in the brain.[2] They study a multitude of processes including the evolution of vocal, visual, auditory, taste, and learning systems as well as language evolution and development.[2][3] In addition, evolutionary neuroscientists study the evolution of specific areas or structures in the brain such as the amygdala, forebrain and cerebellum as well as the motor or visual cortex.[2]

History

Studies of the brain began during ancient Egyptian times, but studies in the field of evolutionary neuroscience began after the publication of Darwin's On the Origin of Species in 1859.[4] At that time, brain evolution was largely viewed in relation to the incorrect scala naturae. Phylogeny and the evolution of the brain were still viewed as linear.[4] During the early 20th century, there were several prevailing theories about evolution. Darwinism was based on the principles of natural selection and variation, Lamarckism was based on the passing down of acquired traits, orthogenesis was based on the assumption that a tendency towards perfection steers evolution, and saltationism argued that discontinuous variation creates new species.[4] Darwin's theory became the most accepted and allowed people to start thinking about the way animals and their brains evolve.[4]

The 1936 book The Comparative Anatomy of the Nervous System of Vertebrates Including Man by the Dutch neurologist C.U. Ariëns Kappers (first published in German in 1921) was a landmark publication in the field. Following the evolutionary synthesis, the study of comparative neuroanatomy was conducted with an evolutionary view, and modern studies incorporate developmental genetics.[5][6] It is now accepted that phylogenetic changes occur independently between species over time and are not linear.[4] It is also believed that an increase in brain size correlates with an increase in neural centers and in behavioral complexity.[7]

Major Arguments

Over time, several arguments have come to define the history of evolutionary neuroscience. The first is the argument between Étienne Geoffroy Saint-Hilaire and Georges Cuvier over the topic of "common plan versus diversity".[2] Geoffroy argued that all animals are built on a single plan or archetype, and he stressed the importance of homologies between organisms, while Cuvier believed that the structure of organs was determined by their function and that knowledge of the function of one organ could help discover the functions of other organs.[2][4] Cuvier argued that there were at least four different archetypes.[2] After Darwin, the idea of evolution became more accepted, and with it Geoffroy's idea of homologous structures.[2] The second major argument is that of the scala naturae (scale of nature) versus the phylogenetic bush.[2] The scala naturae, later also called the phylogenetic scale, was based on the premise that phylogenies are linear, like a scale, while the phylogenetic bush argument held that phylogenies are nonlinear and resemble a bush more than a scale.[2] Today it is accepted that phylogenies are nonlinear.[2] A third major argument dealt with the size of the brain and whether relative size or absolute size is more relevant in determining function.[2] In the late 18th century, it was determined that the brain-to-body ratio decreases as body size increases.[2] More recently, however, there has been more focus on absolute brain size, as this scales with internal structures and functions, with the degree of structural complexity, and with the amount of white matter in the brain, all suggesting that absolute size is a much better predictor of brain function.[2] Finally, a fourth argument is that of natural selection (Darwinism) versus developmental constraints (concerted evolution).[2] It is now accepted that the evolution of development is what causes adult species to show differences, and evolutionary neuroscientists maintain that many aspects of brain function and structure are conserved across species.[2]

Techniques

Throughout history, we can see how evolutionary neuroscience has depended on developments in biological theory and techniques.[4] The field of evolutionary neuroscience has been shaped by the development of new techniques that allow for the discovery and examination of parts of the nervous system. In 1873, Camillo Golgi devised the silver nitrate method, which allowed the brain to be described at the cellular level rather than simply at the gross level.[4] Santiago Ramón y Cajal and Pedro Ramón y Cajal used this method to analyze numerous parts of brains, broadening the field of comparative neuroanatomy.[4] In the second half of the 19th century, new techniques allowed scientists to identify neuronal cell groups and fiber bundles in brains.[4] In 1885, Vittorio Marchi discovered a staining technique that let scientists see induced axonal degeneration in myelinated axons; in 1950, the "original Nauta procedure" allowed for more accurate identification of degenerating fibers; and in the 1970s, multiple molecular tracers were discovered that are still used in experiments today.[4] In the last 20 years, cladistics has also become a useful tool for looking at variation in the brain.[7]

Electroconvulsive therapy

From Wikipedia, the free encyclopedia

Image caption: ECT device produced by Siemens and used at the Eg Asyl psychiatric hospital in Kristiansand, Norway, from the 1960s to the 1980s.
Synonyms: electroshock therapy, shock treatment
ICD-10-PCS: GZB
ICD-9-CM: 94.27
MeSH: D004565
OPS-301 code: 8-630
MedlinePlus: 007474

Electroconvulsive therapy (ECT), formerly known as electroshock therapy, and often referred to as shock treatment, is a psychiatric treatment in which seizures are electrically induced in patients to provide relief from mental disorders. The ECT procedure was first conducted in 1938 and is the only form of shock therapy still used in psychiatry. ECT is often used with informed consent as a last line of intervention for major depressive disorder, mania, and catatonia. ECT machines have been placed in the Class III category by the United States Food and Drug Administration (FDA) since 1976.

A round of ECT is effective for about 50% of people with treatment-resistant major depressive disorder, whether it is unipolar or bipolar.[6] Follow-up treatment is still poorly studied, but about half of people who respond relapse within 12 months.[7] Aside from effects in the brain, the general physical risks of ECT are similar to those of brief general anesthesia.[8]:259 Immediately following treatment, the most common adverse effects are confusion and memory loss.[4][9] Among treatments for severely depressed pregnant women, ECT is one of the least harmful to the gestating fetus.[10]

A usual course of ECT involves multiple administrations, typically given two or three times per week until the patient is no longer suffering symptoms. ECT is administered under anesthetic with a muscle relaxant.[11] Electroconvulsive therapy can differ in its application in three ways: electrode placement, frequency of treatments, and the electrical waveform of the stimulus. These three forms of application have significant differences in both adverse side effects and symptom remission. Placement can be bilateral, in which the electric current is passed across the whole brain, or unilateral, in which the current is passed across one hemisphere of the brain. Bilateral placement seems to have greater efficacy than unilateral, but also carries greater risk of memory loss.[12][13] After treatment, drug therapy is usually continued, and some patients receive maintenance ECT.[4]

ECT appears to work in the short term via an anticonvulsant effect mostly in the frontal lobes, and longer term via neurotrophic effects primarily in the medial temporal lobe.[14]

Medical use

ECT is used with informed consent[3] in treatment-resistant major depressive disorder, treatment-resistant catatonia, or prolonged or severe mania, and in conditions where "there is a need for rapid, definitive response because of the severity of a psychiatric or medical condition (e.g., when illness is characterized by stupor, marked psycho-motor retardation, depressive delusions or hallucinations, or life-threatening physical exhaustion associated with mania)."[4][15]

Major depressive disorder

For major depressive disorder, ECT is generally used only when other treatments have failed, or in emergencies, such as imminent suicide. ECT has also been used in selected cases of depression occurring in the setting of multiple sclerosis, Parkinson's disease, Huntington's chorea, developmental delay, brain arteriovenous malformations and hydrocephalus.[19]

Efficacy

A meta-analysis on the effectiveness of ECT in unipolar and bipolar depression was conducted in 2012. Results indicated that although patients with unipolar depression and bipolar depression responded to other medical treatments very differently, both groups responded equally well to ECT. Overall remission rate for patients given a round of ECT treatment was 51.5% for those with unipolar depression and 50.9% for those with bipolar depression. The severity of each patient’s depression was assessed at the same baseline in each group.[6]

There is little agreement on the most appropriate follow-up to ECT for people with major depressive disorder.[7] When ECT is followed by treatment with antidepressants, about 50% of people relapsed by 12 months following successful initial treatment with ECT, with about 37% relapsing within the first 6 months. About twice as many relapsed with no antidepressants. Most of the evidence for continuation therapy is with tricyclics; evidence for relapse prevention with newer antidepressants is lacking.[7]

In 2004, a meta-analytic review paper found in terms of efficacy, "a significant superiority of ECT in all comparisons: ECT versus simulated ECT, ECT versus placebo, ECT versus antidepressants in general, ECT versus TCAs and ECT versus MAOIs."[20]

In 2003, the UK ECT Review Group published a systematic review and meta-analysis comparing ECT to placebo and antidepressant drugs. This meta-analysis demonstrated a large effect size (a large standardized difference in improvement) for ECT versus placebo and versus antidepressant drugs.[21]

Compared with transcranial magnetic stimulation (TMS) for people with treatment-resistant major depressive disorder, ECT relieves depression considerably more effectively, reducing the score on the Hamilton Rating Scale for Depression by about 15 points, compared with about 9 points for TMS.[22]

Catatonia

ECT is generally a second-line treatment for people with catatonia who do not respond to other treatments, but is a first-line treatment for severe or life-threatening catatonia.[4][23] There is a lack of clinical evidence for its efficacy but "the excellent efficacy of ECT in catatonia is generally acknowledged".[23] For people with autism spectrum disorders who have catatonia, there is little published evidence about the efficacy of ECT; as of 2014 there were twelve case reports, and while ECT had "life saving" efficacy in some, results were mixed and temporary, and maintenance ECT was necessary to sustain any benefit.[24]

Mania

ECT is used to treat people who have severe or prolonged mania;[4] NICE recommends it only in life-threatening situations or when other treatments have failed[25] and as a second-line treatment for bipolar mania.[26][27]

Schizophrenia

ECT is rarely used in treatment-resistant schizophrenia, but is sometimes recommended for schizophrenia when short term global improvement is desired, or the subject shows little response to antipsychotics alone. It is useful in the case of severe exacerbations of catatonic schizophrenia, whether excited or stuporous.[4][25][28]

Effects

Aside from effects in the brain, the general physical risks of ECT are similar to those of brief general anesthesia; the U.S. Surgeon General's report says that there are "no absolute health contraindications" to its use.[8]:259 Immediately following treatment, the most common adverse effects are confusion and memory loss. It must be used very cautiously in people with epilepsy or other neurological disorders because by its nature it provokes small tonic-clonic seizures, and so would likely not be given to a person whose epilepsy is not well controlled.[29] Some patients experience muscle soreness after ECT. This is due to the muscle relaxants given during the procedure and rarely due to muscle activity. ECT, especially if combined with deep sleep therapy, may lead to brain damage if administered in such a way as to lead to hypoxia or anoxia in the patient. The death rate due to ECT is around 4 per 100,000 procedures.[33] There is evidence and rationale to support giving low doses of benzodiazepines or else low doses of general anesthetics which induce sedation but not anesthesia to patients to reduce adverse effects of ECT.[34]

While there are no absolute contraindications for ECT, there is increased risk for patients who have unstable or severe cardiovascular conditions or aneurysms; who have recently had a stroke; who have increased intracranial pressure (for instance, due to a solid brain tumor), or who have severe pulmonary conditions, or who are generally at high risk for receiving anesthesia.[9]:30

In adolescents, ECT is highly effective for several psychiatric disorders, with few and relatively benign adverse effects.[35]

In a study published in 2017 which involved 30 National Health Service (NHS) patients from Worcestershire, 80% said they would readily have the treatment again although 37% said it was frightening.[36]

Cognitive impairment

Cognitive impairment is sometimes noticed after ECT.

Effects on memory

Retrograde amnesia occurs to some extent in almost all ECT recipients.[13] The American Psychiatric Association report (2001) acknowledges: “In some patients the recovery from retrograde amnesia will be incomplete, and evidence has shown that ECT can result in persistent or permanent memory loss”.[9] It is the purported effects of ECT on long-term memory that give rise to much of the concern surrounding its use.[41]

However, the methods used to measure memory loss are generally poor, and their application to people with depression, who already have cognitive deficits including problems with memory, has been problematic.[42]

The acute effects of ECT can include amnesia, both retrograde (for events occurring before the treatment) and anterograde (for events occurring after the treatment).[43] Memory loss and confusion are more pronounced with bilateral electrode placement rather than unilateral, and with outdated sine-wave rather than brief-pulse currents. The waveform also affects memory loss: patients who received pulsed stimuli rather than a steady current seemed to incur less memory loss. The vast majority of modern treatment uses brief-pulse currents.[43]

Retrograde amnesia is most marked for events occurring in the weeks or months before treatment, with one study showing that although some people lose memories from years prior to treatment, recovery of such memories was "virtually complete" by seven months post-treatment, with the only enduring loss being memories in the weeks and months prior to the treatment.[44][45] Anterograde memory loss is usually limited to the time of treatment itself or shortly afterwards. In the weeks and months following ECT these memory problems gradually improve, but some people have persistent losses, especially with bilateral ECT.[1][43] One published review summarizing the results of questionnaires about subjective memory loss found that between 29% and 55% of respondents believed they experienced long-lasting or permanent memory changes.[46] In 2000, American psychiatrist Sarah Lisanby and colleagues found that bilateral ECT left patients with more persistently impaired memory of public events as compared to right unilateral (RUL) ECT.[41]

Effects on brain structure

Considerable controversy exists over the effects of ECT on brain tissue, although a number of mental health associations—including the American Psychiatric Association—have concluded that there is no evidence that ECT causes structural brain damage.[9][17] A 1999 report by the U.S. Surgeon General states: "The fears that ECT causes gross structural brain pathology have not been supported by decades of methodologically sound research in both humans and animals."[47]

Many expert proponents of ECT maintain that the procedure is safe and does not cause brain damage. Dr. Charles Kellner, a prominent ECT researcher and former chief editor of the Journal of ECT, stated in a 2007 interview[48] that, "There are a number of well-designed studies that show ECT does not cause brain damage and numerous reports of patients who have received a large number of treatments over their lifetime and have suffered no significant problems due to ECT." Dr. Kellner cites a study[49] purporting to show an absence of cognitive impairment in eight subjects after more than 100 lifetime ECT treatments. Dr. Kellner stated "Rather than cause brain damage, there is evidence that ECT may reverse some of the damaging effects of serious psychiatric illness."

Effects in pregnancy

If steps are taken to decrease potential risks, ECT is generally accepted to be relatively safe during all trimesters of pregnancy, particularly when compared to pharmacological treatments.[10][50] Suggested preparation for ECT during pregnancy includes a pelvic examination, discontinuation of nonessential anticholinergic medication, uterine tocodynamometry, intravenous hydration, and administration of a nonparticulate antacid. During ECT, elevation of the pregnant woman's right hip, external fetal cardiac monitoring, intubation, and avoidance of excessive hyperventilation are recommended.[10] In many instances of active mood disorder during pregnancy, the risks of untreated symptoms may outweigh the risks of ECT. Potential complications of ECT during pregnancy can be minimized by modifications in technique. The use of ECT during pregnancy requires thorough evaluation of the patient’s capacity for informed consent.[51]

Technique

Electroconvulsive therapy machine on display at Glenside Museum

ECT requires the informed consent of the patient.[1]:1880[3][4]

Whether psychiatric medications are terminated prior to treatment or maintained varies.[1]:1885[52] However, drugs that are known to cause toxicity in combination with ECT, such as lithium, are discontinued, and benzodiazepines, which raise the seizure threshold,[53] are either discontinued, countered with a benzodiazepine antagonist at each ECT session, or accommodated by adjusting the ECT stimulus.[1]:1875,1879

The placement of electrodes, as well as the dose and duration of the stimulation is determined on a per-patient basis.[1]:1881

In unilateral ECT, both electrodes are placed on the same side of the patient's head. Unilateral ECT may be used first to minimize side effects such as memory loss.

In bilateral ECT, the two electrodes are placed on both sides of the head. Usually, the bitemporal placement is used, whereby the electrodes are placed on the temples. Less commonly, the bifrontal placement is used; this involves positioning the electrodes on the patient's forehead, roughly above each eye.

Unilateral ECT is thought to cause fewer cognitive effects than bilateral treatment, but is less effective unless administered at higher doses.[1]:1881 Most patients in the US[54] and almost all in the UK[12][55][56] receive bilateral ECT.

The electrodes deliver an electrical stimulus. The stimulus levels recommended for ECT are in excess of an individual's seizure threshold: about one and a half times seizure threshold for bilateral ECT and up to 12 times for unilateral ECT.[1]:1881 Below these levels treatment may not be effective in spite of a seizure, while doses massively above threshold level, especially with bilateral ECT, expose patients to the risk of more severe cognitive impairment without additional therapeutic gains.[57] Seizure threshold is determined by trial and error ("dose titration"). Some psychiatrists use dose titration, some still use "fixed dose" (that is, all patients are given the same dose) and others compromise by roughly estimating a patient's threshold according to age and sex.[54] Older men tend to have higher thresholds than younger women, but it is not a hard and fast rule, and other factors, for example drugs, affect seizure threshold.
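As a toy illustration of the dosing arithmetic described above, where the stimulus is set as a suprathreshold multiple of a titrated seizure threshold, the following Python sketch shows the calculation under stated assumptions. It is not clinical guidance; the function name, the millicoulomb units, and the choice of a 6x unilateral multiplier (the text only says "up to 12 times") are hypothetical choices made for illustration.

```python
# Minimal sketch (not clinical guidance): stimulus dose as a multiple of a
# titrated seizure threshold, per the multiples described in the text above.
# Names, units, and the unilateral multiplier are illustrative assumptions.

def stimulus_dose_mc(threshold_mc: float, placement: str) -> float:
    """Return an illustrative stimulus charge in millicoulombs.

    threshold_mc: seizure threshold found by dose titration (millicoulombs).
    placement: "bilateral" or "unilateral".
    """
    multipliers = {
        "bilateral": 1.5,   # roughly 1.5x threshold, as stated in the text
        "unilateral": 6.0,  # well above threshold (text: up to ~12x); 6x assumed here
    }
    if placement not in multipliers:
        raise ValueError(f"unknown electrode placement: {placement!r}")
    return threshold_mc * multipliers[placement]

if __name__ == "__main__":
    # Example: a titrated threshold of 50 mC
    print(stimulus_dose_mc(50, "bilateral"))   # 75.0 mC
    print(stimulus_dose_mc(50, "unilateral"))  # 300.0 mC
```

The point of the sketch is simply that dosing is relative: the same nominal stimulus can be barely suprathreshold for one patient and many times threshold for another, which is why titration (or an age- and sex-based estimate) precedes dose selection.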

Immediately prior to treatment, a patient is given a short-acting anesthetic such as methohexital, etomidate, or thiopental,[1] a muscle relaxant such as suxamethonium (succinylcholine), and occasionally atropine to inhibit salivation.[1]:1882 In a minority of countries such as Japan,[58] India,[59] and Nigeria,[60] ECT may be used without anesthesia. The Union Health Ministry of India recommended a ban on ECT without anesthesia in India's Mental Health Care Bill of 2010 and the Mental Health Care Bill of 2013.[61][62] Some psychiatrists in India argued against the ban on unmodified ECT due to a lack of trained anesthesiologists available to administer ECT with anesthesia.[63] The practice was abolished in Turkey's largest psychiatric hospital in 2008.[64]

The patient's EEG, ECG, and blood oxygen levels are monitored during treatment.[1]:1882

ECT is usually administered three times a week, on alternate days, over a course of two to four weeks.[1]:1882–1883

An illustration depicting electroconvulsive therapy.

Devices

ECT machine from ca 1960.

Most modern ECT devices deliver a brief-pulse current, which is thought to cause fewer cognitive effects than the sine-wave currents originally used in ECT.[1] A small minority of psychiatrists in the US still use sine-wave stimuli.[54] Sine-wave stimulation is no longer used in the UK or Ireland.[56] Typically, the electrical stimulus used in ECT is about 800 milliamps, with a power of up to several hundred watts, and the current flows for between one and six seconds.[57]
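For a rough sense of what these stimulus parameters imply, the sketch below estimates the charge delivered by a bidirectional brief-pulse train. The roughly 800 mA current and the one-to-six-second train duration come from the text above; the pulse width and pulse frequency are assumed illustrative values, so this is a back-of-the-envelope estimate, not a description of any particular device.

```python
# Rough estimate (assumed parameters, not device specifications): with a
# brief-pulse stimulus the current flows only during the short pulses, so the
# delivered charge is far smaller than current x total train duration.

def delivered_charge_mc(current_a: float, pulse_width_ms: float,
                        frequency_hz: float, train_s: float) -> float:
    """Approximate charge (millicoulombs) of a bidirectional brief-pulse train."""
    pulses = 2 * frequency_hz * train_s              # two pulses per cycle (biphasic)
    seconds_of_current = pulses * pulse_width_ms / 1000.0
    return current_a * seconds_of_current * 1000.0   # coulombs -> millicoulombs

if __name__ == "__main__":
    # ~800 mA (from the text), with assumed 1 ms pulses at 70 Hz for a 2 s train
    print(round(delivered_charge_mc(0.8, 1.0, 70.0, 2.0), 1))  # ~224.0 mC
```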

In the US, ECT devices are manufactured by two companies, Somatics, which is owned by psychiatrists Richard Abrams and Conrad Swartz, and Mecta.[65] In the UK, the market for ECT devices was long monopolized by Ectron Ltd, which was set up by psychiatrist Robert Russell.[66]

Mechanism of action

Despite decades of research, the exact mechanism of action of ECT remains elusive. Neuroimaging studies in people who have had ECT, investigating differences between responders and nonresponders, and people who relapse, find that responders have anticonvulsant effects mostly in the frontal lobes, which corresponds to immediate responses, and neurotrophic effects primarily in the medial temporal lobe. The anticonvulsant effects are decreased blood flow and decreased metabolism, while the neurotrophic effects are opposite - increased perfusion and metabolism, as well as increased volume of the hippocampus.[14]

Usage

As of 2001, it was estimated that about one million people received ECT annually.[67]

There is wide variation in ECT use between different countries, different hospitals, and different psychiatrists.[1][67] International practice varies considerably from widespread use of the therapy in many Western countries to a small minority of countries that do not use ECT at all, such as Slovenia.[68]

About 70 percent of ECT patients are women.[1] This may be due to the fact that women are more likely to be diagnosed with depression.[1][69] Older and more affluent patients are also more likely to receive ECT. The use of ECT is not as common in ethnic minorities.[69][70]

Sarah Hall reports, "ECT has been dogged by conflict between psychiatrists who swear by it, and some patients and families of patients who say that their lives have been ruined by it. It is controversial in some European countries such as the Netherlands and Italy, where its use is severely restricted".[71]

United States

ECT became popular in the US in the 1940s. At the time, psychiatric hospitals were overrun with patients whom doctors were desperate to treat and cure. Whereas lobotomies would reduce a patient to a more manageable submissive state, ECT helped to improve mood in those with severe depression. A survey of psychiatric practice in the late 1980s found that an estimated 100,000 people received ECT annually, with wide variation between metropolitan statistical areas.[72] Accurate statistics about the frequency, context and circumstances of ECT in the US are difficult to obtain because only a few states have reporting laws that require the treating facility to supply state authorities with this information.[73] In 13 of the 50 states, the practice of ECT is regulated by law.[74]

One state which does report such data is Texas, where, in the mid-1990s, ECT was used in about one third of psychiatric facilities and given to about 1,650 people annually.[69] Usage of ECT has since declined slightly; in 2000–01 ECT was given to about 1,500 people aged from 16 to 97 (in Texas it is illegal to give ECT to anyone under sixteen).[75] ECT is more commonly used in private psychiatric hospitals than in public hospitals, and minority patients are underrepresented in the ECT statistics.[1]

In the United States, ECT is usually given three times a week; in the United Kingdom, it is usually given twice a week.[1] Occasionally it is given on a daily basis.[1] A course usually consists of 6–12 treatments, but may be more or fewer. Following a course of ECT some patients may be given continuation or maintenance ECT with further treatments at weekly, fortnightly or monthly intervals.[1] A few psychiatrists in the US use multiple-monitored ECT (MMECT), where patients receive more than one treatment per anesthetic.[1] Electroconvulsive therapy is not a required subject in US medical schools and not a required skill in psychiatric residency training. Privileging for ECT practice at institutions is a local option: no national certification standards are established, and no ECT-specific continuing training experiences are required of ECT practitioners.[76]

United Kingdom

In the UK in 1980, an estimated 50,000 people received ECT annually, with use declining steadily since then[77] to about 12,000 per annum in 2002.[78] It is still used in nearly all psychiatric hospitals, with a survey of ECT use from 2002 finding that 71 percent of patients were women and 46 percent were over 65 years of age. Eighty-one percent had a diagnosis of mood disorder; schizophrenia was the next most common diagnosis. Sixteen percent were treated without their consent.[78] In 2003, the National Institute for Health and Care Excellence, a government body which was set up to standardize treatment throughout the National Health Service in England and Wales, issued guidance on the use of ECT. Its use was recommended "only to achieve rapid and short-term improvement of severe symptoms after an adequate trial of treatment options has proven ineffective and/or when the condition is considered to be potentially life-threatening in individuals with severe depressive illness, catatonia or a prolonged manic episode".[79]

The guidance received a mixed reception. It was welcomed by an editorial in the British Medical Journal,[80] but the Royal College of Psychiatrists launched an unsuccessful appeal.[81] The NICE guidance, as the British Medical Journal editorial points out, is only a policy statement and psychiatrists may deviate from it if they see fit. Adherence to standards has not been universal in the past. A survey of ECT use in 1980 found that more than half of ECT clinics failed to meet minimum standards set by the Royal College of Psychiatrists, with a later survey in 1998 finding that minimum standards were largely adhered to, but that two-thirds of clinics still fell short of current guidelines, particularly in the training and supervision of junior doctors involved in the procedure.[82] A voluntary accreditation scheme, ECTAS, was set up in 2004 by the Royal College, but as of 2006 only a minority of ECT clinics in England, Wales, Northern Ireland and the Republic of Ireland had signed up.[83]

The Mental Health Act 2007 allows people to be treated against their will. This law has extra protections regarding ECT. A patient capable of making the decision can decline the treatment, and in that case treatment cannot be given unless it will save that patient's life or is immediately necessary to prevent deterioration of the patient's condition. A patient may not be capable of making the decision (they "lack capacity"), and in that situation ECT can be given if it is appropriate and also if there are no advance directives that prevent the use of ECT.[84]

China

ECT was introduced in China in the early 1950s and while it was originally practiced without anesthesia, as of 2012 almost all procedures were conducted with it. As of 2012, there are approximately 400 ECT machines in China, and 150,000 ECT treatments are performed each year.[85] Chinese national practice guidelines recommend ECT for the treatment of schizophrenia, depressive disorders, and bipolar disorder and in the Chinese literature, ECT is an effective treatment for schizophrenia and mood disorders.[85] Although the Chinese government stopped classifying homosexuality as an illness in 2001, electroconvulsive therapy is still used by some establishments as a form of "conversion therapy".[86][87]

History

A Bergonic chair, a device "for giving general electric treatment for psychological effect, in psycho-neurotic cases", according to original photo description. World War I era.

As early as the 16th century, agents that induce seizures were used to treat psychiatric conditions. In 1785, the therapeutic use of seizure induction was documented in the London Medical Journal.[1][88][89] One doctor places the earliest antecedent even further back, claiming 1744 as the dawn of electricity's therapeutic use, as documented in the first issue of Electricity and Medicine; the treatment and cure of hysterical blindness was documented eleven years later. Benjamin Franklin wrote that an electrostatic machine cured "a woman of hysterical fits." In 1801, Giovanni Aldini used galvanism to treat patients suffering from various mental disorders.[90] G.B.C. Duchenne, the mid-19th-century "Father of Electrotherapy", said its use was integral to a neurological practice.[91]

In the second half of the 19th century, such efforts were frequent enough in British asylums to attract notice.[92]

Convulsive therapy was introduced in 1934 by Hungarian neuropsychiatrist Ladislas J. Meduna who, believing mistakenly that schizophrenia and epilepsy were antagonistic disorders, induced seizures first with camphor and then with metrazol (cardiazol).[93][94] Meduna is considered the father of convulsive therapy.[95] In 1937, the first international meeting on convulsive therapy was held in Switzerland by the Swiss psychiatrist Muller. The proceedings were published in the American Journal of Psychiatry and, within three years, cardiazol convulsive therapy was being used worldwide.[94] Italian professor of neuropsychiatry Ugo Cerletti, who had been using electric shocks to produce seizures in animal experiments, and his colleague Lucio Bini developed the idea of using electricity as a substitute for metrazol in convulsive therapy and, in 1938, experimented for the first time on a person. It was believed early on that inducing convulsions helped those with severe schizophrenia, but the treatment was later found to be most useful for affective disorders such as depression. Cerletti had noted that a shock to the head produced convulsions in dogs. The idea to use electroshock on humans came to Cerletti when he saw how pigs were given an electric shock before being butchered to put them in an anesthetized state.[96] Cerletti and Bini practiced until they felt they had the right parameters for a successful human trial. Once they started trials on patients, they found that after 10 to 20 treatments the results were significant: patients improved markedly. A side effect of the treatment was retrograde amnesia, and because of it patients could not remember the treatments and had no ill feelings toward them.[96] ECT soon replaced metrazol therapy all over the world because it was cheaper, less frightening and more convenient.[97] Cerletti and Bini were nominated for a Nobel Prize but did not receive one. By 1940, the procedure had been introduced to both England and the US. In Germany and Austria, it was promoted by Friedrich Meggendorfer. Through the 1940s and 1950s, the use of ECT became widespread.

In the early 1940s, in an attempt to reduce the memory disturbance and confusion associated with treatment, two modifications were introduced: the use of unilateral electrode placement and the replacement of sinusoidal current with brief pulses. It took many years for brief-pulse equipment to be widely adopted.[98] In the 1940s and early 1950s, ECT was usually given in "unmodified" form, without muscle relaxants, and the seizure resulted in a full-scale convulsion. A rare but serious complication of unmodified ECT was fracture or dislocation of the long bones. In the 1940s, psychiatrists began to experiment with curare, the muscle-paralysing South American poison, in order to modify the convulsions. The introduction of suxamethonium (succinylcholine), a safer synthetic alternative to curare, in 1951 led to the more widespread use of "modified" ECT. A short-acting anesthetic was usually given in addition to the muscle relaxant in order to spare patients the terrifying feeling of suffocation that can be experienced with muscle relaxants.[98]

The steady growth of antidepressant use along with negative depictions of ECT in the mass media led to a marked decline in the use of ECT during the 1950s to the 1970s. The Surgeon General stated there were problems with electroshock therapy in the initial years before anesthesia was routinely given, and that "these now-antiquated practices contributed to the negative portrayal of ECT in the popular media."[99] The New York Times described the public's negative perception of ECT as being caused mainly by one movie: "For Big Nurse in One Flew Over the Cuckoo's Nest, it was a tool of terror, and, in the public mind, shock therapy has retained the tarnished image given it by Ken Kesey's novel: dangerous, inhumane and overused".[100]

In 1976, Dr. Blatchley demonstrated the effectiveness of his constant-current, brief-pulse ECT device. This device eventually largely replaced earlier devices because of the reduction in cognitive side effects, although as of 2012 some ECT clinics were still using sine-wave devices.[67] The 1970s saw the publication of the first American Psychiatric Association (APA) task force report on electroconvulsive therapy (to be followed by further reports in 1990 and 2001). The report endorsed the use of ECT in the treatment of depression. The decade also saw criticism of ECT.[101] Specifically, critics pointed to shortcomings such as noted side effects, the procedure being used as a form of abuse, and uneven application of ECT. The use of ECT declined until the 1980s, "when use began to increase amid growing awareness of its benefits and cost-effectiveness for treating severe depression".[99] In 1985, the National Institute of Mental Health and National Institutes of Health convened a consensus development conference on ECT and concluded that, while ECT was the most controversial treatment in psychiatry and had significant side-effects, it had been shown to be effective for a narrow range of severe psychiatric disorders.[102]

Because of the backlash noted previously, national institutions reviewed past practices and set new standards. In 1978, the American Psychiatric Association released its first task force report in which new standards for consent were introduced and the use of unilateral electrode placement was recommended. The 1985 NIMH Consensus Conference confirmed the therapeutic role of ECT in certain circumstances. The American Psychiatric Association released its second task force report in 1990 where specific details on the delivery, education, and training of ECT were documented. Finally, in 2001 the American Psychiatric Association released its latest task force report.[9] This report emphasizes the importance of informed consent, and the expanded role that the procedure has in modern medicine. By 2017, ECT was routinely covered by insurance companies for providing the "biggest bang for the buck" for otherwise intractable cases of severe mental illness, was receiving favorable media coverage, and was being provided in regional medical centers.[103]

Society and culture

Surveys of public opinion, the testimony of former patients, legal restrictions on the use of ECT and disputes as to the efficacy, ethics and adverse effects of ECT within the psychiatric and wider medical community indicate that the use of ECT remains controversial. This is reflected in the January 2011 vote by the FDA's Neurological Devices Advisory Panel to recommend that FDA maintain ECT devices in the Class III device category for high risk devices except for patients suffering from catatonia. This may result in the manufacturers of such devices having to do controlled trials on their safety and efficacy for the first time.[4][111][112] In justifying their position, panelists referred to the memory loss associated with ECT and the lack of long-term data.[113]

Legal status

Informed consent

The World Health Organization (2005) advises that ECT should be used only with the informed consent of the patient (or their guardian if their incapacity to consent has been established).[15]

In the US, this doctrine places a legal obligation on a doctor to make a patient aware of the reason for treatment, the risks and benefits of a proposed treatment, the risks and benefits of alternative treatment, and the risks and benefits of receiving no treatment. The patient is then given the opportunity to accept or reject the treatment. The form states how many treatments are recommended and also makes the patient aware that consent may be revoked and treatment discontinued at any time during a course of ECT.[8] The US Surgeon General's Report on Mental Health states that patients should be warned that the benefits of ECT are short-lived without active continuation treatment in the form of drugs or further ECT, and that there may be some risk of permanent, severe memory loss after ECT.[8] The report advises psychiatrists to involve patients in discussion, possibly with the aid of leaflets or videos, both before and during a course of ECT.

To demonstrate what he believes should be required to fully satisfy the legal obligation for informed consent, one psychiatrist, working for an anti-psychiatry organisation, has formulated his own consent form[114] using the consent form developed and enacted by the Texas Legislature[115] as a model.

According to the US Surgeon General, involuntary treatment is uncommon in the US and is typically used only in cases of great extremity, and only when all other treatment options have been exhausted. The use of ECT is believed to be a potentially life-saving treatment.[47]

In one of the few jurisdictions where recent statistics on ECT usage are available, a national audit of ECT by the Scottish ECT Accreditation Network indicated that 77% of patients who received the treatment in 2008 were capable of giving informed consent.[116]

In the UK, in order for consent to be valid it requires an explanation in "broad terms" of the nature of the procedure and its likely effects.[117] One review from 2005 found that only about half of patients felt they were given sufficient information about ECT and its adverse effects[118] and another survey found that about fifty percent of psychiatrists and nurses agreed with them.[119]

A 2005 study published in the British Journal of Psychiatry described patients' perspectives on the adequacy of informed consent before ECT.[118] The study found that "About half (45–55%) of patients reported they were given an adequate explanation of ECT, implying a similar percentage felt they were not." The authors also stated:
Approximately a third did not feel they had freely consented to ECT even when they had signed a consent form. The proportion who feel they did not freely choose the treatment has actually increased over time. The same themes arise whether the patient had received treatment a year ago or 30 years ago. Neither current nor proposed safeguards for patients are sufficient to ensure informed consent with respect to ECT, at least in England and Wales.[118]

Involuntary ECT

Procedures for involuntary ECT vary from country to country depending on local mental health laws.
United States
In the US, ECT devices came into existence prior to medical devices being regulated by the Food and Drug Administration; when the law came into effect the FDA was obligated to retrospectively review already existing devices and classify them, and determine whether clinical trials were needed to prove efficacy and safety. While the FDA has classified the devices used to administer ECT as Class III medical devices, as of 2011 the FDA had not yet determined whether the devices should be withdrawn from the market until clinical trials prove their safety and efficacy.[4]:5[111][112] The FDA considers ECT machinery to be experimental devices.[120] In most states in the US, a judicial order following a formal hearing is needed before a patient can be forced to undergo involuntary ECT.[8] However, ECT can also be involuntarily administered in situations with less immediate danger. Suicidal intent is a common justification for its involuntary use, especially when other treatments are ineffective.[8]
United Kingdom
Until 2007 in England and Wales, the Mental Health Act 1983 allowed the use of ECT on detained patients whether or not they had capacity to consent to it. However, following amendments which took effect in 2007, ECT may not generally be given to a patient who has capacity and refuses it, irrespective of his or her detention under the Act.[121] In fact, even if a patient is deemed to lack capacity, if they made a valid advance decision refusing ECT then they should not be given it; and even if they do not have an advance decision, the psychiatrist must obtain an independent second opinion (which is also the case if the patient is under age of consent).[122] However, there is an exception regardless of consent and capacity; under Section 62 of the Act, if the treating psychiatrist says the need for treatment is urgent they may start a course of ECT without authorization.[123] From 2003 to 2005, about 2,000 people a year in England and Wales were treated without their consent under the Mental Health Act.[124] Concerns have been raised by the official regulator that psychiatrists are too readily assuming that patients have the capacity to consent to their treatments, and that there is a worrying lack of independent advocacy.[125] In Scotland, the Mental Health (Care and Treatment) (Scotland) Act 2003 also gives patients with capacity the right to refuse ECT.[126]

Public perception

A questionnaire survey of 379 members of the general public in Australia indicated that more than 60% of respondents had some knowledge about the main aspects of ECT. Participants were generally opposed to the use of ECT on depressed individuals with psychosocial issues, on children, and on involuntary patients. Public perceptions of ECT were found to be mainly negative.[110]

Resurgence

Though ECT has long been a discouraged treatment, many people have recently pushed for the return of this controversial procedure. The 1975 film One Flew Over the Cuckoo's Nest convinced many viewers that ECT is a horrific procedure that leaves the patient with complete memory loss. Subsequent research has challenged the notion that patients invariably suffer such memory loss after treatment, but the film's portrayal of ECT remains a setback.[127] In the last decade, patients have returned to ECT to treat various mental illnesses including depression and bipolar disorder. Overcoming the lingering controversy has proven difficult for doctors and scientists, and campaigns to challenge negative stereotypes have gained popularity in the past few years. In 2014, the American Psychiatric Association launched a petition to reclassify ECT as a low-risk treatment.[128] Though many people still believe ECT to be an inhumane procedure, many pro-ECT patients have publicly come forward with their positive responses to the treatment. One patient, Shelley Miller, claims that "medications have a success rate of 50-60% of patients getting better, while ECT succeeds at a rate of 70-90%."[129] With the combined support of patients and doctors, ECT is slowly challenging stereotypes and making its way back into the medical community. However, the negative stigma of ECT still maintains the upper hand in society today.

Famous cases

Ernest Hemingway, an American author, died by suicide shortly after ECT at the Mayo Clinic in 1961. He is reported to have said to his biographer, "Well, what is the sense of ruining my head and erasing my memory, which is my capital, and putting me out of business? It was a brilliant cure but we lost the patient...."[130] American surgeon and award-winning author Sherwin B. Nuland is another notable person who has undergone ECT.[131] In his 40s, this successful surgeon's depression became so severe that he had to be institutionalized. After exhausting all treatment options, a young resident assigned to his case suggested ECT, which ended up being successful.[132] Author David Foster Wallace also received ECT for many years, beginning as a teenager, before his suicide at age 46.[133]
Award-winning New Zealand author Janet Frame had ECT. She later wrote about this in her novel Faces in the Water.

Fictional examples

Electroconvulsive therapy has been depicted in fiction, including fictional works partly based on true experiences. These include Sylvia Plath's autobiographical novel, The Bell Jar, Ken Loach's film Family Life, and Ken Kesey's novel One Flew Over the Cuckoo's Nest; Kesey's novel is a direct product of his time working the graveyard shift as an orderly at a mental health facility in Menlo Park, California.[134][135]

In the 2000 film Requiem for a Dream, Sarah Goldfarb receives "unmodified" electroconvulsive therapy after experiencing severe amphetamine psychosis following prolonged stimulant abuse. In the 2014 TV series Constantine, the protagonist John Constantine is institutionalized and specifically requests electroconvulsive therapy as an attempt to alleviate or resolve his mental problems.

The musical Next to Normal centers around the family of a woman who undergoes the procedure.
Robert Pirsig suffered a nervous breakdown and spent time in and out of psychiatric hospitals between 1961 and 1963. He was diagnosed with paranoid schizophrenia and clinical depression as a result of an evaluation conducted by psychoanalysts, and was treated with electroconvulsive therapy on numerous occasions, a treatment he discusses in his novel, Zen and the Art of Motorcycle Maintenance.

In the HBO series Six Feet Under season 5, George undergoes an ECT treatment to deal with his increasing paranoia. The depiction is shown realistically, with an actual ECT machine.

Gender

Throughout the history of ECT, women have received it two to three times as often as men.[136] Currently, about 70 percent of ECT patients are women.[1] This may be due to the fact that women are more likely to be diagnosed with depression.[1][69] A 1974 study of ECT in Massachusetts reported that women made up 69 percent of those given ECT.[137] The Ministry of Health in Canada reported that from 1999 until 2000 in the province of Ontario, women were 71 percent of those given ECT in provincial psychiatric institutions, and 75 percent of the total ECT given was given to women.[138]

Young people

ECT treatment of severely autistic children with violent, sometimes self-harming behaviour first began in parts of the US during the early years of the 21st century. Each session reportedly alleviates symptoms for up to 10 days at a time, but it is not claimed to be a cure. One practitioner, Charles Kellner, ECT director at Mount Sinai Hospital in New York, is so convinced that ECT is effective and safe that he allowed a parent to witness a procedure and the BBC to record the intervention.
