
Sunday, January 19, 2014

The effect of today’s technology on tomorrow’s jobs will be immense—and no country is ready for it


INNOVATION, the elixir of progress, has always cost people their jobs. In the Industrial Revolution artisan weavers were swept aside by the mechanical loom. Over the past 30 years the digital revolution has displaced many of the mid-skill jobs that underpinned 20th-century middle-class life. Typists, ticket agents, bank tellers and many production-line jobs have been dispensed with, just as the weavers were.

For those, including this newspaper, who believe that technological progress has made the world a better place, such churn is a natural part of rising prosperity. Although innovation kills some jobs, it creates new and better ones, as a more productive society becomes richer and its wealthier inhabitants demand more goods and services. A hundred years ago one in three American workers was employed on a farm. Today less than 2% of them produce far more food. The millions freed from the land were not consigned to joblessness, but found better-paid work as the economy grew more sophisticated. Today the pool of secretaries has shrunk, but there are ever more computer programmers and web designers.
Optimism remains the right starting-point, but for workers the dislocating effects of technology may make themselves evident faster than its benefits (see article). Even if new jobs and wonderful products emerge, in the short term income gaps will widen, causing huge social dislocation and perhaps even changing politics. Technology’s impact will feel like a tornado, hitting the rich world first, but eventually sweeping through poorer countries too. No government is prepared for it.

Why be worried? It is partly just a matter of history repeating itself. In the early part of the Industrial Revolution the rewards of increasing productivity went disproportionately to capital; later on, labour reaped most of the benefits. The pattern today is similar. The prosperity unleashed by the digital revolution has gone overwhelmingly to the owners of capital and the highest-skilled workers. Over the past three decades, labour’s share of output has shrunk globally from 64% to 59%. Meanwhile, the share of income going to the top 1% in America has risen from around 9% in the 1970s to 22% today. Unemployment is at alarming levels in much of the rich world, and not just for cyclical reasons. In 2000, 65% of working-age Americans were in work; since then the proportion has fallen, during good years as well as bad, to the current level of 59%.

Worse, it seems likely that this wave of technological disruption to the job market has only just started. From driverless cars to clever household gadgets (see article), innovations that already exist could destroy swathes of jobs that have hitherto been untouched. The public sector is one obvious target: it has proved singularly resistant to tech-driven reinvention. But the step change in what computers can do will have a powerful effect on middle-class jobs in the private sector too.

Until now the jobs most vulnerable to machines were those that involved routine, repetitive tasks. But thanks to the exponential rise in processing power and the ubiquity of digitised information (“big data”), computers are increasingly able to perform complicated tasks more cheaply and effectively than people. Clever industrial robots can quickly “learn” a set of human actions. Services may be even more vulnerable. Computers can already detect intruders in a closed-circuit camera picture more reliably than a human can. By comparing reams of financial or biometric data, they can often diagnose fraud or illness more accurately than any number of accountants or doctors. One recent study by academics at Oxford University suggests that 47% of today’s jobs could be automated in the next two decades.
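As a purely illustrative aside (not from the article), the kind of pattern-matching being described can be sketched in a few lines: an off-the-shelf anomaly detector flagging unusual transactions in a toy dataset. The data, model choice, and thresholds here are invented for illustration.

    # Illustrative only: a toy anomaly detector of the kind used to flag
    # suspicious transactions. Data, model choice, and parameters are invented.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    routine = rng.normal(loc=50, scale=15, size=(1000, 1))   # everyday purchase amounts
    unusual = rng.normal(loc=900, scale=50, size=(5, 1))     # a handful of outliers
    amounts = np.vstack([routine, unusual])

    model = IsolationForest(contamination=0.01, random_state=0).fit(amounts)
    flags = model.predict(amounts)                            # -1 marks an anomaly
    print("flagged amounts:", amounts[flags == -1].ravel().round())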

At the same time, the digital revolution is transforming the process of innovation itself, as our special report explains. Thanks to off-the-shelf code from the internet and platforms that host services (such as Amazon’s cloud computing), provide distribution (Apple’s app store) and offer marketing (Facebook), the number of digital startups has exploded. Just as computer-games designers invented a product that humanity never knew it needed but now cannot do without, so these firms will no doubt dream up new goods and services to employ millions. But for now they are singularly light on workers. When Instagram, a popular photo-sharing site, was sold to Facebook for about $1 billion in 2012, it had 30m customers and employed 13 people. Kodak, which filed for bankruptcy a few months earlier, employed 145,000 people in its heyday.

The problem is one of timing as much as anything. Google now employs 46,000 people. But it takes years for new industries to grow, whereas the disruption a startup causes to incumbents is felt sooner.
Airbnb may turn homeowners with spare rooms into entrepreneurs, but it poses a direct threat to the lower end of the hotel business—a massive employer.

If this analysis is halfway correct, the social effects will be huge. Many of the jobs most at risk are lower down the ladder (logistics, haulage), whereas the skills that are least vulnerable to automation (creativity, managerial expertise) tend to be higher up, so median wages are likely to remain stagnant for some time and income gaps are likely to widen.

Anger about rising inequality is bound to grow, but politicians will find it hard to address the problem. Shunning progress would be as futile now as the Luddites’ protests against mechanised looms were in the 1810s, because any country that tried to stop would be left behind by competitors eager to embrace new technology. The freedom to raise taxes on the rich to punitive levels will be similarly constrained by the mobility of capital and highly skilled labour.

The main way in which governments can help their people through this dislocation is through education systems. One of the reasons for the improvement in workers’ fortunes in the latter part of the Industrial Revolution was that schools were built to educate them—a dramatic change at the time. Now those schools themselves need to be changed, to foster the creativity that humans will need to set them apart from computers. There should be less rote-learning and more critical thinking.
Technology itself will help, whether through MOOCs (massive open online courses) or even video games that simulate the skills needed for work.

The definition of “a state education” may also change. Far more money should be spent on pre-schooling, since the cognitive abilities and social skills that children learn in their first few years define much of their future potential. And adults will need continuous education. State education may well involve a year of study to be taken later in life, perhaps in stages.

Yet however well people are taught, their abilities will remain unequal, and in a world which is increasingly polarised economically, many will find their job prospects dimmed and wages squeezed.
The best way of helping them is not, as many on the left seem to think, to push up minimum wages. Jacking up the floor too far would accelerate the shift from human workers to computers. Better to top up low wages with public money so that anyone who works has a reasonable income, through a bold expansion of the tax credits that countries such as America and Britain use.

Innovation has brought great benefits to humanity. Nobody in their right mind would want to return to the world of handloom weavers. But the benefits of technological progress are unevenly distributed, especially in the early stages of each new wave, and it is up to governments to spread them. In the 19th century it took the threat of revolution to bring about progressive reforms. Today’s governments would do well to start making the changes needed before their people get angry.
 

Primates, including humans, burn half as many calories as other mammals

The study suggests that our remarkably slow metabolisms explain why humans and other primates grow up so slowly and live such long lives.

“A human – even someone with a very physically active lifestyle – would need to run a marathon each day just to approach the average daily energy expenditure of a mammal their size.” – Herman Pontzer. Image via Wikimedia Commons
 
A study published January 14, 2014 in the Proceedings of the National Academy of Sciences suggests that humans and other primates burn 50% fewer calories each day than other mammals of similar size. This slow metabolism, the researchers say, may help explain why humans and other primates grow up slower and live longer than most mammals.

An international team of scientists examined 17 primate species in zoos, sanctuaries and in the wild. Using a safe and non-invasive technique, they measured the number of calories the primates burned over a 10-day period. Herman Pontzer, Associate Professor of Anthropology at Hunter College, led the study. Pontzer said:
The results were a real surprise. Humans, chimpanzees, baboons and other primates expend only half the calories we’d expect for a mammal. To put that in perspective, a human – even someone with a very physically active lifestyle – would need to run a marathon each day just to approach the average daily energy expenditure of a mammal their size.
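Some back-of-the-envelope arithmetic makes the marathon comparison concrete. The numbers below are rough, assumed values rather than figures from the paper: a typical adult's daily expenditure, the approximate energy cost of a marathon, and the study's headline finding that primates burn about half of what other mammals their size do.

    # Rough illustration of the marathon comparison; all values are assumed
    # round numbers, not figures taken from the study.
    human_daily_kcal = 2700      # typical adult daily energy expenditure
    marathon_kcal = 2600         # approximate energy cost of running a marathon
    primate_fraction = 0.5       # primates burn ~half what other mammals their size do

    expected_nonprimate = human_daily_kcal / primate_fraction
    human_plus_marathon = human_daily_kcal + marathon_kcal

    print(f"expected for a non-primate mammal of human size: ~{expected_nonprimate:.0f} kcal/day")
    print(f"a human who also runs a marathon that day:       ~{human_plus_marathon:.0f} kcal/day")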
The study also found that primates in captivity expend as many calories each day as their counterparts living in the wild. The researchers say this suggests that physical activity may have less of an impact on daily energy expenditure than was previously believed.
Primates, including humans, burn 50% fewer calories each day than other mammals of similar size. Image of chimpanzee family via Science Museum of Minnesota

Bottom line: Humans and other primates burn 50% fewer calories each day than other mammals of similar size. This slow metabolism, the researchers say, may help explain why humans and other primates grow up slower and live longer than most mammals.

Linking Weather Extremes to Global Warming – Greg Laden's Blog

 

Global Warming is the increase in the Earth’s temperature owing to the greenhouse effects of the release of CO2 and other gases into the atmosphere, mainly by humans burning fossil fuel, but also by the release of methane from oil wells and melting Arctic permafrost, natural gas from leaky pipes, and so on. This increase in temperature occurs in both the atmosphere and the oceans, as well as the land surface itself. During some periods of time most of the increase seems to happen in the atmosphere, while during other times it seems to occur more in the oceans. (As an aside: when you use passive geothermal technology to heat and cool your home, the heat in the ground around your house is actually from the sun warming the Earth’s surface.)

“Weather” as we generally think of it consists partly of storms, perturbations in the atmosphere, and we would expect more of at least some kinds of storms, or more severe ones, if the atmosphere has more energy, which it does because of global warming. But “weather” is also temperature, and we recognize that severe heat waves and cold waves, long periods of heavy flooding rains, and droughts are very important, and it is hard to miss the fact that these phenomena have been occurring with increasing frequency in recent years. 

We know that global warming changes the way air currents in the atmosphere work, and we know that atmospheric air currents can determine both the distribution and severity of storms and the occurrence of long periods of extreme heat or cold and wet or dry. One of the ways this seems to happen involves what are known as “high amplitude waves” in the jet stream.

One of the Northern Hemisphere Jet Streams, which emerges at the boundary between temperate air masses and polar air masses, is a fast-moving, high-altitude stream of air. There is a large difference in the temperature of the Troposphere north and south of any Jet Stream, so it can be thought of as the boundary between cooler and warmer conditions. Often, the northern Jet Stream encircles the planet as a more or less circular stream of fast-moving air, following a fairly straight path around the globe. Under certain conditions, however, the Jet Stream can be wavy, curving north then south then north and so on around the planet. These waves can themselves be either stationary (not moving around the planet) or they can move from west to east.

A “high amplitude” Jet Stream is a wavy jet stream, and the waves can be very dramatic. When the jet stream is wavy and the waves themselves are relatively stationary, the curves are said to be “blocking” … meaning that they are keeping masses of either cold (to the north) or warm (to the south) air in place. Also, the turning points of the waves set up large rotating systems of circulation that can control the formation of storms.

So, a major heat wave in a given region can be caused by the northern Jet Stream being both wavy (high amplitude) with a big wave curving north across the region, bringing very warm air with it, at the same time the Jet Stream’s waves are relatively stationary, causing that lobe of southerly warm air to stay in place for many days. Conversely, a lobe of cool air from the north can be spread across a region and kept in place for a while.

Here is a cross section of the Jet Streams in the Northern Hemisphere showing their relationship with major circulating air masses:
Cross section of the atmosphere of the Northern Hemisphere. The Jet Streams form at the highly energetic boundary between major circulating cells, near the top of the Troposphere.
Here is a cartoon of the Earth showing jet streams moving around the planet:
The Jet Streams moving around the planet. Not indicated is the Intertropical Convergence Zone (ITCZ) around the equator, which is both not a Jet Stream and the Mother of All Jet Streams. This post mainly concerns the “Polar Jet.” Note that the wind in the Jet Streams moves from west to east, and the Jet Streams can be either pretty straight or pretty curvy. Curvy = “high amplitude.” This figure and the one above are from NOAA.
Here is a depiction of the Jet Stream being very curvy. The waves in the Jet Stream are called Rossby waves.
The Jet Stream in a particularly wavy state.
 
(See also this animation on Wikicommons, which will open in a new window.)

Research published in the Proceedings of the National Academy of Sciences last February, in a paper titled “Quasiresonant amplification of planetary waves and recent Northern Hemisphere weather extremes,” links global warming to the setup of high amplitude waves in the Jet Stream, as well as relatively stationary, blocking, waves that cause extreme warm or cold conditions to persist for weeks rather than just a few days. According to lead author Vladimir Petoukhov, “An important part of the global air motion in the mid-latitudes of the Earth normally takes the form of waves wandering around the planet, oscillating between the tropical and the Arctic regions. So when they swing up, these waves suck warm air from the tropics to Europe, Russia, or the US, and when they swing down, they do the same thing with cold air from the Arctic…What we found is that during several recent extreme weather events these planetary waves almost freeze in their tracks for weeks. So instead of bringing in cool air after having brought warm air in before, the heat just stays.”

So how does global warming cause the northern Jet Stream to become wavy, with those waves being relatively stationary? It’s complicated. One way to think about it is to observe waves elsewhere in day to day life. On the highway, if there is enough traffic, waves of cars form, as clusters of several cars moving together with relatively few cars to be found in the gaps between these clusters. Change the number of cars, or the speed limit, or other factors, and you may see the size and distribution of these clusters (waves) of cars change as well. If you run the water from your sink faucet at just the right rate, you can see waves moving up and down on the stream of water. If you adjust the flow of water the size and behavior of these “standing waves” changes. In a baseball or football field, when people do “the wave” their hand motions collectively form a wave of silliness that moves around the park, and the width and speed of that wave is a function of how quickly individuals react to their fellow sports fan’s waving activity. Waves form in a medium (of cars, water molecules, people, etc.)
following a number of physical principles that determine the size, shape, speed, and stability of the waves.

The authors of this paper use math that is far beyond the scope of a mere blog post to link together all the relevant atmospheric factors and the shape of the northern Jet Stream. They found that when the effects of Global Warming are added in, the Jet Stream becomes less linear, and the deep meanders (sometimes called Rossby waves) that are set up tend to occur with a certain frequency (6, 7, or 8 major waves encircling the planet) and that these waves tend to not move for many days once they get going. They tested their mathematical model using actual weather data over a period of 32 years and found a good fit between atmospheric conditions, predicted wave patterns, and actual observed wave patterns.

The northern Jet Stream originates as a function of the gradient of heat from the Equatorial regions to the Polar regions. If air temperature was very high at the equator and very low at the poles, the Jet Stream would look one way. If air temperatures were (and this is impossible) the same at the Equator and the poles, there would probably be no Jet Stream at all. At various different plausible gradients of temperature from Equator to the poles, various different possible configurations of Jet Streams emerge.
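A much cruder picture than the paper's analysis, but useful for intuition, is the textbook barotropic Rossby-wave result: a wave with zonal wavenumber k is stationary when the jet's wind speed U satisfies k = sqrt(beta/U), where beta is the planetary vorticity gradient. A weaker equator-to-pole temperature gradient goes with a weaker jet, which shifts the stationary pattern toward more waves around a latitude circle. The wind speeds below are assumed, illustrative values.

    # Back-of-the-envelope sketch: stationary Rossby wavenumber k = sqrt(beta / U).
    # A weaker jet (smaller U) favors more stationary waves around a latitude circle.
    # The wind speeds are assumed, illustrative values.
    import math

    OMEGA = 7.292e-5          # Earth's rotation rate, rad/s
    RADIUS = 6.371e6          # Earth's radius, m
    LAT = math.radians(45)    # reference latitude

    beta = 2 * OMEGA * math.cos(LAT) / RADIUS        # planetary vorticity gradient, 1/(m s)
    circumference = 2 * math.pi * RADIUS * math.cos(LAT)

    for u in (15.0, 8.0):                            # stronger vs. weakened jet, m/s
        k = math.sqrt(beta / u)                      # stationary zonal wavenumber, 1/m
        n_waves = circumference * k / (2 * math.pi)  # waves fitting around the globe
        print(f"U = {u:4.1f} m/s  ->  about {n_waves:.1f} stationary waves")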

One of the major effects of global warming has been the warming of the Arctic. This happens for at least two reasons. First, the atmosphere and oceans are simply warmer, so everything gets warmer. In addition, these warmer conditions cause the melting of Arctic ice to be much more extreme each summer, so that there is more exposed water in the Arctic Ocean, for a longer period of time. This means that less sunlight is reflected directly back into space (because there is less shiny ice) and the surface of the ice-free northern sea absorbs sunlight and converts it into heat. For these reasons, the Arctic region is warming at a higher rate than other regions farther to the south in the Northern Hemisphere. This, in turn, makes for a reduced gradient in the atmospheric temperature from tropical to temperate to polar regions.
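The ice-albedo part of that feedback is simple enough to put rough numbers on. With typical textbook albedos (around 0.6 for sea ice, 0.06 for open ocean) and an assumed, illustrative amount of summer sunlight, open water absorbs more than twice as much energy as the ice it replaced:

    # Illustrative ice-albedo arithmetic; albedos are typical textbook values and
    # the sunlight figure is an assumed round number.
    sunlight = 200.0        # W/m^2, assumed average summer insolation at the surface
    albedo_ice = 0.6        # fraction of sunlight reflected by sea ice
    albedo_ocean = 0.06     # fraction reflected by open water

    print(f"absorbed over ice:        {(1 - albedo_ice) * sunlight:.0f} W/m^2")
    print(f"absorbed over open water: {(1 - albedo_ocean) * sunlight:.0f} W/m^2")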

Changing the gradient of the atmospheric temperature in a north-south axis is like adjusting the rate of water flowing from your faucet, or changing the number of cars on the highway, or replacing all the usual sports fans at the stadium with stoned people with arthritis. The nature of the waves changes.

In the case of the atmosphere of Earth’s Northern Hemisphere, global warming has changed the dynamic of the northern Jet Stream, and this has resulted in changes in weather extremes. This would apply to heat waves, cold snaps, and the distribution of precipitation. The phenomenon increasingly being called “Weather Whiplash” … more extremes in all directions, heat vs. cold and wet vs. dry … seems to be caused largely by this effect.

This study is somewhat limited because it covers only a 32-year period, but the findings of the study are in accord with expectations based on what we know about how the Earth’s climate system works, and the modeling matches empirical reality quite well.

Petoukhov, V., Rahmstorf, S., Petri, S., & Schellnhuber, H. (2013). Quasiresonant amplification of planetary waves and recent Northern Hemisphere weather extremes. Proceedings of the National Academy of Sciences, 110(14), 5336–5341. DOI: 10.1073/pnas.1222000110

How to tap the sun's energy through heat as well as light

By David Chandler
Solar-power device would use heat to enhance efficiency
A nanophotonic solar thermophotovoltaic device as viewed from the perspective of the incoming sunlight. Reflective mirrors boost the intensity of the light reaching the carbon nanotube absorber array (center), enabling the device to reach high temperatures and record-setting efficiencies. Credit: Felice Frankel

A new approach to harvesting solar energy, developed by MIT researchers, could improve efficiency by using sunlight to heat a high-temperature material whose infrared radiation would then be collected by a conventional photovoltaic cell. This technique could also make it easier to store the energy for later use, the researchers say.


In this case, adding the extra step improves performance, because it makes it possible to take advantage of wavelengths of light that ordinarily go to waste. The process is described in a paper published this week in the journal Nature Nanotechnology, written by graduate student Andrej Lenert, associate professor of mechanical engineering Evelyn Wang, physics professor Marin Soljačić, principal research scientist Ivan Celanović, and three others.
A conventional silicon-based solar cell "doesn't take advantage of all the photons," Wang explains. That's because converting the energy of a photon into electricity requires that the photon's energy level match a characteristic of the photovoltaic (PV) material called the bandgap. Silicon's bandgap responds to many wavelengths of light, but misses many others.
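The mismatch Wang describes can be put in rough numbers. A photon can only be converted if its energy is at least the bandgap, which gives the cell a cutoff wavelength of lambda = h*c / E_gap; longer-wavelength sunlight passes through unused, and much of the excess energy of shorter-wavelength photons is lost as heat. The sketch below uses silicon's approximate bandgap purely for illustration.

    # Illustrative only: cutoff wavelength of a PV cell, lambda = h * c / E_gap.
    # Photons with wavelengths longer than this carry too little energy to convert.
    PLANCK = 6.626e-34    # J*s
    LIGHT = 2.998e8       # m/s
    EV = 1.602e-19        # joules per electron-volt

    e_gap_ev = 1.1                                        # approximate silicon bandgap, eV
    cutoff_nm = PLANCK * LIGHT / (e_gap_ev * EV) * 1e9    # cutoff wavelength, nm
    print(f"silicon cutoff: about {cutoff_nm:.0f} nm")    # roughly 1100 nm, in the near infrared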

To address that limitation, the team inserted a two-layer absorber-emitter device—made of novel materials including carbon nanotubes and photonic crystals—between the sunlight and the PV cell. This intermediate material collects energy from a broad spectrum of sunlight, heating up in the process. When it heats up, as with a piece of iron that glows red hot, it emits light of a particular wavelength, which in this case is tuned to match the bandgap of the PV cell mounted nearby.
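Wien's displacement law gives a rough feel for how hot such an emitter must run for its thermal glow to peak near a given bandgap: T is about b / lambda_peak, with b around 2.9 x 10^-3 m*K. The bandgaps below are assumed, illustrative values; lower-bandgap cells let the emitter run at more practical temperatures, which is part of why thermophotovoltaic work tends to use them.

    # Illustrative only: emitter temperature whose blackbody emission peaks near a
    # chosen bandgap wavelength, via Wien's law T = b / lambda_peak.
    # The bandgap values are assumed examples, not taken from the paper.
    B_WIEN = 2.898e-3     # Wien's displacement constant, m*K
    PLANCK = 6.626e-34    # J*s
    LIGHT = 2.998e8       # m/s
    EV = 1.602e-19        # joules per electron-volt

    for e_gap_ev in (1.1, 0.55):                                  # example bandgaps, eV
        lam_peak = PLANCK * LIGHT / (e_gap_ev * EV)               # metres
        print(f"bandgap {e_gap_ev:.2f} eV -> peak near {lam_peak * 1e6:.2f} um, "
              f"emitter around {B_WIEN / lam_peak:.0f} K")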

This basic concept has been explored for several years, since in theory such solar thermophotovoltaic (STPV) systems could provide a way to circumvent a theoretical limit on the energy-conversion efficiency of semiconductor-based photovoltaic devices.
That limit, called the Shockley-Queisser limit, imposes a cap of 33.7 percent on such efficiency, but Wang says that with TPV systems, "the efficiency would be significantly higher—it could ideally be over 80 percent."

There have been many practical obstacles to realizing that potential; previous experiments have been unable to produce a STPV device with efficiency of greater than 1 percent. But Lenert, Wang, and their team have already produced an initial test device with a measured efficiency of 3.2 percent, and they say with further work they expect to be able to reach 20 percent efficiency—enough, they say, for a commercially viable product.

The Infinite Series and the Mind-Blowing Result

By Phil Plait
The answer is yes. Sometimes, kinda.
Photo by Numberphile, from the video, modified by Phil Plait
Yesterday, I posted an article about a math video that showed how you can sum up an infinite series of numbers to get a result of, weirdly enough, -1/12.
A lot of stuff happened after I posted it. Some people were blown away by it, and others… not so much. A handful of mathematicians were less than happy with what I wrote, and even more were less than happy with the video. I got a few emails, a lot of tweets, and some very interesting conversations out of it.
                    
I decided to write a follow-up post because I try to correct errors when I make them, and shine more light on a problem if it needs it. There are multiple pathways to take here (which is ironic because that’s actually part of the problem with the math). Therefore this post is part 1) update, 2) correction, and 3) mea culpa, with a defense (hopefully without being defensive).

Let me take a moment to explain right away. No, there is too much. Let me sum up*:
1) The infinite series in the video (1 + 2 + 3 + 4 + 5 …) can in fact be tackled using a rigorous mathematical method, and can in fact be assigned a value of -1/12! This method is quite real, and very useful. And yes, the weirdness of it is brain melting.

2) The method used in the video to write out some series and manipulate them algebraically is actually not a great way to figure this problem out. It uses a trick that’s against the rules, so strictly speaking it doesn’t work. It’s a nice demo to show some fun things, but its utility is questionable at best.

3) I had my suspicions about the method used in the video, but suppressed them. That was a mistake.
That’s the tl;dr version. Here’s the detail.

1) Terms of Endearment

In math, you have to set up rules that allow you to do whatever it is you want to do. These rules can be self-consistent, totally logical, and very useful. Or, they can be self-consistent, totally logical, and not useful. Let me give you an example, inspired by a conversation I had with the delightful mathematician Jordan Ellenberg, who contacted Slate and me after my article went up.

Imagine you come upon a society that uses numbers only as integer magnitudes, that is, to measure the amount of something in integer units (1, 2, 3 etc.). You can have three bricks, and your friend has five bricks. They also have a concept for ratios, so you have 3/5ths as many bricks as your friend.
But in their system, you can’t mix the two. You can’t have 3 and 3/4 bricks, because fractions are only for ratios. In their system, having a fractional brick doesn’t make sense, any more than saying you are six feet nine gallons tall in ours. Those units don’t play well together. Mind you, their system is self-consistent and logical, but I’d argue it has limited use. Fractions can be wildly multipurpose.
It’s similar to infinite series. In the method you learned in high school, the series
1 + 2 + 3 + 4 + 5 … doesn’t converge, and tends to go to infinity. That is also self-consistent, logical, but of limited use in this case. The rules of how we deal with series don’t let you do much with that.
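As a quick aside, in that high-school (limit-based) way of handling the series, the verdict is immediate: the partial sums have a closed form and grow without bound, so that set of rules simply declines to assign the sum a finite value.

\[
S_N = 1 + 2 + 3 + \cdots + N = \frac{N(N+1)}{2} \longrightarrow \infty \quad \text{as } N \to \infty .
\]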
But there is a method called analytic continuation that does. It redefines things a bit and uses different rules, which allow for dealing with such things. The mathematicians Euler and Riemann used it to get around the problems of infinite diverging series, and it allowed them to assign the value -1/12 to this one. Those rules are self-consistent, logical, and highly useful. In fact, as I pointed out in the previous post, they’re used to great success in many fields of physics. It gets complicated quickly, but you can read more about this here and especially here (that second one deals with this problem specifically, and in fact shows how analytic continuation can handle the problems of all the series presented in the Numberphile video). One of the greatest mathematicians the world has ever seen, Ramanujan, also did this. In fact, you should read about him; his story is as fascinating as it is tragic.
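For the curious, here is the skeleton of that rigorous route (a sketch, not a derivation). The Riemann zeta function is defined by a convergent sum where the real part of s is greater than 1, and analytic continuation extends it to (almost) the whole complex plane; the continued function takes the value -1/12 at s = -1, which is the precise sense in which that value gets attached to 1 + 2 + 3 + …

\[
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} \quad (\operatorname{Re}\, s > 1),
\qquad
\zeta(-1) = -\frac{1}{12} \ \text{(by analytic continuation)}.
\]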

Anyway, neither set of rules is wrong. One is just better at handling certain things than the other. And you have to be sure to color within the lines depending on which rules you use. Which brings me to…

2) Canceled Series

Before I get to this, I want to say that early on in the main Numberphile video (and in my post) there is a link to the rigorous analytic continuation solution to this problem. So that part was good. However, they then employ a trick that is a bit of a no-no.

There are rules for dealing with infinite series, many developed by Cauchy in the 1800s. One of them is that when you have a series that diverges, that is, does not approach a finite limit, you can’t go around adding and subtracting other series from it, or substituting values for it.

But in the video, they do just that. They write down Grandi’s series, show that it’s equal to ½ (more on that below), then use it to show that 1 – 2 + 3 – 4 + 5 … has ¼ as a solution. But given the rules of dealing with series in this way, that’s a fudge (ironically, similar to the trick of “proving” 1 = 0, something I mentioned in my first post). So in the video, where they multiply through the series, shift them, and subtract them from one another… that’s not allowed. It works for a finite number of terms, but leaves that aggravating tail of infinite terms to mess things up. That tail winds up wagging the dog, negating the whole thing.
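For reference, this is the manipulation the video performs, written out. Adding a series to a shifted copy of itself, or subtracting one divergent series from another, is exactly the kind of step those rules forbid, even though here it happens to land on the same value the rigorous method assigns:

\[
\begin{aligned}
S_1 &= 1 - 1 + 1 - 1 + \cdots \ \text{(declared to equal } \tfrac{1}{2}\text{)},\\
S_2 &= 1 - 2 + 3 - 4 + \cdots, \qquad S_2 + (\text{shifted } S_2) = S_1 \ \Rightarrow\ S_2 = \tfrac{1}{4},\\
S   &= 1 + 2 + 3 + 4 + \cdots, \qquad S - S_2 = 4 + 8 + 12 + \cdots = 4S \ \Rightarrow\ S = -\tfrac{1}{12}.
\end{aligned}
\]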

Again, using Riemann’s and Euler’s work, you can work this through legally. But using series written out in the way of the video, not so much.

3) Reaching My Limit

[This may be of more interest to writers than math people. Caveat emptor.]
Overall, a lot of what I wrote in the article is correct prima facie. A lot of it wasn’t. How this came to be makes me a bit red-faced, and also has me chuckling at myself.

I did talk a bit about the analytic continuation method, called it rigorous, and said it shows that the series can have a value of -1/12. But I made a couple of mistakes: one was not trusting my instincts, and the other was trusting them too much.

The first case — and this is killing me — is that in my original draft of the post, I had a section pointing out that you can’t just add and subtract divergent series from each other! It was literally the first thing I wrote down after watching the video, because my math/science instinct told me there was a problem there. But I wound up removing it. Why?

Because of my writing instincts. I started digging into the cool Riemann stuff, and realized that since there really was a way to assign a value to the series, I didn’t need to worry about the actual way they did it in the video. It was a trick, but it got the value I expected, so I took the section out. It made some sense at the time; I had the analytic stuff first, and the video second. I figured I had established the -1/12 bit, and it was good.

But the article wasn’t working. I needed to rearrange it, put the video near the top of the post; starting it off with lots of thin-air math might not be the best bet. So I put the rigorous math after the video, and totally left out the deleted section on the trick. Had I left it in, I suspect the new arrangement would’ve triggered alarm bells in my head, and I wouldn’t have been so laissez faire with the video.
Still, it didn’t, and I wound up not dissecting something I should have.
On top of that, I should’ve stressed the analytic solution more. I also should have stressed the idea that the examples I put in (the zigzag graph and the staircase) were only there as thought experiments to help understand the problem; they weren’t meant to be rigorous. I probably should have just left that whole part out. Again, mea culpa.

It’s an interesting balancing act, this writing about science and math. Sometimes it tips the wrong way. I blew it, and I'll try to be more careful in the future.
Term Limits

One more bit of exposition: You may have noticed that all through this post, I have avoided writing “This series equals -1/12,” or “the value of the sum of the series is -1/12.” This is due to my conversation with Ellenberg, which was fascinating to me. We talked about different methods, different rules, how new concepts were not accepted at first, and that things we think are simple now (like using fractions) were at one point in history heatedly debated as to their reality and usefulness. He put it very well:
It's not quite right to describe what the video does as “proving” that 1 + 2 + 3 + 4 + .... = -1/12.  When we ask “what is the value of the infinite sum,” we've made a mistake before we even answer!  Infinite sums don't have values until we assign them a value, and there are different protocols for doing that.  We should be asking not what IS the value, but what should we define the value to be?  There are different protocols, each with their own strengths and weaknesses.  The protocol you learn in calculus class, involving limits, would decline to assign any value at all to the sum in the video.  A different protocol assigns it the value -1/12.  Neither answer is more correct than the other.
Nice. Though I’ll add that one answer has more use than the other in certain circumstances; the point I made above.

This conversation led down the rabbit hole of how we use math and what for, and has inspired me to do some follow-up reading, more about the philosophy and development of various mathematical
methods than the methods themselves. This is pretty cool stuff.

Other Methods

Finally, I got a lot of polite and informative notes from folks correcting me and pointing out details of all this, many of which overlapped with each other (including, interestingly, the last bit about the value assigned to a sum). Thanks to everyone who did so. Here are a few links I was sent for those who want to venture a few more terms down the series:

Ron Garret’s page on manipulating series
Colin Grove’s page
If you want some hairy details, Terence Tao has ‘em.
Bryden Cais goes through the steps with a clear (though technical) explanation
Just to be sure they get seen again: John Baez’s page on the Euler method, and Riemann’s as well.
So: I made some mistakes, got other stuff right, could’ve been more clear, and learned a lot. Pretty much a typical day in anyone’s book.
 

How Can Neuroplasticity Affect Addiction and Recovery?

Thursday, January 16, 2014
 
What is This Thing Called Neuroplasticity?
And how does it impact addiction and recovery?
Posted by Dirk Hanson


Bielefeld, Germany—
The first in an irregular series of posts about a recent conference, Neuroplasticity in Substance Addiction and Recovery: From Genes to Culture and Back Again. The conference, held at the Center for Interdisciplinary Research (ZiF) at Bielefeld University, drew neuroscientists, historians, psychologists, philosophers, and even a freelance science journalist or two, coming in from Germany, the U.S., The Netherlands, the UK, Finland, France, Italy, Australia, and elsewhere. The organizing idea was to focus on how changes in the brain impact addiction and recovery, and what that says about the interaction of genes and culture. The conference co-organizers were Jason Clark and Saskia Nagel of the Institute of Cognitive Science at the University of Osnabrück, Germany.

One of the stated missions of the conference at Bielefeld’s Center for Interdisciplinary Research was to confront the leaky battleship called the disease model of addiction. Is it the name that needs changing, or the entire concept? Is addiction “hardwired,” or do things like learning and memory and choice and environmental circumstance play commanding roles that have been lost in the excitement over the latest fMRI scan?

What exactly is this neuroplasticity the conference was investigating? From a technical point of view, it refers to the brain’s ability to form new neural connections in response to illness, injury, or new environmental situations, just to name three. Nerve cells engage in a bit of conjuring known as “axonal sprouting,” which can include rerouting new connections around damaged axons. Alternatively, connections are pruned or reduced. Neuroplasticity is not an unmitigated blessing. Consider intrusive tinnitus, a loud and continuous ringing or hissing in the ears, which is thought to be the result of the rewiring of brain cells involved in the processing of sound, rather than the sole result of injury to cochlear hair cells.

The fact that the brain is malleable is not a new idea, to be sure. Psychologist Vaughn Bell, writing at Mind Hacks, has listed a number of scientific papers, from as early as 1896, which discuss the possibility of neural regeneration. But there is a problem with neuroplasticity, writes Bell, and it is that “there is no accepted scientific definition for the term, and, in its broad sense, it means nothing more than ‘something in the brain has changed.’” Bell quotes the introduction to the science text, Toward a Theory of Neuroplasticity: “While many scientists use the word neuroplasticity as an umbrella term, it means different things to different researchers in different subfields… In brief, a mutually agreed upon framework does not appear to exist.”

So the conference was dealing with two very slippery semantic concepts when it linked neuroplasticity and addiction. There were discussions of the epistemology of addiction, and at least one reference to Foucault, and plenty of arguments about dopamine, to keep things properly interdisciplinary. “Talking about ‘neuroscience,’” said Robert Malenka of Stanford University’s Institute for Neuro-Innovation and Translational Neurosciences, “is like talking about ‘art.’”

What do we really know about synaptic restructuring, or “brains in the wild,” as anthropologist Daniel Lende of the University of South Florida characterized it during his presentation? Lende, who called for using both neurobiology and ethnography in investigative research, said that more empirical work was needed if we are to better understand addiction “outside of clinical and laboratory settings.” Indeed, the prevailing conference notion was to open this discussion outwards, to include plasticity in all its ramifications—neural, medical, psychological, sociological, and legal—including, as well, the ethical issues surrounding addiction.

Among the addiction treatment modalities discussed in conference presentations were optogenetics, deep brain stimulation, psychedelic drugs, moderation, and cognitive therapies modeled after systems used to treat various obsessive-compulsive disorders. Some treatment approaches, such as optogenetics and deep brain stimulation, “have the potential to challenge previous notions of permanence and changeability, with enormous implications for legal strategies, treatment, stigmatization, and addicts’ conceptions of themselves,” in the words of Clark and Nagel.

Interestingly, there was little discussion of anti-craving medications, like naltrexone for alcohol and methadone for heroin. Nor was the standard “Minnesota Model” of 12 Step treatment much in evidence during the presentations oriented toward treatment. The emphasis was on future treatments, which was understandable, given that almost no one is satisfied with treatment as it is now generally offered. (There was also a running discussion of the extent to which America’s botched health care system and associated insurance companies have screwed up the addiction treatment landscape for everybody.)

It sometimes seems as if the more we study addiction, the farther it slips from our grasp, receding as we advance. Certainly health workers of every stripe, in every field from cancer to infectious diseases to mental health disorders, have despaired about their understanding of the terrain of the disorder they were studying. But even the term addiction is now officially under fire. The DSM-5 has banished the word from its pages, for starters.

Developmental psychologist Reinout Wiers of the University of Amsterdam used a common metaphor, the rider on an unruly horse, to stand in for the bewildering clash of top-down and bottom-up neural processes that underlie addictive behaviors. The impulsive horse and the reflective rider must come to terms, without entering into a mutually destructive spiral of negative behavior patterns. Not an easy task.

Saturday, January 18, 2014

Did First Placental Mammal Live Alongside Dinosaurs? Scientists Can't Agree

        
By Ewen Callaway
Did the first mammal with a placenta live alongside the dinosaurs — or did it emerge after a gigantic asteroid wiped them out? This is the subject of a heated debate that pits scientists who contend that fossils are the ultimate timekeeper of life’s history against researchers who say that genetics offers more reliable dates.

Such disputes have been raging for decades, since researchers first began gleaning evolutionary detail from proteins and DNA. But the skirmish over placental mammals — animals that give birth to live offspring that are in late stages of development, including whales, mice and humans — began with a paper published early last year, arguing that the group diversified only after those dinosaurs that did not evolve into birds went extinct, 65 million years ago.

For that study, Maureen O’Leary, an evolutionary biologist at Stony Brook University in New York, and her team spent several years characterizing and analyzing thousands of traits in dozens of living and fossil mammals. The team combined those characteristics with genetic data to build a giant tree of life, showing how different placental mammals related to one another.

But to establish when the different creatures evolved, the researchers looked only at the fossil record. They concluded that the earliest placental mammals appeared only after the asteroid impact that killed the dinosaurs and marked the end of the Cretaceous period and the beginning of the Palaeogene. After this, the team said, the placentals quickly diversified, and a menagerie of mammals filled the habitat niches left by the dinosaurs.

Phil Donoghue, a palaeobiologist at the University of Bristol, UK, and his collaborators were not convinced. O'Leary's work was “an incredibly impressive study in all aspects — except the timescale in evolutionary history”, he says. “What we were really concerned about is that this stuff was going to end up in textbooks.”

Fossil boundary

Now, Donoghue and evolutionary geneticists Mario dos Reis and Ziheng Yang of University College London publish a study in Biology Letters, saying that O’Leary’s team made a fatal error in assuming that lineages of species date back no further than their oldest fossils. The fossils should instead mark the minimum age for a lineage, says Donoghue, because it is likely that animals existed before that, but were not preserved as fossils or their remains have yet to be discovered.

Using a mathematical elaboration of this concept and genome data from dozens of mammals, Donoghue’s team calculated new dates for the placental-mammal family tree. The researchers conclude that placental mammals first emerged between 108 million and 72 million years ago — well before the (non-avian) dinosaurs disappeared.
The critique follows an August 2013 technical comment on O'Leary's paper, in which an independent group also took issue with the suggested Palaeogene origin, because, they said, it requires a dramatic increase in the rate of evolution to explain the early diversity of placental mammals.

Hard evidence

O’Leary says that her team wanted to avoid introducing a bias by assuming that placental mammals are older than suggested by the fossil evidence. But Anne Yoder, an evolutionary geneticist at Duke University in Durham, North Carolina, prefers Donoghue, dos Reis and Yang's approach. “It’s hard to say who’s right and who’s wrong, but the weight of the data support the dos Reis conclusion,” she says.

Yoder adds that discoveries have the potential to harmonize molecular and fossil dates. A decade ago she concluded, using genetic data, that the common ancestor of bush babies and lorises lived around 40 million years ago, even though the oldest fossil was less than 20 million years old. While her paper was in press, a team reported a more than 40-million-year-old fossil that matched her predictions.

As for the current debate, “early placental mammals found in the Cretaceous will settle this once and for all”, says Yoder. O’Leary agrees that the question will be decided by fossil evidence, not mathematical models. “I don’t see this as a problem that’s going to be solved by a computer,” she says.

This story originally appeared in Nature News.

Extraterrestrial liquid water

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Extraterrestrial_liquid_water ...