Life as we know it needs water to thrive. Even so, we see life persist in the driest environments on Earth. But how dry is too
dry? At what point is an environment too extreme for even
microorganisms, the smallest and often most resilient of lifeforms, to
survive? These questions are important to scientists searching for life
beyond Earth, including on the planet Mars. To help answer them, a
research team from NASA’s Ames Research Center in California’s Silicon
Valley traveled to the driest place on Earth: the Atacama Desert in
Chile, a 1,000-kilometer strip of land on South America’s west coast.
The Atacama Desert is one of the Earth’s environments that comes
closest to the parched Martian surface. But the Atacama isn’t uniformly
dry. When traveling from the relatively less dry southern end of the
desert in central Chile to its extremely dry center in northern Chile,
the annual precipitation shifts from a few millimeters of rain per year
to only a few millimeters of rain per decade.
This map of the Atacama Desert shows the change in
annual precipitation from one end of the desert to the other. The
aridity index mentioned is a value based on annual rainfall and water
loss.
Credits: NASA Ames Research Center
This non-uniformly dry environment provides an opportunity to search
for life at decreasing levels of precipitation. By pinning down how much
water an environment needs to be habitable, i.e., able to support
lifeforms, the research team was able to determine that a dry limit of
habitability exists.
"On Earth, we find evidence of microbial life everywhere," said Mary
Beth Wilhelm, an astrobiologist at Ames and lead author of the new study
published in the journal Astrobiology this month.
"However, in extreme environments, it’s important to know whether a
microbe is dormant and just barely surviving, or really alive and well."
Biologists define something as alive if it is capable of growth and
reproduction. If microbes are simply surviving or performing a few basic
functions, they’ll die within one generation without passing on any
genetic information. When looking for the potential of life on Mars,
scientists need to see this reproduction take place, which leads to
population growth and genetic change from one generation to the next.
"By learning if and how microbes stay alive in extremely dry regions
on Earth, we hope to better understand if Mars once had microbial life
and whether it could have survived until today," said Wilhelm.
A Sign of Stress is a Sign of Life
Scientists have a few tools to figure out whether a sample is growing
or just surviving. One important sign is stress. Living long enough to
grow and adapt in extreme deserts like the Atacama – or potentially on
Mars – is no easy task. If life is really growing in this extremely dry
environment, it’s going to be stressed, while dormant life simply
surviving will not. Because dormant life is not able to even try to grow
or reproduce, there are no stress markers, like changes in the
structure of certain cell molecules. Astrobiologists can look for some
tell-tale signs of this stress to search for evidence of growth in the
parched soils.
The science team collected soil samples from across the Atacama
Desert and brought them back to their lab at Ames. There, they performed
tests to identify stress markers in the samples by looking at features
common to all known living organisms.
Researchers collect samples from the surface of the Atacama
Desert in Chile, going a few centimeters into the ground.
Credits: NASA Ames Research Center
One stress marker can be found in lipids, molecules that make up the
outer surface of a living microbial cell, known as its membrane. When
cells are exposed to stressful conditions, their lipids change
structure, becoming more rigid.
Scientists found this marker in less dry parts of the Atacama, but it
was mysteriously missing from the driest regions, where microbes should
be more stressed. Based on these and other results, the team believes
that a line of transition exists between where minute amounts of water
are still enough for life to grow and where the environment is so dry
that microorganisms merely survive without growth in surface soil in the
Atacama.
Dating the Remnants of Life
Scientists can tell how long cells have been dead by studying a type
of molecule called amino acids, the building blocks of proteins. The
structures of these amino acids take two forms, each a mirror reflection
of the other, like a pair of hands. In fact, this "handedness" is the
term scientists use to describe these structures.
All life on Earth is built with "left-handed" amino acid molecules.
However, when a cell dies, some of its amino acids change at a known
rate into the reflecting "right-handed" structure, eventually balancing
into a 50-50 ratio over many years.
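The dating method described above can be illustrated with a short calculation. The model below is the standard first-order racemization kinetics; the rate constant is purely illustrative, since real racemization rates depend on the specific amino acid and strongly on temperature:

```python
import math

def racemization_age(d_over_l, k):
    """Estimate years since cell death from a measured D/L amino acid ratio,
    using first-order racemization kinetics:
        ln[(1 + D/L) / (1 - D/L)] = 2 * k * t
    where k is the racemization rate constant (per year).
    A D/L ratio of 0 means all-left-handed (freshly dead); a ratio
    approaching 1 means the 50-50 equilibrium has been reached."""
    if not 0.0 <= d_over_l < 1.0:
        raise ValueError("D/L must be in [0, 1); 1.0 means full equilibrium")
    return math.log((1.0 + d_over_l) / (1.0 - d_over_l)) / (2.0 * k)

# Hypothetical rate constant of 1e-5 per year, chosen only for illustration.
age = racemization_age(0.20, k=1.0e-5)
print(f"A D/L ratio of 0.20 implies roughly {age:,.0f} years since death")
```

Note how the age estimate diverges as D/L approaches 1: once the ratio equilibrates, the clock saturates and only a minimum age can be stated, which is why the study reports "at least 10,000 years."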
By looking at this ratio in the driest Atacama soils, the scientists
found microbes there that have been dead for at least 10,000 years.
Finding even the remnants of life this old is extremely rare, and
surprising for a sample sitting on the surface of Earth.
Getting Ready for Mars
Mars is 1,000 times drier than even the driest parts of the Atacama,
which makes it less likely that microbial life as we know it exists on
the planet’s surface, even with some access to water. However, even in
the driest areas of Chile’s desert, remnants of past microbial life from
wetter times in the Atacama’s history were clearly present and well
preserved over thousands of years. Because scientists know that Mars was
a wetter, more vibrant planet in its past, traces of that ancient life
might still be intact there as well.
"Before we go to Mars, we can use the Atacama like a natural
laboratory and, based on our results, adjust our expectations for what
we might find when we get there," said Wilhelm. "Knowing the surface of
Mars today might be too dry for life to grow, but that traces of
microbes can last for thousands of years helps us design better
instruments to not only search for life on and under the planet’s
surface, but to try and unlock the secrets of its distant past."
While the future is becoming more difficult to predict with
each passing year, we should expect an accelerating pace of
technological change. Nanotechnology is the next great technology wave
and the next phase of Moore’s Law. Nanotech innovations enable myriad
disruptive businesses that were not possible before, driven by
entrepreneurship.
Much of our future context will be defined by the accelerating
proliferation of information technology as it innervates society and
begins to subsume matter into code. It is a period of exponential growth
in the impact of the learning-doing cycle where the power of biology,
IT and nanotech compounds the advances in each formerly discrete domain.
The history of technology is one of disruption and exponential
growth, epitomized in Moore’s law, and generalized to many basic
technological capabilities that are compounding independently from the
economy. More than a niche subject of interest only to chip designers,
the continued march of Moore’s Law will affect all of the sciences, just
as nanotech will affect all industries. Thinking about Moore’s Law in
the abstract provides a framework for predicting the future of
computation and the transition to a new substrate: molecular
electronics. An analysis of progress in molecular electronics provides a
detailed example of the commercialization challenges and opportunities
common to many nanotechnologies.
Introduction to Technology Exponentials:
Despite a natural human tendency to presume linearity, accelerating
change from positive feedback is a common pattern in technology and
evolution. We are now crossing a threshold where the pace of disruptive
shifts is no longer inter-generational and begins to have a meaningful
impact over the span of careers and eventually product cycles.
As early stage VCs, we look for disruptive businesses run by
entrepreneurs who want to change the world. To be successful, we have to
identify technology waves early and act upon those beliefs. At DFJ, we
believe that nanotech is the next great technology wave, the nexus of
scientific innovation that revolutionizes most industries and indirectly
affects the fabric of society. Historians will look back on the
upcoming epoch with no less portent than the Industrial Revolution.
The aforementioned are some long-term trends. Today, from a
seed-stage venture capitalist perspective (with a broad sampling of the
entrepreneurial pool), we are seeing more innovation than ever before.
And we are investing in more new companies than ever before.
In the medium term, disruptive technological progress is relatively
decoupled from economic cycles. For example, for the past 40 years in
the semiconductor industry, Moore’s Law has not wavered in the face of
dramatic economic cycles. Ray Kurzweil’s abstraction of Moore’s Law
(from transistor-centricity to computational capability and storage
capacity) shows an uninterrupted exponential curve for over 100 years,
again without perturbation during the Great Depression or the World
Wars. Similar exponentials can be seen in Internet connectivity, medical
imaging resolution, genes mapped, and solved 3D protein structures. In
each case, the level of analysis is not products or companies, but basic
technological capabilities.
In his forthcoming book, Kurzweil summarizes the exponentiation of
our technological capabilities, and our evolution, with the near-term
shorthand: the next 20 years of technological progress will be
equivalent to the entire 20th century. For most of us, who do
not recall what life was like one hundred years ago, the metaphor is a
bit abstract. In 1900, in the U.S., there were only 144 miles of paved
road, and most Americans (94%+) were born at home, without a telephone,
and never graduated high school. Most (86%+) did not have a bathtub at
home or reliable access to electricity. Consider how much
technology-driven change has compounded over the past century, and
consider that an equivalent amount of progress will occur in one human
generation, by 2020. It boggles the mind, until one dwells on genetics,
nanotechnology, and their intersection. Exponential progress perpetually
pierces the linear presumptions of our intuition. “Future Shock” is no
longer on an inter-generational time-scale.
The history of humanity is that we use our tools and our knowledge to
build better tools and expand the bounds of our learning. We are
entering an era of exponential growth in our capabilities in biotech,
molecular engineering and computing. The cross-fertilization of these
formerly discrete domains compounds our rate of learning and our
engineering capabilities across the spectrum. With the digitization of
biology and matter, technologists from myriad backgrounds can decode and
engage the information systems of biology as never before. And this
inspires new approaches to bottom-up manufacturing, self-assembly, and
layered complex systems development.
Moore’s Law:
Moore’s Law is commonly reported as a doubling of transistor density
every 18 months. But this is not something the co-founder of Intel,
Gordon Moore, has ever said. It is a blending of his two
predictions: in 1965, he predicted an annual doubling of transistor
counts in the most cost-effective chip, and in 1975 he revised it to
every 24 months. With a little hand-waving, most reports attribute 18 months
to Moore’s Law, but there is quite a bit of variability. The popular
perception of Moore’s Law is that computer chips are compounding in
their complexity at near constant per unit cost. This is one of the many
abstractions of Moore’s Law, and it relates to the compounding of
transistor density in two dimensions. Others relate to speed (the
signals have less distance to travel) and computational power (speed x
density).
So as to not miss the long-term trend while sorting out the details,
we will focus on the 100-year abstraction of Moore’s Law below. But we
should digress for a moment to underscore the importance of continued
progress in Moore’s law to a broad set of industries.
Importance of Moore’s Law:
Moore’s Law drives chips, communications and computers and has become
the primary driver in drug discovery and bioinformatics, medical
imaging and diagnostics. Over time, the lab sciences become information
sciences, modeled on a computer rather than trial and error
experimentation.
NASA Ames shut down their wind tunnels this year. As Moore’s Law
provided enough computational power to model turbulence and airflow,
there was no longer a need to test iterative physical design variations
of aircraft in the wind tunnels, and the pace of innovative design
exploration dramatically accelerated.
Eli Lilly processed 100x fewer molecules this year than they
did 15 years ago. But their annual productivity in drug discovery did
not drop proportionately; it went up over the same period. “Fewer atoms
and more bits” is their coda.
Accurate simulation demands computational power, and once a
sufficient threshold has been crossed, simulation acts as an innovation
accelerant over physical experimentation. Many more questions can be
answered per day.
Recent accuracy thresholds have been crossed in diverse areas, such
as modeling the weather (predicting a thunderstorm six hours in advance)
and automobile collisions (a relief for the crash test dummies), and
the thresholds have yet to be crossed for many areas, such as protein
folding dynamics.
Long Term Abstraction of Moore’s Law:
Unless you work for a chip company and focus on fab-yield
optimization, you do not care about transistor counts. Integrated
circuit customers do not buy transistors. Consumers of technology
purchase computational speed and data storage density. When recast in
these terms, Moore’s Law is no longer a transistor-centric metric, and
this abstraction allows for longer-term analysis.
The exponential curve of Moore’s Law extends smoothly back in time
for over 100 years, long before the invention of the semiconductor.
Through five paradigm shifts—such as electro-mechanical calculators and
vacuum tube computers—the computational power that $1000 buys has
doubled every two years. For the past 30 years, it has been doubling
every year.
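The doubling cadences quoted above compound to staggering magnitudes, which a two-line sketch makes concrete (the doubling times are the ones stated in the text; nothing else is assumed):

```python
def growth_factor(years, doubling_time):
    """Total improvement after `years` when a capability doubles
    every `doubling_time` years."""
    return 2.0 ** (years / doubling_time)

# 100 years of doubling every 2 years: 2**50, on the order of 1e15-fold.
print(growth_factor(100, 2))
# The recent cadence: 30 years of doubling every year, 2**30, about 1e9-fold.
print(growth_factor(30, 1))
```

The same one-liner shows why small changes in the assumed doubling time (18 vs. 24 months) compound into enormous differences over decades.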
Each horizontal line on this logarithmic graph represents a 100x
improvement. A straight diagonal line would be an exponential, or
geometrically compounding, curve of progress. Kurzweil plots a slightly
upward curving line—a double exponential.
Each dot represents a human drama. They did not realize that they
were on a predictive curve. Each dot represents an attempt to build the
best computer with the tools of the day. Of course, we use these
computers to make better design software and manufacturing control
algorithms. And so the progress continues.
One machine was used in the 1890 Census; one cracked the Nazi Enigma
cipher in World War II; one predicted Eisenhower’s win in the
Presidential election. And there is the Apple ][, and the Cray 1, and
just to make sure the curve had not petered out recently, I looked up
the cheapest PC available for sale on Wal*Mart.com, and that is the
green dot that I have added to the upper right corner of the graph.
And notice the relative immunity to economic cycles. The Great
Depression and the World Wars and various recessions do not introduce a
meaningful delay in the progress of Moore’s Law. Certainly, the adoption
rates, revenue, profits and inventory levels of the computer companies
behind the various dots on the graph may go through wild oscillations,
but the long-term trend emerges nevertheless.
Any one technology, such as the CMOS transistor, follows an elongated
S-shaped curve of slow progress during initial development, upward
progress during a rapid adoption phase, and then slower growth from
market saturation over time. But a more generalized capability,
such as computation, storage, or bandwidth, tends to follow a pure
exponential—bridging across a variety of technologies and their cascade
of S-curves.
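This cascade of S-curves can be sketched numerically: model each paradigm as a logistic curve, and let each new paradigm arrive later with a higher ceiling. The sum traces an approximately exponential envelope even though every individual curve saturates. All parameters here are illustrative, chosen only to show the shape of the argument:

```python
import math

def logistic(t, midpoint, rate, ceiling):
    """One technology's S-curve: slow start, rapid adoption, saturation."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def capability(t, paradigms=5):
    """Sum of successive S-curves; each new paradigm arrives 10 time
    units later with a 10x higher ceiling (illustrative numbers)."""
    return sum(logistic(t, midpoint=10.0 * i, rate=1.0, ceiling=10.0 ** i)
               for i in range(paradigms))

# The envelope grows roughly 10x per 10 time units, i.e., exponentially,
# even though every individual paradigm eventually flattens out.
for t in (0, 10, 20, 30, 40):
    print(t, capability(t))
```

Each saturating paradigm hands off to its successor before the aggregate curve bends, which is exactly the handoff from electro-mechanical calculators through vacuum tubes to CMOS that the 100-year abstraction captures.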
If history is any guide, Moore’s Law will continue on and will jump
to a different substrate than CMOS silicon. It has done so five times in
the past, and will need to again in the future.
Problems With the Current Paradigm:
Intel co-founder Gordon Moore has chuckled at those who have
predicted the imminent demise of Moore’s Law in decades past. But the
traditional semiconductor chip is finally approaching some fundamental
physical limits. Moore recently admitted that Moore’s Law, in its
current form, with CMOS silicon, will run out of gas in 2017.
One of the problems is that the chips are getting very hot. The following graph of power density is also on a logarithmic scale:
This provides the impetus for chip cooling companies, like
Nanocoolers, to provide a breakthrough solution for removing 100 Watts
per square centimeter. In the long term, the paradigm has to change.
Another physical limit is the atomic limit—the indivisibility of
atoms. Intel’s current gate oxide is 1.2nm thick. Intel’s 45nm process
is expected to have a gate oxide that is only 3 atoms thick. It is hard
to imagine many more doublings from there, even with further innovation
in insulating materials. Intel has recently announced a breakthrough in a
nano-structured gate oxide (high k dielectric) and metal contact
materials that should enable the 45nm node to come on line in 2007. None
of the industry participants has a CMOS roadmap for the next 50 years.
A major issue with thin gate oxides, and one that will also come to
the fore with high-k dielectrics, is quantum mechanical tunneling. As
the oxide becomes thinner, the gate current can approach and even exceed
the channel current so that the transistor cannot be controlled by the
gate.
Another problem is the escalating cost of a semiconductor fab plant,
which is doubling every three years, a phenomenon dubbed Moore’s Second
Law. Human ingenuity keeps shrinking the CMOS transistor, but with
increasingly expensive manufacturing facilities—currently $3 billion per
fab.
A large component of fab cost is the lithography equipment that
patterns the wafers with successive sub-micron layers. Nanoimprint
lithography from companies like Molecular Imprints can dramatically
lower cost and leave room for further improvement from the field of
molecular electronics.
We have been investing in a variety of companies, such as Coatue,
D-Wave, FlexICs, Nantero, and ZettaCore, that are working on the next
paradigm shift to extend Moore’s Law beyond 2017. One near-term
extension to Moore’s Law focuses on the cost side of the equation.
Imagine rolls of wallpaper embedded with inexpensive transistors.
FlexICs deposits traditional transistors at room temperature on plastic,
a much cheaper bulk process than growing and cutting crystalline
silicon ingots.
Molecular Electronics:
The primary contender for the post-silicon computation paradigm is
molecular electronics, a nano-scale alternative to the CMOS transistor.
Eventually, molecular switches will revolutionize computation by scaling
into the third dimension—overcoming the planar deposition limitations
of CMOS. Initially, they will substitute for the transistor bottleneck
on an otherwise standard silicon process with standard external I/O
interfaces.
For example, Nantero employs carbon nanotubes suspended above metal
electrodes on silicon to create high-density nonvolatile memory chips
(the weak Van der Waals bond can hold a deflected tube in place
indefinitely with no power drain). Carbon nanotubes are small (~10 atoms
wide), 30x stronger than steel at 1/6 the weight, and perform the
functions of wires, capacitors and transistors with better speed, power,
density and cost. Cheap nonvolatile memory enables important advances,
such as “instant-on” PCs.
Other companies, such as Hewlett Packard and ZettaCore, are combining
organic chemistry with a silicon substrate to create memory elements
that self-assemble using chemical bonds that form along pre-patterned
regions of exposed silicon.
There are several reasons why molecular electronics is the next paradigm for Moore’s Law:
• Size: Molecular electronics has the potential to
dramatically extend the miniaturization that has driven the density and
speed advantages of the integrated circuit (IC) phase of Moore’s Law. In
2002, using an STM to manipulate individual carbon monoxide molecules,
IBM built a 3-input sorter by arranging those molecules precisely on a
copper surface. It is 260,000x smaller than the equivalent circuit built
in the most modern chip plant.
For a memorable sense of the difference in scale, consider a single
drop of water. There are more molecules in a single drop of water than
all transistors ever built. Think of the transistors in every memory
chip and every processor ever built—there are about 100x more molecules
in a drop of water. Sure, water molecules are small, but an important
part of the comparison depends on the 3D volume of a drop. Every IC, in
contrast, is a thin veneer of computation on a thick and inert
substrate.
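The drop-of-water comparison is easy to check with back-of-the-envelope numbers. The drop volume and the cumulative transistor count below are rough assumptions for illustration, not figures from the text:

```python
AVOGADRO = 6.022e23          # molecules per mole
DROP_ML = 0.05               # a typical drop is ~1/20 mL (assumption)
WATER_G_PER_MOL = 18.0       # molar mass of H2O; density is ~1 g/mL

# Moles of water in one drop, times Avogadro's number: ~1.7e21 molecules.
molecules_per_drop = (DROP_ML * 1.0 / WATER_G_PER_MOL) * AVOGADRO
print(f"molecules in one drop: {molecules_per_drop:.2g}")

# If on the order of 1e19 transistors had ever been built (a rough
# order-of-magnitude guess for the early 2000s), the drop wins by ~100x:
print(f"ratio: {molecules_per_drop / 1e19:.0f}x")
```

The point of the exercise is the 3D/2D contrast: the drop packs its molecules into a volume, while an IC spreads its transistors across a thin planar layer.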
• Power: One of the reasons that transistors are not stacked
into 3D volumes today is that the silicon would melt. The inefficiency
of the modern transistor is staggering. It is much less efficient at its
task than the internal combustion engine. The brain provides an
existence proof of what is possible; it is 100 million times more
efficient in power/calculation than our best processors. Sure it is slow
(under a kHz) but it is massively interconnected (with 100 trillion
synapses between 60 billion neurons), and it is folded into a 3D volume.
Power per calculation will dominate clock speed as the metric of merit
for the future of computation.
• Manufacturing Cost: Many of the molecular electronics
designs use simple spin coating or molecular self-assembly of organic
compounds. The process complexity is embodied in the synthesized
molecular structures, and so they can literally be splashed on to a
prepared silicon wafer. The complexity is not in the deposition or the
manufacturing process or the systems engineering. Much of the conceptual
difference of nanotech products derives from a biological metaphor:
complexity builds from the bottom up and pivots about conformational
changes, weak bonds, and surfaces. It is not engineered from the top
with precise manipulation and static placement.
• Low Temperature Manufacturing: Biology does not tend to
assemble complexity at 1000 degrees in a high vacuum. It tends to be
room temperature or body temperature. In a manufacturing domain, this
opens the possibility of cheap plastic substrates instead of expensive
silicon ingots.
• Elegance: In addition to these advantages, some of the
molecular electronics approaches offer elegant solutions to non-volatile
and inherently digital storage. We go through unnatural acts with CMOS
silicon to get an inherently analog and leaky medium to approximate a
digital and non-volatile abstraction that we depend on for our design
methodology. Many of the molecular electronic approaches are inherently
digital, and some are inherently non-volatile.
Other research projects, from quantum computing to using DNA as a
structural material for directed assembly of carbon nanotubes, have one
thing in common: they are all nanotechnology.
Why the term “Nanotechnology”?
Nanotech is often defined as the manipulation and control of matter
at the nanometer scale (critical dimensions of 1-100nm). It is a bit
unusual to describe a technology by a length scale. We certainly didn’t
get very excited by “inch-o-technology.” As venture capitalists, we
start to get interested when there are unique properties of matter that
emerge at the nanoscale, and that are not exploitable at the macroscale
world of today’s engineered products. We like to ask the startups that
we are investing in: “Why now? Why couldn’t you have started this
business ten years ago?” Our nanotech portfolio companies share a
common thread in their responses: recent developments in the capacity
to understand and engineer nanoscale materials have enabled new
products that could not have been developed at larger scales.
There are various unique properties of matter that are expressed at
the nanoscale and are quite foreign to our “bulk statistical” senses (we
do not see single photons or quanta of electric charge; we feel bulk
phenomena, like friction, at the statistical or emergent macroscale). At
the nanoscale, the bulk approximations of Newtonian physics are
revealed for their inaccuracy, and give way to quantum physics.
Nanotechnology is more than a linear improvement with scale; everything
changes. Quantum entanglement, tunneling, ballistic transport,
frictionless rotation of superfluids, and several other phenomena have
been regarded as “spooky” by many of the smartest scientists, even
Einstein, upon first exposure.
For a simple example of nanotech’s discontinuous divergence from the
“bulk” sciences, consider the simple aluminum Coke can. If you take the
inert aluminum metal in that can and grind it down into a powder of
20-30nm particles, it will spontaneously explode in air. It becomes a
rocket fuel catalyst. The energetic properties of matter change at that
scale. The surface area to volume ratios become relevant, and even the
inter-atomic distances in a metal lattice change from surface effects.
Innovation from the Edge:
Disruptive innovation, the driver of growth and renewal, occurs at
the edge. In startups, innovation occurs out of the mainstream, away
from the warmth of the herd. In biological evolution, innovative
mutations take hold at the physical edge of the population, at the edge
of survival. In complexity theory, structure and complexity emerge at
the edge of chaos—the dividing line between predictable regularity and
chaotic indeterminacy. And in science, meaningful disruptive innovation
occurs at the inter-disciplinary interstices between formal academic
disciplines.
Herein lies much of the excitement about nanotechnology: in the
richness of human communication about science. Nanotech exposes the core
areas of overlap in the fundamental sciences, the place where quantum
physics and quantum chemistry can cross-pollinate with ideas from the
life sciences.
Over time, each of the academic disciplines develops its own
proprietary systems vernacular that isolates it from neighboring
disciplines. Nanoscale science requires scientists to cut across the
scientific languages to unite the isolated islands of innovation.
Nanotech is the nexus of the sciences.
In academic centers and government labs, nanotech is fostering new
conversations. At Stanford, Duke and many other schools, the new
nanotech buildings are physically located at the symbolic hub of the
schools of engineering, computer science and medicine.
Nanotech is the nexus of the sciences, but outside of the science and
research itself, the nanotech umbrella conveys no business synergy
whatsoever. The marketing, distribution and sales of a nanotech solar
cell, memory chip or drug delivery capsule will be completely different
from each other, and will present few opportunities for common learning
or synergy.
Market Timing:
As an umbrella term for a myriad of technologies spanning multiple
industries, nanotech will eventually disrupt these industries over
different time frames—but most are long-term opportunities. Electronics,
energy, drug delivery and materials are areas of active nanotech
research today. Medicine and bulk manufacturing are future
opportunities. The NSF predicts that nanotech will have a trillion
dollar impact on various industries inside of 15 years.
Of course, if one thinks far enough in the future, every industry
will be eventually revolutionized by a fundamental capability for
molecular manufacturing—from the inorganic structures to the organic and
even the biological. Analog manufacturing becomes digital, engendering a
profound restructuring of the substrate of the physical world.
The science futurism and predictions of potential nanotech products
has a near term benefit. It helps attract some of the best and brightest
scientists to work on hard problems that are stepping-stones to the
future vision. Scientists relish in exploring the frontier of the
unknown, and nanotech embodies the inner frontier.
Given that much of the abstract potential of nanotech is a question
of “when” not “if”, the challenge for the venture capitalist is one of
market timing. When should we be investing, and in which sub-sectors? It
is as if we need to pull the sea of possibilities through an
intellectual chromatograph to tease apart the various segments into a
timeline of probable progression. That is an ongoing process of data
collection (e.g., the growing pool of business plan submissions),
business and technology analysis, and intuition.
Two touchstone events for the scientific enthusiasm for the timing of
nanotech were the decoding of the human genome and the dazzling visual
images from the Scanning Tunneling Microscope (e.g., the arrangement of
individual Xenon atoms into the IBM logo). They represent the
digitization of biology and matter, symbolic milestones for accelerated
learning and simulation-driven innovation.
And more recently, nanotech publication has proliferated, much like
the early days of the Internet. Beside the popular press, the number of
scientific publications on nanotech has grown 10x in the past ten years.
According to the U.S. Patent Office, the number of nanotech patents
granted each year has skyrocketed 3x in the past seven years. Ripe with
symbolism, IBM has more lawyers than engineers working on nanotech.
With the recent codification of the National Nanotech Initiative into
law, federal funding will continue to fill the pipeline of nanotech
research. With $847 million earmarked for 2004, nanotech was a rarity in
the tight budget process; it received more funding than was requested.
And now nanotech is second only to the space race for federal funding of
science. And the U.S. is not alone in funding nanotechnology. Unlike
many previous technological areas, we aren’t even in the lead. Japan
outspends the U.S. each year on nanotech research. In 2003, the U.S.
government spending was one fourth of the world total.
Federal funding is the seed corn for nanotech entrepreneurship. All
of our nanotech portfolio companies are spin-offs (with negotiated IP
transfers) from universities or government labs, and all got their start
with federal funding. Often these companies need specialized equipment
and expensive laboratories to do the early tinkering that will germinate
a new breakthrough. These are typically lacking in the proverbial
garage of the entrepreneur at home.
And corporate investors have discovered a keen interest in
nanotechnology, with internal R&D, external investments in startups,
and acquisitions of promising companies, such as AMD’s recent
acquisition of the molecular electronics company Coatue.
Despite all of this excitement, there are a fair number of investment
dead-ends, and so we continue to refine the filters we use in selecting
companies to back. Every entrepreneur wants to present their business
as fitting an appropriate timeline to commercialization. How can we
guide our intuition on which of these entrepreneurs are right?
The Vertical Integration Question:
Nanotech involves the reengineering of the lowest level physical
layer of a system, and so a natural business question arises: How far
forward do you need to vertically integrate before you can sell a
product on the open market? For example, in molecular electronics, if
you can ship a DRAM-compatible chip, you have found a horizontal layer
of standardization, and further vertical integration is not necessary.
If you have an incompatible 3D memory block, you may have to vertically
integrate to the storage subsystem level, or further, to bring product
to market. That may require industry partnerships, and will, in general,
take more time and money as change is introduced farther up the product
stack. 3D logic with massive interconnectivity may require a new
computer design and a new form of software; this would take the longest
to commercialize. And most startups on this end of the spectrum would
seek partnerships to bring their vision to market. The success and
timeliness of that endeavor will depend on many factors, including IP
protection, the magnitude of improvement, the vertical tier at which
that value is recognized, the number of potential partners, and the
degree of tooling and other industry accommodations.
Product development timelines are impacted by the cycle time of the
R&D feedback loop. For example, outdoor lifetime testing for organic
LEDs will take longer than in silico simulation spins of digital
products. If the product requires partners in the R&D loop or
multiple nested tiers of testing, it will take longer to commercialize.
The “Interface Problem”:
As we think about the startup opportunities in nanotechnology, an
uncertain financial environment underscores the importance of market
timing and revenue opportunities over the next five years. Of the
various paths to nanotech, which are 20-year quests in search of a
government grant, and which are market-driven businesses that will
attract venture capital? Are there co-factors of production that require
a whole industry to be in place before a company ships product?
As a thought experiment, imagine that I could hand you today any
nanotech marvel of your design—a molecular machine as advanced as you
would like. What would it be? A supercomputer? A bloodstream submarine? A
matter compiler capable of producing diamond rods or arbitrary physical
objects? Pick something.
Now, imagine some of the complexities: Did it blow off my hand as I
offer it to you? Can it autonomously move to its intended destination?
What is its energy source? How do you communicate with it?
These questions draw the “interface problem” into sharp focus: Does
your design require an entire nanotech industry to support, power, and
“interface” to your molecular machine? As an analogy, imagine that you
have one of the latest Pentium processors out of Intel’s wafer fab. How
would you make use of the Pentium chip? You would need to wire-bond the
chip to a larger lead frame in a package that connects to a larger
printed circuit board, fed by a bulky power supply that connects to the
electrical power grid. Each of these successive layers relies on the
larger-scale precursors from above (which were developed in reverse
chronological order), and the entire hierarchy is needed to access the
potential of the microchip.
For molecular nanotech, where is the scaling hierarchy?
Today’s business-driven paths to nanotech diverge into two strategies
to cross the “interface” chasm—the biologically inspired bottom-up
path, and the top-down approach of the semiconductor industry. The
non-biological MEMS developers are addressing current markets in the
micro-world while pursuing an ever-shrinking spiral of miniaturization
that builds the relevant infrastructure tiers along the way. Not
surprisingly, this is very similar to the path that has been followed in
the semiconductor industry, and many of its adherents see nanotech as
inevitable, but in the distant future.
On the other hand, biological manipulation presents myriad
opportunities to effect great change in the near-term. Drug development,
tissue engineering, and genetic engineering are all powerfully impacted
by the molecular manipulation capabilities available to us today. And
genetically modified microbes, whether by artificial evolution or
directed gene splicing, give researchers the ability to build structures
from the bottom up.
The Top Down “Chip Path”:
This path is consonant with the original vision of physicist Richard
Feynman (in his 1959 lecture at Caltech) of the iterative
miniaturization of our tools down to the nano scale. Some companies,
like Zyvex, are pursuing the gradual shrinking of semiconductor
manufacturing technology from the micro-electro-mechanical systems
(MEMS) of today into the nanometer domain of NEMS. SiWave engineers and
manufactures MEMS structures with applications in the consumer
electronics, biomedical and communications markets. These precision
mechanical devices are built utilizing a customized semiconductor fab.
MEMS technologies have already revolutionized the automotive industry
with airbag sensors and the printing sector with ink jet nozzles, and
are on track to do the same in medical devices, photonic switches for
communications and mobile phones. In-Stat/MDR forecasts that the $4.7
billion of MEMS revenue in 2003 will grow to $8.3 billion by 2007. But
progress is constrained by the pace (and cost) of the semiconductor
equipment industry, and by the long turnaround time for fab runs.
Microfabrica in Torrance, CA, is seeking to overcome these limitations
to expand the market for MEMS to 3D structures in more materials than
just silicon and with rapid turnaround times.
Many of the nanotech advances in storage, semiconductors and
molecular electronics can be improved, or in some cases enabled, by
tools that allow for the manipulation of matter at the nanoscale. Here
are three examples:
• Nanolithography
Molecular Imprints is commercializing a unique imprint lithographic
technology developed at the University of Texas at Austin. The
technology uses photo-curable liquids and etched quartz plates to
dramatically reduce the cost of nanoscale lithography. This lithography
approach, recently added to the ITRS Roadmap, has special advantages for
applications in the areas of nano-devices, MEMS, microfluidics, optical
components and devices, as well as molecular electronics.
• Optical Traps
Arryx has developed a breakthrough in nano-material manipulation.
They generate hundreds of independently controllable laser tweezers that
can manipulate molecular objects in 3D (move, rotate, cut, place), all
from one laser source passing through an adaptive hologram. The
applications span from cell sorting, to carbon nanotube placement, to
continuous material handling. They can even manipulate the organelles
inside an unruptured living cell (and weigh the DNA in the nucleus).
• Metrology
Imago’s LEAP atom probe microscope is being used by the chip and disk
drive industries to produce 3D pictures that depict both chemistry and
structure of items on an atom-by-atom basis. Unlike traditional
microscopes, which zoom in to see an item on a microscopic level,
Imago’s nanoscope analyzes structures, one atom at a time, and "zooms
out" as it digitally reconstructs the item of interest at a rate of
millions of atoms per minute. This creates an unprecedented level of
visibility and information at the atomic level.
Advances in nanoscale tools help us control and analyze matter more precisely, which in turn, allows us to produce better tools.
To summarize, the top-down path is designed and engineered with:
• Semiconductor industry adjacencies (with the benefits of market
extensions and revenue along the way and the limitation of planar
manufacturing techniques)
• Interfaces of scale inherited from the top
The Biological Bottom Up Path:
In contrast to the top-down path, the biological bottom up archetype is:
• Grown via replication, evolution, and self assembly in a 3D, fluid medium
• Constrained at interfaces to the inorganic world
• Limited by learning and theory gaps (in systems biology, complexity theory and the pruning rules of emergence)
• Bootstrapped by a powerful pre-existing hierarchy of interpreters of digital molecular code.
To elaborate on this last point, the ribosome takes digital
instructions in the form of mRNA and manufactures almost everything we
care about in our bodies from a sequential concatenation of amino acids
into proteins. The ribosome is a wonderful existence proof of the power
and robustness of a molecular machine. It is roughly 20nm on a side and
consists of only 99 thousand atoms. Biological systems are replicating
machines that parse molecular code (DNA) and a variety of feedback to
grow macro-scale beings. These highly evolved systems can be hijacked
and reprogrammed to great effect.
So how does this help with the development of molecular electronics
or nanotech manufacturing? The biological bootstrap provides a more
immediate path to nanotech futures. Biology provides us with a library
of pre-built components and subsystems that can be repurposed and
reused, and scientists in various labs are well underway in
re-engineering the information systems of biology.
For example, researchers at NASA Ames are taking self-assembling heat
shock proteins from thermophiles and genetically modifying them so that
they will deposit a regular array of electrodes with a 17nm spacing.
This could be useful for patterned magnetic media in the disk drive
industry or electrodes in a polymer solar cell.
At MIT, researchers are using accelerated artificial evolution to
rapidly breed M13 bacteriophage to infect bacteria in such a way that
they bind and organize semiconducting materials with molecular
precision.
At IBEA, Craig Venter and Hamilton Smith are leading the Minimal Genome Project. They take the Mycoplasma genitalium
from the human urogenital tract, and strip out 200 unnecessary genes,
thereby creating the simplest organism that can self-replicate. Then
they plan to layer new functionality on to this artificial genome, such
as the ability to generate hydrogen from water using the sun’s energy
for photonic hydrolysis.
The limiting factor is our understanding of these complex systems,
but our pace of learning has been compounding exponentially. We will
learn more about genetics and the origins of disease in the next 10
years than we have in all of human history. And for the minimal genome
microbes, the possibility of understanding the entire proteome and
metabolic pathways seems tantalizingly close to achievable. These
simpler organisms have a simple “one gene: one protein” mapping, and
lack the nested loops of feedback that make the human genetic code so
rich.
Hybrid Molecular Electronics Example:
In the near term, myriad companies are leveraging the
power of organic self-assembly (bottom up) and the market interface
advantages of top down design. The top down substrate constrains the
domain of self-assembly.
Based in Denver, ZettaCore builds molecular memories from
energetically elegant molecules that are similar to chlorophyll.
ZettaCore’s synthetic organic porphyrin molecule self-assembles on
exposed silicon. These molecules, called multiporphyrin nanostructures,
can be oxidized and reduced (electrons removed or replaced) in a way
that is stable, reproducible, and reversible. In this way, the molecules
can be used as a reliable storage medium for electronic devices.
Furthermore, the molecules can be engineered to store multiple bits of
information and to maintain that information for relatively long periods
of time before needing to be refreshed.
Recall the water drop to transistor count comparison, and realize
that these multiporphyrins have already demonstrated up to eight stable
digital states per molecule.
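The information-density claim is a one-line calculation: a storage element with k distinguishable stable states holds log2(k) bits. A minimal sketch (the eight-state figure is from the text; the rest is just arithmetic):

```python
import math

# A storage element with k distinguishable stable states holds
# log2(k) bits of information.
def bits_per_element(states: int) -> float:
    return math.log2(states)

print(bits_per_element(2))  # 1.0 -- a conventional binary cell
print(bits_per_element(8))  # 3.0 -- an eight-state multiporphyrin
```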
The technology has future potential to scale to 3D circuits with
minimal power dissipation, but initially it will enhance the weakest
element of an otherwise standard 2D memory chip. The ZettaCore memory
chip looks like a standard memory chip to the end customer; nobody needs
to know that it has “nano inside.” The I/O pads, sense amps, row
decoders and wiring interconnect are produced with a standard
semiconductor process. As a final manufacturing step, the molecules are
splashed on the wafer where they self-assemble in the pre-defined
regions of exposed metal.
From a business perspective, the hybrid product design allows an
immediate market entry because the memory chip defines a standard
product feature set, and the molecular electronics manufacturing process
need not change any of the prior manufacturing steps. The
inter-dependencies with the standard silicon manufacturing steps are
also avoided given this late coupling; the fab can process wafers as
they do now before spin coating the molecules. In contrast, new
materials for gate oxides or metal interconnects can have a number of
effects on other processing steps that need to be tested, which
introduces delay (as was seen with copper interconnect).
For these reasons, ZettaCore is currently in the lead in the
commercialization of molecular electronics, with a working megabit chip,
technology tested to a trillion read/write cycles, and manufacturing
partners. In a symbolic nod to the future, Intel co-founder Les Vadasz
(badge #3) has just joined the Board of Directors of ZettaCore. He was
formerly the design manager for the world’s first DRAM, EPROM and
microprocessor.
Generalizing from the ZettaCore experience, the early revenue in
molecular electronics will likely come from simple 1D structures such as
chemical sensors and self-assembled 2D arrays on standard substrates,
such as memory chips, sensor arrays, displays, CCDs for cameras and
solar cells.
IP and business model:
Beyond product development timelines, the path to commercialization
is dramatically impacted by the cost and scale of the manufacturing
ramp. Partnerships with industry incumbents can be the accelerant or
albatross for market entry.
The strength of the IP protection for nanotech relates to the
business models that can be safely pursued. For example, if the
composition of matter patents afford the nanotech startup the same
degree of protection as a biotech startup, then a “biotech licensing
model” may be possible in nanotech. For example, a molecular electronics
company could partner with a large semiconductor company for
manufacturing, sales and marketing, just as a biotech company partners
with a big pharma partner for clinical trials, marketing, sales and
distribution. In both cases, the cost to the big partner is on the order
of $100 million, and the startup earns a royalty on future product
sales.
Notice how the transaction costs and viability of this business model
option pivot on the strength of IP protection. A software
business, on the other end of the IP spectrum, would be very cautious
about sharing their source code with Microsoft in the hopes of forming a
partnership based on royalties.
Manufacturing partnerships are common in the semiconductor industry,
with the “fabless” business model. This layering of the value chain
separates the formerly integrated functions of product
conceptualization, design, manufacturing, testing, and packaging. This
has happened in the semiconductor industry because the capital cost of
manufacturing is so large. The fabless model is a useful way for a small
company with a good idea to bring its own product to market, but the
company then has to face the issue of gaining access to its market and
funding the development of marketing, distribution, and sales.
Having looked at the molecular electronics example in some depth, we
can now move up the abstraction ladder to aggregates, complex systems,
and the potential to advance the capabilities of Moore’s Law in
software.
Systems, Software, and other Abstractions:
Unlike memory chips, which have a regular array of elements,
processors and logic chips are limited by the rat’s nest of wires that
span the chip on multiple layers. The bottleneck in logic chip design is
not raw numbers of transistors, but a design approach that can utilize
all of that capability in a timely fashion. For a solution, several next
generation processor companies have redesigned “systems on silicon”
with a distributed computing bent; wiring bottlenecks are localized, and
chip designers can be more productive by using a high-level programming
language, instead of wiring diagrams and logic gates. Chip design
benefits from the abstraction hierarchy of computer science.
Compared to the relentless march of Moore’s Law, the cognitive
capability of humans is relatively fixed. We have relied on the
compounding power of our tools to achieve exponential progress. To take
advantage of accelerating hardware power, we must further develop layers
of abstraction in software to manage the underlying complexity. For the
next 1000-fold improvement in computing, the imperative will shift to
the growth of distributed complex systems. Our inspiration will likely
come from biology.
As we race to interpret the now complete map of the human genome, and
embark upon deciphering the proteome, the accelerating pace of learning
is not only opening doors to the better diagnosis and treatment of
disease, it is also a source of inspiration for much more powerful
models for computer programming and complex systems development.
Biological Muse:
Many of the interesting software challenges relate to growing complex
systems or have other biological metaphors as inspiration. Some of the
interesting areas include: Biomimetics, Artificial Evolution, Genetic
Algorithms, A-life, Emergence, IBM’s Autonomic Computing initiative,
Viral Marketing, Mesh, Hives, Neural Networks and the Subsumption
architecture in robotics. The Santa Fe Institute just launched a BioComp
research initiative.
In short, biology inspires IT and IT drives biology.
But how inspirational are the information systems of biology? If we
took your entire genetic code–the entire biological program that
resulted in your cells, organs, body and mind–and burned it into a CD,
it would be smaller than Microsoft Office. Just as images and text can
be stored digitally, two digital bits can encode each of the four DNA
bases (A, T, C, and G), resulting in a 750MB file that can be further
compressed given the preponderance of structural filler in the DNA chain.
If, as many scientists believe, most of the human genome consists of
vestigial evolutionary remnants that serve no useful purpose, then we
could compress it to 60MB of concentrated information. Having recently
reinstalled Office, I am humbled by the comparison between its
relatively simple capabilities and the wonder of human life. Much of the
power in bio-processing comes from the use of non-linear fuzzy logic
and feedback in the electrical, physical and chemical domains.
For example, in a fetus, the initial inter-neuronal connections, or
"wiring" of the brain, follow chemical gradients. The massive number of
inter-neuron connections in an adult brain could not be simply encoded
in our DNA, even if the entire DNA sequence were dedicated to this one
task. There are on the order of 100 trillion synaptic connections
between 60 billion neurons in your brain.
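The mismatch between genome size and brain wiring is a one-line calculation. Using the round numbers from the text, and granting the genome a generous one bit per synapse (far too little to actually specify a connection):

```python
# Round numbers from the text: the genome cannot enumerate the brain's
# wiring, even at a single (wildly insufficient) bit per synapse.
genome_bits = 3_000_000_000 * 2     # ~6e9 bits in the DNA sequence
synapses = 100_000_000_000_000      # ~1e14 synaptic connections

print(synapses / genome_bits)       # ~16,667 synapses per available bit
```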
This incredibly complex system is not ‘installed’ like Microsoft
Office from your DNA. It is grown, first through widespread connectivity
sprouting from ‘static storms’ of positive electro-chemical feedback,
and then through the pruning of many underused connections through
continuous usage-based feedback. In fact, at the age of 2 to 3 years
old, humans hit their peak with a quadrillion synaptic connections, and
twice the energy burn of an adult brain.
The brain has already served as an inspirational model for artificial
intelligence (AI) programmers. The neural network approach to AI
involves the fully interconnected wiring of nodes, and then the
iterative adjustment of the strength of these connections through
numerous training exercises and the back-propagation of feedback through
the system.
Moving beyond rules-based AI systems, these artificial neural
networks are capable of many human-like tasks, such as speech and visual
pattern recognition with a tolerance for noise and other errors. These
systems shine precisely in the areas where traditional programming
approaches fail.
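The training loop described above (fully interconnected nodes, iterative adjustment of connection strengths, back-propagation of feedback) can be sketched in a few dozen lines. This is a toy textbook illustration, not any production AI system; the architecture, learning rate, and XOR task are all standard illustrative choices:

```python
import math
import random

# A toy multilayer perceptron trained by back-propagation on XOR.
random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """2 inputs -> `hidden` sigmoid units -> 1 sigmoid output."""
    def __init__(self, hidden=3):
        self.w1 = [[random.uniform(-1, 1) for _ in range(2)]
                   for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [random.uniform(-1, 1) for _ in range(hidden)]
        self.b2 = 0.0

    def forward(self, x):
        # Weighted sums squashed through the sigmoid nonlinearity.
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(self.w1, self.b1)]
        self.y = sigmoid(sum(w * h for w, h in zip(self.w2, self.h))
                         + self.b2)
        return self.y

    def train_step(self, x, target, lr=0.5):
        y = self.forward(x)
        # Back-propagate the output error to adjust every connection.
        d_out = (y - target) * y * (1 - y)
        for j, h in enumerate(self.h):
            d_hid = d_out * self.w2[j] * h * (1 - h)
            self.w2[j] -= lr * d_out * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * d_hid * xi
            self.b1[j] -= lr * d_hid
        self.b2 -= lr * d_out

# XOR: the classic task that defeats a single-layer network.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
net = TinyNet()
before = sum((net.forward(x) - t) ** 2 for x, t in data)
for _ in range(5000):
    for x, t in data:
        net.train_step(x, t)
after = sum((net.forward(x) - t) ** 2 for x, t in data)
print(round(before, 3), "->", round(after, 3))
```

The repeated training passes play the role of the "numerous training exercises" in the text: each example nudges every weight a little in the direction that reduces the output error.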
The coding efficiency of our DNA extends beyond the leverage of
numerous feedback loops to the complex interactions between genes. The
regulatory genes produce proteins that respond to external or internal
signals to regulate the activity of previously produced proteins or
other genes. The result is a complex mesh of direct and indirect
controls.
This nested complexity implies that genetic re-engineering can be a
very tricky endeavor if we have only partial system-wide knowledge about the
side effects of tweaking any one gene. For example, recent experiments
show that genetically enhanced memory comes at the expense of enhanced
sensitivity to pain.
By analogy, our genetic code is a dense network of nested hyperlinks,
much like the evolving Web. Computer programmers already tap into the
power and efficiency of indirect pointers and recursive loops. More
recently, biological systems have inspired research in evolutionary
programming, where computer programs are competitively grown in a
simulated environment of natural selection and mutation. These efforts
could transcend the local optimization inherent to natural evolution.
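A minimal flavor of such evolutionary programming: candidate solutions compete under simulated selection and mutation. The bit-string "programs" and fitness target below are illustrative stand-ins, not any specific research system:

```python
import random

# Evolutionary programming in miniature: bit-string "programs" compete
# under simulated selection and mutation. The all-ones target is an
# arbitrary fitness peak, standing in for any objective function.
random.seed(42)

TARGET = [1] * 20

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)]
              for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]    # variation

best = max(population, key=fitness)
print(fitness(best))
```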
But therein lies great complexity. We have little experience with the
long-term effects of the artificial evolution of complex systems. Early
subsystem work can determine emergent and higher-level
capabilities, as with the neuron (witness the Cambrian explosion of
structural complexity and intelligence in biological systems once the
neuron enabled something other than nearest-neighbor inter-cellular
communication. Prior to the neuron, most multi-cellular organisms were
small blobs).
Recent breakthroughs in robotics were inspired by the "subsumption
architecture" of biological evolution—using a layered approach to
assembling reactive rules into complete control systems from the bottom
up. The low-level reflexes are developed early on, and remain unchanged
as complexity builds. Early subsystem work in any subsumptive system can
have profound effects on its higher order constructs. We may not have a
predictive model of these downstream effects as we are developing the
architectural equivalent of the neuron.
The Web is the first distributed experiment in biological growth in
technological systems. Peer-to-peer software development and the rise of
low-cost Web-connected embedded systems give the possibility that
complex artificial systems will arise on the Internet, rather than on
one programmer’s desktop. We already use biological metaphors, such as
viral marketing, to describe the network economy.
Nanotech Accelerants: quantum simulation and high-throughput experimentation:
We have already discussed the migration of the lab sciences to the
innovation cycles of the information sciences and Moore’s Law. Advances
in multi-scale molecular modeling are helping some companies design
complex molecular systems in silico. But the quantum effects that
underlie the unique properties of nano-scale systems are a double-edged
sword. Although scientists have known for nearly 100 years how to write
down the equations that an engineer needs to solve in order to
understand any quantum system, no computer has ever been built that is
powerful enough to solve them. Even today’s most powerful supercomputers
choke on systems bigger than a single water molecule.
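The reason the supercomputers choke is exponential state growth: an exact description of a quantum system with n two-level degrees of freedom requires 2^n complex amplitudes. A back-of-the-envelope sketch (the 16-bytes-per-amplitude figure assumes double-precision complex numbers; the water-molecule claim is the essay's own):

```python
# Exact simulation of n two-level quantum degrees of freedom needs a
# state vector of 2**n complex amplitudes (16 bytes each at double
# precision). Memory demand doubles with every degree of freedom added.
def statevector_bytes(n: int) -> int:
    return (2 ** n) * 16

for n in (10, 30, 50):
    print(n, statevector_bytes(n) / 1e9, "GB")
# n = 50 already demands ~18 petabytes of memory.
```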
This means that the behavior of nano-scale systems can only be
reliably studied by empirical methods—building something in a lab, and
poking and prodding it to see what happens.
This observation is distressing on several counts. We would like to
design and visualize nano-scale products in the tradition of mechanical
engineering, using CAD-like programs. Unfortunately this future can
never be accurately realized using traditional computer architectures.
The structures of interest to nano-scale scientists present intractable
computational challenges to traditional computers.
The shortfall in our ability to use computers to shorten and cheapen
the design cycles of nano-scale products has serious business
ramifications. If the development of all nano-scale products
fundamentally requires long R&D cycles and significant investment,
the nascent nanotechnology industry will face many of the difficulties
that the biotechnology industry faces, without having a parallel to the
pharmaceutical industry to shepherd products to markets.
In a wonderful turn of poetic elegance, quantum mechanics itself
turns out to be the solution to this quandary. Machines known as quantum
computers, built to harness some simple properties of quantum systems,
can perform accurate simulations of any nano-scale system of comparable
complexity. The type of simulation that a quantum computer does results
in an exact prediction of how a system will behave in nature—something
that is literally impossible for any traditional computer, no matter how
powerful.
Once quantum computers become available, engineers working at the
nano-scale will be able to use them to model and design nano-scale
systems just like today’s aerospace engineers model and design
airplanes—completely virtually—with no wind tunnels (or their chemical
analogues).
This may seem strange, but really it’s not. Think of it like this:
conventional computers are really good at modeling conventional (that
is, non-quantum) stuff—like automobiles and airplanes. Quantum computers
are really good at modeling quantum stuff. Each type of computer speaks
a different language.
Based in Vancouver, Canada, D-Wave is building a quantum computer
using aluminum-based circuits. The company projects that by 2008 it will
be building thumbnail-sized chips that, when applied to simulating the
behavior and predicting the properties of nano-scale systems, will have
more computing power than the aggregate of every computer ever
built—highlighting the vast difference in
capabilities of quantum and conventional computers. This would be of
great value to the development of the nanotechnology industry. And it’s a
jaw-dropping claim. Professor David Deutsch of Oxford summarized:
“Quantum computers have the potential to solve problems that would take a
classical computer longer than the age of the universe.”
While any physical experiment can be regarded as a complex
computation, we will need quantum computers to transcend Moore’s law
into the quantum domain to make this equivalence realizable. In the
meantime, scientists will perform experiments. Until recently, the
methods used for the discovery of new functional materials differed
little from those used by scientists and engineers a hundred years ago.
It was very much a manual, skilled labor-intensive process. One sample
was prepared from millions of possibilities, then it was tested, the
results recorded and the process repeated. Discoveries routinely took
years.
Companies like Affymetrix, Intematix and Symyx have made major
improvements in a new methodology: high throughput experimentation. For
example, Intematix performs high throughput synthesis and screening of
materials to produce and characterize these materials for a wide range
of technology applications. This technology platform enables them to
discover compound materials solutions more than one hundred times faster
than conventional methods. Initial materials developed have application
in wireless communications, fuel cells, batteries, x-ray imaging,
semiconductors, LEDs, and phosphors.
Combinatorial materials discovery replaces the traditional method by generating a multitude of combinations—possibly all
feasible combinations—of a set of raw materials simultaneously. This
"Materials Library" contains all combinations of a set of materials, and
they can be quickly tested in parallel by automated methods similar to
those used in the combinatorial chemistry and the pharmaceutical
industry. What used to take years to develop now only takes months.
Timeline:
Given our discussion of the various factors affecting the
commercialization of nanotechnologies, how do we see them sequencing?
• Early Revenue
- Tools and bulk materials (powders, composites). Several revenue stage and public companies already exist in this category.
- 1D chemical and biological sensors. Out of body medical sensors and diagnostics
- Larger MEMS-scale devices
• Medium Term
- 2D Nanoelectronics: memory, displays, solar cells
- Hierarchically-structured nanomaterials
- Hybrid Bio-nano, efficient energy storage and conversion
- Passive drug delivery & diagnostics, improved implantable medical devices
• Long Term
- 3D Nanoelectronics
- Nanomedicine, therapeutics, and artificial chromosomes
- Quantum computers used in small molecule design
- Machine-phase manufacturing
- The safest long-term prediction is that the most important nanotech
developments will be the unforeseen opportunities, something that we
could not predict today.
In the long term, nanotechnology research could ultimately enable
miniaturization to a magnitude never before seen, and could
restructure and digitize the basis of manufacturing—such that matter
becomes code. Like the digitization of music, the importance is not just
in the fidelity of reproduction, but in the decoupling of content from
distribution. New opportunities arise once a product is digitized, such
as online music swapping—transforming an industry.
With replicating molecular machines, physical production itself
migrates to the rapid innovation cycle of information technology. With
physical goods, the basis of manufacturing governs inventory planning
and logistics, and the optimal distribution and retail supply chain has
undergone little radical change for many decades. Flexible, low-cost
manufacturing near the point of consumption could transform the physical
goods economy, and even change our notion of ownership—especially for
infrequently used objects.
These are some profound changes to the manufacturing of everything,
which ripples through the fabric of society. The science futurists have
pondered the implications of being able to manufacture anything
for $1 per pound. And as some of these technologies couple tightly to
our biology, they will call into question the nature and extensibility of
our humanity.
Genes, Memes and Digital Expression:
These changes may not be welcomed smoothly, especially with regard to
reengineering the human germ line. At the societal level, we will
likely try to curtail “genetic free speech” and the evolution of
evolvability. Larry Lessig predicts that we will recapitulate the
200-year debate about the First Amendment to the Constitution. Pressures
to curtail free genetic expression will focus on the dangers of “bad
speech”, and others will argue that good genetic expression will crowd
out the bad, as it did with memetic evolution (in the scientific method
and the free exchange of ideas). Artificial chromosomes with adult
trigger events can decouple the agency debate about parental control.
And, with a touch of irony, China may lead the charge.
We subconsciously cling to the selfish notion that humanity is the
endpoint of evolution. In the debates about machine intelligence and
genetic enhancements, there is a common and deeply rooted fear about
being surpassed—in our lifetime. When framed as a question of parenthood
(would you want your great grandchild to be smarter and healthier than
you?), the emotion often shifts from a selfish sense of supremacy to a
universal human search for symbolic immortality.
Summary:
While the future is becoming more difficult to predict with each
passing year, we should expect an accelerating pace of technological
change. We conclude that nanotechnology is the next great technology
wave and the next phase of Moore’s Law. Nanotech innovations enable
myriad disruptive businesses that were not possible before, driven by
entrepreneurship.
Much of our future context will be defined by the accelerating
proliferation of information technology—as it innervates society and
begins to subsume matter into code. It is a period of exponential growth
in the impact of the learning-doing cycle where the power of biology,
IT and nanotech compounds the advances in each formerly discrete domain.
So, at DFJ, we conclude that it is a great time to invest in
startups. As in evolution and the Cambrian explosion, many will become
extinct. But some will change the world. So we pursue the strategy of a
diversified portfolio, or in other words, we try to make a broad bet on
mammals.
These tiny Proteus digital sensors and a wearable patch keep track of how patients take their prescribed medications. It's one example of the growing field of ingestible medical devices.
Credits: Proteus Digital Health
Tiny computers have allowed us to do things that were once considered science fiction. Take the 1960s film, Fantastic Voyage, where a crew is shrunk to microscopic size and sent into the body of an injured scientist.
While we aren't shrinking humans quite yet, scientists are working with nanotechnology to send computers inside patients for more accurate and specific diagnoses.
Albert Swiston, a biomaterials scientist at MIT, is testing a tiny pill that combines a microphone, a thermometer and a battery to collect several measurements at once from inside the body. It's the latest in a series of ingestible computers, like the Proteus sensor that tracks how patients take prescribed medications, the VitalSense from Philips that tracks a patient's temperature, or the PillCam that allows people to skip colonoscopies.
Swiston and Stephen Shankland, a senior writer at CNET covering digital technology, joined NPR's All Things Considered on Monday to discuss the future of nanotech.
Here are some takeaways.
STEPHEN SHANKLAND
On how small computers have become
About 20 years ago it was a desktop, about 10 years ago it was a laptop — that kind of a computer is now about the size of your thumb.
Through the amazing technological innovations of Moore's Law, microprocessors — that's the chip brain that's inside every computer — have been getting smaller and smaller every year.
Instead of getting faster PCs, we're getting tinier PCs.
On why he isn't sold on thumb-sized computers
I think they open up a lot of new avenues because they're relatively inexpensive, so you can fool around with them. It's not a delicate, precious object you have to worry a lot about. You can tinker with them. If you sit on it and it cracks, it isn't the end of the world. I think it can be liberating. But for most people, a traditional laptop is a much more useful product and it's going to run faster. These little tiny computers are constrained.
On the future of computing
You might look at a future where you don't have computers at all. The computing power you need is just woven into the fabric of your shirt, or maybe it is in your ring, or your watch. Maybe that device connects automatically to some screen next to you, or some projector you carry with you.
Maybe you won't even need a display — it will just get piped straight into your eyeballs.
ALBERT SWISTON
On his latest project — ingestible sensors monitoring vital signs inside pigs
It's a device that is so small, it's the size of a large
multi-vitamin pill that you might take, and it is able to listen to your
heart, and listen to your lungs, and it has a thermometer on board that
can tell you what your heart rate, your breathing rate and what your
body core temperature are. And then it can send that information outside
of you to a smartphone or to a laptop computer, and tell you what those
vital signs are without touching the body. It's not a wearable, it's an
ingestible.
On the target audience
If you're a performance athlete, $70, which is roughly what it would cost to build such a device, is totally a trivial amount. Tom Brady could be taking these things every quarter. However, it's true that you couldn't be taking these every day — it would be just too expensive. So we're thinking of markets where a patient comes into an emergency room and needs to be monitored for a short period of time.
But you can imagine cases like military or performance sports where you could be taking these before every game or every mission.
On the potential for a computer — inside your body — being hacked
It's not implantable, it's not there forever. So while it's true that hackers can hack computers, you'd only have to worry about it for a couple of days. In fact, I don't think you'd have to worry about it very much, even for those few days, because the information that you're sending out in the current device we have is pretty simple.
On the future of ingestible technology
As electronics get smaller and smaller, more capabilities can be put onto one of these devices. Right now, we essentially have a microphone, but you can imagine having a real-time camera that is, say, able to stream through the body, or able to take samples of fluids in your body to tell you things like more subtle markers for cancer, for heart disease.
Being able to travel in the bloodstream or affect the nervous system, I think, is the next great frontier of medicine.
Over the past few decades, the fields of science and engineering have been seeking to develop new and improved types of energy
technologies that have the capability of improving life all over the
world. In order to make the next leap forward from the current
generation of technology, scientists and engineers have been developing energy applications of nanotechnology. Nanotechnology, a new field in science, is any technology that contains components smaller than 100 nanometers. For scale, a single virus particle is about 100 nanometers in width.
An important subfield of nanotechnology related to energy is nanofabrication.
Nanofabrication is the process of designing and creating devices on
the nanoscale. Creating devices smaller than 100 nanometers opens many
doors for the development of new ways to capture, store, and transfer
energy. The inherent level of control that nanofabrication could give
scientists and engineers would be critical in providing the capability
of solving many of the problems that the world is facing today related
to the current generation of energy technologies.[1]
People in the fields of science and engineering have already
begun developing ways of utilizing nanotechnology for the development of
consumer products. Benefits already observed from the design of these products are an increased efficiency of lighting and heating, increased electrical storage capacity, and a decrease in the amount of pollution from the use of energy. Benefits such as these make the investment of capital in the research and development of nanotechnology a top priority.
Consumer products
Recently, previously established and entirely new companies such as BetaBatt, Inc. and Oxane Materials have been focusing on nanomaterials as a way to develop and improve upon older methods of capturing, transferring, and storing energy in consumer products.
ConsERV, a product developed by the Dais Analytic Corporation, uses nanoscale polymer membranes to increase the efficiency of heating and cooling systems and has already proven to be a lucrative design. The polymer membrane was specifically configured for this application by selectively engineering the size of the pores in the membrane to prevent air from passing while allowing moisture to pass through. ConsERV takes the form of an energy recovery ventilator, a device which pretreats the incoming fresh air to a building using the energy found in the exhaust air stream, with no moving parts, to lower the energy and carbon footprint of existing heating and cooling equipment.
Polymer membranes can be designed to selectively allow particles of one size and shape to pass through while preventing others. This makes for a powerful tool that can be used across consumer, commercial, industrial, and government markets, from biological weapons protection to industrial chemical separations. Dais's near-term uses of this 'family' of selectively engineered nanotechnology materials, aside from ConsERV, include (a) a completely new cooling cycle capable of replacing the refrigerant-based cooling cycle the world has known for the past 100-plus years. This product, under development, is named NanoAir. NanoAir uses only water and this selectively engineered membrane material to cool (or heat) and dehumidify (or humidify) air. No fluorocarbon gases are used, and the energy required to cool a space drops because thermodynamics does the actual cooling. The company was awarded an Advanced Research Projects Agency-Energy (ARPA-E) award in 2010 and a United States Department of Defense (DoD) grant in 2011, both designed to accelerate this newer, energy-efficient technology toward commercialization; and (b) a novel way to clean almost all contaminated forms of water, called NanoClear. By using the selectivity of this hermetic, engineered composite material, NanoClear can transfer only a water molecule from one face of the membrane to the other, leaving the contaminants behind.
Dais also received a US patent (No. 7,990,679) in October 2011 titled "Nanoparticle Ultracapacitor". The patented design again uses the selectively engineered material to create an energy storage mechanism projected to have performance and cost advantages over existing storage technologies. The company has used this patent's concepts to create a functional energy storage prototype device named NanoCap. NanoCap is a form of ultracapacitor potentially useful for powering a broad range of applications, including most forms of transportation, energy storage (especially as a storage medium for renewable energy technologies), telecommunication infrastructure, transistor gate dielectrics, and consumer battery applications (cell phones, computers, etc.).[2]
A New York-based company called Applied NanoWorks, Inc. has been
developing a consumer product that utilizes LED technology to generate
light. Light-emitting diodes, or LEDs, use only about 10% of the energy that a typical incandescent or fluorescent light bulb
uses and typically last much longer, which makes them a viable
alternative to traditional light bulbs. While LEDs have been around for
decades, this company and others like it have been developing a special
variant of LED called the white LED. White LEDs consist of
semi-conducting organic layers that are only about 100 nanometers in
distance from each other and are placed between two electrodes, which
create an anode and a cathode. When voltage is applied to the system, light is generated as electricity passes through the two organic layers; this is called electroluminescence. The semiconductor properties of the organic layers are what allow light to be generated with a minimal amount of energy. In traditional light bulbs,
a metal filament is used to generate light when electricity is run
through the filament. Using metal generates a great deal of heat and
therefore lowers efficiency.
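A rough back-of-envelope sketch of the 10% figure above; the bulb wattage and daily usage hours here are illustrative assumptions, not values from the source:

```python
# Rough comparison of annual energy use: incandescent vs. LED.
# Assumes a 60 W incandescent, an LED drawing ~10% of that power
# (per the text), and 3 hours of use per day -- illustrative only.
INCANDESCENT_W = 60.0
LED_W = INCANDESCENT_W * 0.10        # ~10% of the energy
HOURS_PER_YEAR = 3 * 365             # 3 h/day

def annual_kwh(watts, hours=HOURS_PER_YEAR):
    """Energy in kilowatt-hours for a bulb of the given wattage."""
    return watts * hours / 1000.0

incandescent_kwh = annual_kwh(INCANDESCENT_W)   # 65.7 kWh
led_kwh = annual_kwh(LED_W)                     # 6.57 kWh
savings = incandescent_kwh - led_kwh
print(f"Incandescent: {incandescent_kwh:.1f} kWh/yr")
print(f"LED:          {led_kwh:.1f} kWh/yr")
print(f"Savings:      {savings:.1f} kWh/yr")
```

Even under these modest assumptions, a single bulb swap saves roughly 59 kWh per year, which is why the efficiency gap matters at scale.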
Research into longer-lasting batteries has been ongoing for years, and researchers have now begun to apply nanotechnology to battery design. mPhase Technologies, in collaboration with Rutgers University and Bell Laboratories, has utilized nanomaterials to alter the wetting behavior of the surface where the liquid in the battery lies, spreading the liquid droplets over a greater area and giving the battery's designer greater control over the movement of the droplets. This control prevents reactions in the battery by separating the electrolytic liquid from the anode and the cathode when the battery is not in use and joining them when the battery is needed.
Thermal management is another prospective application of nanotechnology: creating low-cost heating, ventilation, and air-conditioning systems, and altering molecular structures for better management of temperature.
Reduction of energy consumption
A reduction of energy consumption can be reached by better insulation systems, by the use of more efficient lighting or combustion systems, and by the use of lighter and stronger materials in the transportation sector. Currently used incandescent light bulbs convert only approximately 5% of the electrical energy into light. Nanotechnological approaches like light-emitting diodes (LEDs) or quantum caged atoms (QCAs) could lead to a strong reduction of energy consumption for illumination.[citation needed]
Increasing the efficiency of energy production
Today's best solar cells have layers of several different semiconductors
stacked together to absorb light at different energies, but they still
only manage to use 40 percent of the Sun's energy. Commercially
available solar cells have much lower efficiencies (15-20%).
Nanostructuring has been used to improve the efficiencies of established
photovoltaic technologies, for example by improving current collection
in amorphous silicon devices,[3] plasmonic enhancement in dye-sensitized solar cells,[4] and improved light trapping in crystalline silicon.[5]
Furthermore, nanotechnology could help increase the efficiency of light conversion by using nanostructures with a continuum of bandgaps[citation needed], or by controlling the directivity and photon escape probability of photovoltaic devices.[6]
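To put the efficiency figures above in perspective, a minimal sketch; the 1000 W/m² irradiance (the standard test-condition value) and the panel area are illustrative assumptions, not from the source:

```python
# Electrical power delivered by a 1 m^2 solar panel under peak
# sunlight (~1000 W/m^2, the standard test-condition irradiance).
IRRADIANCE_W_PER_M2 = 1000.0
AREA_M2 = 1.0

def panel_output_watts(efficiency, area=AREA_M2):
    """Electrical power for a cell of the given conversion efficiency."""
    return IRRADIANCE_W_PER_M2 * area * efficiency

lab_cell = panel_output_watts(0.40)      # stacked multi-junction cell, ~40%
commercial = panel_output_watts(0.175)   # commercial cell, midpoint of 15-20%
print(f"Multi-junction cell: {lab_cell:.0f} W per square meter")
print(f"Commercial cell:     {commercial:.0f} W per square meter")
```

The gap (roughly 400 W vs. 175 W per square meter) is the headroom that nanostructuring approaches are trying to close.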
The efficiency of the internal combustion engine is about 30-40% at present. Nanotechnology could improve combustion by
designing specific catalysts with maximized surface area. In 2005,
scientists at the University of Toronto developed a spray-on nanoparticle substance that, when applied to a surface, instantly transforms it into a solar collector.[7]
Nuclear Accident Cleanup and Waste Storage
Nanomaterials deployed by swarm robotics
may be helpful for decontaminating a site of a nuclear accident which
poses hazards to humans because of high levels of radiation and
radioactive particles. Hot nuclear compounds such as corium or melting fuel rods
may be contained in "bubbles" made from nanomaterials that are designed
to isolate the harmful effects of nuclear activity occurring inside of
them from the outside environment that organisms inhabit.[citation needed]
Economic benefits
The relatively recent shift toward using nanotechnology for the capture, transfer, and storage of energy has had, and will continue to have, many positive economic impacts on society. The control of materials that nanotechnology offers to scientists and engineers of consumer products is one of its most important aspects, allowing for improved efficiency of products across the board.
A major issue with current energy generation is the loss of
efficiency from the generation of heat as a by-product of the process. A
common example of this is the heat generated by the internal combustion engine. The internal combustion engine loses about 64% of the energy from gasoline
as heat and an improvement of this alone could have a significant
economic impact. However, improving the internal combustion engine in
this respect has proven to be extremely difficult without sacrificing
performance. Improving the efficiency of fuel cells through the use of nanotechnology appears to be more plausible by using molecularly tailored catalysts, polymer membranes, and improved fuel storage.
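A quick arithmetic sketch of the 64% heat-loss figure above; the gasoline energy density and tank size are standard illustrative assumptions, not stated in the source:

```python
# Useful work vs. waste heat for an internal combustion engine
# that loses ~64% of gasoline's energy as heat (figure from the text).
GASOLINE_MJ_PER_L = 34.2   # typical energy density of gasoline (assumed value)
HEAT_LOSS_FRACTION = 0.64

def useful_energy_mj(liters):
    """Energy delivered as useful work from the given fuel volume, in MJ."""
    total = liters * GASOLINE_MJ_PER_L
    return total * (1.0 - HEAT_LOSS_FRACTION)

total_mj = 50 * GASOLINE_MJ_PER_L    # a 50 L tank holds 1710 MJ of chemical energy
work_mj = useful_energy_mj(50)       # only ~615.6 MJ becomes useful work
print(f"Chemical energy: {total_mj:.0f} MJ")
print(f"Useful work:     {work_mj:.1f} MJ")
print(f"Lost as heat:    {total_mj - work_mj:.1f} MJ")
```

Under these assumptions, nearly two-thirds of every tank is discarded as heat, which is the economic opportunity the paragraph describes.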
In order for a fuel cell to operate, particularly of the hydrogen variant, a noble-metal catalyst (usually platinum, which is very expensive) is needed to separate the electrons from the protons of the hydrogen atoms. However, catalysts of this type are extremely sensitive to carbon monoxide reactions. In order to combat this, alcohol or hydrocarbon compounds are used to lower the carbon monoxide
concentration in the system. This adds an additional cost to the
device. Using nanotechnology, catalysts can be designed through
nanofabrication that are much more resistant to carbon monoxide
reactions, which improves the efficiency of the process and may be
designed with cheaper materials to additionally lower costs.
Fuel cells that are currently designed for transportation
need rapid start-up periods for the practicality of consumer use. This
process puts a lot of strain on the traditional polymer electrolyte
membranes, which decreases the life of the membrane requiring frequent
replacement. Using nanotechnology, engineers have the ability to create
a much more durable polymer membrane, which addresses this problem.
Nanoscale polymer membranes are also much more efficient in ionic conductivity. This improves the efficiency of the system and decreases the time between replacements, which lowers costs.
Another problem with contemporary fuel cells is the storage of the fuel.
In the case of hydrogen fuel cells, storing the hydrogen in gaseous
rather than liquid form improves the efficiency by 5%. However, the
materials that we currently have available to us significantly limit
fuel storage due to low stress tolerance and costs. Scientists have
come up with an answer to this by using a nanoporous styrene material (which is relatively inexpensive) that, when super-cooled to around -196 °C, naturally holds on to hydrogen atoms and, when heated again, releases the hydrogen for use.
Capacitors: then and now
For decades, scientists and engineers have been attempting to make computers smaller and more efficient. A crucial component of computers is the capacitor. A capacitor is a device made of a pair of electrodes separated by an insulator
that each stores an opposite charge. A capacitor stores a charge when
it is removed from the circuit that it is connected to; the charge is
released when it is replaced back into the circuit. Capacitors have an
advantage over batteries in that they release their charge much more
quickly than a battery.
Traditional or foil capacitors are composed of thin metal
conducting plates separated by an electrical insulator, which are then
stacked or rolled and placed in a casing. The problem with a
traditional capacitor such as this is that they limit how small an
engineer can design a computer. Scientists and engineers have since
turned to nanotechnology for a solution to the problem.
Using nanotechnology, researchers developed what they call “ultracapacitors.”
An ultracapacitor is a capacitor that contains nanocomponents.
Ultracapacitors are being researched heavily because of their high
density interior, compact size, reliability, and high capacitance. This
decrease in size makes it increasingly possible to develop much smaller
circuits and computers. Ultracapacitors also have the capability to supplement batteries in hybrid vehicles by providing a large amount of energy during peak acceleration
and allowing the battery to supply energy over longer periods of time,
such as during a constant driving speed. This could decrease the size
and weight of the large batteries needed in hybrid vehicles as well as
take additional stress off the battery. However, the combination of
ultracapacitors and a battery is not cost effective due to the need of
additional DC/DC electronics to coordinate the two.
Nanoporous carbon aerogel
is one type of material that is being utilized for the design of
ultracapacitors. These aerogels have a very large interior surface area
and can have their properties altered by changing the pore diameter and distribution, along with adding nanosized alkali metals to alter the conductivity.
Carbon nanotubes
are another possible material for use in an ultracapacitor. Carbon
nanotubes are created by vaporizing carbon and allowing it to condense
on a surface. When the carbon condenses, it forms a nanosized tube
composed of carbon atoms. This tube has a high surface area, which
increases the amount of charge that can be stored. The low reliability
and high cost of using carbon nanotubes for ultracapacitors is currently
an issue of research.
In a study concerning ultracapacitors or supercapacitors, researchers at the Sungkyunkwan University in the Republic of Korea explored the possibility of increasing the capacitance of electrodes through the addition of fluorine
atoms to the walls of carbon nanotubes. As briefly mentioned before,
carbon nanotubes are an increasingly popular capacitor material due to their
superb chemical stability, high conductivity, light mass, and their
large surface area. These researchers fluorinated single-walled carbon
nanotubes (SWCNTs) at high temperatures to bind fluorine atoms to the
walls. The attached fluorine atoms changed the non-polar nanotubes to
become polar molecules. This can be attributed to the charge transfer
from the fluorine. This created dipole-dipole layers along the carbon
nanotube walls. Testing of these fluorinated SWCNTs against normal
state SWCNTs showed a difference in capacitance. It was determined that
the fluorinated SWCNTs are advantageous in fabricating electrodes for
capacitors and improve the wettability
with aqueous electrolytes, which promotes the overall performance of
supercapacitors. While this study demonstrated a more efficient capacitor design, little is known about this new supercapacitor: large-scale synthesis, necessary for any mass production, is still lacking, and the preparation conditions for achieving the final product are quite tedious.[8]
Theory of capacitance
Understanding the concept of capacitance
can be helpful in understanding why nanotechnology is such a powerful
tool for the design of higher energy storing capacitors. A capacitor’s
capacitance (C) or amount of energy stored is equal to the amount of
charge (Q) stored on each plate divided by the voltage (V) between the
plates. Another representation of capacitance is that capacitance (C)
is approximately equal to the permittivity (ε) of the dielectric
times the area (A) of the plates divided by the distance (d) between
them. Therefore, capacitance is proportional to the surface area of the
conducting plate and inversely proportional to the distance between the
plates.
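The parallel-plate relation described above can be sketched numerically; the plate area and gap sizes below are illustrative assumptions chosen to show the scaling:

```python
# Parallel-plate capacitance: C = epsilon * A / d.
# Shrinking the plate separation d directly multiplies the
# capacitance, which is why nanoscale gaps are so attractive.
EPSILON_0 = 8.854e-12   # vacuum permittivity, farads per meter

def capacitance(area_m2, distance_m, relative_permittivity=1.0):
    """Capacitance in farads of an ideal parallel-plate capacitor."""
    return EPSILON_0 * relative_permittivity * area_m2 / distance_m

c_micro = capacitance(1e-4, 1e-6)   # 1 cm^2 plates, 1 micrometer apart
c_nano = capacitance(1e-4, 1e-9)    # same plates, 1 nanometer apart
print(f"1 um gap: {c_micro:.3e} F")
print(f"1 nm gap: {c_nano:.3e} F")  # 1000x the capacitance
# The stored charge at a given voltage then follows from Q = C * V.
```

Moving the gap from a micrometer to a nanometer multiplies the capacitance a thousandfold; increasing the effective area A (as high-surface-area nanotube electrodes do) scales it the same way.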
Using carbon nanotubes as an example, a property of carbon
nanotubes is that they have a very high surface area to store a charge.
Using the above proportionality that capacitance (C) is proportional to
the surface area (A) of the conducting plate, it becomes clear that
using nanoscaled materials with high surface area would be great for
increasing capacitance. The other proportionality described above is
that capacitance (C) is inversely proportional to the distance (d)
between the plates. Using nanoscale plates such as carbon nanotubes together with nanofabrication techniques gives the capability of decreasing the space between plates, which again increases capacitance.