
Tuesday, July 24, 2018

Asteroid mining

From Wikipedia, the free encyclopedia
[Image: Artist's concept of asteroid mining. 433 Eros is a stony asteroid in a near-Earth orbit.]

Asteroid mining is the exploitation of raw materials from asteroids and other minor planets, including near-Earth objects.

Minerals can be mined from an asteroid or spent comet and then used in space as construction material or returned to Earth. These include gold, iridium, silver, osmium, palladium, platinum, rhenium, rhodium, ruthenium and tungsten for transport back to Earth; and iron, cobalt, manganese, molybdenum, nickel, aluminium, and titanium for construction.

Due to the high launch and transportation costs of spaceflight, inaccurate identification of asteroids suitable for mining, and the challenges of in-situ ore extraction, terrestrial mining remains the only means of raw mineral acquisition today. If space program funding, either public or private, dramatically increases, this situation is likely to change as resources on Earth become increasingly scarce and the full potential of asteroid mining—and space exploration in general—is researched in greater detail.[1]:47f However, it remains uncertain whether asteroid mining will attain the volume and composition needed in time to fully compensate for dwindling terrestrial reserves.[2][3][4]

Purpose

Based on known terrestrial reserves, and growing consumption in both developed and developing countries, key elements needed for modern industry and food production could be exhausted on Earth within 50–60 years.[5] These include phosphorus, antimony, zinc, tin, lead, indium, silver, gold and copper.[6] In response, it has been suggested that platinum, cobalt and other valuable elements from asteroids could be mined and sent to Earth for profit or used to build solar-power satellites and space habitats,[7][8] and that water from asteroid ice could be processed to refuel orbiting propellant depots.[9][10][11]

Although asteroids and Earth accreted from the same starting materials, Earth's relatively stronger gravity pulled all heavy siderophilic (iron-loving) elements into its core during its molten youth more than four billion years ago.[12][13][14] This left the crust depleted of such valuable elements until a rain of asteroid impacts re-infused the depleted crust with metals like gold, cobalt, iron, manganese, molybdenum, nickel, osmium, palladium, platinum, rhenium, rhodium, ruthenium and tungsten (some flow from core to surface does occur, e.g. at the Bushveld Igneous Complex, a famously rich source of platinum-group metals)[citation needed]. Today, these metals are mined from Earth's crust, and they are essential for economic and technological progress. Hence, the geologic history of Earth may very well set the stage for a future of asteroid mining.

In 2006, the Keck Observatory announced that the binary Jupiter trojan 617 Patroclus,[15] and possibly large numbers of other Jupiter trojans, are likely extinct comets and consist largely of water ice. Similarly, Jupiter-family comets, and possibly near-Earth asteroids that are extinct comets, might also provide water. The process of in-situ resource utilization—using materials native to space for propellant, thermal management, tankage, radiation shielding, and other high-mass components of space infrastructure—could radically reduce the cost of that infrastructure.[16] Whether these cost reductions can be achieved, and whether they would offset the enormous up-front investment required, is unknown.

Ice would satisfy one of two necessary conditions to enable "human expansion into the Solar System" (the ultimate goal for human space flight proposed by the 2009 "Augustine Commission" Review of United States Human Space Flight Plans Committee): physical sustainability and economic sustainability.[17]

From the astrobiological perspective, asteroid prospecting could provide scientific data for the search for extraterrestrial intelligence (SETI). Some astrophysicists have suggested that if advanced extraterrestrial civilizations employed asteroid mining long ago, the hallmarks of these activities might be detectable.[18][19][20] Why extraterrestrials would have resorted to asteroid mining in near proximity to Earth, with its readily available resources, has not been explained.

Asteroid selection

Comparison of Δv requirements for standard Hohmann transfers
Mission                     Δv
Earth surface to LEO        8.0 km/s
LEO to near-Earth asteroid  5.5 km/s[note 1]
LEO to lunar surface        6.3 km/s
LEO to moons of Mars        8.0 km/s
An important factor to consider in target selection is orbital economics, in particular the change in velocity (Δv) and travel time to and from the target. More of the extracted native material must be expended as propellant in higher Δv trajectories, thus less returned as payload. Direct Hohmann trajectories are faster than Hohmann trajectories assisted by planetary and/or lunar flybys, which in turn are faster than those of the Interplanetary Transport Network, but the reduction in transfer time comes at the cost of increased Δv requirements.
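To make the payload penalty concrete, the propellant fraction implied by a given Δv can be estimated with the Tsiolkovsky rocket equation. The sketch below is a minimal illustration; the 4.4 km/s exhaust velocity is an assumed figure typical of a LOX/LH2 chemical engine, not a value from the table.

```python
import math

# Propellant fraction implied by a given delta-v, via the Tsiolkovsky
# rocket equation: delta_v = v_e * ln(m0 / m1).
def propellant_fraction(delta_v_km_s, v_e_km_s=4.4):
    """Fraction of initial mass that must be propellant (assumed
    exhaust velocity ~4.4 km/s, typical of a LOX/LH2 engine)."""
    return 1.0 - math.exp(-delta_v_km_s / v_e_km_s)

# Delta-v values from the table above:
for mission, dv in [("LEO to near-Earth asteroid", 5.5),
                    ("LEO to lunar surface", 6.3),
                    ("LEO to moons of Mars", 8.0)]:
    print(f"{mission}: {propellant_fraction(dv):.0%} propellant")
```

On these assumptions, roughly 71% of the departing mass must be propellant on a 5.5 km/s trajectory, rising to about 84% at 8.0 km/s, which is why lower-Δv targets return so much more payload.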

The Easily Recoverable Object (ERO) subclass of near-Earth asteroids is considered a likely candidate pool for early mining activity. Their low Δv makes them suitable sources of construction materials for near-Earth space-based facilities, greatly reducing the economic cost of transporting supplies into Earth orbit.[21]

The table above shows a comparison of Δv requirements for various missions. In terms of propulsion energy requirements, a mission to a near-Earth asteroid compares favorably to alternative mining missions.

An example of a potential target[22] for an early asteroid mining expedition is 4660 Nereus, expected to consist mainly of enstatite. This body has a very low Δv compared to lifting materials from the surface of the Moon, although it would require a much longer round trip to return the material.

Multiple types of asteroids have been identified; the three main types are C-type, S-type, and M-type:
  1. C-type asteroids have a high abundance of water, which is not currently of use for mining but could support exploration beyond the asteroid. Mission costs could be reduced by using the water available on the asteroid. C-type asteroids also contain a lot of organic carbon, phosphorus, and other key ingredients for fertilizer, which could be used to grow food.[23]
  2. S-type asteroids carry little water but are more attractive because they contain numerous metals, including nickel and cobalt as well as more valuable metals such as gold, platinum and rhodium. A small 10-meter S-type asteroid contains about 650,000 kg (1,433,000 lb) of metal, with 50 kg (110 lb) in the form of rare metals like platinum and gold.[23]
  3. M-type asteroids are rare but contain up to 10 times more metal than S-types.[23]
A class of easily recoverable objects (EROs) was identified by a group of researchers in 2013.
Twelve asteroids made up the initially identified group, all of which could be potentially mined with present-day rocket technology. Of 9,000 asteroids searched in the NEO database, these twelve could all be brought into an Earth-accessible orbit by changing their velocity by less than 500 meters per second (1,800 km/h; 1,100 mph). The dozen asteroids range in size from 2 to 20 meters (10 to 70 ft).[24]

Asteroid cataloging

The B612 Foundation is a private nonprofit foundation headquartered in the United States, dedicated to protecting Earth from asteroid strikes. As a non-governmental organization it has pursued two related lines of research: detecting asteroids that could one day strike Earth, and finding the technological means to divert their paths and avoid such collisions.
The foundation's goal, set in 2013, was to design and build a privately financed asteroid-finding space telescope, Sentinel, which it then hoped to launch in 2017–2018. Sentinel's infrared telescope, once parked in an orbit similar to that of Venus, was designed to help identify threatening asteroids by cataloging 90% of those with diameters larger than 140 metres (460 ft), as well as surveying smaller Solar System objects.[25][26][27][needs update]

Data gathered by Sentinel was intended to be provided through an existing scientific data-sharing network that includes NASA and academic institutions such as the Minor Planet Center in Cambridge, Massachusetts. Given the satellite's telescopic accuracy, Sentinel's data may prove valuable for other possible future missions, such as asteroid mining.[26][27][28]

Mining considerations

There are three options for mining:[21]
  1. Bring raw asteroidal material to Earth for use.
  2. Process it on-site to bring back only processed materials, and perhaps produce propellant for the return trip.
  3. Transport the asteroid to a safe orbit around the Moon or Earth, or to the ISS.[11] This could hypothetically allow most of the material to be used rather than wasted.[8] Along these lines, NASA proposed a potential future space mission known as the Asteroid Redirect Mission, although its primary focus was retrieval. The House of Representatives deleted the line item for the mission's budget from NASA's FY 2017 budget request.[citation needed]
Processing in situ to extract high-value minerals will reduce the energy required to transport the materials, although the processing facilities must first be transported to the mining site. In situ mining would involve drilling boreholes and injecting hot fluid or gas, allowing the useful material to react with or melt into the solvent, and then extracting the solute. Due to the weak gravitational fields of asteroids, any drilling will cause large disturbances and form dust clouds.

Mining operations require special equipment to handle the extraction and processing of ore in outer space.[21] The machinery will need to be anchored to the body,[citation needed] but once in place, the ore can be moved about more readily due to the lack of gravity. However, no techniques for refining ore in zero gravity currently exist. Docking with an asteroid might be performed using a harpoon-like process, where a projectile would penetrate the surface to serve as an anchor; then an attached cable would be used to winch the vehicle to the surface, if the asteroid is both penetrable and rigid enough for a harpoon to be effective.[29]

Due to the distance from Earth to an asteroid selected for mining, the round-trip time for communications will be several minutes or more, except during occasional close approaches to Earth by near-Earth asteroids. Thus any mining equipment will either need to be highly automated, or a human presence will be needed nearby.[21] Humans would also be useful for troubleshooting problems and for maintaining the equipment. On the other hand, multi-minute communications delays have not prevented the success of robotic exploration of Mars, and automated systems would be much less expensive to build and deploy.[30]

Technology being developed by Planetary Resources to locate and harvest these asteroids has resulted in plans for three different types of satellite:
  1. Arkyd Series 100 (the Leo space telescope), a less expensive instrument that would be used to find and analyze nearby asteroids and assess what resources are available on them.[23]
  2. Arkyd Series 200 (the Interceptor), a satellite that would land on an asteroid for a closer analysis of the available resources.[23]
  3. Arkyd Series 300 (the Rendezvous Prospector), a satellite developed for research and for finding resources deeper in space.[23]
Technology being developed by Deep Space Industries to examine, sample, and harvest asteroids is divided into three families of spacecraft:
  1. FireFlies are triplets of nearly identical spacecraft in CubeSat form launched to different asteroids to rendezvous and examine them.[31]
  2. DragonFlies also are launched in waves of three nearly identical spacecraft to gather small samples (5–10 kg) and return them to Earth for analysis.[31]
  3. Harvestors voyage out to asteroids to gather hundreds of tons of material for return to high Earth orbit for processing.[32]
Asteroid mining could potentially revolutionize space exploration. The high water abundance of C-type asteroids could be used to produce fuel by splitting water into hydrogen and oxygen, making space travel more feasible by lowering the cost of fuel. While the cost of fuel is a relatively insignificant factor in the overall cost of crewed missions to low Earth orbit, fuel storage and craft size become much bigger factors for interplanetary missions. Typically, putting 1 kg in orbit requires lifting more than 10 kg from the ground (a Falcon 9 v1.0 needed roughly 250 tons of propellant to put 5 tons into geostationary transfer orbit or 10 tons into LEO). This limitation is a major difficulty for interplanetary missions, as fuel itself becomes payload.
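As a rough check on the "gear ratio" quoted above, using only the Falcon 9 v1.0 figures from this paragraph (a sketch; real launch-vehicle accounting also includes stage dry mass):

```python
# Kilograms of propellant burned per kilogram of payload delivered,
# using the Falcon 9 v1.0 figures quoted above (~250 t of propellant;
# ~10 t to LEO, ~5 t to geostationary transfer orbit).
PROPELLANT_T = 250.0
for orbit, payload_t in [("LEO", 10.0), ("GTO", 5.0)]:
    print(f"{orbit}: {PROPELLANT_T / payload_t:.0f} kg propellant per kg payload")
```

This gives 25:1 for LEO and 50:1 for GTO, consistent with the claim that each kilogram in orbit costs well over ten kilograms on the ground.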

Extraction techniques

Surface mining

On some types of asteroids, material may be scraped off the surface using a scoop or auger, or for larger pieces, an "active grab."[21] There is strong evidence that many asteroids consist of rubble piles,[33] making this approach possible.

Shaft mining

A mine can be dug into the asteroid and the material extracted through the shaft. This requires precise knowledge of where the ore body sits beneath the surface regolith, as well as a transportation system to carry the desired ore to the processing facility.

Magnetic rakes

Asteroids with a high metal content may be covered in loose grains that can be gathered by means of a magnet.[21][34]

Heating

For asteroids such as carbonaceous chondrites that contain hydrated minerals, water and other volatiles can be extracted simply by heating. A water extraction test in 2016[35] by Honeybee Robotics used asteroid regolith simulant[36] developed by Deep Space Industries and the University of Central Florida to match the bulk mineralogy of a particular carbonaceous meteorite. Although the simulant was physically dry (i.e., it contained no water molecules adsorbed in the matrix of the rocky material), heating to about 510 °C released hydroxyl, which came out as substantial amounts of water vapor from the molecular structure of phyllosilicate clays and sulphur compounds. The vapor was condensed into liquid water filling the collection containers, demonstrating the feasibility of mining water from certain classes of physically dry asteroids.[citation needed]
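For a sense of scale, the energy cost of this kind of thermal extraction can be sketched as below; the specific heat and the water yield are illustrative assumptions, not figures from the Honeybee test.

```python
# Back-of-the-envelope energy to bake water out of regolith, ignoring
# heat losses and heat recovery. Assumed values:
C_P = 900.0                    # J/(kg*K), typical for dry rock (assumed)
T_START, T_END = 20.0, 510.0   # deg C; 510 C is the release temperature above
WATER_YIELD = 0.05             # kg water per kg regolith (assumed)

energy_per_kg_regolith = C_P * (T_END - T_START)  # joules
energy_per_kg_water = energy_per_kg_regolith / WATER_YIELD
print(f"~{energy_per_kg_regolith/1e3:.0f} kJ per kg of regolith heated")
print(f"~{energy_per_kg_water/1e6:.1f} MJ per kg of water recovered")
```

On these assumptions, recovering a kilogram of water costs on the order of 9 MJ of heat, a load plausibly within reach of a modest solar concentrator.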

For volatile materials in extinct comets, heat can be used to melt and vaporize the matrix.[21][37]

Extraction using the Mond process

The nickel and iron of an iron-rich asteroid could be extracted by the Mond process. This involves passing carbon monoxide over the asteroid material at a temperature between 50 and 60 °C for nickel (higher for iron), at high pressure, in enclosures made of materials resistant to the corrosive carbonyls. This forms the gases nickel tetracarbonyl and iron pentacarbonyl; the nickel and iron can then be recovered from the gas at higher temperatures, perhaps in an attached printer, with platinum, gold and so on left behind as a residue.[38][39][40]
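The underlying chemistry is the standard pair of Mond-process reactions; the nickel decomposition temperature shown (~230 °C) is the commonly cited terrestrial figure, not one taken from the cited sources.

```latex
\mathrm{Ni} + 4\,\mathrm{CO}
  \;\xrightarrow{\;50\text{--}60\,^{\circ}\mathrm{C}\;}\;
  \mathrm{Ni(CO)_4}
  \;\xrightarrow{\;\sim 230\,^{\circ}\mathrm{C}\;}\;
  \mathrm{Ni} + 4\,\mathrm{CO}
\qquad
\mathrm{Fe} + 5\,\mathrm{CO}
  \;\longrightarrow\;
  \mathrm{Fe(CO)_5}
```

The carbon monoxide liberated on decomposition can be recycled to volatilize more metal, which is what makes the process attractive where resupply is expensive.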

Self-replicating machines

A 1980 NASA study entitled Advanced Automation for Space Missions proposed a complex automated factory on the Moon that would work over several years to build 80% of a copy of itself, with the other 20% imported from Earth, since the more complex parts (like computer chips) would require a vastly larger supply chain to produce.[41] Exponential growth of such factories over many years could refine large amounts of lunar (or asteroidal) regolith. Since 1980 there has been major progress in miniaturization, nanotechnology, materials science, and additive manufacturing, so it may be possible to achieve 100% "closure" with a reasonably small mass of hardware, although these technological advances are themselves enabled on Earth by an expanding supply chain, so the question needs further study. A NASA study in 2012 proposed a "bootstrapping" approach to establish an in-space supply chain with 100% closure, suggesting it could be achieved in only two to four decades at low annual cost.[42] A 2016 study again claimed completion is possible within a few decades because of ongoing advances in robotics, and argued it would return benefits to Earth, including economic growth, environmental protection, and clean energy, while also protecting humanity against existential threats.[43]

Proposed mining projects

On April 24, 2012, a plan was announced by billionaire entrepreneurs to mine asteroids for their resources. The company, Planetary Resources, was founded by aerospace entrepreneurs Eric Anderson and Peter Diamandis. Advisers include film director and explorer James Cameron, and investors include Google's chief executive Larry Page and its executive chairman Eric Schmidt.[16][44] The company also plans to create a fuel depot in space by 2020 using water from asteroids, splitting it into liquid oxygen and liquid hydrogen for rocket fuel. From there, the fuel could be shipped to Earth orbit to refuel commercial satellites or spacecraft.[16] The plan has been met with skepticism by some scientists, who do not see it as cost-effective, even though platinum is worth £22 per gram and gold nearly £31 per gram (approximately £961 per troy ounce).[when?] Platinum and gold are raw materials traded on terrestrial markets, and it is impossible to predict what prices either will command when resources from asteroids become available. For example, platinum has traditionally been very valuable due to its industrial and jewelry applications, but should future technologies make the internal combustion engine obsolete, demand for platinum as the catalyst in catalytic converters may well decline, reducing the metal's long-term demand. The ongoing NASA mission OSIRIS-REx, which is planned to return a minimum of 60 g (two ounces) of asteroid material to Earth but could collect up to 2 kg, will cost about US$1 billion.[16][45]

Planetary Resources says that, in order to be successful, it will need to develop technologies that bring down the cost of spaceflight. It also expects the construction of "space infrastructure" to help reduce long-term running costs. For example, fuel costs can be reduced by extracting water from asteroids and splitting it into hydrogen and oxygen using solar energy. In theory, hydrogen fuel mined from asteroids costs significantly less than fuel launched from Earth, owing to the high cost of escaping Earth's gravity. If successful, investment in space infrastructure and economies of scale could reduce operational costs to levels significantly below those of NASA's ongoing OSIRIS-REx mission. This investment would have to be amortized through the sale of commodities, delaying any return to investors. There are also indications that Planetary Resources expects government to fund infrastructure development, as exemplified by its request for $700,000 from NASA to fund the first of the telescopes described above.

Another similar venture, called Deep Space Industries, was started by David Gump, who had founded other space companies.[47] The company hoped to begin prospecting for asteroids suitable for mining by 2015 and to return asteroid samples to Earth by 2016.[48] By 2023 Deep Space Industries plans to begin mining asteroids.[49]

At ISDC-San Diego 2013,[50] Kepler Energy and Space Engineering (KESE, LLC) also announced that it was going to mine asteroids, using a simpler, more straightforward approach. KESE plans to rely almost exclusively on existing guidance, navigation and anchoring technologies from mostly successful missions such as Rosetta/Philae, Dawn, and Hayabusa (MUSES-C), along with current NASA Technology Transfer tooling, to build and send a 4-module Automated Mining System (AMS) to a small asteroid. A simple digging tool would collect ~40 tons of asteroid regolith, and each of the four return modules would be brought back to low Earth orbit (LEO) by the end of the decade. Small asteroids are expected to be loose rubble piles, making for easy extraction.

In September 2012, the NASA Institute for Advanced Concepts (NIAC) announced the Robotic Asteroid Prospector project, which will examine and evaluate the feasibility of asteroid mining in terms of means, methods, and systems.[51]

Being the largest body in the asteroid belt, Ceres could become the main base and transport hub for future asteroid mining infrastructure,[52] allowing mineral resources to be transported to Mars, the Moon, and Earth. Because of its small escape velocity combined with large amounts of water ice, it also could serve as a source of water, fuel, and oxygen for ships going through and beyond the asteroid belt.[52] Transportation from Mars or the Moon to Ceres would be even more energy-efficient than transportation from Earth to the Moon.[53]

Companies and organizations

Organizations which are working on asteroid mining include the following:

Organisation Type
Deep Space Industries Private company
Planetary Resources Private company
Moon Express Private company
Kleos Space Private company
TransAstra Private company
Aten Engineering Private company
OffWorld Private company
SpaceFab.US Private company
Asteroid Mining Corporation Ltd. UK[54] Private company

Potential targets

According to the Asterank database[when?], the following asteroids are considered the best targets for mining if maximum cost-effectiveness is to be achieved:[55]
 
Asteroid Est. Value (US$) Est. Profit (US$) Δv (km/s) Composition
Ryugu 95 billion 35 billion 4.663 Nickel, iron, cobalt, water, nitrogen, hydrogen, ammonia
1989 ML 14 billion 4 billion 4.888 Nickel, iron, cobalt
Nereus 5 billion 1 billion 4.986 Nickel, iron, cobalt
Didymos 84 billion 22 billion 5.162 Nickel, iron, cobalt
2011 UW158 8 billion 2 billion 5.187 Platinum, nickel, iron, cobalt
Anteros 5570 billion 1250 billion 5.439 Magnesium silicate, aluminum, iron silicate
2001 CC21 147 billion 30 billion 5.636 Magnesium silicate, aluminum, iron silicate
1992 TC 84 billion 17 billion 5.647 Nickel, iron, cobalt
2001 SG10 4 billion 0.6 billion 5.880 Nickel, iron, cobalt
2002 DO3 0.3 billion 0.06 billion 5.894 Nickel, iron, cobalt

Economics

Currently, the quality of the ore, and the consequent cost and mass of the equipment required to extract it, are unknown and can only be speculated upon. Some economic analyses indicate that the cost of returning asteroidal materials to Earth far outweighs their market value, and that asteroid mining will not attract private investment at current commodity prices and space transportation costs.[56][57] Other studies suggest a large profit could be made by using solar power.[58][59] Potential markets for materials can be identified and profit generated if extraction costs are brought down. For example, the delivery of multiple tonnes of water to low Earth orbit for rocket fuel preparation for space tourism could generate significant profit, if space tourism itself proves profitable, which has not yet been demonstrated.[60]

In 1997 it was speculated that a relatively small metallic asteroid with a diameter of 1.6 km (1 mi) contains more than US$20 trillion worth of industrial and precious metals.[10][61] A comparatively small M-type asteroid with a mean diameter of 1 km (0.62 mi) could contain more than two billion metric tons of iron–nickel ore,[62] two to three times the world production of 2004.[63] The asteroid 16 Psyche is believed to contain 1.7×10^19 kg of nickel–iron, which could supply the world production requirement for several million years. A small portion of the extracted material would also be precious metals.
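The tonnage claims above follow from simple geometry. A minimal sketch, assuming a spherical body and a uniform density of 7,800 kg/m^3 (a typical iron–nickel meteorite value, not a figure from the cited sources):

```python
import math

# Mass of a spherical asteroid from its diameter and an assumed
# uniform density (~7,800 kg/m^3, typical of iron-nickel meteorites).
def asteroid_mass_kg(diameter_m, density_kg_m3=7.8e3):
    radius = diameter_m / 2.0
    return density_kg_m3 * (4.0 / 3.0) * math.pi * radius ** 3

# A 1 km metallic asteroid comes out near 4e12 kg, i.e. ~4 billion
# metric tons -- the same order as the figure quoted above.
print(f"{asteroid_mass_kg(1000.0):.1e} kg")
```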

Not all materials mined from asteroids would be cost-effective to return to Earth. Platinum is very rare in terrestrial geologic formations, so some quantity could be worth bringing back for terrestrial use. Nickel, by contrast, is quite abundant and is mined in many terrestrial locations, so the high cost of asteroid mining may make asteroid-sourced nickel economically unviable.[64]

Although Planetary Resources indicated in 2012 that the platinum from a 30-meter-long (98 ft) asteroid could be worth US$25–50 billion,[65] an economist remarked any outside source of precious metals could lower prices sufficiently to possibly doom the venture by rapidly increasing the available supply of such metals.[66]

Development of an infrastructure for altering asteroid orbits could offer a large return on investment.[67]

Scarcity

Scarcity is the fundamental economic problem of humans having seemingly unlimited wants in a world of limited resources. Since Earth's resources are not infinite, the relative abundance of asteroidal ore gives asteroid mining the potential to provide a nearly unlimited supply of those materials, essentially eliminating their scarcity.
The idea of exhausting resources is not new. In 1798, Thomas Malthus wrote that, because resources are ultimately limited, exponential population growth would depress income per capita until poverty and starvation constrained the population.[68] Malthus posited this 220 years ago, and no sign of such an effect on raw materials has yet emerged.
  • Proven reserves are deposits of mineral resources that are already discovered and known to be economically extractable under present or similar demand, price and other economic and technological conditions.[68]
  • Conditional reserves are discovered deposits that are not yet economically viable.[citation needed]
  • Indicated reserves are less intensively measured deposits whose data is derived from surveys and geological projections. Hypothetical reserves and speculative resources make up this group of reserves.
  • Inferred reserves are deposits that have been located but not yet exploited.[68]
Continued development of asteroid mining techniques and technology will help to increase mineral discoveries.[69] As the cost of extracting mineral resources, especially platinum-group metals, on Earth rises, the cost of extracting the same resources from celestial bodies declines due to technological innovations around space exploration.[68] However, the "substitution effect" (the use of other materials for the functions now performed by platinum) would strengthen as the cost of platinum increased. New supplies would also come to market in the form of jewelry and recycled electronic equipment from itinerant "we buy platinum" businesses, like the "we buy gold" businesses that exist now.

As of September 2016, there are 711 known asteroids with a value exceeding US$100 trillion.[55]

Financial feasibility

Space ventures are high-risk, with long lead times and heavy capital investment, and asteroid-mining projects are no different. These ventures could be funded through private or government investment. A commercial venture can be profitable as long as the revenue earned exceeds total costs (extraction costs plus marketing costs).[70] The costs of an asteroid-mining venture were estimated at around US$100 billion in 1996.[70]

There are six categories of cost considered for an asteroid mining venture:[70]
  1. Research and development costs
  2. Exploration and prospecting costs
  3. Construction and infrastructure development costs
  4. Operational and engineering costs
  5. Environmental costs
  6. Time cost
Determining financial feasibility is best represented through net present value.[70] One requirement for financial feasibility is a high return on investment, estimated at around 30%.[70] An example calculation assumes, for simplicity, that the only valuable material on asteroids is platinum. On August 16, 2016, platinum was valued at $1,157 per troy ounce, or about $37,000 per kilogram. At a price of $1,340 per ounce, a 10% return on investment would require extracting 173,400 kg (5,575,000 ozt) of platinum for every 1,155,000 tons of asteroid ore; a 50% return would require 1,703,000 kg (54,750,000 ozt) of platinum for every 11,350,000 tons of asteroid ore. This analysis assumes that doubling the supply of platinum on the market (5.13 million ounces in 2014) would have no effect on its price. A more realistic assumption is that increasing the supply by this amount would reduce the price by 30–50%.[citation needed]
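The arithmetic behind such estimates is straightforward. A minimal sketch with hypothetical inputs (the venture cost, target return, and platinum price below are illustrative; the 150 g/t ore grade is what the 10%-return example above works out to):

```python
OZT_PER_KG = 32.1507  # troy ounces per kilogram

def required_platinum_kg(venture_cost_usd, target_roi, price_usd_per_ozt):
    """Platinum that must be sold to recoup the venture cost plus the
    target return, assuming sales do not move the market price."""
    revenue_needed = venture_cost_usd * (1.0 + target_roi)
    return revenue_needed / price_usd_per_ozt / OZT_PER_KG

# Hypothetical venture: $1 billion cost, 30% required return, $1,340/ozt.
pt_kg = required_platinum_kg(1e9, 0.30, 1340.0)
grade_kg_per_ton = 0.150  # ~150 g of platinum per ton of ore (see above)
print(f"Platinum to extract: {pt_kg:,.0f} kg")
print(f"Ore to process:      {pt_kg / grade_kg_per_ton:,.0f} tons")
```

Even on these optimistic assumptions, roughly 30,000 kg of platinum, about a million troy ounces, would have to reach the market, which is why the price-impact caveat above matters.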

Decreases in the price of space access matter. Astronomer Martin Elvis projects that the start of operational use of the low-cost-per-kilogram Falcon Heavy launch vehicle in 2018 increased the number of economically minable near-Earth asteroids from hundreds to thousands. The several additional kilometers per second of delta-v that Falcon Heavy makes available raises the fraction of accessible NEAs from 3 percent to around 45 percent.[71]

Regulation and safety

Space law involves a specific set of international treaties, along with national statutory laws. The system and framework for international and domestic laws have emerged in part through the United Nations Office for Outer Space Affairs.[72] The rules, terms and agreements that space law authorities consider to be part of the active body of international space law are the five international space treaties and five UN declarations. Approximately 100 nations and institutions were involved in negotiations. The space treaties cover many major issues such as arms control, non-appropriation of space, freedom of exploration, liability for damages, safety and rescue of astronauts and spacecraft, prevention of harmful interference with space activities and the environment, notification and registration of space activities, and the settlement of disputes. In exchange for assurances from the spacefaring powers, the nonspacefaring nations acquiesced to U.S. and Soviet proposals to treat outer space as a commons (res communis) belonging to no one state.

Asteroid mining in particular is covered by both international treaties—for example, the Outer Space Treaty—and national statutory laws—for example, specific legislative acts in the United States[73] and Luxembourg.[74]

Varying degrees of criticism exist regarding international space law. Some critics accept the Outer Space Treaty but reject the Moon Agreement. It is worth noting, however, that even the Moon Agreement, with its common heritage of mankind clause, allows space mining: both it and the Outer Space Treaty permit private property rights and exclusive ownership over natural outer space resources once they are removed from their natural place on the surface, subsurface or subsoil of the Moon and other celestial bodies. It is generally understood among space law authorities that extracting space resources is allowable, even by private companies for profit, and that international space law is therefore capable of managing newly emerging activities such as space mining, private space transportation, commercial spaceports, and commercial space stations, habitats and settlements. What international space law prohibits is property rights over territories and outer space land.

Astrophysicists Carl Sagan and Steven J. Ostro raised the concern that altering the trajectories of asteroids near Earth might pose a collision hazard. They concluded that orbit engineering has both opportunities and dangers: if controls instituted on orbit-manipulation technology were too tight, future spacefaring could be hampered; if they were too loose, human civilization would be at risk.[67][75][76]

The Outer Space Treaty

After ten years of negotiations between nearly 100 nations, the Outer Space Treaty opened for signature on January 27, 1967, and entered into force as the constitution for outer space on October 10, 1967. The treaty was well received: it was ratified by ninety-six nations and signed by an additional twenty-seven states. The basic foundation of international space law thus consists of five (arguably four) international space treaties, along with various written resolutions and declarations. The main international treaty is the Outer Space Treaty of 1967, generally viewed as the "Constitution" for outer space. By ratifying it, nations agreed that outer space would belong to the "province of mankind", that all nations would have the freedom to "use" and "explore" outer space, and that both these provisions must be exercised in a way that "benefits all mankind". The province of mankind principle and the other key terms have not yet been specifically defined (Jasentuliyana, 1992), and critics have complained that the Outer Space Treaty is vague. Yet international space law has worked well and has served commercial space industries and interests for many decades. The taking and extraction of Moon rocks, for example, has been treated as legally permissible.
The framers of Outer Space Treaty initially focused on solidifying broad terms first, with the intent to create more specific legal provisions later (Griffin, 1981: 733–734). This is why the members of the COPUOS later expanded the Outer Space Treaty norms by articulating more specific understandings which are found in the "three supplemental agreements" – the Rescue and Return Agreement of 1968, the Liability Convention of 1973, and the Registration Convention of 1976 (734).

Hobe (2006) explains that the Outer Space Treaty "explicitly and implicitly prohibits only the acquisition of territorial property rights" – public or private, but extracting space resources is allowable.

The Moon Agreement

The Moon Agreement (1979–1984) is often treated[by whom?] as though it were not part of the body of international space law, and there has been extensive debate over whether it is a valid part of international law. It entered into force in 1984 under a five-state ratification consensus procedure agreed upon by the members of the United Nations Committee on the Peaceful Uses of Outer Space (COPUOS). Even today, very few nations have signed or ratified the Moon Agreement; in recent years the figure has crept up to a few more than a dozen nations. The other outer space treaties enjoyed a high level of international cooperation in signature and ratification, but the Moon Treaty went further than they did, defining the Common Heritage concept in more detail and imposing specific obligations on parties engaged in the exploration and/or exploitation of outer space. The Moon Treaty explicitly designates the Moon and its natural resources as part of the Common Heritage of Mankind. Even so, the Moon Agreement allows space mining, specifically the extraction of natural resources. The treaty provides in Article 11, paragraph 3 that:
Neither the surface nor the subsurface of the Moon, nor any part thereof or natural resources in place [emphasis added], shall become property of any State, international intergovernmental or non-governmental organization, national organization or non-governmental entity or of any natural person. The placement of personnel, space vehicles, equipment, facilities, stations and installations on or below the surface of the Moon, including structures connected with its surface or subsurface, shall not create a right of ownership over the surface or the subsurface of the Moon or any areas thereof.
The objection to the treaty by the spacefaring nations is held to be the requirement that extracted resources (and the technology used to that end) must be shared with other nations. The similar regime in the United Nations Convention on the Law of the Sea is believed to impede the development of such industries on the seabed.[77]

Legal regimes of some countries

Some nations are beginning to promulgate legal regimes for extraterrestrial resource extraction. For example, the United States "SPACE Act of 2015"—facilitating private development of space resources consistent with US international treaty obligations—passed the US House of Representatives in July 2015[78][79] and the United States Senate in November 2015.[80] On 25 November, US President Barack Obama signed H.R.2262, the U.S. Commercial Space Launch Competitiveness Act, into law.[81] The law recognizes the right of U.S. citizens to own space resources they obtain and encourages the commercial exploration and utilization of resources from asteroids. According to § 51303 of the law:[82]
A United States citizen engaged in commercial recovery of an asteroid resource or a space resource under this chapter shall be entitled to any asteroid resource or space resource obtained, including to possess, own, transport, use, and sell the asteroid resource or space resource obtained in accordance with applicable law, including the international obligations of the United States
In February 2016, the Government of Luxembourg announced that it would attempt to "jump-start an industrial sector to mine asteroid resources in space" by, among other things, creating a "legal framework" and regulatory incentives for companies involved in the industry.[74][83] By June 2016, it had announced that it would "invest more than US$200 million in research, technology demonstration, and in the direct purchase of equity in companies relocating to Luxembourg."[84] In 2017, it became the "first European country to pass a law conferring to companies the ownership of any resources they extract from space", and it remained active in advancing space resource public policy in 2018.[85]

Missions

Ongoing and planned

  • Hayabusa 2 – ongoing JAXA asteroid sample return mission (arriving at the target in 2018)
  • OSIRIS-REx – planned NASA asteroid sample return mission (launched in September 2016)
  • Fobos-Grunt 2 – proposed Roskosmos sample return mission to Phobos (launch in 2024)

Completed

First successful missions by country:[86]

Nation        Flyby             Orbit            Landing          Sample return
USA           ICE (1985)        NEAR (1997)      NEAR (2001)      Stardust (2006)
Japan         Suisei (1986)     Hayabusa (2005)  Hayabusa (2005)  Hayabusa (2010)
EU            ICE (1985)        Rosetta (2014)   Rosetta (2014)   -
Soviet Union  Vega 1 (1986)     -                -                -
China         Chang'e 2 (2012)  -                -                -

In fiction

The first mention of asteroid mining in science fiction is apparently Garrett P. Serviss' story Edison's Conquest of Mars, New York Evening Journal, 1898.[87][88]
The 1979 film Alien, directed by Ridley Scott, is about the crew of the Nostromo, a commercially operated spaceship on a return trip to Earth hauling a refinery and 20 million tons of mineral ore mined from an asteroid.

C. J. Cherryh's novel Heavy Time focuses on the plight of asteroid miners in the Alliance-Union universe, while Moon is a 2009 British science fiction drama film depicting a lunar facility that mines helium-3, an alternative fuel needed to provide energy on Earth. It was noted for its realism and drama, winning several awards internationally.[89][90][91]

In several science fiction video games, asteroid mining is a possibility. For example, in the space MMO EVE Online, asteroid mining is a very popular career, owing to its simplicity.[92][93][94]

In the computer game Star Citizen, the mining occupation supports a variety of dedicated specialists, each of which has a critical role to play in the effort.[95]

In The Expanse series of novels, asteroid mining is a driving economic force behind the colonization of the solar system. Since huge energy input is required to escape planets' gravity, it is implied that once space-based mining platforms are established, it will be more efficient to harvest natural resources (water, oxygen, building materials, etc.) from asteroids rather than lifting them out of Earth's gravity well.[citation needed]


Is Mars’ Soil Too Dry to Sustain Life?

Author: Frank Tavares  | July 23, 2018
Original link:  https://www.nasa.gov/feature/ames/is-mars-soil-too-dry-to-sustain-life

Life as we know it needs water to thrive. Even so, we see life persist in the driest environments on Earth. But how dry is too dry? At what point is an environment too extreme for even microorganisms, the smallest and often most resilient of lifeforms, to survive? These questions are important to scientists searching for life beyond Earth, including on the planet Mars. To help answer this question, a research team from NASA’s Ames Research Center in California’s Silicon Valley traveled to the driest place on Earth: the Atacama Desert in Chile, a 1,000-kilometer strip of land on South America’s west coast.

The Atacama Desert is one of the Earth’s environments that comes closest to the parched Martian surface. But the Atacama isn’t uniformly dry. When traveling from the relatively less dry southern end of the desert in central Chile to its extremely dry center in northern Chile, the annual precipitation shifts from a few millimeters of rain per year to only a few millimeters of rain per decade.

[Image: Map of the Atacama Desert showing the change in annual precipitation from one end of the desert to the other. The aridity index mentioned is a value based on annual rainfall and water loss. Credits: NASA Ames Research Center]

This non-uniformly dry environment provides an opportunity to search for life at decreasing levels of precipitation. By pinning down how much water an environment needs to be habitable, i.e. able to support lifeforms, the research team was able to determine that a dry limit of habitability exists.

"On Earth, we find evidence of microbial life everywhere," said Mary Beth Wilhelm, an astrobiologist at Ames and lead author of the new study published in the journal Astrobiology this month. "However, in extreme environments, it’s important to know whether a microbe is dormant and just barely surviving, or really alive and well."

Biologists define something as alive if it is capable of growth and reproduction. If microbes are simply surviving or performing a few basic functions, they’ll die within one generation without passing on any genetic information. When looking for the potential of life on Mars, scientists need to see this reproduction take place, which leads to population growth and genetic change from one generation to the next.

"By learning if and how microbes stay alive in extremely dry regions on Earth, we hope to better understand if Mars once had microbial life and whether it could have survived until today," said Wilhelm.

A Sign of Stress is a Sign of Life

Scientists have a few tools to figure out whether a sample is growing or just surviving. One important sign is stress. Living long enough to grow and adapt in extreme deserts like the Atacama – or potentially on Mars – is no easy task. If life is really growing in this extremely dry environment, it’s going to be stressed, while dormant life simply surviving will not. Because dormant life is not able to even try to grow or reproduce, there are no stress markers, like changes in the structure of certain cell molecules. Astrobiologists can look for some tell-tale signs of this stress to search for evidence of growth in the parched soils.

The science team collected soil samples from across the Atacama Desert and brought them back to their lab at Ames. There, they performed tests to identify stress markers in the samples by looking at features common to all known living organisms.

[Image: Researchers collect samples from the surface of the Atacama Desert in Chile, going a few centimeters into the ground. Credits: NASA Ames Research Center]

One stress marker can be found in lipids, molecules that make up the outer surface of a living microbial cell, known as its membrane. When cells are exposed to stressful conditions, their lipids change structure, becoming more rigid.

Scientists found this marker in less dry parts of the Atacama, but it was mysteriously missing from the driest regions, where microbes should be more stressed. Based on these and other results, the team believes that a line of transition exists between where minute amounts of water are still enough for life to grow and where the environment is so dry that microorganisms merely survive without growth in surface soil in the Atacama.

Dating the Remnants of Life

Scientists can tell how long cells have been dead by studying a type of molecule called amino acids, the building blocks of proteins. The structures of these amino acids take two forms, each a mirror reflection of the other, like a pair of hands. In fact, this "handedness" is the term scientists use to describe these structures.

All life on Earth is built with "left-handed" amino acid molecules. However, when a cell dies, some of its amino acids change at a known rate into the reflecting "right-handed" structure, eventually balancing into a 50-50 ratio over many years.
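That known rate gives a simple dating formula. The sketch below uses the standard first-order racemization kinetics model; the rate constant is an illustrative assumption, since k depends strongly on the amino acid, temperature, and moisture, and the study's actual calibration is not given here.

```python
import math

# Racemization "clock": the D/L ratio climbs from ~0 in living tissue
# toward 1.0 (the 50-50 equilibrium), following first-order kinetics:
#     ln[(1 + D/L) / (1 - D/L)] = 2 * k * t
def age_from_dl_ratio(dl_ratio, k_per_year):
    """Years since death for a measured D/L ratio and rate constant k."""
    return math.log((1.0 + dl_ratio) / (1.0 - dl_ratio)) / (2.0 * k_per_year)

# Illustrative only: k = 1e-5 per year and a measured D/L of 0.18
# imply death roughly 18,000 years ago.
print(f"{age_from_dl_ratio(0.18, 1e-5):,.0f} years")
```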

By looking at this ratio in the driest Atacama soils, the scientists found microbes there that have been dead for at least 10,000 years. Finding even the remnants of life this old is extremely rare, and surprising for a sample sitting at the surface of Earth.

Getting Ready for Mars

Mars is 1,000 times drier than even the driest parts of the Atacama, which makes it less likely that microbial life as we know it exists on the planet’s surface, even with some access to water. However, even in the driest areas of Chile’s desert, remnants of past microbial life from wetter times in the Atacama’s history were clearly present and well preserved over thousands of years. This means that because scientists know that Mars was a wetter, more vibrant planet in its past, traces of that ancient life might still be intact.

"Before we go to Mars, we can use the Atacama like a natural laboratory and, based on our results, adjust our expectations for what we might find when we get there," said Wilhelm. "Knowing the surface of Mars today might be too dry for life to grow, but that traces of microbes can last for thousands of years helps us design better instruments to not only search for life on and under the planet’s surface, but to try and unlock the secrets of its distant past."

Members of the news media interested in learning more about this research should refer to the NASA Ames Media Contacts page to get in touch.
 
Last Updated: July 24, 2018
Editor: Frank Tavares

Transcending Moore’s Law with Molecular Electronics and Nanotechnology

September 27, 2004 by Steve T. Jurvetson
Original link:  http://www.kurzweilai.net/transcending-moore-s-law-with-molecular-electronics-and-nanotechnology
Originally published in Nanotechnology Law & Business March 2004. Published on KurzweilAI.net September 27, 2004.

While the future is becoming more difficult to predict with each passing year, we should expect an accelerating pace of technological change. Nanotechnology is the next great technology wave and the next phase of Moore’s Law. Nanotech innovations enable myriad disruptive businesses that were not possible before, driven by entrepreneurship.

Much of our future context will be defined by the accelerating proliferation of information technology as it innervates society and begins to subsume matter into code. It is a period of exponential growth in the impact of the learning-doing cycle where the power of biology, IT and nanotech compounds the advances in each formerly discrete domain.

The history of technology is one of disruption and exponential growth, epitomized in Moore’s law, and generalized to many basic technological capabilities that are compounding independently from the economy. More than a niche subject of interest only to chip designers, the continued march of Moore’s Law will affect all of the sciences, just as nanotech will affect all industries. Thinking about Moore’s Law in the abstract provides a framework for predicting the future of computation and the transition to a new substrate: molecular electronics. An analysis of progress in molecular electronics provides a detailed example of the commercialization challenges and opportunities common to many nanotechnologies.

Introduction to Technology Exponentials:

Despite a natural human tendency to presume linearity, accelerating change from positive feedback is a common pattern in technology and evolution. We are now crossing a threshold where the pace of disruptive shifts is no longer inter-generational and begins to have a meaningful impact over the span of careers and eventually product cycles.

As early stage VCs, we look for disruptive businesses run by entrepreneurs who want to change the world. To be successful, we have to identify technology waves early and act upon those beliefs. At DFJ, we believe that nanotech is the next great technology wave, the nexus of scientific innovation that revolutionizes most industries and indirectly affects the fabric of society. Historians will look back on the upcoming epoch with no less portent than the Industrial Revolution.

Those are long-term trends. Today, from a seed-stage venture capitalist perspective (with a broad sampling of the entrepreneurial pool), we are seeing more innovation than ever before. And we are investing in more new companies than ever before.

In the medium term, disruptive technological progress is relatively decoupled from economic cycles. For example, for the past 40 years in the semiconductor industry, Moore’s Law has not wavered in the face of dramatic economic cycles. Ray Kurzweil’s abstraction of Moore’s Law (from transistor-centricity to computational capability and storage capacity) shows an uninterrupted exponential curve for over 100 years, again without perturbation during the Great Depression or the World Wars. Similar exponentials can be seen in Internet connectivity, medical imaging resolution, genes mapped and solved 3D protein structures. In each case, the level of analysis is not products or companies, but basic technological capabilities.

In his forthcoming book, Kurzweil summarizes the exponentiation of our technological capabilities, and our evolution, with the near-term shorthand: the next 20 years of technological progress will be equivalent to the entire 20th century. For most of us, who do not recall what life was like one hundred years ago, the metaphor is a bit abstract. In 1900, in the U.S., there were only 144 miles of paved road, and most Americans (94%+) were born at home, without a telephone, and never graduated high school. Most (86%+) did not have a bathtub at home or reliable access to electricity. Consider how much technology-driven change has compounded over the past century, and consider that an equivalent amount of progress will occur in one human generation, by 2020. It boggles the mind, until one dwells on genetics, nanotechnology, and their intersection. Exponential progress perpetually pierces the linear presumptions of our intuition. “Future Shock” is no longer on an inter-generational time-scale.

The history of humanity is that we use our tools and our knowledge to build better tools and expand the bounds of our learning. We are entering an era of exponential growth in our capabilities in biotech, molecular engineering and computing. The cross-fertilization of these formerly discrete domains compounds our rate of learning and our engineering capabilities across the spectrum. With the digitization of biology and matter, technologists from myriad backgrounds can decode and engage the information systems of biology as never before. And this inspires new approaches to bottom-up manufacturing, self-assembly, and layered complex systems development.

Moore’s Law:

Moore’s Law is commonly reported as a doubling of transistor density every 18 months. But this is not something Intel co-founder Gordon Moore has ever said. It is a nice blending of his two predictions: in 1965, he predicted an annual doubling of transistor counts in the most cost-effective chip, and in 1975 he revised it to a doubling every 24 months. With a little hand waving, most reports attribute 18 months to Moore’s Law, but there is quite a bit of variability. The popular perception of Moore’s Law is that computer chips compound in complexity at near-constant unit cost. This is one of the many abstractions of Moore’s Law, and it relates to the compounding of transistor density in two dimensions. Others relate to speed (the signals have less distance to travel) and computational power (speed x density).
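One way to reconcile the popular 18-month figure with Moore's 24-month prediction is to note that computational power compounds density and speed together. The speed assumption below is purely illustrative, not a claim from Moore:

```python
import math

# If density doubles every 24 months and each density doubling also
# brings ~26% more speed (assumed), then power = speed x density
# doubles every 24 / (1 + log2(1.26)) ~ 18 months.
DENSITY_DOUBLING_MONTHS = 24.0
SPEED_GAIN_PER_DOUBLING = 2 ** (1.0 / 3.0)  # ~1.26, assumed

power_doubling = DENSITY_DOUBLING_MONTHS / (
    1.0 + math.log2(SPEED_GAIN_PER_DOUBLING))
print(f"Effective power-doubling time: {power_doubling:.0f} months")
```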

So as to not miss the long-term trend while sorting out the details, we will focus on the 100-year abstraction of Moore’s Law below. But we should digress for a moment to underscore the importance of continued progress in Moore’s law to a broad set of industries.

Importance of Moore’s Law:

Moore’s Law drives chips, communications and computers and has become the primary driver in drug discovery and bioinformatics, medical imaging and diagnostics. Over time, the lab sciences become information sciences, modeled on a computer rather than trial and error experimentation.
NASA Ames shut down their wind tunnels this year. As Moore’s Law provided enough computational power to model turbulence and airflow, there was no longer a need to test iterative physical design variations of aircraft in the wind tunnels, and the pace of innovative design exploration dramatically accelerated.

Eli Lilly processed 100x fewer molecules this year than they did 15 years ago. But their annual productivity in drug discovery did not drop proportionately; it went up over the same period. “Fewer atoms and more bits” is their coda.

Accurate simulation demands computational power, and once a sufficient threshold has been crossed, simulation acts as an innovation accelerant over physical experimentation. Many more questions can be answered per day.

Recent accuracy thresholds have been crossed in diverse areas, such as modeling the weather (predicting a thunderstorm six hours in advance) and automobile collisions (a relief for the crash test dummies), and the thresholds have yet to be crossed for many areas, such as protein folding dynamics.

Long Term Abstraction of Moore’s Law:

Unless you work for a chip company and focus on fab-yield optimization, you do not care about transistor counts. Integrated circuit customers do not buy transistors. Consumers of technology purchase computational speed and data storage density. When recast in these terms, Moore’s Law is no longer a transistor-centric metric, and this abstraction allows for longer-term analysis.

The exponential curve of Moore’s Law extends smoothly back in time for over 100 years, long before the invention of the semiconductor. Through five paradigm shifts—such as electro-mechanical calculators and vacuum tube computers—the computational power that $1000 buys has doubled every two years. For the past 30 years, it has been doubling every year.
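The compounding rates quoted here are easy to check directly (pure arithmetic, using no data beyond the doubling times in this paragraph):

```python
def growth_factor(years, doubling_time_years):
    """Total improvement after `years` at a fixed doubling time."""
    return 2.0 ** (years / doubling_time_years)

# Doubling every 2 years across a century: 2^50, ~1e15-fold.
print(f"100 years @ 2-yr doubling: {growth_factor(100, 2):.1e}x")
# Doubling every year for 30 years: 2^30, ~1e9-fold.
print(f" 30 years @ 1-yr doubling: {growth_factor(30, 1):.1e}x")
```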



Each horizontal line on this logarithmic graph represents a 100x improvement. A straight diagonal line would be an exponential, or geometrically compounding, curve of progress. Kurzweil plots a slightly upward curving line—a double exponential.

Each dot represents a human drama. They did not realize that they were on a predictive curve. Each dot represents an attempt to build the best computer with the tools of the day. Of course, we use these computers to make better design software and manufacturing control algorithms. And so the progress continues.
One machine was used in the 1890 Census; one cracked the Nazi Enigma cipher in World War II; one predicted Eisenhower’s win in the Presidential election. And there is the Apple ][, and the Cray 1, and just to make sure the curve had not petered out recently, I looked up the cheapest PC available for sale on Wal*Mart.com, and that is the green dot that I have added to the upper right corner of the graph.

And notice the relative immunity to economic cycles. The Great Depression and the World Wars and various recessions do not introduce a meaningful delay in the progress of Moore’s Law. Certainly, the adoption rates, revenue, profits and inventory levels of the computer companies behind the various dots on the graph may go through wild oscillations, but the long-term trend emerges nevertheless.

Any one technology, such as the CMOS transistor, follows an elongated S-shaped curve of slow progress during initial development, upward progress during a rapid adoption phase, and then slower growth from market saturation over time. But a more generalized capability, such as computation, storage, or bandwidth, tends to follow a pure exponential—bridging across a variety of technologies and their cascade of S-curves.
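
A tiny numerical sketch can illustrate this cascade-of-S-curves idea (all parameters here are invented for illustration): each generation follows a logistic curve, successive generations have higher ceilings, and their sum hugs a smooth exponential.

```python
import math

# One technology generation: a logistic S-curve that saturates at `ceiling`.
def logistic(t: float, ceiling: float, midpoint: float, rate: float = 1.0) -> float:
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Generation k matures around t = 10k and tops out at 10**k capability units,
# so each new S-curve takes over roughly where the previous one saturates.
def capability(t: float, generations: int = 5) -> float:
    return sum(logistic(t, 10 ** k, 10 * k) for k in range(generations))

for t in range(0, 50, 10):
    print(f"t={t:2d}  capability={capability(t):>10.1f}")
# Capability grows roughly 10x per decade even though every
# individual generation flattens out.
```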

If history is any guide, Moore’s Law will continue on and will jump to a different substrate than CMOS silicon. It has done so five times in the past, and will need to again in the future.

Problems With the Current Paradigm:

Intel co-founder Gordon Moore has chuckled at those who have predicted the imminent demise of Moore’s Law in decades past. But the traditional semiconductor chip is finally approaching some fundamental physical limits. Moore recently admitted that Moore’s Law, in its current form, with CMOS silicon, will run out of gas in 2017.

One of the problems is that the chips are getting very hot. The following graph of power density is also on a logarithmic scale:



This creates the impetus for chip-cooling companies, like Nanocoolers, to deliver a breakthrough solution for removing 100 watts per square centimeter. In the long term, the paradigm has to change.

Another physical limit is the atomic limit—the indivisibility of atoms. Intel’s current gate oxide is 1.2nm thick. Intel’s 45nm process is expected to have a gate oxide that is only 3 atoms thick. It is hard to imagine many more doublings from there, even with further innovation in insulating materials. Intel has recently announced a breakthrough in a nano-structured gate oxide (high k dielectric) and metal contact materials that should enable the 45nm node to come on line in 2007. None of the industry participants has a CMOS roadmap for the next 50 years.

A major issue with thin gate oxides, and one that will also come to the fore with high-k dielectrics, is quantum mechanical tunneling. As the oxide becomes thinner, the gate current can approach and even exceed the channel current so that the transistor cannot be controlled by the gate.
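
For intuition, the textbook WKB estimate (not a formula from the essay) shows why leakage explodes as the oxide thins: the tunneling current density depends exponentially on barrier thickness.

```latex
% Direct tunneling through a gate oxide of thickness t_ox and barrier
% height \phi_b (WKB approximation; m^* is the carrier effective mass):
J \;\propto\; \exp\!\left(-2\kappa\, t_{\mathrm{ox}}\right),
\qquad \kappa = \frac{\sqrt{2\, m^{*} \phi_b}}{\hbar}
% Each ~0.1nm removed from t_ox multiplies J by a constant factor,
% so a few atomic layers separate a working gate from a leaky one.
```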

Another problem is the escalating cost of a semiconductor fab plant, which is doubling every three years, a phenomenon dubbed Moore’s Second Law. Human ingenuity keeps shrinking the CMOS transistor, but with increasingly expensive manufacturing facilities—currently $3 billion per fab.


A large component of fab cost is the lithography equipment that patterns the wafers with successive sub-micron layers. Nanoimprint lithography from companies like Molecular Imprints can dramatically lower cost and leave room for further improvement from the field of molecular electronics.

We have been investing in a variety of companies that are working on the next paradigm shift to extend Moore’s Law beyond 2017, such as Coatue, D-Wave, FlexICs, Nantero, and ZettaCore. One near-term extension to Moore’s Law focuses on the cost side of the equation. Imagine rolls of wallpaper embedded with inexpensive transistors. FlexICs deposits traditional transistors at room temperature on plastic, a much cheaper bulk process than growing and cutting crystalline silicon ingots.

Molecular Electronics:

The primary contender for the post-silicon computation paradigm is molecular electronics, a nano-scale alternative to the CMOS transistor. Eventually, molecular switches will revolutionize computation by scaling into the third dimension—overcoming the planar deposition limitations of CMOS. Initially, they will substitute for the transistor bottleneck on an otherwise standard silicon process with standard external I/O interfaces.

For example, Nantero employs carbon nanotubes suspended above metal electrodes on silicon to create high-density nonvolatile memory chips (the weak Van der Waals bond can hold a deflected tube in place indefinitely with no power drain). Carbon nanotubes are small (~10 atoms wide), 30x stronger than steel at 1/6 the weight, and perform the functions of wires, capacitors and transistors with better speed, power, density and cost. Cheap nonvolatile memory enables important advances, such as “instant-on” PCs.



Other companies, such as Hewlett Packard and ZettaCore, are combining organic chemistry with a silicon substrate to create memory elements that self-assemble using chemical bonds that form along pre-patterned regions of exposed silicon.

There are several reasons why molecular electronics is the next paradigm for Moore’s Law:

Size: Molecular electronics has the potential to dramatically extend the miniaturization that has driven the density and speed advantages of the integrated circuit (IC) phase of Moore’s Law. In 2002, using an STM to manipulate individual carbon monoxide molecules, IBM built a 3-input sorter by arranging those molecules precisely on a copper surface. It is 260,000x smaller than the equivalent circuit built in the most modern chip plant.

For a memorable sense of the difference in scale, consider a single drop of water. There are more molecules in a single drop of water than there are transistors ever built. Think of the transistors in every memory chip and every processor ever built—there are about 100x more molecules in a drop of water. Sure, water molecules are small, but an important part of the comparison depends on the 3D volume of a drop. Every IC, in contrast, is a thin veneer of computation on a thick and inert substrate.
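
The arithmetic behind the drop-of-water comparison is easy to check (a sketch: the 0.05mL drop size and the ~1e19 figure for all transistors ever built are rough assumptions, not numbers from the essay):

```python
AVOGADRO = 6.022e23        # molecules per mole
H2O_MOLAR_MASS = 18.0      # grams per mole
drop_grams = 0.05          # a typical ~0.05 mL drop (assumed size)

molecules = drop_grams / H2O_MOLAR_MASS * AVOGADRO
print(f"molecules in one drop: {molecules:.1e}")             # ~1.7e21

transistors_ever_built = 1e19   # rough circa-2003 estimate (assumption)
print(f"ratio: {molecules / transistors_ever_built:.0f}x")   # ~170x
```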

Power: One of the reasons that transistors are not stacked into 3D volumes today is that the silicon would melt. The inefficiency of the modern transistor is staggering; it is much less efficient at its task than the internal combustion engine. The brain provides an existence proof of what is possible: it is 100 million times more efficient in power per calculation than our best processors. Sure, it is slow (under a kHz), but it is massively interconnected (with 100 trillion synapses between 60 billion neurons), and it is folded into a 3D volume. Power per calculation will dominate clock speed as the metric of merit for the future of computation.

Manufacturing Cost: Many of the molecular electronics designs use simple spin coating or molecular self-assembly of organic compounds. The process complexity is embodied in the synthesized molecular structures, and so they can literally be splashed onto a prepared silicon wafer. The complexity is not in the deposition or the manufacturing process or the systems engineering. Much of the conceptual difference of nanotech products derives from a biological metaphor: complexity builds from the bottom up and pivots about conformational changes, weak bonds, and surfaces. It is not engineered from the top with precise manipulation and static placement.

Low Temperature Manufacturing: Biology does not tend to assemble complexity at 1000 degrees in a high vacuum. It tends to be room temperature or body temperature. In a manufacturing domain, this opens the possibility of cheap plastic substrates instead of expensive silicon ingots.

Elegance: In addition to these advantages, some of the molecular electronics approaches offer elegant solutions to non-volatile and inherently digital storage. We go through unnatural acts with CMOS silicon to get an inherently analog and leaky medium to approximate a digital and non-volatile abstraction that we depend on for our design methodology. Many of the molecular electronic approaches are inherently digital, and some are inherently non-volatile.

Other research projects, from quantum computing to using DNA as a structural material for directed assembly of carbon nanotubes, have one thing in common: they are all nanotechnology.

Why the term “Nanotechnology”?

Nanotech is often defined as the manipulation and control of matter at the nanometer scale (critical dimensions of 1-100nm). It is a bit unusual to describe a technology by a length scale. We certainly didn’t get very excited by “inch-o-technology.” As venture capitalists, we start to get interested when there are unique properties of matter that emerge at the nanoscale, and that are not exploitable at the macroscale world of today’s engineered products. We like to ask the startups that we are investing in: “Why now? Why couldn’t you have started this business ten years ago?” Our portfolio of nanotech startups shares a common thread in the response to this question—recent developments in the capacity to understand and engineer nanoscale materials have enabled new products that could not have been developed at a larger scale.

There are various unique properties of matter that are expressed at the nanoscale and are quite foreign to our “bulk statistical” senses (we do not see single photons or quanta of electric charge; we feel bulk phenomena, like friction, at the statistical or emergent macroscale). At the nanoscale, the bulk approximations of Newtonian physics are revealed for their inaccuracy, and give way to quantum physics. Nanotechnology is more than a linear improvement with scale; everything changes. Quantum entanglement, tunneling, ballistic transport, frictionless rotation of superfluids, and several other phenomena have been regarded as “spooky” by many of the smartest scientists, even Einstein, upon first exposure.

For a simple example of nanotech’s discontinuous divergence from the “bulk” sciences, consider the aluminum Coke can. If you take the inert aluminum metal in that can and grind it down into a powder of 20-30nm particles, it will spontaneously explode in air. It becomes a rocket fuel catalyst. The energetic properties of matter change at that scale. The surface area to volume ratios become relevant, and even the inter-atomic distances in a metal lattice change from surface effects.

Innovation from the Edge:

Disruptive innovation, the driver of growth and renewal, occurs at the edge. In startups, innovation occurs out of the mainstream, away from the warmth of the herd. In biological evolution, innovative mutations take hold at the physical edge of the population, at the edge of survival. In complexity theory, structure and complexity emerge at the edge of chaos—the dividing line between predictable regularity and chaotic indeterminacy. And in science, meaningful disruptive innovation occurs at the inter-disciplinary interstices between formal academic disciplines.

Herein lies much of the excitement about nanotechnology: in the richness of human communication about science. Nanotech exposes the core areas of overlap in the fundamental sciences, the place where quantum physics and quantum chemistry can cross-pollinate with ideas from the life sciences.

Over time, each of the academic disciplines develops its own proprietary systems vernacular that isolates it from neighboring disciplines. Nanoscale science requires scientists to cut across the scientific languages to unite the isolated islands of innovation.

Nanotech is the nexus of the sciences.


In academic centers and government labs, nanotech is fostering new conversations. At Stanford, Duke and many other schools, the new nanotech buildings are physically located at the symbolic hub of the schools of engineering, computer science and medicine.

Nanotech is the nexus of the sciences, but outside of the science and research itself, the nanotech umbrella conveys no business synergy whatsoever. The marketing, distribution and sales of a nanotech solar cell, memory chip or drug delivery capsule will be completely different from each other, and will present few opportunities for common learning or synergy.

Market Timing:

As an umbrella term for a myriad of technologies spanning multiple industries, nanotech will eventually disrupt these industries over different time frames—but most are long-term opportunities. Electronics, energy, drug delivery and materials are areas of active nanotech research today. Medicine and bulk manufacturing are future opportunities. The NSF predicts that nanotech will have a trillion dollar impact on various industries inside of 15 years.

Of course, if one thinks far enough in the future, every industry will be eventually revolutionized by a fundamental capability for molecular manufacturing—from the inorganic structures to the organic and even the biological. Analog manufacturing becomes digital, engendering a profound restructuring of the substrate of the physical world.

The science futurism and predictions of potential nanotech products have a near-term benefit: they help attract some of the best and brightest scientists to work on hard problems that are stepping-stones to the future vision. Scientists relish exploring the frontier of the unknown, and nanotech embodies the inner frontier.

Given that much of the abstract potential of nanotech is a question of “when” not “if”, the challenge for the venture capitalist is one of market timing. When should we be investing, and in which sub-sectors? It is as if we need to pull the sea of possibilities through an intellectual chromatograph to tease apart the various segments into a timeline of probable progression. That is an ongoing process of data collection (e.g., the growing pool of business plan submissions), business and technology analysis, and intuition.

Two touchstone events for the scientific enthusiasm about the timing of nanotech were the decoding of the human genome and the dazzling visual images from the Scanning Tunneling Microscope (e.g., the arrangement of individual xenon atoms into the IBM logo). They represent the digitization of biology and matter, symbolic milestones for accelerated learning and simulation-driven innovation.

And more recently, nanotech publication has proliferated, much like the early days of the Internet. Besides the popular press, the number of scientific publications on nanotech has grown 10x in the past ten years. According to the U.S. Patent Office, the number of nanotech patents granted each year has tripled in the past seven years. Ripe with symbolism, IBM has more lawyers than engineers working on nanotech.

With the recent codification of the National Nanotech Initiative into law, federal funding will continue to fill the pipeline of nanotech research. With $847 million earmarked for 2004, nanotech was a rarity in the tight budget process; it received more funding than was requested. And now nanotech is second only to the space race for federal funding of science. The U.S. is not alone in funding nanotechnology; unlike many previous technological areas, we aren’t even in the lead. Japan outspends the U.S. each year on nanotech research. In 2003, U.S. government spending was one fourth of the world total.

Federal funding is the seed corn for nanotech entrepreneurship. All of our nanotech portfolio companies are spin-offs (with negotiated IP transfers) from universities or government labs, and all got their start with federal funding. Often these companies need specialized equipment and expensive laboratories to do the early tinkering that will germinate a new breakthrough. These are typically lacking in the proverbial garage of the entrepreneur at home.

And corporate investors have developed a keen interest in nanotechnology, with internal R&D, external investments in startups, and acquisitions of promising companies, such as AMD’s recent acquisition of the molecular electronics company Coatue.

Despite all of this excitement, there are a fair number of investment dead-ends, and so we continue to refine the filters we use in selecting companies to back. Every entrepreneur wants to present their business as fitting an appropriate timeline to commercialization. How can we guide our intuition on which of these entrepreneurs are right?

The Vertical Integration Question:

Nanotech involves the reengineering of the lowest level physical layer of a system, and so a natural business question arises: How far forward do you need to vertically integrate before you can sell a product on the open market? For example, in molecular electronics, if you can ship a DRAM-compatible chip, you have found a horizontal layer of standardization, and further vertical integration is not necessary. If you have an incompatible 3D memory block, you may have to vertically integrate to the storage subsystem level, or further, to bring product to market. That may require industry partnerships, and will, in general, take more time and money as change is introduced farther up the product stack. 3D logic with massive interconnectivity may require a new computer design and a new form of software; this would take the longest to commercialize. And most startups on this end of the spectrum would seek partnerships to bring their vision to market. The success and timeliness of that endeavor will depend on many factors, including IP protection, the magnitude of improvement, the vertical tier at which that value is recognized, the number of potential partners, and the degree of tooling and other industry accommodations.

Product development timelines are impacted by the cycle time of the R&D feedback loop. For example, outdoor lifetime testing for organic LEDs will take longer than in silico simulation spins of digital products. If the product requires partners in the R&D loop or multiple nested tiers of testing, it will take longer to commercialize.

The “Interface Problem”:

As we think about the startup opportunities in nanotechnology, an uncertain financial environment underscores the importance of market timing and revenue opportunities over the next five years. Of the various paths to nanotech, which are 20-year quests in search of a government grant, and which are market-driven businesses that will attract venture capital? Are there co-factors of production that require a whole industry to be in place before a company ships product?

As a thought experiment, imagine that I could hand you today any nanotech marvel of your design—a molecular machine as advanced as you would like. What would it be? A supercomputer? A bloodstream submarine? A matter compiler capable of producing diamond rods or arbitrary physical objects? Pick something.

Now, imagine some of the complexities: Did it blow off my hand as I offered it to you? Can it autonomously move to its intended destination? What is its energy source? How do you communicate with it?

These questions draw the “interface problem” into sharp focus: Does your design require an entire nanotech industry to support, power, and “interface” to your molecular machine? As an analogy, imagine that you have one of the latest Pentium processors out of Intel’s wafer fab. How would you make use of the Pentium chip? You then need to wire-bond the chip to a larger lead frame in a package that connects to a larger printed circuit board, fed by a bulky power supply that connects to the electrical power grid. Each of these successive layers relies on the larger-scale precursors from above (which were developed in reverse chronological order), and the entire hierarchy is needed to access the potential of the microchip.

For molecular nanotech, where is the scaling hierarchy?

Today’s business-driven paths to nanotech diverge into two strategies to cross the “interface” chasm—the biologically inspired bottom-up path, and the top-down approach of the semiconductor industry. The non-biological MEMS developers are addressing current markets in the micro-world while pursuing an ever-shrinking spiral of miniaturization that builds the relevant infrastructure tiers along the way. Not surprisingly, this is very similar to the path that has been followed in the semiconductor industry, and many of its adherents see nanotech as inevitable, but in the distant future.

On the other hand, biological manipulation presents myriad opportunities to effect great change in the near-term. Drug development, tissue engineering, and genetic engineering are all powerfully impacted by the molecular manipulation capabilities available to us today. And genetically modified microbes, whether by artificial evolution or directed gene splicing, give researchers the ability to build structures from the bottom up.

The Top Down “Chip Path”:

This path is consonant with the original vision of physicist Richard Feynman (in his 1959 lecture at Caltech) of the iterative miniaturization of our tools down to the nano scale. Some companies, like Zyvex, are pursuing the gradual shrinking of semiconductor manufacturing technology from the micro-electro-mechanical systems (MEMS) of today into the nanometer domain of NEMS. SiWave engineers and manufactures MEMS structures with applications in the consumer electronics, biomedical and communications markets. These precision mechanical devices are built utilizing a customized semiconductor fab.



MEMS technologies have already revolutionized the automotive industry with airbag sensors and the printing sector with ink jet nozzles, and are on track to do the same in medical devices, photonic switches for communications and mobile phones. In-Stat/MDR forecasts that the $4.7 billion of MEMS revenue in 2003 will grow to $8.3 billion by 2007. But progress is constrained by the pace (and cost) of the semiconductor equipment industry, and by the long turnaround time for fab runs. Microfabrica in Torrance, CA, is seeking to overcome these limitations to expand the market for MEMS to 3D structures in more materials than just silicon and with rapid turnaround times.

Many of the nanotech advances in storage, semiconductors and molecular electronics can be improved, or in some cases enabled, by tools that allow for the manipulation of matter at the nanoscale. Here are three examples:

• Nanolithography

Molecular Imprints is commercializing a unique imprint lithographic technology developed at the University of Texas at Austin. The technology uses photo-curable liquids and etched quartz plates to dramatically reduce the cost of nanoscale lithography. This lithography approach, recently added to the ITRS Roadmap, has special advantages for applications in the areas of nano-devices, MEMS, microfluidics, optical components and devices, as well as molecular electronics.

• Optical Traps

Arryx has developed a breakthrough in nano-material manipulation. They generate hundreds of independently controllable laser tweezers that can manipulate molecular objects in 3D (move, rotate, cut, place), all from one laser source passing through an adaptive hologram. The applications span from cell sorting, to carbon nanotube placement, to continuous material handling. They can even manipulate the organelles inside an unruptured living cell (and weigh the DNA in the nucleus).

• Metrology

Imago’s LEAP atom probe microscope is being used by the chip and disk drive industries to produce 3D pictures that depict both chemistry and structure of items on an atom-by-atom basis. Unlike traditional microscopes, which zoom in to see an item on a microscopic level, Imago’s nanoscope analyzes structures, one atom at a time, and “zooms out” as it digitally reconstructs the item of interest at a rate of millions of atoms per minute. This creates an unprecedented level of visibility and information at the atomic level.

Advances in nanoscale tools help us control and analyze matter more precisely, which in turn, allows us to produce better tools.

To summarize, the top-down path is designed and engineered with:

• Semiconductor industry adjacencies (with the benefits of market extensions and revenue along the way and the limitation of planar manufacturing techniques)

• Interfaces of scale inherited from the top

The Biological Bottom Up Path:

In contrast to the top-down path, the biological bottom up archetype is:

• Grown via replication, evolution, and self assembly in a 3D, fluid medium

• Constrained at interfaces to the inorganic world

• Limited by learning and theory gaps (in systems biology, complexity theory and the pruning rules of emergence)

• Bootstrapped by a powerful pre-existing hierarchy of interpreters of digital molecular code.

To elaborate on this last point, the ribosome takes digital instructions in the form of mRNA and manufactures almost everything we care about in our bodies from a sequential concatenation of amino acids into proteins. The ribosome is a wonderful existence proof of the power and robustness of a molecular machine. It is roughly 20nm on a side and consists of only 99 thousand atoms. Biological systems are replicating machines that parse molecular code (DNA) and a variety of feedback to grow macro-scale beings. These highly evolved systems can be hijacked and reprogrammed to great effect.

So how does this help with the development of molecular electronics or nanotech manufacturing? The biological bootstrap provides a more immediate path to nanotech futures. Biology provides us with a library of pre-built components and subsystems that can be repurposed and reused, and scientists in various labs are well underway in re-engineering the information systems of biology.

For example, researchers at NASA Ames are taking self-assembling heat shock proteins from thermophiles and genetically modifying them so that they will deposit a regular array of electrodes with a 17nm spacing. This could be useful for patterned magnetic media in the disk drive industry or electrodes in a polymer solar cell.

At MIT, researchers are using accelerated artificial evolution to rapidly breed M13 bacteriophage to infect bacteria in such a way that they bind and organize semiconducting materials with molecular precision.

At IBEA, Craig Venter and Hamilton Smith are leading the Minimal Genome Project. They take the Mycoplasma genitalium from the human urogenital tract, and strip out 200 unnecessary genes, thereby creating the simplest organism that can self-replicate. Then they plan to layer new functionality onto this artificial genome, such as the ability to generate hydrogen from water using the sun’s energy for photonic hydrolysis.

The limiting factor is our understanding of these complex systems, but our pace of learning has been compounding exponentially. We will learn more about genetics and the origins of disease in the next 10 years than we have in all of human history. And for the minimal genome microbes, the possibility of understanding the entire proteome and metabolic pathways seems tantalizingly close to achievable. These simpler organisms have a simple “one gene: one protein” mapping, and lack the nested loops of feedback that make the human genetic code so rich.

Hybrid Molecular Electronics Example:

In the near term, there are myriad companies that are leveraging the power of organic self-assembly (bottom up) and the market interface advantages of top-down design. The top-down substrate constrains the domain of self-assembly.

Based in Denver, ZettaCore builds molecular memories from energetically elegant molecules that are similar to chlorophyll. ZettaCore’s synthetic organic porphyrin molecule self-assembles on exposed silicon. These molecules, called multiporphyrin nanostructures, can be oxidized and reduced (electrons removed or replaced) in a way that is stable, reproducible, and reversible. In this way, the molecules can be used as a reliable storage medium for electronic devices. Furthermore, the molecules can be engineered to store multiple bits of information and to maintain that information for relatively long periods of time before needing to be refreshed.


Recall the water drop to transistor count comparison, and realize that these multiporphyrins have already demonstrated up to eight stable digital states per molecule.

The technology has future potential to scale to 3D circuits with minimal power dissipation, but initially it will enhance the weakest element of an otherwise standard 2D memory chip. The ZettaCore memory chip looks like a standard memory chip to the end customer; nobody needs to know that it has “nano inside.” The I/O pads, sense amps, row decoders and wiring interconnect are produced with a standard semiconductor process. As a final manufacturing step, the molecules are splashed on the wafer where they self-assemble in the pre-defined regions of exposed metal.


From a business perspective, the hybrid product design allows an immediate market entry because the memory chip defines a standard product feature set, and the molecular electronics manufacturing process need not change any of the prior manufacturing steps. The inter-dependencies with the standard silicon manufacturing steps are also avoided given this late coupling; the fab can process wafers as it does now before spin-coating the molecules. In contrast, new materials for gate oxides or metal interconnects can have a number of effects on other processing steps that need to be tested, which introduces delay (as was seen with copper interconnect).

For these reasons, ZettaCore is currently in the lead in the commercialization of molecular electronics, with a working megabit chip, technology tested to a trillion read/write cycles, and manufacturing partners. In a symbolic nod to the future, Intel co-founder Les Vadasz (badge #3) has just joined the Board of Directors of ZettaCore. He was formerly the design manager for the world’s first DRAM, EPROM and microprocessor.

Generalizing from the ZettaCore experience, the early revenue in molecular electronics will likely come from simple 1D structures such as chemical sensors and self-assembled 2D arrays on standard substrates, such as memory chips, sensor arrays, displays, CCDs for cameras and solar cells.

IP and business model:

Beyond product development timelines, the path to commercialization is dramatically impacted by the cost and scale of the manufacturing ramp. Partnerships with industry incumbents can be the accelerant or albatross for market entry.

The strength of the IP protection for nanotech relates to the business models that can be safely pursued. For example, if composition-of-matter patents afford the nanotech startup the same degree of protection as a biotech startup, then a “biotech licensing model” may be possible in nanotech: a molecular electronics company could partner with a large semiconductor company for manufacturing, sales and marketing, just as a biotech company partners with a big pharma partner for clinical trials, marketing, sales and distribution. In both cases, the cost to the big partner is on the order of $100 million, and the startup earns a royalty on future product sales.

Notice how the transaction costs and viability of this business model option pivot around the strength of IP protection. A software business, on the other end of the IP spectrum, would be very cautious about sharing its source code with Microsoft in the hopes of forming a partnership based on royalties.

Manufacturing partnerships are common in the semiconductor industry, with the “fabless” business model. This layering of the value chain separates the formerly integrated functions of product conceptualization, design, manufacturing, testing, and packaging. This has happened in the semiconductor industry because the capital cost of manufacturing is so large. The fabless model is a useful way for a small company with a good idea to bring its own product to market, but the company then has to face the issue of gaining access to its market and funding the development of marketing, distribution, and sales.

Having looked at the molecular electronics example in some depth, we can now move up the abstraction ladder to aggregates, complex systems, and the potential to advance the capabilities of Moore’s Law in software.

Systems, Software, and other Abstractions:

Unlike memory chips, which have a regular array of elements, processors and logic chips are limited by the rat’s nest of wires that span the chip on multiple layers. The bottleneck in logic chip design is not the raw number of transistors, but a design approach that can utilize all of that capability in a timely fashion. For a solution, several next-generation processor companies have redesigned “systems on silicon” with a distributed computing bent; wiring bottlenecks are localized, and chip designers can be more productive by using a high-level programming language, instead of wiring diagrams and logic gates. Chip design benefits from the abstraction hierarchy of computer science.

Compared to the relentless march of Moore’s Law, the cognitive capability of humans is relatively fixed. We have relied on the compounding power of our tools to achieve exponential progress. To take advantage of accelerating hardware power, we must further develop layers of abstraction in software to manage the underlying complexity. For the next 1000-fold improvement in computing, the imperative will shift to the growth of distributed complex systems. Our inspiration will likely come from biology.

As we race to interpret the now complete map of the human genome, and embark upon deciphering the proteome, the accelerating pace of learning is not only opening doors to the better diagnosis and treatment of disease, it is also a source of inspiration for much more powerful models for computer programming and complex systems development.

Biological Muse:

Many of the interesting software challenges relate to growing complex systems or have other biological metaphors as inspiration. Some of the interesting areas include: Biomimetics, Artificial Evolution, Genetic Algorithms, A-life, Emergence, IBM’s Autonomic Computing initiative, Viral Marketing, Mesh, Hives, Neural Networks and the Subsumption architecture in robotics. The Santa Fe Institute just launched a BioComp research initiative.

In short, biology inspires IT and IT drives biology.

But how inspirational are the information systems of biology? If we took your entire genetic code—the entire biological program that resulted in your cells, organs, body and mind—and burned it onto a CD, it would be smaller than Microsoft Office. Just as images and text can be stored digitally, two digital bits can encode each of the four DNA bases (A, T, C and G), resulting in a roughly 750MB file that compresses well given the preponderance of structural filler in the DNA chain.

If, as many scientists believe, most of the human genome consists of vestigial evolutionary remnants that serve no useful purpose, then we could compress it to 60MB of concentrated information. Having recently reinstalled Office, I am humbled by the comparison between its relatively simple capabilities and the wonder of human life. Much of the power in bio-processing comes from the use of non-linear fuzzy logic and feedback in the electrical, physical and chemical domains.
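
The 750MB figure follows directly from two bits per base (a sketch; the ~3 billion base pair count is the standard estimate, and the 60MB endpoint simply restates the author's compressibility premise):

```python
BASES = 3.0e9        # ~3 billion base pairs in the human genome
BITS_PER_BASE = 2    # A, T, C, G -> 2 bits each

raw_bytes = BASES * BITS_PER_BASE / 8
print(f"raw: {raw_bytes / 1e6:.0f} MB")                 # 750 MB

# The essay's ~60MB figure treats ~92% of the sequence as compressible
# filler; that discount is the author's premise, not a measurement.
print(f"compressed: {raw_bytes * 0.08 / 1e6:.0f} MB")   # 60 MB
```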

For example, in a fetus, the initial inter-neuronal connections, or "wiring" of the brain, follow chemical gradients. The massive number of inter-neuron connections in an adult brain could not be simply encoded in our DNA, even if the entire DNA sequence was dedicated to this one task. There are on the order of 100 trillion synaptic connections between 60 billion neurons in your brain.

This incredibly complex system is not ‘installed’ like Microsoft Office from your DNA. It is grown, first through widespread connectivity sprouting from ‘static storms’ of positive electro-chemical feedback, and then through the pruning of many underused connections through continuous usage-based feedback. In fact, at the age of 2 to 3 years old, humans hit their peak with a quadrillion synaptic connections, and twice the energy burn of an adult brain.

The brain has already served as an inspirational model for artificial intelligence (AI) programmers. The neural network approach to AI involves the fully interconnected wiring of nodes, and then the iterative adjustment of the strength of these connections through numerous training exercises and the back-propagation of feedback through the system.

Moving beyond rules-based AI systems, these artificial neural networks are capable of many human-like tasks, such as speech and visual pattern recognition with a tolerance for noise and other errors. These systems shine precisely in the areas where traditional programming approaches fail.
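
As a concrete miniature of this train-by-feedback loop, here is a toy two-layer network learning XOR by back-propagation (a minimal sketch using NumPy; the architecture, seed and learning rate are arbitrary choices, not a reference implementation):

```python
import numpy as np

# Toy network: 2 inputs -> 8 hidden sigmoid units -> 1 output.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error toward the input layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update of the connection strengths.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # approaches [0, 1, 1, 0]
```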

The coding efficiency of our DNA extends beyond the leverage of numerous feedback loops to the complex interactions between genes. The regulatory genes produce proteins that respond to external or internal signals to regulate the activity of previously produced proteins or other genes. The result is a complex mesh of direct and indirect controls.

This nested complexity implies that genetic re-engineering can be a very tricky endeavor when we have only partial system-wide knowledge of the side effects of tweaking any one gene. For example, recent experiments show that genetically enhanced memory comes at the expense of enhanced sensitivity to pain.

By analogy, our genetic code is a dense network of nested hyperlinks, much like the evolving Web. Computer programmers already tap into the power and efficiency of indirect pointers and recursive loops. More recently, biological systems have inspired research in evolutionary programming, where computer programs are competitively grown in a simulated environment of natural selection and mutation. These efforts could transcend the local optimization inherent to natural evolution.
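
A toy genetic algorithm captures the selection-and-mutation loop in a few lines (purely illustrative; real evolutionary programming competitively grows programs or designs in a simulated environment rather than bit-strings against a fixed target):

```python
import random

random.seed(1)
LENGTH, TARGET = 32, [1] * 32

def fitness(genome):
    # Fitness is simply proximity to the target pattern.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)   # rank by fitness
    if fitness(population[0]) == LENGTH:         # perfect match found
        break
    survivors = population[:10]                  # truncation selection
    population = [mutate(random.choice(survivors)) for _ in range(50)]

print(f"best fitness {max(map(fitness, population))}/{LENGTH} "
      f"after {generation} generations")
```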

But therein lies great complexity. We have little experience with the long-term effects of the artificial evolution of complex systems. Early subsystem work can determine emergent and higher-level capabilities, as with the neuron: witness the Cambrian explosion of structural complexity and intelligence in biological systems once the neuron enabled something other than nearest-neighbor inter-cellular communication (prior to the neuron, most multi-cellular organisms were small blobs).

Recent breakthroughs in robotics were inspired by the "subsumption architecture" of biological evolution—using a layered approach to assembling reactive rules into complete control systems from the bottom up. The low-level reflexes are developed early on, and remain unchanged as complexity builds. Early subsystem work in any subsumptive system can have profound effects on its higher order constructs. We may not have a predictive model of these downstream effects as we are developing the architectural equivalent of the neuron.

The Web is the first distributed experiment in biological growth in technological systems. Peer-to-peer software development and the rise of low-cost Web-connected embedded systems raise the possibility that complex artificial systems will arise on the Internet, rather than on one programmer’s desktop. We already use biological metaphors, such as viral marketing, to describe the network economy.

Nanotech Accelerants: quantum simulation and high-throughput experimentation:

We have already discussed the migration of the lab sciences to the innovation cycles of the information sciences and Moore’s Law. Advances in multi-scale molecular modeling are helping some companies design complex molecular systems in silico. But the quantum effects that underlie the unique properties of nano-scale systems are a double-edged sword. Although scientists have known for nearly 100 years how to write down the equations that an engineer needs to solve in order to understand any quantum system, no computer has ever been built that is powerful enough to solve them. Even today’s most powerful supercomputers choke on systems bigger than a single water molecule.
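
A quick sketch shows the wall that exact classical simulation hits: the joint state of n two-level quantum degrees of freedom takes 2^n complex amplitudes. (The bare state-vector method and the 16-bytes-per-amplitude figure are assumptions for illustration; where any particular molecule lands depends on the simulation method used.)

```python
# Memory needed to store an exact quantum state vector of n two-level
# systems, at 16 bytes per complex amplitude (double precision).
BYTES_PER_AMPLITUDE = 16
for n in (10, 30, 50, 100):
    bytes_needed = (2 ** n) * BYTES_PER_AMPLITUDE
    print(f"n={n:3d}: {2 ** n:.2e} amplitudes, {bytes_needed / 1e9:.2e} GB")
# n=50 already exceeds any machine of the essay's era; n=100 exceeds
# plausible storage altogether, since adding one unit doubles the cost.
```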

This means that the behavior of nano-scale systems can only be reliably studied by empirical methods—building something in a lab, and poking and prodding it to see what happens.

This observation is distressing on several counts. We would like to design and visualize nano-scale products in the tradition of mechanical engineering, using CAD-like programs. Unfortunately this future can never be accurately realized using traditional computer architectures. The structures of interest to nano-scale scientists present intractable computational challenges to traditional computers.

The shortfall in our ability to use computers to shorten and cheapen the design cycles of nano-scale products has serious business ramifications. If the development of all nano-scale products fundamentally requires long R&D cycles and significant investment, the nascent nanotechnology industry will face many of the difficulties that the biotechnology industry faces, without having a parallel to the pharmaceutical industry to shepherd products to markets.

In a wonderful turn of poetic elegance, quantum mechanics itself turns out to be the solution to this quandary. Machines known as quantum computers, built to harness some simple properties of quantum systems, can perform accurate simulations of any nano-scale system of comparable complexity. The type of simulation that a quantum computer does results in an exact prediction of how a system will behave in nature—something that is literally impossible for any traditional computer, no matter how powerful.

Once quantum computers become available, engineers working at the nano-scale will be able to use them to model and design nano-scale systems just like today’s aerospace engineers model and design airplanes—completely virtually—with no wind tunnels (or their chemical analogues).

This may seem strange, but really it’s not. Think of it like this: conventional computers are really good at modeling conventional (that is, non-quantum) stuff—like automobiles and airplanes. Quantum computers are really good at modeling quantum stuff. Each type of computer speaks a different language.

Based in Vancouver, Canada, D-Wave is building a quantum computer using aluminum-based circuits. The company projects that by 2008 it will be building thumbnail-sized chips that, when applied to simulating the behavior and predicting the properties of nano-scale systems, will have more computing power than the aggregate of all computers ever built—highlighting the vast difference in capabilities of quantum and conventional computers. This would be of great value to the development of the nanotechnology industry, and it is a jaw-dropping claim. Professor David Deutsch of Oxford summarized: “Quantum computers have the potential to solve problems that would take a classical computer longer than the age of the universe.”

While any physical experiment can be regarded as a complex computation, we will need quantum computers to transcend Moore’s Law into the quantum domain to make this equivalence realizable. In the meantime, scientists will perform experiments. Until recently, the methods used for the discovery of new functional materials differed little from those used by scientists and engineers a hundred years ago. It was very much a manual, skilled-labor-intensive process. One sample was prepared from millions of possibilities, then it was tested, the results recorded and the process repeated. Discoveries routinely took years.

Companies like Affymetrix, Intematix and Symyx have made major improvements in a new methodology: high throughput experimentation. For example, Intematix performs high throughput synthesis and screening of materials to produce and characterize these materials for a wide range of technology applications. This technology platform enables them to discover compound materials solutions more than one hundred times faster than conventional methods. Initial materials developed have application in wireless communications, fuel cells, batteries, x-ray imaging, semiconductors, LEDs, and phosphors.

Combinatorial materials discovery replaces the traditional method by generating a multitude of combinations—possibly all feasible combinations—of a set of raw materials simultaneously. This “materials library” contains all combinations of a set of materials, and its members can be quickly tested in parallel by automated methods similar to those used in combinatorial chemistry and the pharmaceutical industry. What used to take years to develop now takes only months.
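
In code, the core idea of a materials library is just an exhaustive grid of compositions (a sketch; the three precursors and the 20% grid step are invented for illustration):

```python
from itertools import product

# Every composition of a few precursors on a coarse fraction grid,
# ready for parallel automated screening.
precursors = ["A", "B", "C"]
step = 0.2
grid = [round(i * step, 1) for i in range(int(1 / step) + 1)]

library = [
    dict(zip(precursors, fractions))
    for fractions in product(grid, repeat=len(precursors))
    if abs(sum(fractions) - 1.0) < 1e-9   # fractions must sum to 100%
]
print(len(library), "candidate compositions")   # 21 for this grid
print(library[:3])
```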

Timeline:

Given our discussion of the various factors affecting the commercialization of nanotechnologies, how do we see them sequencing?

• Early Revenue

- Tools and bulk materials (powders, composites). Several revenue stage and public companies already exist in this category.

- 1D chemical and biological sensors. Out of body medical sensors and diagnostics

- Larger MEMS-scale devices

• Medium Term

- 2D Nanoelectronics: memory, displays, solar cells

- Hierarchically-structured nanomaterials

- Hybrid Bio-nano, efficient energy storage and conversion

- Passive drug delivery & diagnostics, improved implantable medical devices

• Long Term

- 3D Nanoelectronics

- Nanomedicine, therapeutics, and artificial chromosomes

- Quantum computers used in small molecule design

- Machine-phase manufacturing

- The safest long-term prediction is that the most important nanotech developments will be the unforeseen opportunities, something that we could not predict today.

In the long term, nanotechnology research could ultimately enable miniaturization to a magnitude never previously seen, and could restructure and digitize the basis of manufacturing—such that matter becomes code. Like the digitization of music, the importance is not just in the fidelity of reproduction, but in the decoupling of content from distribution. New opportunities arise once a product is digitized, such as online music swapping—transforming an industry.

With replicating molecular machines, physical production itself migrates to the rapid innovation cycle of information technology. With physical goods, the basis of manufacturing governs inventory planning and logistics, and the optimal distribution and retail supply chain has undergone little radical change for many decades. Flexible, low-cost manufacturing near the point of consumption could transform the physical goods economy, and even change our notion of ownership—especially for infrequently used objects.

These are profound changes to the manufacturing of everything, and they will ripple through the fabric of society. The science futurists have pondered the implications of being able to manufacture anything for $1 per pound. And as some of these technologies couple tightly to our biology, they will draw into question the nature and extensibility of our humanity.

Genes, Memes and Digital Expression:

These changes may not be welcomed smoothly, especially with regard to reengineering the human germ line. At the societal level, we will likely try to curtail “genetic free speech” and the evolution of evolvability. Larry Lessig predicts that we will recapitulate the 200-year debate about the First Amendment to the Constitution. Pressures to curtail free genetic expression will focus on the dangers of “bad speech”, while others will argue that good genetic expression will crowd out the bad, as it did with memetic evolution (in the scientific method and the free exchange of ideas). Artificial chromosomes with adult trigger events can decouple the agency debate about parental control. And, with a touch of irony, China may lead the charge.

We subconsciously cling to the selfish notion that humanity is the endpoint of evolution. In the debates about machine intelligence and genetic enhancements, there is a common and deeply rooted fear about being surpassed—in our lifetime. When framed as a question of parenthood (would you want your great grandchild to be smarter and healthier than you?), the emotion often shifts from a selfish sense of supremacy to a universal human search for symbolic immortality.

Summary:

While the future is becoming more difficult to predict with each passing year, we should expect an accelerating pace of technological change. We conclude that nanotechnology is the next great technology wave and the next phase of Moore’s Law. Driven by entrepreneurship, nanotech innovations enable myriad disruptive businesses that were not possible before.

Much of our future context will be defined by the accelerating proliferation of information technology—as it innervates society and begins to subsume matter into code. It is a period of exponential growth in the impact of the learning-doing cycle where the power of biology, IT and nanotech compounds the advances in each formerly discrete domain.

So, at DFJ, we conclude that it is a great time to invest in startups. As in evolution and the Cambrian explosion, many will become extinct. But some will change the world. So we pursue the strategy of a diversified portfolio, or in other words, we try to make a broad bet on mammals.

© 2003 Steve T. Jurvetson. Reprinted with permission.
