Tuesday, July 24, 2018

The near-term inevitability of radical life extension and expansion

RAY KURZWEIL
Inventor and Technologist; Author, The Singularity Is Near: When Humans Transcend Biology
Original link:  https://www.edge.org/q2006/q06_2.html#kurzweil



My dangerous idea is the near-term inevitability of radical life extension and expansion. The idea is dangerous, however, only when contemplated from current linear perspectives.

First, the inevitability: the power of information technologies is doubling each year, and this progress extends beyond computation, most notably to our knowledge of biology and of our own intelligence. It took 15 years to sequence HIV, and from that perspective the genome project seemed impossible in 1990. But the amount of genetic data we were able to sequence doubled every year, while the cost came down by half each year.

We finished the genome project on schedule and were able to sequence SARS in only 31 days. We are also gaining the means to reprogram the ancient information processes underlying biology. RNA interference can turn genes off by blocking the messenger RNA that expresses them. New forms of gene therapy are now able to place new genetic information in the right place on the right chromosome. We can create or block enzymes, the workhorses of biology. We are reverse-engineering — and gaining the means to reprogram — the information processes underlying disease and aging, and this process is accelerating, doubling every year. If we think linearly, the idea of turning off all disease and aging processes appears as far off in the future as the genome project did in 1990. If, on the other hand, we factor in the doubling of the power of these technologies each year, the prospect of radical life extension is only a couple of decades away.
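The linear-versus-exponential contrast above can be sketched with a toy calculation (the unit counts below are illustrative assumptions, not actual sequencing figures):

```python
def years_to_target(start, target, growth=2.0):
    """Years until capacity reaches target, doubling (by default) each year."""
    years = 0
    level = start
    while level < target:
        level *= growth
        years += 1
    return years

# Capacity of 1 unit/year versus a genome-scale goal of 3e9 units:
# a linear extrapolation predicts 3 billion years of work, while
# yearly doubling gets there in about 32 years.
print(years_to_target(1, 3e9))  # -> 32
```

The point of the sketch is only that under steady doubling, the bulk of the work happens in the last few doublings, which is why mid-project linear forecasts look hopeless.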

In addition to reprogramming biology, we will be able to go substantially beyond biology with nanotechnology, in the form of computerized nanobots in the bloodstream. If the idea of programmable, blood-cell-sized devices performing therapeutic functions in the bloodstream sounds like far-off science fiction, I would point out that we are doing this already in animals. One scientist cured type 1 diabetes in rats with blood-cell-sized devices containing 7-nanometer pores that let insulin out in a controlled fashion and that block antibodies. If we factor in the exponential advance of computation and communication (price-performance multiplying by a factor of a billion in 25 years, while at the same time shrinking in size by a factor of thousands), these scenarios are highly realistic.

The apparent dangers are not real, while the unapparent dangers are real. The apparent danger is that a dramatic reduction in the death rate will create overpopulation, straining energy and other resources while exacerbating environmental degradation. However, we need to capture only 1 percent of 1 percent of the sunlight reaching Earth to meet all of our energy needs (3 percent of 1 percent by 2025), and nanoengineered solar panels and fuel cells will be able to do this, meeting all of our energy needs in the late 2020s with clean and renewable methods. Molecular nanoassembly devices will be able to manufacture a wide range of products, just about everything we need, with inexpensive tabletop devices. The power and price-performance of these systems will double each year, much faster than the doubling rate of the biological population. As a result, poverty and pollution will decline and ultimately vanish despite growth of the biological population.
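The "1 percent of 1 percent" figure is easy to sanity-check with round numbers (both constants below are rough outside assumptions, not figures from the essay):

```python
# Assumed round figures: ~1.2e17 W of sunlight reaching Earth's surface,
# and ~1.8e13 W (18 TW) of total human power demand.
sunlight_w = 1.2e17
demand_w = 1.8e13

fraction = demand_w / sunlight_w
print(f"{fraction:.4%} of surface sunlight covers current demand")
```

With these inputs the fraction comes out on the order of 0.01%, i.e. roughly 1 percent of 1 percent, consistent with the claim.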

There are real downsides, however, and this is not a utopian vision. We have a new existential threat today in the potential of a bioterrorist to engineer a new biological virus. We actually do have the knowledge to combat this problem (for example, new vaccine technologies and RNA interference which has been shown capable of destroying arbitrary biological viruses), but it will be a race. We will have similar issues with the feasibility of self-replicating nanotechnology in the late 2020s. Containing these perils while we harvest the promise is arguably the most important issue we face.

Some people see these prospects as dangerous because they threaten their view of what it means to be human. There is a fundamental philosophical divide here. In my view, it is not our limitations that define our humanity. Rather, we are the species that seeks and succeeds in going beyond our limitations.

Space manufacturing

From Wikipedia, the free encyclopedia
Growth of protein crystals from liquid in outer space: the top part shows a syringe with an extruded protein droplet.[1]
Crystals grown by American scientists on the Russian Space Station Mir in 1995: (a) rhombohedral canavalin, (b) creatine kinase, (c) lysozyme, (d) beef catalase, (e) porcine alpha amylase, (f) fungal catalase, (g) myoglobin, (h) concanavalin B, (i) thaumatin, (j) apoferritin, (k) satellite tobacco mosaic virus and (l) hexagonal canavalin.[2]
Comparison of insulin crystals growth in outer space (left) and on Earth (right).
Space manufacturing is the production of manufactured goods in an environment outside a planetary atmosphere. Typically this includes conditions of microgravity and hard vacuum. Manufacturing in space has several potential advantages over Earth-based industry.
  1. The unique environment can allow for industrial processes that cannot be readily reproduced on Earth.
  2. Raw materials could be lifted to orbit from other bodies within the solar system and processed at a low expense compared to the cost of lifting materials into orbit from Earth.
  3. Potentially hazardous processes can be performed in space with minimal risk to the environment of the Earth or other planets.
The space environment is expected to be beneficial for the production of a variety of products. Once the heavy capital costs of assembling the mining and manufacturing facilities are paid, production will need to be economically profitable in order to become self-sustaining and beneficial to society. The most significant cost is overcoming the energy hurdle of boosting materials into orbit. Once that cost per kilogram is significantly reduced, the entry price for space manufacturing can make it much more attractive to entrepreneurs.

Economic requirements of space manufacturing imply a need to collect the requisite raw materials at a minimum energy cost. The economical movement of material in space is directly related to the delta-v, or change in velocity required to move from the mining sites to the manufacturing plants. Near-Earth asteroids, Phobos, Deimos and the lunar surface have a much lower delta-v compared to launching the materials from the surface of the Earth to Earth orbit.
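The delta-v comparison can be made concrete with rough, textbook-level values (the numbers below are approximate assumptions; actual budgets vary by target body and trajectory):

```python
# Illustrative delta-v comparison in km/s; values are rough
# approximations, not mission-design figures.
delta_v_km_s = {
    "Earth surface -> low Earth orbit": 9.4,
    "Moon surface -> lunar escape": 2.4,
    "Phobos/Deimos -> interplanetary transfer": 1.0,
    "typical near-Earth asteroid -> Earth transfer": 0.5,  # varies widely
}

# Print routes from cheapest to most expensive:
for route, dv in sorted(delta_v_km_s.items(), key=lambda kv: kv[1]):
    print(f"{dv:5.1f} km/s  {route}")
```

Even with these coarse numbers, the ordering illustrates the article's point: material already off Earth's surface is far cheaper, energetically, to move to an orbital factory than material launched from the ground.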

History

During the Soyuz 6 mission of 1969, Soviet cosmonauts performed the first welding experiments in space. Three different welding processes were tested using a hardware unit called Vulkan. The tests included welding aluminum, titanium, and stainless steel.

The Skylab mission, launched in May 1973, served as a laboratory to perform various space manufacturing experiments. The station was equipped with a materials processing facility that included a multi-purpose electric furnace, a crystal growth chamber, and an electron beam gun. Among the experiments to be performed was research on molten metal processing; photographing the behavior of ignited materials in zero-gravity; crystal growth; processing of immiscible alloys; brazing of stainless steel tubes, electron beam welding, and the formation of spheres from molten metal. The crew spent a total of 32 man-hours on materials science and space manufacturing investigation during the mission.

The Space Studies Institute began hosting a biennial Space Manufacturing Conference in 1977.

Microgravity research in materials processing continued in 1983 using the Spacelab facility. As of 2002, this module had been carried into orbit 26 times aboard the Space Shuttle. In this role the shuttle served as an interim, short-duration research platform before the completion of the International Space Station.

The Wake Shield Facility is deployed by the Space Shuttle's robotic arm. NASA image

In February 1994 and September 1995, the Wake Shield Facility was carried into orbit by the Space Shuttle. This demonstration platform used the vacuum created in the orbital wake to manufacture thin films of gallium arsenide and aluminum gallium arsenide.

On May 31, 2005, the recoverable, unmanned Foton-M2 laboratory was launched into orbit. Among the experiments were crystal growth and the behavior of molten-metal in weightlessness.

ISS

The completion of the International Space Station has provided expanded and improved facilities for performing industrial research. These have led, and will continue to lead, to improvements in our knowledge of materials science, new manufacturing techniques on Earth, and potentially some important discoveries in space manufacturing methods.

The Material Science Laboratory Electromagnetic Levitator (MSL-EML) on board the Columbus Laboratory is a science facility that can be used to study the melting and solidification properties of various materials. The Fluid Science Laboratory (FSL) is used to study the behavior of liquids in microgravity.[3] The ISS is also equipped with a 3D printer, which allows the crew to manufacture parts on station and helps keep launch costs to a minimum.

Environment

There are several unique differences between the properties of materials in space compared to the same materials on the Earth. These differences can be exploited to produce unique or improved manufacturing techniques.
  • The microgravity environment allows control of convection in liquids or gases, and the elimination of sedimentation. Diffusion becomes the primary means of material mixing, allowing otherwise immiscible materials to be intermixed. The environment allows enhanced growth of larger, higher-quality crystals in solution.
  • The ultraclean vacuum of space allows the creation of very pure materials and objects. The use of vapor deposition can be used to build up materials layer by layer, free from defects.
  • Surface tension causes liquids in microgravity to form perfectly round spheres. This can cause problems when trying to pump liquids through a conduit, but it is very useful when perfect spheres of consistent size are needed for an application.
  • Space can provide readily available extremes of heat and cold. Sunlight can be focused to concentrate enough heat to melt the materials, while objects kept in perpetual shade are exposed to temperatures close to absolute zero. The temperature gradient can be exploited to produce strong, glassy materials.

Materials processing

For most manufacturing applications, specific material requirements must be satisfied. Mineral ores need to be refined to extract specific metals, and volatile organic compounds will need to be purified. Ideally these raw materials are delivered to the processing site in an economical manner, where time to arrival, propulsion energy expenditure, and extraction costs are factored into the planning process. Minerals can be obtained from asteroids, the lunar surface, or a planetary body. Volatiles could potentially be obtained from a comet or the moons of Mars or other planets. It may also prove possible to extract hydrogen from the cold traps at the poles of the Moon.

Another potential source of raw materials, at least in the short term, is recycled orbiting satellites and other man-made objects in space. Some consideration was given to the use of the Space Shuttle external fuel tanks for this purpose, but NASA determined that the potential benefits were outweighed by the increased risk to crew and vehicle.

Unless the materials processing and the manufacturing sites are co-located with the resource extraction facilities, the raw materials will need to be moved about the solar system. There are several proposed means of providing propulsion for this material, including solar sails, electric sails, magnetic sails, electric ion thrusters, or mass drivers (this last method uses a sequence of electromagnets mounted in a line to accelerate a conducting material).

At the materials processing facility, the incoming materials will need to be captured by some means. Maneuvering rockets attached to the load can park the content in a matching orbit. Alternatively, if the load is moving at a low delta-v relative to the destination, then it can be captured by means of a mass catcher. This could consist of a large, flexible net or inflatable structure that would transfer the momentum of the mass to the larger facility. Once in place, the materials can be moved into place by mechanical means or by means of small thrusters.
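The mass-catcher idea is ordinary conservation of momentum applied to an inelastic capture; a minimal sketch, with made-up masses and speeds:

```python
def post_capture_velocity(m_load, v_load, m_facility, v_facility=0.0):
    """Velocity after a perfectly inelastic capture.

    Conservation of momentum: (m1*v1 + m2*v2) / (m1 + m2).
    Velocities are relative to the facility's initial frame (m/s).
    """
    return (m_load * v_load + m_facility * v_facility) / (m_load + m_facility)

# A hypothetical 1,000 kg load arriving at 5 m/s relative to a
# 100,000 kg facility nudges the combined system by only ~0.05 m/s:
v = post_capture_velocity(1_000, 5.0, 100_000)
print(f"{v:.4f} m/s")
```

The sketch shows why a catcher must be much more massive than (or elastically coupled to) the incoming loads: the residual velocity scales with the load-to-facility mass ratio, and the small leftover drift is what the article's station-keeping thrusters would correct.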

Materials can be used for manufacturing either in their raw form, or by processing them to extract the constituent elements. Processing techniques include various chemical, thermal, electrolytic, and magnetic methods for separation. In the near term, relatively straightforward methods can be used to extract aluminum, iron, oxygen, and silicon from lunar and asteroidal sources. Less concentrated elements will likely require more advanced processing facilities, which may have to wait until a space manufacturing infrastructure is fully developed.

Some of the chemical processes will require a source of hydrogen for the production of water and acid mixtures. Hydrogen gas can also be used to extract oxygen from the lunar regolith, although the process is not very efficient, so a readily available source of useful volatiles is a positive factor in the development of space manufacturing. Alternatively, oxygen can be liberated from the lunar regolith without consuming any imported materials by heating the regolith to 2,500 °C in a vacuum. This has been tested on Earth with lunar simulant in a vacuum chamber, and as much as 20% of the sample was released as free oxygen; Eric Cardiff calls the remainder slag. This process is highly efficient in terms of imported materials used up per batch, but it is not the most efficient in energy per kilogram of oxygen.[4]
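The quoted 20% yield implies a simple mass budget for vacuum pyrolysis (a sketch using only the yield figure above; the 20% is the best case reported, so real requirements would be higher):

```python
def regolith_needed(oxygen_kg, yield_fraction=0.20):
    """Regolith mass (kg) that must be heated to obtain oxygen_kg of O2,
    assuming the given fraction of sample mass is released as oxygen."""
    return oxygen_kg / yield_fraction

# Obtaining 100 kg of oxygen at the best-case 20% yield requires heating:
print(regolith_needed(100))  # -> 500.0 kg of regolith
```

This is why the article calls the process efficient "per batch" in imported materials (none are consumed) while flagging the energy cost: every kilogram of oxygen still means heating several kilograms of rock to 2,500 °C.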

One proposed method of purifying asteroid materials is through the use of carbon monoxide (CO). Heating the material to 500 °F (260 °C) and exposing it to CO causes the metals to form gaseous carbonyls. This vapor can then be distilled to separate out the metal components, and the CO recovered by another heating cycle. Thus an automated ship could scrape up loose surface materials from, say, the relatively nearby (in delta-v terms) asteroid 4660 Nereus, process the ore using solar heating and CO, and eventually return with a load of almost pure metal. The economics of this process could potentially allow the material to be extracted at one-twentieth the cost of launching from Earth, but it would require a two-year round trip to return any mined ore.

Manufacturing

Due to speed-of-light constraints on communication, manufacturing in space at a distant point of resource acquisition will require either completely autonomous robotics to perform the labor, or a human crew with all the accompanying habitat and safety requirements. If the plant is built in orbit around the Earth, or near a crewed space habitat, however, telecheric (remotely operated) devices can be used for certain tasks that require human intelligence and flexibility.

Solar power provides a readily available power source for thermal processing. Even with heat alone, simple thermally-fused materials can be used for basic construction of stable structures. Bulk soil from the Moon or asteroids has a very low water content, and when melted to form glassy materials is very durable. These simple, glassy solids can be used for the assembly of habitats on the surface of the Moon or elsewhere. The solar energy can be concentrated in the manufacturing area using an array of steerable mirrors.
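The mirror-array sizing follows directly from the solar constant (the efficiency factor below is an assumed value, and real concentrator losses vary):

```python
SOLAR_CONSTANT = 1361.0  # W/m^2, approximate solar flux in Earth orbit

def collected_power(mirror_area_m2, efficiency=0.85):
    """Thermal power (W) delivered by a steerable mirror array of the
    given area, with an assumed overall reflection/pointing efficiency."""
    return SOLAR_CONSTANT * mirror_area_m2 * efficiency

# A hypothetical 100 m^2 array delivers on the order of 100 kW of heat:
print(collected_power(100))
```

Since melting furnaces need sustained power in the tens to hundreds of kilowatts, modest mirror areas suffice in orbit, which is the article's point about solar thermal processing being readily available.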

The availability and favorable physical properties of metals will make them a major component of space manufacturing. Most of the metal handling techniques used on Earth can also be adopted for space manufacturing. A few of these techniques will need significant modifications due to the microgravity environment.

The production of hardened steel in space will introduce some new factors. Carbon appears only in small proportions in lunar surface materials and will need to be delivered from elsewhere. Waste materials carried by humans from the Earth are one possible source, as are comets. The water normally used to quench steel will also be in short supply and will require strong agitation.

Casting steel can be a difficult process in microgravity, requiring special heating and injection processes, or spin forming. Heating can be performed using sunlight combined with electrical heaters. The casting process would also need to be managed to avoid the formation of voids as the steel cools and shrinks.

Various metal-working techniques can be used to shape the metal into the desired form. The standard methods are casting, drawing, forging, machining, rolling, and welding. Both rolling and drawing metals require heating and subsequent cooling. Forging and extrusion can require powered presses, as gravity is not available. Electron beam welding has already been demonstrated on board Skylab and will probably be the method of choice in space. Machining operations will require precision tools, which will need to be imported from the Earth for some time.

New space manufacturing technologies are being studied at places such as Marshall's National Center for Advanced Manufacturing. The methods being investigated include coatings that can be sprayed on surfaces in space using a combination of heat and kinetic energy, and electron beam free form fabrication[5] of parts. Approaches such as these, as well as examination of material properties that can be investigated in an orbiting laboratory, will be studied on the International Space Station by NASA and Made In Space, Inc.[6]

3D-Printing in Space

The option of 3D printing items in space holds many advantages over manufacturing on Earth. With 3D printing technologies, rather than exporting tools and equipment from Earth into space, astronauts can manufacture needed items directly. On-demand manufacturing makes long-distance space travel more feasible and self-sufficient, since space excursions require less cargo. Mission safety is also improved.

The Made In Space, Inc. 3D printers, which launched to the International Space Station in 2014, are designed specifically for a zero-gravity or microgravity environment. The effort was awarded a Phase III Small Business Innovation Research (SBIR) contract.[7] The Additive Manufacturing Facility will be used by NASA to carry out repairs (including during emergency situations), upgrades, and installation.[8] Made In Space lists the advantages of 3D printing as easy customization, minimal raw material waste, optimized parts, faster production time, integrated electronics, limited human interaction, and the option to modify the printing process.[8]

The Refabricator experiment, developed by Firmamentum, a division of Tethers Unlimited, Inc., under a NASA Phase III Small Business Innovation Research contract, combines a recycling system and a 3D printer to demonstrate closed-cycle in-space manufacturing on the International Space Station (ISS).[9] The experiment, scheduled for launch to the ISS in early 2018, will process plastic feedstock through multiple printing and recycling cycles to evaluate how many times the plastic can be re-used in the microgravity environment before its polymers degrade to unacceptable levels.
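The quantity the Refabricator will measure empirically can be mocked up as a toy degradation model (the per-cycle loss and quality threshold below are invented numbers for illustration, not experimental data):

```python
def cycles_until_degraded(quality=1.0, loss_per_cycle=0.10, threshold=0.5):
    """Count print/recycle cycles until material quality, which loses a
    fixed fraction per cycle, first drops below the acceptable threshold."""
    cycles = 0
    while quality >= threshold:
        quality *= (1.0 - loss_per_cycle)
        cycles += 1
    return cycles

# With an assumed 10% quality loss per cycle and a 50% floor:
print(cycles_until_degraded())  # -> 7
```

The real experiment replaces the assumed loss rate with measured polymer degradation in microgravity; the model only shows why the number of usable cycles is so sensitive to per-cycle loss.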

3D printing in space may also extend to food. NASA's Advanced Food Technology program is currently investigating the possibility of printing food items in order to improve food quality, nutrient content, and variety.[10]

Products

There are thought to be a number of useful products that can potentially be manufactured in space and result in an economic benefit. Research and development is required to determine the best commodities to be produced, and to find efficient production methods. The products discussed below are considered prospective early candidates.

As the infrastructure is developed and the cost of assembly drops, some of the manufacturing capacity can be directed toward the development of expanded facilities in space, including larger scale manufacturing plants. These will likely require the use of lunar and asteroid materials, and so follow the development of mining bases.

Rock is the simplest product, and at minimum is useful for radiation shielding. It can also be subsequently processed to extract elements for various uses.

Water from lunar sources, near-Earth asteroids, or the Martian moons is thought to be relatively cheap and simple to extract, and it gives adequate performance for many manufacturing and material-shipping purposes. Separating water into hydrogen and oxygen is easily done at small scale, but some scientists[3] believe this will not be performed at any large scale initially, owing to the large quantity of equipment and electrical energy needed to split water and liquefy the resultant gases. Water used in steam rockets gives a specific impulse of about 190 seconds, less than half that of hydrogen/oxygen, but this is adequate for the delta-v requirements between Mars and Earth. Water is also useful as a radiation shield and in many chemical processes.
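The specific-impulse comparison can be made concrete with the Tsiolkovsky rocket equation (the mass ratio below is an assumed example value, not a figure from the article):

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v_m_s(isp_s, mass_ratio):
    """Tsiolkovsky rocket equation: delta-v = Isp * g0 * ln(mass ratio),
    where mass_ratio is initial mass over final (dry) mass."""
    return isp_s * G0 * math.log(mass_ratio)

# For an assumed propellant mass ratio of 3:
print(round(delta_v_m_s(190, 3)))  # steam rocket (Isp ~190 s): ~2,000 m/s
print(round(delta_v_m_s(450, 3)))  # hydrogen/oxygen (Isp ~450 s): ~4,900 m/s
```

Even at less than half the specific impulse, a steam rocket with a modest mass ratio still delivers around 2 km/s, which is why the article treats it as adequate for low-delta-v legs between cislunar space and Mars when the propellant is cheap and local.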

Ceramics made from lunar or asteroid soil can be employed for a variety of manufacturing purposes. These uses include various thermal and electrical insulators, such as heat shields for payloads being delivered to the Earth's surface.

Metals can be used to assemble a variety of useful products, including sealed containers (such as tanks and pipes), mirrors for focusing sunlight, and thermal radiators. The use of metals for electrical devices would require insulators for the wires, so a flexible insulating material such as plastic or fiberglass will be needed.

A notable output of space manufacturing is expected to be solar panels. Expansive solar energy arrays can be constructed and assembled in space. Since the structure does not need to support the loads that would be experienced on Earth, huge arrays can be assembled from proportionately smaller amounts of material. The generated energy can then be used to power manufacturing facilities, habitats, spacecraft, and lunar bases, or even be beamed by microwave down to collectors on Earth.

Other possibilities for space manufacturing include propellants for spacecraft, some repair parts for spacecraft and space habitats, and, of course, larger factories.[11] Ultimately, space manufacturing facilities can hypothetically become nearly self-sustaining, requiring only minimal imports from the Earth. The microgravity environment allows for new possibilities in construction on a massive scale, including megascale engineering. These future projects might potentially assemble space elevators, massive solar array farms, very high capacity spacecraft, and rotating habitats capable of sustaining populations of tens of thousands of people in Earth-like conditions.

We Are the Web

January 19, 2006 by Kevin Kelly
Original link:  http://www.kurzweilai.net/we-are-the-web-2
Originally published in Wired Magazine August 2005. Published on KurzweilAI.net January 19, 2006.

The planet-sized “Web” computer is already more complex than a human brain and has surpassed the 20-petahertz threshold for potential intelligence as calculated by Ray Kurzweil. In 10 years, it will be ubiquitous. So will superintelligence emerge on the Web rather than in a supercomputer?

Ten years ago, Netscape’s explosive IPO ignited huge piles of money. The brilliant flash revealed what had been invisible only a moment before: the World Wide Web. As Eric Schmidt (then at Sun, now at Google) noted, the day before the IPO the Web meant nothing; the day after, it meant everything.

Computing pioneer Vannevar Bush outlined the Web’s core idea—hyperlinked pages—in 1945, but the first person to try to build out the concept was a freethinker named Ted Nelson who envisioned his own scheme in 1965. However, he had little success connecting digital bits on a useful scale, and his efforts were known only to an isolated group of disciples. Few of the hackers writing code for the emerging Web in the 1990s knew about Nelson or his hyperlinked dream machine.

At the suggestion of a computer-savvy friend, I got in touch with Nelson in 1984, a decade before Netscape. We met in a dark dockside bar in Sausalito, California. He was renting a houseboat nearby and had the air of someone with time on his hands. Folded notes erupted from his pockets, and long strips of paper slipped from overstuffed notebooks. Wearing a ballpoint pen on a string around his neck, he told me—way too earnestly for a bar at 4 o’clock in the afternoon—about his scheme for organizing all the knowledge of humanity. Salvation lay in cutting up 3 x 5 cards, of which he had plenty.

Although Nelson was polite, charming, and smooth, I was too slow for his fast talk. But I got an aha! from his marvelous notion of hypertext. He was certain that every document in the world should be a footnote to some other document, and computers could make the links between them visible and permanent. But that was just the beginning! Scribbling on index cards, he sketched out complicated notions of transferring authorship back to creators and tracking payments as readers hopped along networks of documents, what he called the docuverse. He spoke of "transclusion" and "intertwingularity" as he described the grand utopian benefits of his embedded structure. It was going to save the world from stupidity.

I believed him. Despite his quirks, it was clear to me that a hyperlinked world was inevitable—someday. But looking back now, after 10 years of living online, what surprises me about the genesis of the Web is how much was missing from Vannevar Bush’s vision, Nelson’s docuverse, and my own expectations. We all missed the big story. The revolution launched by Netscape’s IPO was only marginally about hypertext and human knowledge. At its heart was a new kind of participation that has since developed into an emerging culture based on sharing. And the ways of participating unleashed by hyperlinks are creating a new type of thinking—part human and part machine—found nowhere else on the planet or in history.

Not only did we fail to imagine what the Web would become, we still don’t see it today! We are blind to the miracle it has blossomed into. And as a result of ignoring what the Web really is, we are likely to miss what it will grow into over the next 10 years. Any hope of discerning the state of the Web in 2015 requires that we own up to how wrong we were 10 years ago.

1995

Before the Netscape browser illuminated the Web, the Internet did not exist for most people. If it was acknowledged at all, it was mischaracterized as either corporate email (as exciting as a necktie) or a clubhouse for adolescent males (read: pimply nerds). It was hard to use. On the Internet, even dogs had to type. Who wanted to waste time on something so boring?

The memories of an early enthusiast like myself can be unreliable, so I recently spent a few weeks reading stacks of old magazines and newspapers. Any promising new invention will have its naysayers, and the bigger the promises, the louder the nays. It’s not hard to find smart people saying stupid things about the Internet on the morning of its birth. In late 1994, Time magazine explained why the Internet would never go mainstream: "It was not designed for doing commerce, and it does not gracefully accommodate new arrivals." Newsweek put the doubts more bluntly in a February 1995 headline: "THE INTERNET? BAH!" The article was written by astrophysicist and Net maven Cliff Stoll, who captured the prevailing skepticism of virtual communities and online shopping with one word: "baloney."

This dismissive attitude pervaded a meeting I had with the top leaders of ABC in 1989. I was there to make a presentation to the corner office crowd about this "Internet stuff." To their credit, they realized something was happening. Still, nothing I could tell them would convince them that the Internet was not marginal, not just typing, and, most emphatically, not just teenage boys. Stephen Weiswasser, a senior VP, delivered the ultimate putdown: "The Internet will be the CB radio of the ’90s," he told me, a charge he later repeated to the press. Weiswasser summed up ABC’s argument for ignoring the new medium: "You aren’t going to turn passive consumers into active trollers on the Internet."

I was shown the door. But I offered one tip before I left. "Look," I said. "I happen to know that the address abc.com has not been registered. Go down to your basement, find your most technical computer guy, and have him register abc.com immediately. Don’t even think about it. It will be a good thing to do." They thanked me vacantly. I checked a week later. The domain was still unregistered.

While it is easy to smile at the dodos in TV land, they were not the only ones who had trouble imagining an alternative to couch potatoes. Wired did, too. When I examine issues of Wired from before the Netscape IPO (issues that I proudly edited), I am surprised to see them touting a future of high production-value content—5,000 always-on channels and virtual reality, with a side order of email sprinkled with bits of the Library of Congress. In fact, Wired offered a vision nearly identical to that of Internet wannabes in the broadcast, publishing, software, and movie industries: basically, TV that worked. The question was who would program the box. Wired looked forward to a constellation of new media upstarts like Nintendo and Yahoo!, not old-media dinosaurs like ABC.

Problem was, content was expensive to produce, and 5,000 channels of it would be 5,000 times as costly. No company was rich enough, no industry large enough, to carry off such an enterprise. The great telecom companies, which were supposed to wire up the digital revolution, were paralyzed by the uncertainties of funding the Net. In June 1994, David Quinn of British Telecom admitted to a conference of software publishers, "I’m not sure how you’d make money out of it."

The immense sums of money supposedly required to fill the Net with content sent many technocritics into a tizzy. They were deeply concerned that cyberspace would become cyburbia—privately owned and operated. Writing in Electronic Engineering Times in 1995, Jeff Johnson worried: "Ideally, individuals and small businesses would use the information highway to communicate, but it is more likely that the information highway will be controlled by Fortune 500 companies in 10 years." The impact would be more than commercial. "Speech in cyberspace will not be free if we allow big business to control every square inch of the Net," wrote Andrew Shapiro in The Nation in July 1995.

The fear of commercialization was strongest among hardcore programmers: the coders, Unix weenies, TCP/IP fans, and selfless volunteer IT folk who kept the ad hoc network running. The major administrators thought of their work as noble, a gift to humanity. They saw the Internet as an open commons, not to be undone by greed or commercialization. It’s hard to believe now, but until 1991, commercial enterprise on the Internet was strictly prohibited. Even then, the rules favored public institutions and forbade "extensive use for private or personal business."

In the mid-1980s, when I was involved in the WELL, an early nonprofit online system, we struggled to connect it to the emerging Internet but were thwarted, in part, by the "acceptable use" policy of the National Science Foundation (which ran the Internet backbone). In the eyes of the NSF, the Internet was funded for research, not commerce. At first this restriction wasn’t a problem for online services, because most providers, the WELL included, were isolated from one another. Paying customers could send email within the system—but not outside it. In 1987, the WELL fudged a way to forward outside email through the Net without confronting the acceptable use policy, which our organization’s own techies were reluctant to break. The NSF rule reflected a lingering sentiment that the Internet would be devalued, if not trashed, by opening it up to commercial interests. Spam was already a problem (one every week!).

This attitude prevailed even in the offices of Wired. In 1994, during the first design meetings for Wired‘s embryonic Web site, HotWired, programmers were upset that the innovation we were cooking up—what are now called clickthrough ad banners—subverted the great social potential of this new territory. The Web was hardly out of diapers, and already they were being asked to blight it with billboards and commercials. Only in May 1995, after the NSF finally opened the floodgates to ecommerce, did the geek elite begin to relax.

Three months later, Netscape’s public offering took off, and in a blink a world of DIY possibilities was born. Suddenly it became clear that ordinary people could create material anyone with a connection could view. The burgeoning online audience no longer needed ABC for content. Netscape’s stock peaked at $75 on its first day of trading, and the world gasped in awe. Was this insanity, or the start of something new?

2005

The scope of the Web today is hard to fathom. The total number of Web pages, including those that are dynamically created upon request and document files available through links, exceeds 600 billion. That’s 100 pages per person alive.

How could we create so much, so fast, so well? In fewer than 4,000 days, we have encoded half a trillion versions of our collective story and put them in front of 1 billion people, or one-sixth of the world’s population. That remarkable achievement was not in anyone’s 10-year plan.
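Those ratios are simple arithmetic; a quick sketch checks them (the 6 billion world-population figure is an approximation for 2005, inferred from the essay's "one-sixth" claim rather than stated in it):

```python
# Back-of-envelope check of the essay's 2005-era figures.
web_pages = 600e9    # total Web pages, per the essay
people = 6e9         # approximate 2005 world population (assumption)
online = 1e9         # people with access to the Web, per the essay

print(web_pages / people)   # 100.0 -- pages per person alive
print(online / people)      # ~0.167 -- one-sixth of the world online
```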

The accretion of tiny marvels can numb us to the arrival of the stupendous. Today, at any Net terminal, you can get: an amazing variety of music and video, an evolving encyclopedia, weather forecasts, help wanted ads, satellite images of anyplace on Earth, up-to-the-minute news from around the globe, tax forms, TV guides, road maps with driving directions, real-time stock quotes, telephone numbers, real estate listings with virtual walk-throughs, pictures of just about anything, sports scores, places to buy almost anything, records of political contributions, library catalogs, appliance manuals, live traffic reports, archives to major newspapers—all wrapped up in an interactive index that really works.

This view is spookily godlike. You can switch your view of a spot in the world from map to satellite to 3-D just by clicking. Recall the past? It’s there. Or listen to the daily complaints and travails of almost anyone who blogs (and doesn’t everyone?). I doubt angels have a better view of humanity.


Why aren’t we more amazed by this fullness? Kings of old would have gone to war to win such abilities. Only small children would have dreamed such a magic window could be real. I have reviewed the expectations of waking adults and wise experts, and I can affirm that this comprehensive wealth of material, available on demand and free of charge, was not in anyone’s scenario. Ten years ago, anyone silly enough to trumpet the above list as a vision of the near future would have been confronted by the evidence: There wasn’t enough money in all the investment firms in the entire world to fund such a cornucopia. The success of the Web at this scale was impossible.

But if we have learned anything in the past decade, it is the plausibility of the impossible.

Take eBay. In some 4,000 days, eBay has gone from marginal Bay Area experiment in community markets to the most profitable spinoff of hypertext. At any one moment, 50 million auctions race through the site. An estimated half a million folks make their living selling through Internet auctions. Ten years ago I heard skeptics swear nobody would ever buy a car on the Web. Last year eBay Motors sold $11 billion worth of vehicles. EBay’s 2001 auction of a $4.9 million private jet would have shocked anyone in 1995—and still smells implausible today.

Nowhere in Ted Nelson’s convoluted sketches of hypertext transclusion did the fantasy of a global flea market appear. Especially as the ultimate business model! He hoped to franchise his Xanadu hypertext systems in the physical world at the scale of a copy shop or café—you would go to a store to do your hypertexting. Xanadu would take a cut of the action.

Instead, we have an open global flea market that handles 1.4 billion auctions every year and operates from your bedroom. Users do most of the work; they photograph, catalog, post, and manage their own auctions. And they police themselves; while eBay and other auction sites do call in the authorities to arrest serial abusers, the chief method of ensuring fairness is a system of user-generated ratings. Three billion feedback comments can work wonders.

What we all failed to see was how much of this new world would be manufactured by users, not corporate interests. Amazon.com customers rushed with surprising speed and intelligence to write the reviews that made the site’s long-tail selection usable. Owners of Adobe, Apple, and most major software products offer help and advice on the developer’s forum Web pages, serving as high-quality customer support for new buyers. And in the greatest leverage of the common user, Google turns traffic and link patterns generated by 2 billion searches a month into the organizing intelligence for a new economy. This bottom-up takeover was not in anyone’s 10-year vision.

No Web phenomenon is more confounding than blogging. Everything media experts knew about audiences—and they knew a lot—confirmed the focus group belief that audiences would never get off their butts and start making their own entertainment. Everyone knew writing and reading were dead; music was too much trouble to make when you could sit back and listen; video production was simply out of reach of amateurs. Blogs and other participant media would never happen, or if they happened they would not draw an audience, or if they drew an audience they would not matter. What a shock, then, to witness the near-instantaneous rise of 50 million blogs, with a new one appearing every two seconds. There—another new blog! One more person doing what AOL and ABC—and almost everyone else—expected only AOL and ABC to be doing. These user-created channels make no sense economically. Where are the time, energy, and resources coming from?

The audience.

I run a blog about cool tools. I write it for my own delight and for the benefit of friends. The Web extends my passion to a far wider group for no extra cost or effort. In this way, my site is part of a vast and growing gift economy, a visible underground of valuable creations—text, music, film, software, tools, and services—all given away for free. This gift economy fuels an abundance of choices. It spurs the grateful to reciprocate. It permits easy modification and reuse, and thus promotes consumers into producers.

The open source software movement is another example. Key ingredients of collaborative programming—swapping code, updating instantly, recruiting globally—didn’t work on a large scale until the Web was woven. Then software became something you could join, either as a beta tester or as a coder on an open source project. The clever "view source" browser option let the average Web surfer in on the act. And anyone could rustle up a link—which, it turns out, is the most powerful invention of the decade.

Linking unleashes involvement and interactivity at levels once thought unfashionable or impossible. It transforms reading into navigating and enlarges small actions into powerful forces. For instance, hyperlinks made it much easier to create a seamless, scrolling street map of every town. They made it easier for people to refer to those maps. And hyperlinks made it possible for almost anyone to annotate, amend, and improve any map embedded in the Web. Cartography has gone from spectator art to participatory democracy.

The electricity of participation nudges ordinary folks to invest huge hunks of energy and time into making free encyclopedias, creating public tutorials for changing a flat tire, or cataloging the votes in the Senate. More and more of the Web runs in this mode. One study found that only 40 percent of the Web is commercial. The rest runs on duty or passion.

Coming out of the industrial age, when mass-produced goods outclassed anything you could make yourself, this sudden tilt toward consumer involvement is a complete Lazarus move: "We thought that died long ago." The deep enthusiasm for making things, for interacting more deeply than just choosing options, is the great force not reckoned 10 years ago. This impulse for participation has upended the economy and is steadily turning the sphere of social networking—smart mobs, hive minds, and collaborative action—into the main event.

When a company opens its databases to users, as Amazon, Google, and eBay have done with their Web services, it is encouraging participation at new levels. The corporation’s data becomes part of the commons and an invitation to participate. People who take advantage of these capabilities are no longer customers; they’re the company’s developers, vendors, skunk works, and fan base.

A little over a decade ago, a phone survey by Macworld asked a few hundred people what they thought would be worth $10 per month on the information superhighway. The participants started with uplifting services: educational courses, reference books, electronic voting, and library information. The bottom of the list ended with sports statistics, role-playing games, gambling, and dating. Ten years later what folks actually use the Internet for is inverted. According to a 2004 Stanford study, people use the Internet for (in order): playing games, "just surfing," and shopping; the list ends with responsible activities like politics and banking. (Some even admitted to porn.) Remember, shopping wasn’t supposed to happen. Where’s Cliff Stoll, the guy who said the Internet was baloney and online catalogs humbug? He has a little online store where he sells handcrafted Klein bottles.

The public’s fantasy, revealed in that 1994 survey, began reasonably with the conventional notions of a downloadable world. These assumptions were wired into the infrastructure. The bandwidth on cable and phone lines was asymmetrical: Download rates far exceeded upload rates. The dogma of the age held that ordinary people had no need to upload; they were consumers, not producers. Fast-forward to today, and the poster child of the new Internet regime is BitTorrent. The brilliance of BitTorrent is in its exploitation of near-symmetrical communication rates. Users upload stuff while they are downloading. It assumes participation, not mere consumption. Our communication infrastructure has taken only the first steps in this great shift from audience to participants, but that is where it will go in the next decade.

With the steady advance of new ways to share, the Web has embedded itself into every class, occupation, and region. Indeed, people’s anxiety about the Internet being out of the mainstream seems quaint now. In part because of the ease of creation and dissemination, online culture is the culture. Likewise, the worry about the Internet being 100 percent male was entirely misplaced. Everyone missed the party celebrating the 2002 flip-point when women online first outnumbered men. Today, 52 percent of netizens are female. And, of course, the Internet is not and has never been a teenage realm. In 2005, the average user is a bone-creaking 41 years old.

What could be a better mark of irreversible acceptance than adoption by the Amish? I was visiting some Amish farmers recently. They fit the archetype perfectly: straw hats, scraggly beards, wives with bonnets, no electricity, no phones or TVs, horse and buggy outside. They have an undeserved reputation for resisting all technology, when actually they are just very late adopters. Still, I was amazed to hear them mention their Web sites.

"Amish Web sites?" I asked.

"For advertising our family business. We weld barbecue grills in our shop."

"Yes, but—"

"Oh, we use the Internet terminal at the public library. And Yahoo!"

I knew then the battle was over.

2015

The Web continues to evolve from a world ruled by mass media and mass audiences to one ruled by messy media and messy participation. How far can this frenzy of creativity go? Encouraged by Web-enabled sales, 175,000 books were published and more than 30,000 music albums were released in the US last year. At the same time, 14 million blogs launched worldwide. All these numbers are escalating. A simple extrapolation suggests that in the near future, everyone alive will (on average) write a song, author a book, make a video, craft a weblog, and code a program. This idea is less outrageous than the notion 150 years ago that someday everyone would write a letter or take a photograph.

What happens when the data flow is asymmetrical—but in favor of creators? What happens when everyone is uploading far more than they download? If everyone is busy making, altering, mixing, and mashing, who will have time to sit back and veg out? Who will be a consumer?

No one. And that’s just fine. A world where production outpaces consumption should not be sustainable; that’s a lesson from Economics 101. But online, where many ideas that don’t work in theory succeed in practice, the audience increasingly doesn’t matter. What matters is the network of social creation, the community of collaborative interaction that futurist Alvin Toffler called prosumption. As with blogging and BitTorrent, prosumers produce and consume at once. The producers are the audience, the act of making is the act of watching, and every link is both a point of departure and a destination.

But if a roiling mess of participation is all we think the Web will become, we are likely to miss the big news, again. The experts are certainly missing it. The Pew Internet & American Life Project surveyed more than 1,200 professionals in 2004, asking them to predict the Net’s next decade. One scenario earned agreement from two-thirds of the respondents: "As computing devices become embedded in everything from clothes to appliances to cars to phones, these networked devices will allow greater surveillance by governments and businesses." Another was affirmed by one-third: "By 2014, use of the Internet will increase the size of people’s social networks far beyond what has traditionally been the case."

These are safe bets, but they fail to capture the Web’s disruptive trajectory. The real transformation under way is more akin to what Sun’s John Gage had in mind in 1988 when he famously said, "The network is the computer." He was talking about the company’s vision of the thin-client desktop, but his phrase neatly sums up the destiny of the Web: As the OS for a megacomputer that encompasses the Internet, all its services, all peripheral chips and affiliated devices from scanners to satellites, and the billions of human minds entangled in this global network. This gargantuan Machine already exists in a primitive form. In the coming decade, it will evolve into an integral extension not only of our senses and bodies but our minds.

Today, the Machine acts like a very large computer with top-level functions that operate at approximately the clock speed of an early PC. It processes 1 million emails each second, which essentially means network email runs at 1 megahertz. Same with Web searches. Instant messaging runs at 100 kilohertz, SMS at 1 kilohertz. The Machine’s total external RAM is about 200 terabytes. In any one second, 10 terabits can be coursing through its backbone, and each year it generates nearly 20 exabytes of data. Its distributed "chip" spans 1 billion active PCs, which is approximately the number of transistors in one PC.

This planet-sized computer is comparable in complexity to a human brain. Both the brain and the Web have hundreds of billions of neurons (or Web pages). Each biological neuron sprouts synaptic links to thousands of other neurons, while each Web page branches into dozens of hyperlinks. That adds up to a trillion "synapses" between the static pages on the Web. The human brain has about 100 times that number—but brains are not doubling in size every few years. The Machine is.

Since each of its "transistors" is itself a personal computer with a billion transistors running lower functions, the Machine is fractal. In total, it harnesses a quintillion transistors, expanding its complexity beyond that of a biological brain. It has already surpassed the 20-petahertz threshold for potential intelligence as calculated by Ray Kurzweil. For this reason some researchers pursuing artificial intelligence have switched their bets to the Net as the computer most likely to think first. Danny Hillis, a computer scientist who once claimed he wanted to make an AI "that would be proud of me," has invented massively parallel supercomputers in part to advance us in that direction. He now believes the first real AI will emerge not in a stand-alone supercomputer like IBM’s proposed 23-teraflop Blue Brain, but in the vast digital tangle of the global Machine.
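The fractal arithmetic here is easy to verify; a minimal sketch using the essay's order-of-magnitude estimates (illustrative figures, not measurements):

```python
# Each of the Machine's "transistors" is itself a billion-transistor PC.
active_pcs = 1e9              # PCs on the network, per the essay
transistors_per_pc = 1e9      # transistors in one 2005-era PC, per the essay

total = active_pcs * transistors_per_pc
print(f"{total:.0e}")         # 1e+18 -- a quintillion transistors
```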

In 10 years, the system will contain hundreds of millions of miles of fiber-optic neurons linking the billions of ant-smart chips embedded into manufactured products, buried in environmental sensors, staring out from satellite cameras, guiding cars, and saturating our world with enough complexity to begin to learn. We will live inside this thing.

Today the nascent Machine routes packets around disturbances in its lines; by 2015 it will anticipate disturbances and avoid them. It will have a robust immune system, weeding spam from its trunk lines, eliminating viruses and denial-of-service attacks the moment they are launched, and dissuading malefactors from injuring it again. The patterns of the Machine’s internal workings will be so complex they won’t be repeatable; you won’t always get the same answer to a given question. It will take intuition to maximize what the global network has to offer. The most obvious development birthed by this platform will be the absorption of routine. The Machine will take on anything we do more than twice. It will be the Anticipation Machine.

One great advantage the Machine holds in this regard: It’s always on. It is very hard to learn if you keep getting turned off, which is the fate of most computers. AI researchers rejoice when an adaptive learning program runs for days without crashing. The fetal Machine has been running continuously for at least 10 years (30 if you want to be picky). I am aware of no other machine—of any type—that has run that long with zero downtime. While portions may spin down due to power outages or cascading infections, the entire thing is unlikely to go quiet in the coming decade. It will be the most reliable gadget we have.

And the most universal. By 2015, desktop operating systems will be largely irrelevant. The Web will be the only OS worth coding for. It won’t matter what device you use, as long as it runs on the Web OS. You will reach the same distributed computer whether you log on via phone, PDA, laptop, or HDTV.

In the 1990s, the big players called that convergence. They peddled the image of multiple kinds of signals entering our lives through one box—a box they hoped to control. By 2015 this image will be turned inside out. In reality, each device is a differently shaped window that peers into the global computer. Nothing converges. The Machine is an unbounded thing that will take a billion windows to glimpse even part of. It is what you’ll see on the other side of any screen.

And who will write the software that makes this contraption useful and productive? We will. In fact, we’re already doing it, each of us, every day. When we post and then tag pictures on the community photo album Flickr, we are teaching the Machine to give names to images. The thickening links between caption and picture form a neural net that can learn. Think of the 100 billion times per day humans click on a Web page as a way of teaching the Machine what we think is important. Each time we forge a link between words, we teach it an idea. Wikipedia encourages its citizen authors to link each fact in an article to a reference citation. Over time, a Wikipedia article becomes totally underlined in blue as ideas are cross-referenced. That massive cross-referencing is how brains think and remember. It is how neural nets answer questions. It is how our global skin of neurons will adapt autonomously and acquire a higher level of knowledge.

The human brain has no department full of programming cells that configure the mind. Rather, brain cells program themselves simply by being used. Likewise, our questions program the Machine to answer questions. We think we are merely wasting time when we surf mindlessly or blog an item, but each time we click a link we strengthen a node somewhere in the Web OS, thereby programming the Machine by using it.

What will most surprise us is how dependent we will be on what the Machine knows—about us and about what we want to know. We already find it easier to Google something a second or third time rather than remember it ourselves. The more we teach this megacomputer, the more it will assume responsibility for our knowing. It will become our memory. Then it will become our identity. In 2015 many people, when divorced from the Machine, won’t feel like themselves—as if they’d had a lobotomy.

Legend has it that Ted Nelson invented Xanadu as a remedy for his poor memory and attention deficit disorder. In this light, the Web as memory bank should be no surprise. Still, the birth of a machine that subsumes all other machines so that in effect there is only one Machine, which penetrates our lives to such a degree that it becomes essential to our identity—this will be full of surprises. Especially since it is only the beginning.

There is only one time in the history of each planet when its inhabitants first wire up its innumerable parts to make one large Machine. Later that Machine may run faster, but there is only one time when it is born.

You and I are alive at this moment.

We should marvel, but people alive at such times usually don’t. Every few centuries, the steady march of change meets a discontinuity, and history hinges on that moment. We look back on those pivotal eras and wonder what it would have been like to be alive then. Confucius, Zoroaster, Buddha, and the latter Jewish patriarchs lived in the same historical era, an inflection point known as the axial age of religion. Few world religions were born after this time. Similarly, the great personalities converging upon the American Revolution and the geniuses who commingled during the invention of modern science in the 17th century mark additional axial phases in the short history of our civilization.

Three thousand years from now, when keen minds review the past, I believe that our ancient time, here at the cusp of the third millennium, will be seen as another such era. In the years roughly coincidental with the Netscape IPO, humans began animating inert objects with tiny slivers of intelligence, connecting them into a global field, and linking their own minds into a single thing. This will be recognized as the largest, most complex, and most surprising event on the planet. Weaving nerves out of glass and radio waves, our species began wiring up all regions, all processes, all facts and notions into a grand network. From this embryonic neural net was born a collaborative interface for our civilization, a sensing, cognitive device with power that exceeded any previous invention. The Machine provided a new way of thinking (perfect search, total recall) and a new mind for an old species. It was the Beginning.

In retrospect, the Netscape IPO was a puny rocket to herald such a moment. The product and the company quickly withered into irrelevance, and the excessive exuberance of its IPO was downright tame compared with the dotcoms that followed. First moments are often like that. After the hysteria has died down, after the millions of dollars have been gained and lost, after the strands of mind, once achingly isolated, have started to come together—the only thing we can say is: Our Machine is born. It’s on.

© 2005 Kevin Kelly. Reprinted with permission.

Space policy

From Wikipedia, the free encyclopedia
 
Space policy is the political decision-making process for, and application of, public policy of a state (or association of states) regarding spaceflight and uses of outer space, both for civilian (scientific and commercial) and military purposes. International treaties, such as the 1967 Outer Space Treaty, attempt to maximize the peaceful uses of space and restrict the militarization of space.

Space policy intersects with science policy, since national space programs often perform or fund research in space science, and also with defense policy, for applications such as spy satellites and anti-satellite weapons. It also encompasses government regulation of third-party activities such as commercial communications satellites and private spaceflight.[1]

Space policy also encompasses the creation and application of space law, and space advocacy organizations exist to support the cause of space exploration.

Space law

Space law is an area of the law that encompasses national and international law governing activities in outer space. There are currently five treaties that make up the body of international space law. The inception of the field of space law began with the launch of the world's first artificial satellite by the Soviet Union in October 1957. Named Sputnik 1, the satellite was launched as part of the International Geophysical Year. Since that time, space law has evolved and assumed more importance as mankind has increasingly come to use and rely on space-based resources.

Space policy by country

Soviet Union

The Soviet Union became the world's first spacefaring state by launching its first satellite, Sputnik 1, on 4 October 1957.

United States

United States space policy is drafted by the Executive branch at the direction of the President of the United States, and submitted for approval and establishment of funding to the legislative process of the United States Congress.[2] The President may also negotiate with other nations and sign space treaties on behalf of the US, according to his or her constitutional authority. Congress' final space policy product is, in the case of domestic policy, a bill explicitly stating the policy objectives and the budget appropriation for their implementation, submitted to the President for signature into law, or else a ratified treaty with other nations.

Space advocacy organizations (such as the Space Science Institute, the National Space Society, and the Space Generation Advisory Council), learned societies (such as the American Astronomical Society and the American Astronautical Society), and policy organizations (such as the National Academies) may provide advice to the government and lobby for space goals.

Civilian and scientific space policy is carried out by the National Aeronautics and Space Administration (NASA, established 29 July 1958), and military space activities (communications, reconnaissance, intelligence, mapping, and missile defense) are carried out by various agencies of the Department of Defense. The President is legally responsible for deciding which space activities fall under the civilian and military areas.[3] In addition, the Department of Commerce's National Oceanic and Atmospheric Administration operates various services with space components, such as the Landsat program.[4]

The President consults with NASA and Department of Defense on their space activity plans, as potential input for the policy draft submitted to Congress. He or she also consults with the National Security Council, the Office of Science and Technology Policy, and the Office of Management and Budget to take into account Congress's expected willingness to provide necessary funding levels for proposed programs.[5]

Once the President’s policy draft or treaty is submitted to the Congress, civilian policies are reviewed by the House Subcommittee on Space and Aeronautics and the Senate Subcommittee on Science and Space. These committees also exercise oversight over NASA's operations and investigation of accidents such as the 1967 Apollo 1 fire. Military policies are reviewed and overseen by the House Subcommittee on Strategic Forces and the Senate Subcommittee on Strategic Forces, as well as the House Permanent Select Committee on Intelligence and the Senate Select Committee on Intelligence. The Senate Foreign Relations Committee conducts hearings on proposed space treaties, and the various appropriations committees have power over the budgets for space-related agencies. Space policy efforts are supported by Congressional agencies such as the Congressional Research Service, the Congressional Budget Office, and the Government Accountability Office.[6]

History

President Kennedy committed the United States to landing a man on the Moon by the end of the 1960s decade, in response to contemporary Soviet space successes. This speech at Rice University on 12 September 1962 is famous for the quote "We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard."

The early history of United States space policy is linked to the US–Soviet Space Race of the 1960s. The National Aeronautics and Space Act creating NASA was passed in 1958, after the launch of the Soviet Sputnik 1 satellite. Thereafter, in response to the flight of Yuri Gagarin as the first man in space, Kennedy in 1961 committed the United States to landing a man on the Moon by the end of the decade. Although the costs of the Vietnam War and the programs of the Great Society forced cuts to NASA's budget as early as 1965, the first Moon landing occurred in 1969, early in Richard Nixon's presidency. Under the Nixon administration NASA's budget continued to decline and three of the planned Apollo Moon landings were cancelled. The Nixon administration approved the beginning of the Space Shuttle program in 1972, but did not support funding of other projects such as a Mars landing, colonization of the Moon, or a permanent space station.[7]

The Space Shuttle first launched in 1981, during Ronald Reagan's administration. In 1982, Reagan announced a renewed active space effort, which included initiatives such as the construction of Space Station Freedom and the military Strategic Defense Initiative, and, later in his term, a 30 percent increase in NASA's budget. The Space Shuttle Challenger disaster in January 1986 led to a reevaluation of the future of the national space program in the National Commission on Space report and the Ride Report.[7]

The United States has participated in the International Space Station since the 1990s, and the Space Shuttle program has continued, although the Space Shuttle Columbia disaster led to the planned retirement of the Space Shuttle in mid-2011. There is a current debate on the post-Space Shuttle future of the civilian space program: the Constellation program of the George W. Bush administration directed NASA to create a set of new spacecraft with the goal of sending astronauts to the Moon and Mars,[8] but the Obama administration cancelled the Constellation program, opting instead to emphasize development of commercial rocket systems.

The Vision for Space Exploration established under the George W. Bush administration in 2004 was replaced with a new policy released by Barack Obama on 28 June 2010.[9]

Europe

The ESA is an international organization whose membership overlaps with, but is not the same as, that of the EU.

The European Space Agency (ESA) is the common space agency for many European nations. It is independent of the European Union, though the 2007 European Space Policy provides a framework for coordination between the two organizations and member states, including issues such as security and defence, access to space, space science, and space exploration.[10]

The ESA was founded to serve as a counterweight to the dominant United States and Soviet space programs and to further the economic and military independence of Europe. This has included the development of the Ariane rockets, which by 1985 had captured over 40 percent of the commercial launch market in the free world. The ESA budget is split between mandatory and voluntary programs, the latter of which allow individual member nations to pursue their own national space goals within the organization.[11]

The ESA Director General’s Proposal for the European Space Policy states, "Space systems are strategic assets demonstrating independence and the readiness to assume global responsibilities. Initially developed as defence or scientific projects, they now also provide commercial infrastructures on which important sectors of the economy depend and which are relevant in the daily life of citizens.... Europe needs an effective space policy to enable it to exert global leadership in selected policy areas in accordance with European interests and values."[12]

China

Although Chairman Mao Zedong planned, after the Soviet Sputnik 1 launch, to place a Chinese satellite in orbit by 1959 to celebrate the 10th anniversary of the founding of the People's Republic of China (PRC),[13] China did not successfully launch its first satellite until 24 April 1970. Mao and Zhou Enlai decided on 14 July 1967 that the PRC should not be left behind, and started China's own human spaceflight program.[14] The first success came on 15 October 2003, when China sent its first astronaut into space for 21 hours aboard Shenzhou 5.

The Ministry of Aerospace Industry was responsible for the Chinese space program prior to July 1999, when it was split into the China National Space Administration, responsible for setting policy, and the state-owned China Aerospace Science and Technology Corporation, responsible for implementation.

The China National Space Administration states its aims as maintaining the country's overall development strategy, making innovations in an independent and self-reliant manner, promoting the country's science and technology sector and encouraging economic and social development, and actively engaging in international cooperation.[15]

Russian Federation and Ukraine

The Russian Federation inherited its space program in 1991 from its predecessor state, the Soviet Union. Russia's civilian space agency is the Russian Federal Space Agency (Roskosmos), and its military counterpart is the Russian Aerospace Defence Forces.[16] Ukraine's agency is the State Space Agency of Ukraine, which handles both civilian and military programs.

In the 1980s the Soviet Union was considered to be technologically behind the United States, but it outspent the United States in its space budget, and its cosmonauts had spent three times as many days in space as American astronauts. The Soviet Union had also been more willing than the United States to embark on long-term programs, such as the Salyut and Mir space station programs, and increased its investment in space programs throughout the 1970s and 1980s.[17]

After the dissolution of the Soviet Union, the 1990s saw serious financial problems because of decreased cash flow, which encouraged Roskosmos to improvise and seek other ways to keep space programs running. This resulted in Roskosmos' leading role in commercial satellite launches and space tourism. Scientific missions, such as interplanetary probes or astronomy missions, played a very small role during these years. Although Roskosmos has connections with the Russian Aerospace Defence Forces, its budget is not part of the country's defense budget. Roskosmos nevertheless managed to operate the space station Mir well past its planned lifespan, contribute to the International Space Station, and continue to fly additional Soyuz and Progress missions.[18]

As the Russian economy boomed throughout 2005 on high prices for exports such as oil and gas, the outlook for subsequent funding became more favorable. The federal space budget for 2009 was left unchanged despite the global economic crisis, standing at about 82 billion rubles ($2.4 billion). Current priorities of the Russian space program include the new Angara rocket family and the development of new communications, navigation, and remote Earth sensing spacecraft. The GLONASS global navigation satellite system has for many years been one of the top priorities and has been given its own line in the federal space budget.
