Wednesday, June 20, 2018

Augmented reality

From Wikipedia, the free encyclopedia


Virtual Fixtures – first A.R. system, 1992, U.S. Air Force, WPAFB

Augmented Reality (AR) is an interactive experience of a real-world environment whose elements are "augmented" by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory.[1] The overlaid sensory information can be constructive (i.e. additive to the natural environment) or destructive (i.e. masking of the natural environment) and is seamlessly interwoven with the physical world such that it is perceived as an immersive aspect of the real environment.[2] In this way, augmented reality alters one’s ongoing perception of a real world environment, whereas virtual reality completely replaces the user's real world environment with a simulated one.[3][4] Augmented reality is related to two largely synonymous terms: mixed reality and computer-mediated reality.

The primary value of augmented reality is that it brings components of the digital world into a person's perception of the real world, and does so not as a simple display of data, but through the integration of immersive sensations that are perceived as natural parts of an environment. The first functional AR systems that provided immersive mixed reality experiences for users were invented in the early 1990s, starting with the Virtual Fixtures system developed at the U.S. Air Force's Armstrong Laboratory in 1992.[2][5][6][7] The first commercial augmented reality experiences were used largely in the entertainment and gaming businesses, but other industries are now also interested in AR's possibilities, for example in knowledge sharing, education, managing information overload, and organizing remote meetings. Augmented reality is also transforming the world of education, where content may be accessed by scanning or viewing an image with a mobile device.[8] Another example is an AR helmet for construction workers which displays information about construction sites.

Augmented reality is used to enhance natural environments or situations and offer perceptually enriched experiences. With the help of advanced AR technologies (e.g. adding computer vision and object recognition) the information about the surrounding real world of the user becomes interactive and digitally manipulable. Information about the environment and its objects is overlaid on the real world. This information can be virtual[9][10][11][12][13][14] or real, e.g. seeing other real sensed or measured information such as electromagnetic radio waves overlaid in exact alignment with where they actually are in space.[15][16][17] Augmented reality also has considerable potential in the gathering and sharing of tacit knowledge. Augmentation techniques are typically performed in real time and in semantic context with environmental elements. Immersive perceptual information is sometimes combined with supplemental information like scores over a live video feed of a sporting event. This combines the benefits of both augmented reality technology and heads-up display (HUD) technology.

Technology


A Microsoft HoloLens being worn by a man.

Hardware

Hardware components for augmented reality are: processor, display, sensors and input devices. Modern mobile computing devices like smartphones and tablet computers contain these elements, which often include a camera and MEMS sensors such as an accelerometer, GPS, and solid-state compass, making them suitable AR platforms.[18]

Display

Various technologies are used in augmented reality rendering, including optical projection systems, monitors, handheld devices, and display systems worn on the human body.

A head-mounted display (HMD) is a display device worn on the head, such as in a harness or helmet. HMDs place images of both the physical world and virtual objects over the user's field of view. Modern HMDs often employ sensors for six-degrees-of-freedom monitoring that allow the system to align virtual information to the physical world and adjust accordingly with the user's head movements.[19][20][21] HMDs can provide VR users with mobile and collaborative experiences.[22] Specific providers, such as uSens and Gestigon, include gesture controls for full virtual immersion.[23][24]

In January 2015, Meta launched a funding round led by Horizons Ventures, Tim Draper, Alexis Ohanian, BOE Optoelectronics and Garry Tan.[25][26][27] On February 17, 2016, Meta announced their second-generation product at TED, the Meta 2. The Meta 2 head-mounted display headset uses a sensory array for hand interactions and positional tracking, offers a 90-degree (diagonal) field of view, considered the largest field of view (FOV) currently available, and has a display resolution of 2560 × 1440 (20 pixels per degree).[28][29][30][31]
Eyeglasses

Vuzix AR3000 augmented reality smart glasses

AR displays can be rendered on devices resembling eyeglasses. Versions include eyewear that employs cameras to intercept the real world view and re-display its augmented view through the eyepieces[32] and devices in which the AR imagery is projected through or reflected off the surfaces of the eyewear's lenspieces.[33][34][35]
HUD

Headset computer

A head-up display (HUD) is a transparent display that presents data without requiring users to look away from their usual viewpoints. A precursor technology to augmented reality, heads-up displays were first developed for pilots in the 1950s, projecting simple flight data into their line of sight, thereby enabling them to keep their "heads up" and not look down at the instruments. Near-eye augmented reality devices can be used as portable head-up displays as they can show data, information, and images while the user views the real world. Many definitions of augmented reality only define it as overlaying the information.[36][37] This is basically what a head-up display does; however, practically speaking, augmented reality is expected to include registration and tracking between the superimposed perceptions, sensations, information, data, and images and some portion of the real world.[38]

CrowdOptic, an existing app for smartphones, applies algorithms and triangulation techniques to photo metadata including GPS position, compass heading, and a time stamp to arrive at a relative significance value for photo objects.[39] CrowdOptic technology can be used by Google Glass users to learn where to look at a given point in time.[40]

A number of smartglasses have been launched for augmented reality. Because their controls are limited, smartglasses are primarily designed for micro-interactions like reading a text message, and are still far from the more well-rounded applications of augmented reality.[41] In January 2015, Microsoft introduced HoloLens, an independent smartglasses unit. Brian Blau, Research Director of Consumer Technology and Markets at Gartner, said that "Out of all the head-mounted displays that I've tried in the past couple of decades, the HoloLens was the best in its class."[42] First impressions and opinions were generally that HoloLens is a superior device to the Google Glass, and manages to do several things "right" in which Glass failed.[42][43]
Contact lenses
Contact lenses that display AR imaging are in development. These bionic contact lenses might contain the elements for display embedded into the lens, including integrated circuitry, LEDs and an antenna for wireless communication. The first contact lens display was reported in 1999,[44] with further prototypes following in 2010 and 2011.[45][46][47][48] Another version of contact lenses, in development for the U.S. military, is designed to function with AR spectacles, allowing soldiers to focus on close-to-the-eye AR images on the spectacles and distant real-world objects at the same time.[49][50]

The futuristic short film Sight[51] features contact lens-like augmented reality devices.[52][53]

Many scientists have been working on contact lenses capable of different technological feats. Samsung has been working on such a contact lens as well. This lens, when finished, is meant to have a built-in camera.[54] The design is intended to let the wearer blink to control its recording interface, and to link with the wearer's smartphone to review footage and control the lens separately. The lens would feature a camera or a sensor inside of it, which reportedly could be anything from a light sensor to a temperature sensor.

In augmented reality, a distinction is made between two distinct modes of tracking, known as marker and markerless. Markers are visual cues which trigger the display of the virtual information.[55] A piece of paper with some distinct geometry can be used: the camera recognizes the geometry by identifying specific points in the drawing. Markerless tracking, also called instant tracking, does not use markers. Instead, the user positions the object in the camera view, preferably on a horizontal plane. It uses sensors in mobile devices to accurately detect the real-world environment, such as the locations of walls and points of intersection.[56]
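
As an illustration of the marker-based approach, the following minimal sketch detects fiducial markers in a live camera feed with OpenCV's ArUco module. It is a sketch under stated assumptions: the opencv-contrib-python package (version 4.7 or later) is installed, the default camera is used, and the dictionary choice is arbitrary.

```python
import cv2

# Minimal marker-based tracking sketch: detect ArUco fiducial markers in a
# live camera feed and outline them. The detected marker corners are the
# "specific points" an AR system would use to anchor virtual content.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # default camera (an assumption)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        # Draw marker outlines and IDs; a real AR application would instead
        # estimate the camera pose here and render virtual objects.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("marker tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```
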
Virtual retinal display
A virtual retinal display (VRD) is a personal display device under development at the University of Washington's Human Interface Technology Laboratory under Dr. Thomas A. Furness III.[57] With this technology, a display is scanned directly onto the retina of a viewer's eye. This results in bright images with high resolution and high contrast. The viewer sees what appears to be a conventional display floating in space.[58]

Several tests were conducted to analyze the safety of the VRD.[57] In one test, patients with partial loss of vision, having either macular degeneration (a disease that degenerates the retina) or keratoconus, were selected to view images using the technology. In the macular degeneration group, 5 out of 8 subjects preferred the VRD images to the CRT or paper images, finding them better and brighter and seeing equal or better resolution levels. The keratoconus patients could all resolve smaller lines in several line tests using the VRD as opposed to their own correction. They also found the VRD images to be easier to view and sharper. As a result of these tests, the virtual retinal display is considered a safe technology.

Virtual retinal display creates images that can be seen in ambient daylight and ambient room light. The VRD is considered a preferred candidate for use in a surgical display due to its combination of high resolution and high contrast and brightness. Additional tests show high potential for VRD to be used as a display technology for patients that have low vision.
EyeTap
The EyeTap (also known as Generation-2 Glass[59]) captures rays of light that would otherwise pass through the center of the lens of the eye of the wearer, and substitutes synthetic computer-controlled light for each ray of real light.

The Generation-4 Glass[59] (Laser EyeTap) is similar to the VRD (i.e. it uses a computer-controlled laser light source) except that it also has infinite depth of focus and causes the eye itself to, in effect, function as both a camera and a display by way of exact alignment with the eye and resynthesis (in laser light) of rays of light entering the eye.[60]
Handheld
A handheld display employs a small display that fits in a user's hand. All handheld AR solutions to date opt for video see-through. Initially handheld AR employed fiducial markers,[61] and later GPS units and MEMS sensors such as digital compasses and six-degrees-of-freedom accelerometer–gyroscope units. Today SLAM markerless trackers such as PTAM are starting to come into use. Handheld display AR promises to be the first commercial success for AR technologies. The two main advantages of handheld AR are the portable nature of handheld devices and the ubiquitous nature of camera phones. The disadvantages are the physical constraints of the user having to hold the handheld device out in front of them at all times, as well as the distorting effect of the typically wide-angle mobile phone cameras when compared to the real world as viewed through the eye.[62] The issues arising from the user having to hold the handheld device (manipulability) and perceiving the visualisation correctly (comprehensibility) have been summarised into the HARUS usability questionnaire.[63]

Games such as Pokémon Go and Ingress utilize an Image Linked Map (ILM) interface, where approved geotagged locations appear on a stylized map for the user to interact with.[64]
Spatial
Spatial augmented reality (SAR) augments real-world objects and scenes without the use of special displays such as monitors, head-mounted displays or hand-held devices. SAR makes use of digital projectors to display graphical information onto physical objects. The key difference in SAR is that the display is separated from the users of the system. Because the displays are not associated with each user, SAR scales naturally up to groups of users, thus allowing for collocated collaboration between users.

Examples include shader lamps, mobile projectors, virtual tables, and smart projectors. Shader lamps mimic and augment reality by projecting imagery onto neutral objects, providing the opportunity to enhance the object's appearance with a simple unit: a projector, camera, and sensor.

Other applications include table and wall projections. One innovation, the Extended Virtual Table, separates the virtual from the real by including beam-splitter mirrors attached to the ceiling at an adjustable angle.[65] Virtual showcases, which employ beam-splitter mirrors together with multiple graphics displays, provide an interactive means of simultaneously engaging with the virtual and the real. Many more implementations and configurations make spatial augmented reality display an increasingly attractive interactive alternative.

An SAR system can display on any number of surfaces of an indoor setting at once. SAR supports both a graphical visualization and passive haptic sensation for the end users. Users are able to touch physical objects in a process that provides passive haptic sensation.[12][66][67][68]

Tracking

Modern mobile augmented-reality systems use one or more of the following tracking technologies: digital cameras and/or other optical sensors, accelerometers, GPS, gyroscopes, solid state compasses, RFID. These technologies offer varying levels of accuracy and precision. The most important is the position and orientation of the user's head. Tracking the user's hand(s) or a handheld input device can provide a 6DOF interaction technique.[69][70]
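
As a simplified illustration of how such sensor readings can be fused to track orientation, the sketch below implements a basic complementary filter that blends fast-but-drifting gyroscope integration with noisy-but-drift-free accelerometer tilt estimates. The blending weight and axis conventions are illustrative assumptions, not taken from any particular AR system.

```python
import math

# A minimal complementary-filter sketch for head-orientation tracking.
# The gyroscope gives smooth, fast angular rates but drifts over time;
# the accelerometer gives an absolute tilt reference but is noisy.
ALPHA = 0.98  # weight given to the integrated gyroscope estimate (assumed)

def complementary_filter(pitch, roll, gyro, accel, dt):
    """gyro: (gx, gy) angular rates in rad/s; accel: (ax, ay, az) in m/s^2."""
    # Integrate angular velocity (accurate short-term, drifts long-term).
    pitch_gyro = pitch + gyro[0] * dt
    roll_gyro = roll + gyro[1] * dt
    # Estimate tilt from the gravity vector (noisy, but does not drift).
    ax, ay, az = accel
    pitch_acc = math.atan2(ay, math.sqrt(ax * ax + az * az))
    roll_acc = math.atan2(-ax, az)
    # Blend the two estimates: mostly gyro, gently corrected by accel.
    pitch = ALPHA * pitch_gyro + (1 - ALPHA) * pitch_acc
    roll = ALPHA * roll_gyro + (1 - ALPHA) * roll_acc
    return pitch, roll
```
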

Networking

Mobile augmented reality applications are gaining popularity due to the wide adoption of mobile and especially wearable devices. However, they often rely on computationally intensive computer vision algorithms with extreme latency requirements. To compensate for the lack of computing power, offloading data processing to a distant machine is often desired. Computation offloading introduces new constraints in applications, especially in terms of latency and bandwidth. Although a plethora of real-time multimedia transport protocols exists, there is a need for support from network infrastructure as well.[71]
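
The offloading pattern itself can be sketched minimally: the device sends each compressed camera frame to a remote server and waits for the processed result. The host name, port, and result format below are placeholder assumptions, and the length-prefixed framing is just one simple way to keep message boundaries intact over TCP.

```python
import socket
import struct

def recv_exact(sock, n):
    """Read exactly n bytes from the socket (TCP may deliver fewer per call)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed the connection")
        buf += chunk
    return buf

def offload_frame(sock, jpeg_bytes):
    """Send one JPEG-encoded frame; return the server's processed result."""
    # Length-prefix each message so frame boundaries survive TCP streaming.
    sock.sendall(struct.pack("!I", len(jpeg_bytes)) + jpeg_bytes)
    (result_len,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, result_len)  # e.g., a serialized pose estimate

# Hypothetical offloading server; the round trip to it bounds the
# end-to-end latency budget of the AR application.
sock = socket.create_connection(("ar-offload.example.com", 9000))
```
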

Input devices

Techniques include speech recognition systems that translate a user's spoken words into computer instructions, and gesture recognition systems that interpret a user's body movements by visual detection or from sensors embedded in a peripheral device such as a wand, stylus, pointer, glove or other body wear.[72][73][74][75] Products which are trying to serve as a controller of AR headsets include Wave by Seebright Inc. and Nimble by Intugine Technologies.

Computer

The computer analyzes the sensed visual and other data to synthesize and position augmentations. Computers are responsible for the graphics that go with augmented reality. Augmented reality uses computer-generated imagery, which has a striking effect on the way the real world is shown. As technology and computers improve, augmented reality is expected to drastically change our perspective of the real world.[76] According to Time, it is predicted that in about 15–20 years augmented reality and virtual reality will become the primary mode of computer interaction.[77] Computers are improving at a very fast rate, opening new ways to improve other technology. The more that computers progress, the more flexible and common augmented reality will become in our society. Computers are the core of augmented reality.

The computer receives data from sensors which determine the relative position of an object's surface. This translates to an input to the computer which then outputs to the users by adding something that would otherwise not be there. The computer comprises memory and a processor.[79] The computer takes the scanned environment, generates images or a video, and puts it on the receiver for the observer to see. The fixed marks on an object's surface are stored in the memory of the computer, and the computer also draws from its memory to present images realistically to the onlooker. A well-known example of this is the Pepsi Max AR bus shelter.[80]

Software and algorithms

A key measure of AR systems is how realistically they integrate augmentations with the real world. The software must derive real world coordinates, independent from the camera, from camera images. That process is called image registration, and uses different methods of computer vision, mostly related to video tracking.[81][82] Many computer vision methods of augmented reality are inherited from visual odometry.

Usually those methods consist of two parts. The first stage is to detect interest points, fiducial markers or optical flow in the camera images. This step can use feature detection methods like corner detection, blob detection, edge detection or thresholding, and other image processing methods.[83][84] The second stage restores a real-world coordinate system from the data obtained in the first stage. Some methods assume objects with known geometry (or fiducial markers) are present in the scene. In some of those cases the scene's 3D structure should be calculated beforehand. If part of the scene is unknown, simultaneous localization and mapping (SLAM) can map relative positions. If no information about scene geometry is available, structure from motion methods like bundle adjustment are used. Mathematical methods used in the second stage include projective (epipolar) geometry, geometric algebra, rotation representation with the exponential map, Kalman and particle filters, nonlinear optimization, and robust statistics.[citation needed]
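
The two stages can be made concrete with a short sketch using OpenCV: stage one matches ORB interest points against a known planar reference image, and stage two recovers the camera pose from those matches with a RANSAC PnP solver. The reference image file and the calibrated camera parameters are assumptions for illustration, not part of any specific AR system.

```python
import cv2
import numpy as np

# Stage 1 setup: an interest-point detector and a planar reference target
# (e.g., a printed picture of known size). "reference.png" and the
# calibration parameters passed to register() are hypothetical placeholders.
orb = cv2.ORB_create(nfeatures=1000)
reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
ref_kp, ref_desc = orb.detectAndCompute(reference, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def register(frame_gray, camera_matrix, dist_coeffs):
    """Return the camera pose (rvec, tvec) relative to the reference, or None."""
    # Stage 1: detect interest points in the frame and match them to the reference.
    kp, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return None
    matches = sorted(matcher.match(ref_desc, desc), key=lambda m: m.distance)[:50]
    if len(matches) < 8:
        return None
    # Stage 2: restore real-world coordinates. Reference keypoints lie on the
    # z = 0 plane of the planar target, so the matches give 3D-2D point pairs.
    obj_pts = np.float32([[*ref_kp[m.queryIdx].pt, 0.0] for m in matches])
    img_pts = np.float32([kp[m.trainIdx].pt for m in matches])
    ok, rvec, tvec, _inliers = cv2.solvePnPRansac(
        obj_pts, img_pts, camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```
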

Augmented Reality Markup Language (ARML) is a data standard developed within the Open Geospatial Consortium (OGC),[85] which consists of XML grammar to describe the location and appearance of virtual objects in the scene, as well as ECMAScript bindings to allow dynamic access to properties of virtual objects.
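
As a rough illustration of the kind of document ARML defines, the sketch below parses a simplified ARML-style fragment using Python's standard library. The element names and namespaces follow published ARML 2.0 examples, but the embedded fragment is an illustrative assumption rather than a verbatim, standard-conformant file.

```python
import xml.etree.ElementTree as ET

# A simplified ARML-style fragment: a named Feature anchored to a
# geographic point. Element names are based on ARML 2.0 examples.
ARML_DOC = """
<arml xmlns="http://www.opengis.net/arml/2.0"
      xmlns:gml="http://www.opengis.net/gml/3.2">
  <ARElements>
    <Feature id="eiffeltower">
      <name>Eiffel Tower</name>
      <anchors>
        <Geometry>
          <gml:Point>
            <gml:pos>48.858 2.294</gml:pos>
          </gml:Point>
        </Geometry>
      </anchors>
    </Feature>
  </ARElements>
</arml>
"""

ns = {"arml": "http://www.opengis.net/arml/2.0",
      "gml": "http://www.opengis.net/gml/3.2"}
root = ET.fromstring(ARML_DOC)
for feature in root.iter("{http://www.opengis.net/arml/2.0}Feature"):
    name = feature.find("arml:name", ns).text
    pos = feature.find(".//gml:pos", ns).text
    print(name, "anchored at", pos)  # -> Eiffel Tower anchored at 48.858 2.294
```
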

To enable rapid development of augmented reality applications, some software development kits (SDKs) have emerged.[86][87] A few SDKs such as CloudRidAR[88] leverage cloud computing for performance improvement. AR SDKs are offered by Vuforia,[89] ARToolKit, Catchoom CraftAR,[90] Mobinett AR,[91] Wikitude,[92] Blippar,[93] Layar,[94] Meta,[95][96] and ARLab.[97]

Development

The implementation of augmented reality in consumer products requires considering the design of the applications and the related constraints of the technology platform. Since AR systems rely heavily on the immersion of the user and the interaction between the user and the system, design can facilitate the adoption of virtuality. For most augmented reality systems, a similar design guideline can be followed. The following lists some considerations for designing augmented reality applications:

Environmental/context design[98]

Context Design focuses on the end-user’s physical surrounding, spatial space, and accessibility that may play a role when using the AR system.  Designers should be aware of the possible physical scenarios the end-user may be in such as:
  • Public, in which the user uses their whole body to interact with the software
  • Personal, in which the user uses a smartphone in a public space
  • Intimate, in which the user is sitting at a desktop and is not really in motion
  • Private, in which the user is wearing a wearable device.
By evaluating each physical scenario, potential safety hazards can be avoided and changes can be made to further improve the end-user's immersion. UX designers will have to define user journeys for the relevant physical scenarios and define how the interface reacts to each.

Especially in AR systems, it is vital to also consider the spatial space and the surrounding elements that change the effectiveness of the AR technology. Environmental elements such as lighting and sound can prevent the sensors of AR devices from detecting necessary data and ruin the immersion of the end-user.[99]

Another aspect of context design involves the design of the system's functionality and its ability to accommodate user preferences.[100][101] While accessibility tools are common in basic application design, some consideration should be made when designing time-limited prompts (to prevent unintentional operations), audio cues and overall engagement time. It is important to note that in some situations, the application's functionality may hinder the user's ability. For example, applications that are used while driving should reduce the amount of user interaction and use audio cues instead.

Interaction design

Interaction design in augmented reality technology centers on the user's engagement with the end product to improve the overall user experience and enjoyment. The purpose of interaction design is to avoid alienating or confusing the user by organising the information presented. Since user interaction relies on the user's input, designers must make system controls easier to understand and accessible. A common technique to improve usability for augmented reality applications is to discover the frequently accessed areas of the device's touch display and design the application to match those areas of control.[102] It is also important to structure the user journey maps and the flow of information presented, which reduces the system's overall cognitive load and greatly improves the learning curve of the application.[103]

In interaction design, it is important for developers to utilize augmented reality technology that complements the system's function or purpose.[104] For instance, the utilization of exciting AR filters and the design of the unique sharing platform in Snapchat enable users to enhance their social interactions. In other applications that require users to understand the focus and intent, designers can employ a reticle or raycast from the device.[100] Moreover, augmented reality developers may find it appropriate to have digital elements scale or react to the direction of the camera and the context of objects that are detected.[99]

The most exciting factor of augmented reality technology is the ability to utilize the introduction of 3D space. This means that a user can potentially access multiple copies of 2D interfaces within a single AR application.[99] Since AR applications can be collaborative, a user can also connect to another person's device and view or manipulate virtual objects in that person's context.

Visual design

In general, visual design is the appearance of the developing application that engages the user. To improve the graphic interface elements and user interaction, developers may use visual cues to inform users which elements of the UI are designed to be interacted with and how to interact with them. Since navigating an AR application may appear difficult and seem frustrating, visual cue design can make interactions seem more natural.[98]

In some augmented reality applications that use a 2D device as an interactive surface, the 2D control environment does not translate well into 3D space, making users hesitant to explore their surroundings. To solve this issue, designers should apply visual cues to assist and encourage users to explore their surroundings.

It is important to note the two main objects in AR when developing AR applications: 3D volumetric objects that are manipulatable and realistically interact with light and shadow; and animated media imagery such as images and videos, which are mostly traditional 2D media rendered in a new context for augmented reality.[98] When virtual objects are projected onto a real environment, it is challenging for augmented reality application designers to ensure a perfectly seamless integration relative to the real-world environment, especially with 2D objects. As such, designers can add weight to objects, use depth maps, and choose different material properties that highlight the object's presence in the real world. Another visual design technique is using different lighting or casting shadows to improve overall depth judgment. For instance, a common lighting technique is simply placing a light source overhead at the 12 o'clock position to create shadows upon virtual objects.[98]

Possible applications

Augmented reality has been explored for many applications.[105] Since the 1970s and early 1980s, Steve Mann has developed technologies meant for everyday use i.e. "horizontal" across all applications rather than a specific "vertical" market. Examples include Mann's "EyeTap Digital Eye Glass", a general-purpose seeing aid that does dynamic-range management (HDR vision) and overlays, underlays, simultaneous augmentation and diminishment (e.g. diminishing the electric arc while looking at a welding torch).[106]

Literature


An example of an AR code containing a QR code

The first description of AR as it is known today was in Virtual Light, the 1994 novel by William Gibson. In 2011, AR was blended with poetry by ni ka from Sekai Camera in Tokyo, Japan. The prose of these AR poems comes from Paul Celan's "Die Niemandsrose", expressing the aftermath of the 2011 Tōhoku earthquake and tsunami.[107][108][109]

Archaeology

AR has been used to aid archaeological research. By augmenting archaeological features onto the modern landscape, AR allows archaeologists to formulate possible site configurations from extant structures.[110] Computer-generated models of ruins, buildings, landscapes or even ancient people have been recycled into early archaeological AR applications.[111][112][113] For example, implementing a system like VITA (Visual Interaction Tool for Archaeology) allows users to imagine and investigate instant excavation results without leaving their home. Each user can collaborate by mutually "navigating, searching, and viewing data." Hrvoje Benko, a researcher in the computer science department at Columbia University, points out that these particular systems and others like them can provide "3D panoramic images and 3D models of the site itself at different excavation stages." All the while, the system organizes much of the data in a collaborative way that is easy to use. Collaborative AR systems supply multimodal interactions that combine the real world with virtual images of both environments.[114]

Architecture

AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed onto a real-life local view of a property before the physical building is constructed there; this was demonstrated publicly by Trimble Navigation in 2004. AR can also be employed within an architect's workspace, rendering animated 3D visualizations of their 2D drawings. Architecture sight-seeing can be enhanced with AR applications, allowing users viewing a building's exterior to virtually see through its walls, viewing its interior objects and layout.[115][116][117]

With the continual improvements to GPS accuracy, businesses are able to use augmented reality to visualize georeferenced models of construction sites, underground structures, cables and pipes using mobile devices.[118] Augmented reality is applied to present new projects, to solve on-site construction challenges, and to enhance promotional materials.[119] Examples include the Daqri Smart Helmet, an Android-powered hard hat used to create augmented reality for the industrial worker, including visual instructions, real-time alerts, and 3D mapping.

Following the Christchurch earthquake, the University of Canterbury released CityViewAR,[120] which enabled city planners and engineers to visualize buildings that had been destroyed.[121] Not only did this provide planners with tools to reference the previous cityscape, but it also served as a reminder of the magnitude of the devastation caused, as entire buildings had been demolished.

Visual art

AR applied in the visual arts allows objects or places to trigger artistic multidimensional experiences and interpretations of reality.

AR technology aided the development of eye tracking technology[122] to translate a disabled person's eye movements into drawings on a screen.[123]

Commerce


The AR-Icon can be used as a marker on print as well as on online media. It signals the viewer that digital content is behind it. The content can be viewed with a smartphone or tablet.

AR is used to integrate print and video marketing. Printed marketing material can be designed with certain "trigger" images that, when scanned by an AR-enabled device using image recognition, activate a video version of the promotional material. A major difference between augmented reality and straightforward image recognition is that one can overlay multiple media at the same time in the view screen, such as social media share buttons, in-page video, and even audio and 3D objects. Traditional print-only publications are using augmented reality to connect many different types of media.[124][125][126][127][128]

AR can enhance product previews such as allowing a customer to view what's inside a product's packaging without opening it.[129] AR can also be used as an aid in selecting products from a catalog or through a kiosk. Scanned images of products can activate views of additional content such as customization options and additional images of the product in its use.[130][131]

By 2010, virtual dressing rooms had been developed for e-commerce.[132]

Augment SDK
Augment SDK offers brands and retailers the capability to personalize their customers' shopping experience by embedding AR product visualization into their eCommerce platforms.

In 2012, a mint used AR techniques to market a commemorative coin for Aruba. The coin itself was used as an AR trigger, and when held in front of an AR-enabled device it revealed additional objects and layers of information that were not visible without the device.[133][134]

In 2013, L'Oreal Paris used CrowdOptic technology to create an augmented reality experience at the seventh annual Luminato Festival in Toronto, Canada.[40]

In 2014, L'Oreal brought the AR experience to a personal level with their "Makeup Genius" app. It allowed users to try out make-up and beauty styles via a mobile device.[135]

In 2015, the Bulgarian startup iGreet developed its own AR technology and used it to make the first premade "live" greeting card. A traditional paper card was augmented with digital content which was revealed by using the iGreet app.[136][137]

Education

In educational settings, AR has been used to complement a standard curriculum. Text, graphics, video, and audio may be superimposed into a student's real-time environment. Textbooks, flashcards and other educational reading material may contain embedded "markers" or triggers that, when scanned by an AR device, produce supplementary information rendered in a multimedia format for the student.[138][139][140] This makes AR a good alternative method for presenting information, and Multimedia Learning Theory can be applied.[141]

As AR evolves, students can participate interactively and engage with knowledge more authentically. Instead of remaining passive recipients, students can become active learners, able to interact with their learning environment. Computer-generated simulations of historical events allow students to explore and learn the details of each significant area of the event site.[142]

In higher education, Construct3D, a Studierstube system, allows students to learn mechanical engineering concepts, math or geometry.[143] Chemistry AR apps allow students to visualize and interact with the spatial structure of a molecule using a marker object held in the hand.[144] Anatomy students can visualize different systems of the human body in three dimensions.[145]

Augmented reality technology enhances remote collaboration, allowing students and instructors in different locales to interact by sharing a common virtual learning environment populated by virtual objects and learning materials.[146]

Primary school children learn easily from interactive experiences. Astronomical constellations and the movements of objects in the solar system were oriented in 3D and overlaid in the direction the device was held, and expanded with supplemental video information. Paper-based science book illustrations could seem to come alive as video without requiring the child to navigate to web-based materials.

While some educational apps were available for AR in 2016, it was not broadly used. Apps that leverage augmented reality to aid learning included SkyView for studying astronomy,[147] AR Circuits for building simple electric circuits,[148] and SketchAr for drawing.[149]

AR would also be a way for parents and teachers to achieve their goals for modern education, which might include providing more individualized and flexible learning, making closer connections between what is taught at school and the real world, and helping students to become more engaged in their own learning.

A recent study compared the functionalities of augmented reality tools with potential for education.[150]

Emergency management/search and rescue

Augmented reality systems are used in public safety situations, from super storms to suspects at large.

As early as 2009, two articles from Emergency Management magazine discussed the power of this technology for emergency management. The first was "Augmented Reality--Emerging Technology for Emergency Management" by Gerald Baron.[151] Per Adam Crowe: "Technologies like augmented reality (ex: Google Glass) and the growing expectation of the public will continue to force professional emergency managers to radically shift when, where, and how technology is deployed before, during, and after disasters."[152]

Another early example was a search aircraft looking for a lost hiker in rugged mountain terrain. Augmented reality systems provided aerial camera operators with a geographic awareness of forest road names and locations blended with the camera video. The camera operator was better able to search for the hiker knowing the geographic context of the camera image. Once located, the operator could more efficiently direct rescuers to the hiker's location because the geographic position and reference landmarks were clearly labeled.[153]

Social interaction

AR can be used to facilitate social interaction. An augmented reality social network framework called Talk2Me enables people to disseminate information and view others' advertised information in an augmented reality way. The timely and dynamic information sharing and viewing functionalities of Talk2Me help users initiate conversations and make friends with people in physical proximity.[154]

Video games


Merchlar's mobile game Get On Target uses a trigger image as a fiducial marker.

The gaming industry embraced AR technology. A number of games were developed for prepared indoor environments, such as AR air hockey, Titans of Space, collaborative combat against virtual enemies, and AR-enhanced pool table games.[155][156][157]

Augmented reality allowed video game players to experience digital game play in a real world environment. Companies and platforms like Niantic and Proxy42 emerged as major augmented reality gaming creators.[158][159] Niantic is notable for releasing the record-breaking game Pokémon Go.[160] Disney has partnered with Lenovo to create the augmented reality game Star Wars: Jedi Challenges that works with a Lenovo Mirage AR headset, a tracking sensor and a Lightsaber controller, scheduled to launch in December 2017.[161]

Industrial design

AR allows industrial designers to experience a product's design and operation before completion. Volkswagen has used AR for comparing calculated and actual crash test imagery.[162] AR has been used to visualize and modify car body structure and engine layout. It has also been used to compare digital mock-ups with physical mock-ups for finding discrepancies between them.[163][164]

Medical

Since 2005, a device called a near-infrared vein finder that films subcutaneous veins, processes and projects the image of the veins onto the skin has been used to locate veins.[165][166]

AR provides surgeons with patient monitoring data in the style of a fighter pilot's heads-up display, and allows patient imaging records, including functional videos, to be accessed and overlaid. Examples include a virtual X-ray view based on prior tomography or on real-time images from ultrasound and confocal microscopy probes,[167] visualizing the position of a tumor in the video of an endoscope,[168] or radiation exposure risks from X-ray imaging devices.[169][170] AR can enhance viewing a fetus inside a mother's womb.[171] Siemens, Karl Storz and IRCAD have developed a system for laparoscopic liver surgery that uses AR to view sub-surface tumors and vessels.[172] AR has been used for cockroach phobia treatment.[173] Patients wearing augmented reality glasses can be reminded to take medications.[174] Virtual reality has been seen as promising in the medical field since the 90s.[175] Augmented reality can be very helpful in the medical field: it can provide crucial information to a doctor or surgeon without having them take their eyes off the patient. On 30 April 2015, Microsoft announced the Microsoft HoloLens, their first attempt at augmented reality. The HoloLens has advanced through the years; it has been used to project holograms for near-infrared fluorescence-based image-guided surgery.[176] As augmented reality advances, it is increasingly implemented in medical uses. Augmented reality and other computer-based utilities are being used today to help train medical professionals,[177] and the creation of Google Glass and Microsoft HoloLens has helped push augmented reality into medical education.

Spatial immersion and interaction

Augmented reality applications, running on handheld devices utilized as virtual reality headsets, can also digitize human presence in space and provide a computer-generated model of it, in a virtual space where users can interact and perform various actions. Such capabilities are demonstrated by "Project Anywhere", developed by a postgraduate student at ETH Zurich, which was dubbed an "out-of-body experience".[178][179][180]

Flight training

Building on decades of perceptual-motor research in experimental psychology, researchers at the Aviation Research Laboratory of the University of Illinois at Urbana-Champaign used augmented reality in the form of a flight path in the sky to teach flight students how to land a flight simulator. An adaptive augmented schedule in which students were shown the augmentation only when they departed from the flight path proved to be a more effective training intervention than a constant schedule.[181][182] Flight students taught to land in the simulator with the adaptive augmentation learned to land a light aircraft more quickly than students with the same amount of landing training in the simulator but with constant augmentation or without any augmentation.[181]

Military


Augmented Reality System for Soldier ARC4 (USA)

An interesting early application of AR occurred when Rockwell International created video map overlays of satellite and orbital debris tracks to aid in space observations at Air Force Maui Optical System. In their 1993 paper "Debris Correlation Using the Rockwell WorldView System" the authors describe the use of map overlays applied to video from space surveillance telescopes. The map overlays indicated the trajectories of various objects in geographic coordinates. This allowed telescope operators to identify satellites, and also to identify and catalog potentially dangerous space debris.[183]

Starting in 2003, the US Army integrated the SmartCam3D augmented reality system into the Shadow Unmanned Aerial System to aid sensor operators using telescopic cameras to locate people or points of interest. The system combined fixed geographic information, including street names, points of interest, airports, and railroads, with live video from the camera system. The system offered a "picture in picture" mode that allows it to show a synthetic view of the area surrounding the camera's field of view. This helps solve a problem in which the field of view is so narrow that it excludes important context, as if "looking through a soda straw". The system displays real-time friend/foe/neutral location markers blended with live video, providing the operator with improved situational awareness.

Researchers at USAF Research Lab (Calhoun, Draper et al.) found an approximately two-fold increase in the speed at which UAV sensor operators found points of interest using this technology.[184] This ability to maintain geographic awareness quantitatively enhances mission efficiency. The system is in use on the US Army RQ-7 Shadow and the MQ-1C Gray Eagle Unmanned Aerial Systems.

In combat, AR can serve as a networked communication system that renders useful battlefield data onto a soldier's goggles in real time. From the soldier's viewpoint, people and various objects can be marked with special indicators to warn of potential dangers. Virtual maps and 360° view camera imaging can also be rendered to aid a soldier's navigation and battlefield perspective, and this can be transmitted to military leaders at a remote command center.[185]

Navigation


LandForm video map overlay marking runways, roads, and buildings during 1999 helicopter flight test

The NASA X-38 was flown using a Hybrid Synthetic Vision system that overlaid map data on video to provide enhanced navigation for the spacecraft during flight tests from 1998 to 2002. It used the LandForm software and was useful for times of limited visibility, including an instance when the video camera window frosted over, leaving astronauts to rely on the map overlays.[186] The LandForm software was also test flown at the Army Yuma Proving Ground in 1999. In the photo at right, one can see the map markers indicating runways, the air traffic control tower, taxiways, and hangars overlaid on the video.[187]

AR can augment the effectiveness of navigation devices. Information can be displayed on an automobile's windshield indicating destination directions and meter, weather, terrain, road conditions and traffic information as well as alerts to potential hazards in their path.[188][189][190] Aboard maritime vessels, AR can allow bridge watch-standers to continuously monitor important information such as a ship's heading and speed while moving throughout the bridge or performing other tasks.[191]

Workplace

Augmented reality may have a positive impact on work collaboration, as people may be inclined to interact more actively with their learning environment. It may also encourage tacit knowledge renewal, which makes firms more competitive. AR was used to facilitate collaboration among distributed team members via conferences with local and virtual participants. AR tasks included brainstorming and discussion meetings utilizing common visualization via touch screen tables, interactive digital whiteboards, shared design spaces, and distributed control rooms.[192][193][194]

Complex tasks such as assembly, maintenance, and surgery were simplified by inserting additional information into the field of view. For example, labels were displayed on parts of a system to clarify operating instructions for a mechanic performing maintenance on the system.[195][196] Assembly lines benefited from the usage of AR. In addition to Boeing, BMW and Volkswagen were known for incorporating this technology into assembly lines for monitoring process improvements.[197][198][199] Big machines are difficult to maintain because of their multiple layers and structures. AR permits people to look through the machine as if with an X-ray, pointing them to the problem right away.[200]

The new wave of professionals, the Millennial workforce, demands more efficient knowledge sharing solutions and easier access to rapidly growing knowledge bases. Augmented reality offers a solution to that.[201]

Broadcast and live events

Weather visualizations were the first application of augmented reality to television. It has now become common in weathercasting to display full motion video of images captured in real-time from multiple cameras and other imaging devices. Coupled with 3D graphics symbols and mapped to a common virtual geospace model, these animated visualizations constitute the first true application of AR to TV.

AR has become common in sports telecasting. Sports and entertainment venues are provided with see-through and overlay augmentation through tracked camera feeds for enhanced viewing by the audience. Examples include the yellow "first down" line seen in television broadcasts of American football games showing the line the offensive team must cross to receive a first down. AR is also used in association with football and other sporting events to show commercial advertisements overlaid onto the view of the playing area. Sections of rugby fields and cricket pitches also display sponsored images. Swimming telecasts often add a line across the lanes to indicate the position of the current record holder as a race proceeds to allow viewers to compare the current race to the best performance. Other examples include hockey puck tracking and annotations of racing car performance and snooker ball trajectories.[81][202]

Augmented reality for Next Generation TV allows viewers to interact with the programs they are watching. They can place objects into an existing program and interact with them, such as moving them around. Objects include avatars of real persons in real time who are also watching the same program.

AR has been used to enhance concert and theater performances. For example, artists allow listeners to augment their listening experience by adding their performance to that of other bands/groups of users.[203][204][205]

Tourism and sightseeing

Travelers may use AR to access real-time informational displays regarding a location, its features, and comments or content provided by previous visitors. Advanced AR applications include simulations of historical events, places, and objects rendered into the landscape.[206][207][208]

AR applications linked to geographic locations present location information by audio, announcing features of interest at a particular site as they become visible to the user.[209][210][211]

Companies can use AR to attract tourists to particular areas that they may not be familiar with by name. Tourists will be able to experience beautiful landscapes in first person with the use of AR devices. Companies like Phocuswright plan to use such technology in order to expose the lesser-known but beautiful areas of the planet, and in turn, increase tourism. Other companies, such as Matoke Tours, have already developed an application where the user can see 360-degree views from several different places in Uganda. Matoke Tours and Phocuswright have the ability to display their apps on virtual reality headsets like the Samsung Gear VR and Oculus Rift.[212]

Translation

AR systems such as Word Lens can interpret the foreign text on signs and menus and, in a user's augmented view, re-display the text in the user's language. Spoken words of a foreign language can be translated and displayed in a user's view as printed subtitles.[213][214][215]

Music

It has been suggested that augmented reality may be used in new methods of music production, mixing, control and visualization.[216][217][218][219]

A tool for 3D music creation in clubs that, in addition to regular sound mixing features, allows the DJ to play dozens of sound samples, placed anywhere in 3D space, has been conceptualized.[220]

Leeds College of Music teams have developed an AR app that can be used with Audient desks and allow students to use their smartphone or tablet to put layers of information or interactivity on top of an Audient mixing desk.[221]

ARmony is a software package that makes use of augmented reality to help people to learn an instrument.[222]

In a proof-of-concept project Ian Sterling, interaction design student at California College of the Arts, and software engineer Swaroop Pal demonstrated a HoloLens app whose primary purpose is to provide a 3D spatial UI for cross-platform devices — the Android Music Player app and Arduino-controlled Fan and Light — and also allow interaction using gaze and gesture control.[223][224][225][226]

AR Mixer is an app that allows one to select and mix between songs by manipulating objects – such as changing the orientation of a bottle or can.[227]

In a video, Uriel Yehezkel demonstrates using the Leap Motion controller and GECO MIDI to control Ableton Live with hand gestures, and states that by this method he was able to control more than 10 parameters simultaneously with both hands and take full control over the construction of the song, its emotion and energy.[228][229][better source needed]

A novel musical instrument that allows novices to play electronic musical compositions, interactively remixing and modulating their elements, by manipulating simple physical objects has been proposed.[230]

A system using explicit gestures and implicit dance moves to control the visual augmentations of a live music performance has been suggested, enabling more dynamic and spontaneous performances and, in combination with indirect augmented reality, leading to a more intense interaction between artist and audience.[231]

Research by members of the CRIStAL at the University of Lille makes use of augmented reality in order to enrich musical performance. The ControllAR project allows musicians to augment their MIDI control surfaces with the remixed graphical user interfaces of music software.[232] The Rouages project proposes to augment digital musical instruments in order to reveal their mechanisms to the audience and thus improve the perceived liveness.[233] Reflets is a novel augmented reality display dedicated to musical performances where the audience acts as a 3D display by revealing virtual content on stage, which can also be used for 3D musical interaction and collaboration.[234]

Retail

Augmented reality is becoming more frequently used for online advertising. Retailers offer the ability to upload a picture on their website and "try on" various clothes which are overlaid on the picture. Even further, companies such as Bodymetrics install dressing booths in department stores that offer full-body scanning. These booths render a 3-D model of the user, allowing consumers to view different outfits on themselves without the need to physically change clothes.[235]

Snapchat

Snapchat users have access to augmented reality in the company's instant messaging app through the use of camera filters. In September 2017, Snapchat updated its app to include a camera filter that allowed users to render an animated, cartoon version of themselves called a "Bitmoji". These animated avatars are projected in the real world through the camera and can be photographed or video recorded.[236] In the same month, Snapchat also announced a new feature called "Sky Filters" available in its app. This feature makes use of augmented reality to alter the look of a picture taken of the sky, much like how users can apply the app's filters to other pictures. Users can choose from sky filters such as starry night, stormy clouds, beautiful sunsets, and rainbows.[237]

Privacy concerns

The concept of modern augmented reality depends on the ability of the device to record and analyze the environment in real time. Because of this, there are potential legal concerns over privacy. While the First Amendment to the United States Constitution allows for such recording in the name of public interest, the constant recording of an AR device makes it difficult to do so without also recording outside of the public domain. Legal complications would be found in areas where a right to a certain amount of privacy is expected or where copyrighted media are displayed.

In terms of individual privacy, there exists the ease of access to information that one should not readily possess about a given person. This is accomplished through facial recognition technology. Assuming that AR automatically passes information about persons that the user sees, anything from their social media activity, criminal record, and marital status could be exposed.[238]

Privacy-compliant image capture solutions can be deployed to temper the impact of constant filming on individual privacy.[239]

Notable researchers

  • Ivan Sutherland invented the first VR head-mounted display at Harvard University.
  • Steve Mann formulated an earlier concept of mediated reality in the 1970s and 1980s, using cameras, processors, and display systems to modify visual reality to help people see better (dynamic range management), building computerized welding helmets, as well as "augmediated reality" vision systems for use in everyday life. He is also an adviser to Meta.[240]
  • Louis Rosenberg developed one of the first known AR systems, called Virtual Fixtures, while working at the U.S. Air Force Armstrong Labs in 1991, and published the first study of how an AR system can enhance human performance.[241] Rosenberg's subsequent work at Stanford University in the early 90s was the first proof that virtual overlays, when registered and presented over a user's direct view of the real physical world, could significantly enhance human performance.[242][243][244]
  • Mike Abernathy pioneered one of the first successful augmented reality applications of video overlay using map data for space debris in 1993,[183] while at Rockwell International. He co-founded Rapid Imaging Software, Inc. and was the primary author of the LandForm system in 1995, and the SmartCam3D system.[186][187] LandForm augmented reality was successfully flight tested in 1999 aboard a helicopter and SmartCam3D was used to fly the NASA X-38 from 1999–2002. He and NASA colleague Francisco Delgado received the National Defense Industries Association Top5 awards in 2004.[245]
  • Steven Feiner, Professor at Columbia University, is the author of a 1993 paper on an AR system prototype, KARMA (the Knowledge-based Augmented Reality Maintenance Assistant), along with Blair MacIntyre and Doree Seligmann. He is also an advisor to Meta.[246]
  • Tracy McSheery, of Phasespace, developer of wide field of view AR lenses as used in Meta 2 and others.[247]
  • S. Ravela, B. Draper, J. Lim and A. Hanson developed a marker/fixture-less augmented reality system with computer vision in 1994. They augmented an engine block observed from a single video camera with annotations for repair, using model-based pose estimation, aspect graphs and visual feature tracking to dynamically register the model with the observed video.[248]
  • Francisco "Frank" Delgado is a NASA engineer and project manager specializing in human interface research and development. Starting in 1998, he conducted research into displays that combined video with synthetic vision systems (called hybrid synthetic vision at the time) that we recognize today as augmented reality systems for the control of aircraft and spacecraft. In 1999, he and colleague Mike Abernathy flight-tested the LandForm system aboard a US Army helicopter. Delgado oversaw integration of the LandForm and SmartCam3D systems into the X-38 Crew Return Vehicle.[186][187][249] In 2001, Aviation Week reported NASA astronauts' successful use of hybrid synthetic vision (augmented reality) to fly the X-38 during a flight test at Dryden Flight Research Center. The technology was used in all subsequent flights of the X-38. Delgado was co-recipient of the National Defense Industries Association 2004 Top 5 software of the year award for SmartCam3D.[245]
  • Bruce H. Thomas and Wayne Piekarski developed the Tinmith system in 1998.[250] Together with Steve Feiner and his MARS system, they pioneered outdoor augmented reality.
  • Mark Billinghurst is one of the world's leading[citation needed] augmented reality researchers. As director of the HIT Lab New Zealand (HIT Lab NZ) at the University of Canterbury, he has produced over 250 technical publications and presented demonstrations and courses at a wide variety of conferences.
  • Reinhold Behringer performed important early work in image registration for augmented reality, and prototype wearable testbeds for augmented reality. He also co-organized the First IEEE International Symposium on Augmented Reality in 1998 (IWAR'98), and co-edited one of the first books on augmented reality.[251][252][253]
  • Dieter Schmalstieg and Daniel Wagner developed a marker tracking system for mobile phones and PDAs in 2009.[254]

History

  • 1901: L. Frank Baum, an author, first mentions the idea of an electronic display/spectacles that overlays data onto real life (in this case, people); he names it a 'character marker'.[255]
  • 1957–62: Morton Heilig, a cinematographer, creates and patents a simulator called Sensorama with visuals, sound, vibration, and smell.[256]
  • 1968: Ivan Sutherland invents the head-mounted display and positions it as a window into a virtual world.[257]
  • 1975: Myron Krueger creates Videoplace to allow users to interact with virtual objects.
  • 1980: The research by Gavan Lintern of the University of Illinois is the first published work to show the value of a heads up display for teaching real-world flight skills.[181]
  • 1980: Steve Mann creates the first wearable computer, a computer vision system with text and graphical overlays on a photographically mediated scene.[258] See EyeTap. See Heads Up Display.
  • 1981: Dan Reitan geospatially maps multiple weather radar images and space-based and studio cameras to earth maps and abstract symbols for television weather broadcasts, bringing a precursor concept to augmented reality (mixed real/graphical images) to TV.[259]
  • 1984: In the film The Terminator, the Terminator uses a heads-up display in several parts of the film. In one part, he accesses a diagram of the gear system of the truck he gets into towards the end of the film.
  • 1987: Douglas George and Robert Morris create a working prototype of an astronomical telescope-based "heads-up display" system (a precursor concept to augmented reality) that superimposed multi-intensity star and celestial body images, along with other relevant information, over the actual sky view in the telescope eyepiece.[260][261]
  • 1989: Jaron Lanier creates VPL Research, an early commercial business around virtual worlds.
  • 1990: The term 'Augmented Reality' is attributed to Thomas P. Caudell, a former Boeing researcher.[262]
  • 1992: Louis Rosenberg developed one of the first functioning AR systems, called Virtual Fixtures, at the United States Air Force Research Laboratory—Armstrong, that demonstrated benefit to human perception.[263]
  • 1993: Steven Feiner, Blair MacIntyre and Doree Seligmann present an early paper on an AR system prototype, KARMA, at the Graphics Interface conference.
  • 1993: Mike Abernathy, et al., report the first use of augmented reality in identifying space debris using Rockwell WorldView by overlaying satellite geographic trajectories on live telescope video.[183]
  • 1993: A widely cited version of the paper above is published in Communications of the ACM – Special issue on computer augmented environments, edited by Pierre Wellner, Wendy Mackay, and Rich Gold.[264]
  • 1993: Loral WDL, with sponsorship from STRICOM, performed the first demonstration combining live AR-equipped vehicles and manned simulators. Unpublished paper, J. Barrilleaux, "Experiences and Observations in Applying Augmented Reality to Live Training", 1999.[265]
  • 1994: Julie Martin creates the first 'augmented reality theater production', Dancing in Cyberspace. Funded by the Australia Council for the Arts, it features dancers and acrobats manipulating body-sized virtual objects in real time, projected into the same physical space and performance plane. The acrobats appeared immersed within the virtual objects and environments. The installation used Silicon Graphics computers and a Polhemus sensing system.
  • 1995: S. Ravela et al. at University of Massachusetts introduce a vision-based system using monocular cameras to track objects (engine blocks) across views for augmented reality.
  • 1998: Spatial augmented reality introduced at the University of North Carolina at Chapel Hill by Ramesh Raskar, Greg Welch, and Henry Fuchs.[66]
  • 1999: Frank Delgado, Mike Abernathy et al. report successful flight test of LandForm software video map overlay from a helicopter at Army Yuma Proving Ground overlaying video with runways, taxiways, roads and road names.[186][187]
  • 1999: The US Naval Research Laboratory embarks on a decade-long research program called the Battlefield Augmented Reality System (BARS) to prototype some of the early wearable systems for dismounted soldiers operating in urban environments, for situation awareness and training.[266]
  • 1999: Hirokazu Kato (加藤 博一) creates ARToolKit at HITLab and demonstrates it at SIGGRAPH; the toolkit is later further developed by other HITLab scientists.
  • 2000: Bruce H. Thomas develops ARQuake, the first outdoor mobile AR game, demonstrating it at the International Symposium on Wearable Computers.
  • 2001: NASA X-38 flown using LandForm software video map overlays at Dryden Flight Research Center.[267]
  • 2004: Outdoor helmet-mounted AR system demonstrated by Trimble Navigation and the Human Interface Technology Laboratory.[117]
  • 2008: Wikitude AR Travel Guide launches on 20 Oct 2008 with the G1 Android phone.[268]
  • 2009: ARToolKit was ported to Adobe Flash (FLARToolkit) by Saqoosha, bringing augmented reality to the web browser.[269]
  • 2012: Launch of Lyteshot, an interactive AR gaming platform that utilizes smart glasses for game data.
  • 2013: Meta announces the Meta 1 developer kit, the first see-through AR display to reach the market.[citation needed]
  • 2013: Google announces an open beta test of its Google Glass augmented reality glasses. The glasses reach the Internet through Bluetooth, which connects to the wireless service on a user's cellphone. The glasses respond when a user speaks, touches the frame, or moves their head.[270][271]
  • 2014: Mahei creates the first generation of augmented reality enhanced educational toys.[272]
  • 2015: Microsoft announces Windows Holographic and the HoloLens augmented reality headset. The headset utilizes various sensors and a processing unit to blend high definition "holograms" with the real world.[273]
  • 2016: Niantic releases Pokémon Go for iOS and Android in July 2016. The game quickly becomes one of the most popular smartphone applications and in turn spikes the popularity of augmented reality games.[274]
  • 2017: Magic Leap announces the use of Digital Lightfield technology embedded into the Magic Leap One headset. The Creator's Edition headset includes the glasses and a computing pack worn on the belt.[275]

Ray Kurzweil

From Wikipedia, the free encyclopedia
 
Ray Kurzweil

Kurzweil on or prior to July 5, 2005

Born: Raymond Kurzweil, February 12, 1948 (age 70), Queens, New York City, U.S.
Nationality: American
Alma mater: Massachusetts Institute of Technology (B.S.)
Occupation: Author, entrepreneur, futurist and inventor
Employer: Google Inc.
Spouse(s): Sonya Rosenwald Kurzweil (1975–present)[1]

Raymond Kurzweil (/ˈkɜːrzwaɪl/ KURZ-wyle; born February 12, 1948) is an American author, computer scientist, inventor and futurist. Aside from futurism, he is involved in fields such as optical character recognition (OCR), text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments. He has written books on health, artificial intelligence (AI), transhumanism, the technological singularity, and futurism. Kurzweil is a public advocate for the futurist and transhumanist movements, and gives public talks to share his optimistic outlook on life extension technologies and the future of nanotechnology, robotics, and biotechnology.

Kurzweil was the principal inventor of a series of firsts, including the first omni-font optical character recognition system, the Kurzweil Reading Machine for the blind, the Kurzweil K250 music synthesizer, and early commercial large-vocabulary speech recognition software.
Kurzweil received the 1999 National Medal of Technology and Innovation, the United States' highest honor in technology, from President Clinton in a White House ceremony[6]. He was the recipient of the $500,000 Lemelson-MIT Prize for 2001,[7] the world's largest for innovation. And in 2002 he was inducted into the National Inventors Hall of Fame, established by the U.S. Patent Office. He has received twenty-one honorary doctorates, and honors from three U.S. presidents. Kurzweil has been described as a "restless genius"[8] by The Wall Street Journal and "the ultimate thinking machine"[9] by Forbes. PBS included Kurzweil as one of 16 "revolutionaries who made America"[10] along with other inventors of the past two centuries. Inc. magazine ranked him #8 among the "most fascinating" entrepreneurs in the United States and called him "Edison's rightful heir".[11]

Kurzweil has written seven books, five of which have been national bestsellers[12]. The Age of Spiritual Machines has been translated into 9 languages and was the #1 best-selling book on Amazon in science. Kurzweil's book The Singularity Is Near was a New York Times bestseller, and has been the #1 book on Amazon in both science and philosophy. Kurzweil speaks widely to audiences both public and private and regularly delivers keynote speeches at industry conferences like DEMO, SXSW, and TED. He maintains the news website KurzweilAI.net, which has over three million readers annually.[5]

Kurzweil has been employed by Google since 2012, where he is a "director of engineering".

Life, inventions, and business career

Early life

Ray Kurzweil grew up in the New York City borough of Queens, where he attended Kingsbury Elementary School (P.S. 188). He was born to secular Jewish parents who had emigrated from Austria just before the onset of World War II. He was exposed via Unitarian Universalism to a diversity of religious faiths during his upbringing.[13][citation needed] His Unitarian church had the philosophy of many paths to the truth: its religious education consisted of studying a single religion for six months before moving on to the next.[citation needed] His father, Fredric, was a concert pianist, a noted conductor, and a music educator. His mother, Hannah, was a visual artist. He has one sibling, his sister Enid.

Kurzweil decided he wanted to be an inventor at the age of five.[14] As a young boy, he kept an inventory of parts from various construction toys he had been given and old electronic gadgets he collected from neighbors. In his youth, Kurzweil was an avid reader of science fiction literature; between the ages of eight and ten, he read the entire Tom Swift Jr. series. At the age of seven or eight, he built a robotic puppet theater and a robotic game. He was involved with computers by the age of twelve (in 1960), when only a dozen computers existed in all of New York City, and built computing devices and statistical programs for the predecessor of Head Start.[15] At the age of fourteen, Kurzweil wrote a paper detailing his theory of the neocortex.[16] His parents were involved with the arts, and he is quoted in the documentary Transcendent Man[17] as saying that the household always produced discussions about the future and technology.

Kurzweil attended Martin Van Buren High School. During class, he often held his textbook open to appear to be participating, but instead focused on his own projects, hidden behind the book. His uncle, an engineer at Bell Labs, taught young Kurzweil the basics of computer science.[18] In 1963, at age fifteen, he wrote his first computer program.[19] He created pattern-recognition software that analyzed the works of classical composers and then synthesized its own songs in similar styles. In 1965, he was invited to appear on the CBS television program I've Got a Secret, where he performed a piano piece composed by a computer he had built.[20] Later that year, he won first prize in the International Science Fair for the invention,[21] and his submission of this first computer program, alongside several other projects, to the Westinghouse Science Talent Search made him one of its national winners, for which he was personally congratulated by President Lyndon B. Johnson during a White House ceremony. These activities collectively impressed upon Kurzweil the belief that nearly any problem could be overcome.[22]

Mid-life

While in high school, Kurzweil had corresponded with Marvin Minsky and was invited to visit him at MIT, which he did. Kurzweil also visited Frank Rosenblatt at Cornell.[23]

He went to MIT to study with Marvin Minsky and obtained a B.S. in computer science and literature there in 1970, having taken all of the computer programming courses offered (eight or nine) in his first year and a half.

In 1968, during his sophomore year at MIT, Kurzweil started a company that used a computer program to match high school students with colleges. The program, called the Select College Consulting Program, was designed by him and compared thousands of different criteria about each college with questionnaire answers submitted by each student applicant. Around this time, he sold the company to Harcourt, Brace & World for $100,000 (roughly $670,000 in 2013 dollars) plus royalties.[24]

In 1974, Kurzweil founded Kurzweil Computer Products, Inc. and led development of the first omni-font optical character recognition system, a computer program capable of recognizing text written in any normal font. Before that time, scanners had only been able to read text written in a few fonts. He decided that the best application of this technology would be to create a reading machine, which would allow blind people to understand written text by having a computer read it to them aloud. However, this device required the invention of two enabling technologies—the CCD flatbed scanner and the text-to-speech synthesizer. Development of these technologies was completed at other institutions such as Bell Labs, and on January 13, 1976, the finished product was unveiled during a news conference headed by him and the leaders of the National Federation of the Blind. Called the Kurzweil Reading Machine, the device covered an entire tabletop.

Kurzweil's next major business venture began in 1978, when Kurzweil Computer Products began selling a commercial version of the optical character recognition computer program. LexisNexis was one of the first customers, and bought the program to upload paper legal and news documents onto its nascent online databases.

Kurzweil sold Kurzweil Computer Products to Xerox, remaining a consultant for Xerox until 1995. The business later became known as ScanSoft and, after acquiring the speech assets of Lernout & Hauspie following the latter's legal and bankruptcy problems, is now Nuance Communications.

Kurzweil's next business venture was in the realm of electronic music technology. After a 1982 meeting with Stevie Wonder, in which the latter lamented the divide in capabilities and qualities between electronic synthesizers and traditional musical instruments, Kurzweil was inspired to create a new generation of music synthesizers capable of accurately duplicating the sounds of real instruments. Kurzweil Music Systems was founded in the same year, and in 1984, the Kurzweil K250 was unveiled. The machine was capable of imitating a number of instruments, and in tests musicians were unable to discern the difference between the Kurzweil K250 in piano mode and a normal grand piano.[25] The recording and mixing abilities of the machine, coupled with its ability to imitate different instruments, made it possible for a single user to compose and play an entire orchestral piece.

Kurzweil Music Systems was sold to South Korean musical instrument manufacturer Young Chang in 1990. As with Xerox, Kurzweil remained as a consultant for several years. Hyundai acquired Young Chang in 2006 and in January 2007 appointed Raymond Kurzweil as Chief Strategy Officer of Kurzweil Music Systems.[26]

Later life

Concurrent with Kurzweil Music Systems, Kurzweil created the company Kurzweil Applied Intelligence (KAI) to develop computer speech recognition systems for commercial use. The first product, which debuted in 1987, was an early speech recognition program.

Kurzweil started Kurzweil Educational Systems in 1996 to develop new pattern-recognition-based computer technologies to help people with disabilities such as blindness, dyslexia and attention-deficit hyperactivity disorder (ADHD) in school. Products include the Kurzweil 1000 text-to-speech converter software program, which enables a computer to read electronic and scanned text aloud to blind or visually impaired users, and the Kurzweil 3000 program, which is a multifaceted electronic learning system that helps with reading, writing, and study skills.

Raymond Kurzweil at the Singularity Summit at Stanford University in 2006

During the 1990s, Kurzweil founded the Medical Learning Company.[27] The company's products included an interactive computer education program for doctors and a computer-simulated patient. Around the same time, Kurzweil started KurzweilCyberArt.com, a website featuring computer programs to assist the creative art process. The site used to offer free downloads of a program called AARON, a visual art synthesizer developed by Harold Cohen, and of "Kurzweil's Cybernetic Poet", which automatically creates poetry. During this period he also started KurzweilAI.net, a website devoted to showcasing news of scientific developments, publicizing the ideas of high-tech thinkers and critics alike, and promoting futurist-related discussion among the general population through the Mind-X forum.

In 1999, Kurzweil created a hedge fund called "FatKat" (Financial Accelerating Transactions from Kurzweil Adaptive Technologies), which began trading in 2006. He has stated that the ultimate aim is to improve the performance of FatKat's A.I. investment software program, enhancing its ability to recognize patterns in "currency fluctuations and stock-ownership trends."[28] He predicted in his 1999 book, The Age of Spiritual Machines, that computers would one day prove superior to the best human financial minds at making profitable investment decisions. In June 2005, Kurzweil introduced the "Kurzweil-National Federation of the Blind Reader" (K-NFB Reader), a pocket-sized device consisting of a digital camera and computer unit. Like the Kurzweil Reading Machine of almost 30 years before, the K-NFB Reader is designed to aid blind people by reading written text aloud. The newer machine is portable and scans text through digital camera images, while the older machine was large and scanned text through flatbed scanning.

In December 2012, Kurzweil was hired by Google in a full-time position to "work on new projects involving machine learning and language processing".[29] He was personally hired by Google co-founder Larry Page.[30] Larry Page and Kurzweil agreed on a one-sentence job description: "to bring natural language understanding to Google".[31]

He received a Technical Grammy on February 8, 2015, recognizing his diverse technical and creative accomplishments. For purposes of the Grammy, perhaps most notable was the aforementioned Kurzweil K250.[32]

Kurzweil has joined the Alcor Life Extension Foundation, a cryonics company. In the event of his declared death, Kurzweil plans to be perfused with cryoprotectants, vitrified in liquid nitrogen, and stored at an Alcor facility in the hope that future medical technology will be able to repair his tissues and revive him.[33]

Personal life

Kurzweil is agnostic about the existence of a soul.[34] On the possibility of divine intelligence, Kurzweil is quoted as saying, "Does God exist? I would say, 'Not yet.'"[35]

Kurzweil married Sonya Rosenwald Kurzweil in 1975 and has two children.[36] Sonya Kurzweil, Ph.D., is a psychologist in private practice in Newton, MA, working with women, children, parents, and families. She holds faculty appointments at Harvard Medical School and William James College for Graduate Education in Psychology. Her research interests and publications are in the area of psychotherapy practice. She also serves as an active Overseer at Boston Children's Museum.[37]

He has a son, Ethan Kurzweil, who is a venture capitalist,[38] and a daughter, Amy Kurzweil,[39] who is a writer and cartoonist.

Ray Kurzweil's sister, Enid Kurzweil Sterling, is a Certified Public Accountant who lives in Santa Barbara, CA.

Ray Kurzweil is a cousin of writer Allen Kurzweil.

Creative approach

Kurzweil said, "I realize that most inventions fail not because the R&D department can't get them to work, but because the timing is wrong – not all of the enabling factors are at play where they are needed. Inventing is a lot like surfing: you have to anticipate and catch the wave at just the right moment."[40][41]

For the past several decades, Kurzweil's most effective and common approach to doing creative work has been to use the lucid, dreamlike state that immediately precedes waking. He claims to have constructed inventions, solved difficult problems (algorithmic, business strategy, organizational, and interpersonal), and written speeches in this state.[23]

Books

Kurzweil's first book, The Age of Intelligent Machines, was published in 1990. The nonfiction work discusses the history of computer artificial intelligence (AI) and forecasts future developments. Other experts in the field of AI contribute heavily to the work in the form of essays. The Association of American Publishers awarded it the status of Most Outstanding Computer Science Book of 1990.[42]

In 1993, Kurzweil published a book on nutrition called The 10% Solution for a Healthy Life. The book's main idea is that high levels of fat intake are the cause of many health disorders common in the U.S., and thus that cutting fat consumption down to 10% of the total calories consumed would be optimal for most people.

In 1999, Kurzweil published The Age of Spiritual Machines, which further elucidates his theories regarding the future of technology, which themselves stem from his analysis of long-term trends in biological and technological evolution. Much emphasis is on the likely course of AI development, along with the future of computer architecture.

Kurzweil's next book, published in 2004, returned to human health and nutrition. Fantastic Voyage: Live Long Enough to Live Forever was co-authored by Terry Grossman, a medical doctor and specialist in alternative medicine.

The Singularity Is Near, published in 2005, was made into a movie starring Pauley Perrette from NCIS. In February 2007, Ptolemaic Productions acquired the rights to The Singularity Is Near, The Age of Spiritual Machines, and Fantastic Voyage, including the rights to film Kurzweil's life and ideas, for the documentary film Transcendent Man,[43] which was directed by Barry Ptolemy.

Transcend: Nine Steps to Living Well Forever,[44] a follow-up to Fantastic Voyage, was released on April 28, 2009.

Kurzweil's book, How to Create a Mind: The Secret of Human Thought Revealed, was released on Nov. 13, 2012.[45] In it Kurzweil describes his Pattern Recognition Theory of Mind, the theory that the neocortex is a hierarchical system of pattern recognizers, and argues that emulating this architecture in machines could lead to an artificial superintelligence.[46]

Movies

Kurzweil wrote and co-produced a movie directed by Anthony Waller, called The Singularity Is Near: A True Story About the Future, in 2010, based in part on his 2005 book The Singularity Is Near. Part fiction, part non-fiction, the film combines interviews with 20 big thinkers, such as Marvin Minsky, with a narrative B-story that illustrates some of the ideas, in which a computer avatar (Ramona) saves the world from self-replicating microscopic robots. In addition to his movie, an independent, feature-length documentary was made about Kurzweil, his life, and his ideas, called Transcendent Man.[47] Filmmakers Barry Ptolemy and Felicia Ptolemy followed Kurzweil, documenting his global speaking tour. Premiered in 2009 at the Tribeca Film Festival, Transcendent Man documents Kurzweil's quest to reveal mankind's ultimate destiny and explores many of the ideas found in his New York Times bestselling book The Singularity Is Near, including his concept of exponential growth, radical life expansion, and how we will transcend our biology. The Ptolemys documented Kurzweil's stated goal of bringing back his late father using AI. The film also features critics who argue against Kurzweil's predictions.

In 2010, an independent documentary film called Plug & Pray premiered at the Seattle International Film Festival, in which Kurzweil and one of his major critics, the late Joseph Weizenbaum, argue about the benefits of eternal life.

The feature-length documentary film The Singularity by independent filmmaker Doug Wolens (released at the end of 2012), showcasing Kurzweil, has been acclaimed as "a large-scale achievement in its documentation of futurist and counter-futurist ideas" and "the best documentary on the Singularity to date."[48]

Kurzweil frequently comments on the application of cell-size nanotechnology to the workings of the human brain and how this could be applied to building AI. While being interviewed for a February 2009 issue of Rolling Stone magazine, Kurzweil expressed a desire to construct a genetic copy of his late father, Fredric Kurzweil, from DNA within his grave site. This feat would be achieved by exhuming and extracting the DNA, constructing a clone of Fredric, and retrieving memories and recollections of his father from Ray's mind. Kurzweil has kept all of his father's records, notes, and pictures in order to preserve as much of his father as he could. Ray is known for taking over 200 pills a day, meant to reprogram his biochemistry; according to Ray, this is only a precursor to nanoscale devices that will eventually replace blood cells, updating themselves against specific pathogens to improve the immune system.

Views

The Law of Accelerating Returns

In his 1999 book The Age of Spiritual Machines, Kurzweil proposed "The Law of Accelerating Returns", according to which the rate of change in a wide variety of evolutionary systems (including the growth of technologies) tends to increase exponentially.[49] He gave further focus to this issue in a 2001 essay entitled "The Law of Accelerating Returns", which proposed an extension of Moore's law to a wide variety of technologies, and used this to argue in favor of Vernor Vinge's concept of a technological singularity.[50] Kurzweil suggests that this exponential technological growth is counter-intuitive to the way our brains perceive the world—since our brains were biologically inherited from humans living in a world that was linear and local—and, as a consequence, he claims it has encouraged great skepticism in his future projections.
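
To make that contrast concrete, here is a minimal sketch in Python; the doubling period and time horizon are assumptions chosen purely for illustration, not figures taken from Kurzweil:

    # Sketch of the "Law of Accelerating Returns" intuition: a capability that
    # doubles at a fixed interval grows exponentially, while a linear forecast
    # merely extrapolates the first year's gain. Both parameters are assumed.
    doubling_period_years = 1.5
    horizon_years = 30

    capability = 1.0          # normalized starting capability (exponential track)
    linear_projection = 1.0   # the same start, projected linearly
    first_year_gain = 2 ** (1 / doubling_period_years) - 1

    for year in range(1, horizon_years + 1):
        capability *= 2 ** (1 / doubling_period_years)  # exponential growth
        linear_projection += first_year_gain            # constant yearly increment
        if year % 10 == 0:
            print(f"year {year}: exponential {capability:,.0f}x vs linear {linear_projection:.1f}x")

After 30 years the exponential track ends up roughly a million times its starting point, while the linear projection stays under twenty times it, which is exactly the counter-intuitive gap described above.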

Stance on the future of genetics, nanotechnology, and robotics

Kurzweil was working with the Army Science Board in 2006 to develop a rapid response system to deal with the possible abuse of biotechnology. He suggested that the same technologies that are empowering us to reprogram biology away from cancer and heart disease could be used by a bioterrorist to reprogram a biological virus to be more deadly, communicable, and stealthy. However, he suggests that we have the scientific tools to successfully defend against these attacks, similar to the way we defend against computer software viruses. He has testified before Congress on the subject of nanotechnology, advocating that nanotechnology has the potential to solve serious global problems such as poverty, disease, and climate change ("Nanotech Could Give Global Warming a Big Chill").[51]

In media appearances, Kurzweil has stressed the extreme potential dangers of nanotechnology[20] but argues that in practice, progress cannot be stopped because that would require a totalitarian system, and any attempt to do so would drive dangerous technologies underground and deprive responsible scientists of the tools needed for defense. He suggests that the proper place of regulation is to ensure that technological progress proceeds safely and quickly, but does not deprive the world of profound benefits. He stated, "To avoid dangers such as unrestrained nanobot replication, we need relinquishment at the right level and to place our highest priority on the continuing advance of defensive technologies, staying ahead of destructive technologies. An overall strategy should include a streamlined regulatory process, a global program of monitoring for unknown or evolving biological pathogens, temporary moratoriums, raising public awareness, international cooperation, software reconnaissance, and fostering values of liberty, tolerance, and respect for knowledge and diversity."[52]

Health and aging

Kurzweil admits that he cared little for his health until age 35, when he was found to suffer from glucose intolerance, an early form of type II diabetes (a major risk factor for heart disease). Kurzweil then found a doctor (Terry Grossman, M.D.) who shares his somewhat unconventional beliefs, and with him developed an extreme regimen involving hundreds of pills, chemical intravenous treatments, red wine, and various other methods in an attempt to live longer. Kurzweil was ingesting "250 supplements, eight to 10 glasses of alkaline water and 10 cups of green tea" every day and drinking several glasses of red wine a week in an effort to "reprogram" his biochemistry.[53] By 2008, he had reduced the number of supplement pills to 150.[34]

Kurzweil has made a number of bold claims for his health regimen. In his book The Singularity Is Near, he claimed that he brought his cholesterol level down from the high 200s to 130, raised his HDL (high-density lipoprotein) from below 30 to 55, and lowered his homocysteine from an unhealthy 11 to a much safer 6.2. He also claimed that his C-reactive protein "and all of my other indexes (for heart disease, diabetes, and other conditions) are at ideal levels." He further claimed that his health regimen, including dramatically reducing his fat intake, successfully "reversed" his type 2 diabetes. (The Singularity Is Near, p. 211)

He has written three books on the subjects of nutrition, health, and immortality: The 10% Solution for a Healthy Life, Fantastic Voyage: Live Long Enough to Live Forever, and Transcend: Nine Steps to Living Well Forever. In them, he recommends that other people emulate his health practices to the best of their abilities. Kurzweil and his current "anti-aging" doctor, Terry Grossman, now have two websites promoting their first and second books.

Kurzweil asserts that in the future, everyone will live forever.[54] In a 2013 interview, he said that in 15 years, medical technology could add more than a year to one's remaining life expectancy for each year that passes, and we could then "outrun our own deaths". Among other things, he has supported the SENS Research Foundation's approach to finding a way to repair aging damage, and has encouraged the general public to hasten their research by donating.[31][55]

Kurzweil's view of the human neocortex

According to Kurzweil, technologists will be creating synthetic neocortexes based on the operating principles of the human neocortex with the primary purpose of extending our own neocortexes. He believes that the neocortex of an adult human consists of approximately 300 million pattern recognizers. He draws on the commonly accepted view that the primary anatomical difference between humans and other primates that allowed for superior intellectual abilities was the evolution of a larger neocortex. He claims that the six-layered neocortex deals with increasing abstraction from one layer to the next: at the low levels, the neocortex may seem cold and mechanical because it can only make simple decisions, but at the higher levels of the hierarchy, it deals with concepts like being funny, being sexy, expressing a loving sentiment, and creating or understanding a poem. He believes that these higher levels of the human neocortex were the enabling factor that permitted the human development of language, technology, art, and science. He stated, "If the quantitative improvement from primates to humans with the big forehead was the enabling factor to allow for language, technology, art, and science, what kind of qualitative leap can we make with another quantitative increase? Why not go from 300 million pattern recognizers to a billion?"
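
As a toy sketch only (the features, names, and thresholds below are invented for illustration and make no claim to represent Kurzweil's actual model), the hierarchical idea can be expressed as recognizers that fire when enough of their lower-level children fire:

    # Toy hierarchy of pattern recognizers: each node fires when a sufficient
    # fraction of its lower-level children fire. Invented for illustration;
    # not a faithful implementation of the theory described above.
    class Recognizer:
        def __init__(self, name, children=(), threshold=1.0):
            self.name = name            # label of the pattern this node recognizes
            self.children = children    # lower-level recognizers (empty for raw inputs)
            self.threshold = threshold  # fraction of children required to fire

        def fires(self, active_features):
            if not self.children:       # leaf node: a raw input feature
                return self.name in active_features
            hits = sum(child.fires(active_features) for child in self.children)
            return hits / len(self.children) >= self.threshold

    # Low-level recognizers detect strokes; a higher-level one recognizes a letter,
    # mirroring the layer-by-layer abstraction described above.
    strokes = [Recognizer(s) for s in ("left diagonal", "right diagonal", "crossbar")]
    letter_a = Recognizer("A", children=strokes, threshold=0.66)

    print(letter_a.fires({"left diagonal", "right diagonal"}))  # True: 2 of 3 present
    print(letter_a.fires({"crossbar"}))                         # False: only 1 of 3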

Encouraging futurism and transhumanism

Kurzweil's standing as a futurist and transhumanist has led to his involvement in several singularity-themed organizations. In December 2004, Kurzweil joined the advisory board of the Machine Intelligence Research Institute.[56] In October 2005, Kurzweil joined the scientific advisory board of the Lifeboat Foundation.[57] On May 13, 2006, Kurzweil was the first speaker at the Singularity Summit at Stanford University in Palo Alto, California.[58] In May 2013, Kurzweil was the keynote speaker at the 2013 proceedings of the Research, Innovation, Start-up and Employment (RISE) international conference in Seoul, South Korea.

In February 2009, Kurzweil, in collaboration with Google and the NASA Ames Research Center in Mountain View, California, announced the creation of the Singularity University training center for corporate executives and government officials. The University's self-described mission is to "assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity's grand challenges". Using Vernor Vinge's Singularity concept as a foundation, the university offered its first nine-week graduate program to 40 students in June 2009.

Predictions

Past predictions

Kurzweil's first book, The Age of Intelligent Machines, presented his ideas about the future. It was written from 1986 to 1989 and published in 1990. Building on Ithiel de Sola Pool's "Technologies of Freedom" (1983), Kurzweil claims to have forecast the dissolution of the Soviet Union due to new technologies such as cellular phones and fax machines disempowering authoritarian governments by removing state control over the flow of information.[59] In the book, Kurzweil also extrapolated preexisting trends in the improvement of computer chess software performance to predict that computers would beat the best human players "by the year 2000".[60] In May 1997, chess World Champion Garry Kasparov was defeated by IBM's Deep Blue computer in a well-publicized chess match.[61]

Perhaps most significantly, Kurzweil foresaw the explosive growth in worldwide Internet use that began in the 1990s. At the time of the publication of The Age of Intelligent Machines, there were only 2.6 million Internet users in the world,[62] and the medium was unreliable, difficult to use, and deficient in content. He also stated that the Internet would explode not only in the number of users but in content as well, eventually granting users access "to international networks of libraries, data bases, and information services". Additionally, Kurzweil claims to have correctly foreseen that the preferred mode of Internet access would inevitably be through wireless systems, and he was also correct to estimate that the latter would become practical for widespread use in the early 21st century.

In October 2010, Kurzweil released his report, "How My Predictions Are Faring", in PDF format,[63] which analyzes the predictions he made in his books The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999) and The Singularity Is Near (2005). Of the 147 total predictions, Kurzweil claims that 115 were "entirely correct", 12 "essentially correct", 17 "partially correct", and only 3 "wrong". Adding together the "entirely" and "essentially" correct, Kurzweil's claimed accuracy rate comes to 86%.
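
The arithmetic behind that tally checks out:

    # Check of the prediction tally reported above.
    entirely, essentially, partially, wrong = 115, 12, 17, 3
    total = entirely + essentially + partially + wrong
    print(total)                                          # 147
    print(round(100 * (entirely + essentially) / total))  # 86 (claimed accuracy, %)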

Daniel Lyons, writing in Newsweek magazine, criticized Kurzweil for some of his predictions that turned out to be wrong, such as the economy continuing to boom from the 1998 dot-com era through 2009, a US company having a market capitalization of more than $1 trillion, a supercomputer achieving 20 petaflops, speech recognition being in widespread use, and cars that would drive themselves using sensors installed in highways, all by 2009.[64] To the charge that a 20-petaflop supercomputer was not produced in the time he predicted, Kurzweil responded that he considers Google a giant supercomputer, and that it is indeed capable of 20 petaflops.[64]

Kurzweil's predictions for 2009 were mostly inaccurate, claims Forbes magazine. For example, Kurzweil predicted, "The majority of text is created using continuous speech recognition." This is not the case.[65]

Future predictions

In 1999, Kurzweil published a second book titled The Age of Spiritual Machines, which goes into more depth explaining his futurist ideas. The third and final part of the book is devoted to predictions over the coming century, from 2009 through 2099. In The Singularity Is Near he makes fewer concrete short-term predictions, but includes many longer-term visions.

He states that with radical life extension will come radical life enhancement. He says he is confident that within 10 years we will have the option to spend some of our time in 3D virtual environments that appear just as real as real reality, though these will not yet be made possible via direct interaction with our nervous system. "If you look at video games and how we went from Pong to the virtual reality we have available today, it is highly likely that immortality in essence will be possible." He believes that 20 to 25 years from now, we will have millions of blood-cell-sized devices, known as nanobots, inside our bodies fighting against diseases and improving our memory and cognitive abilities. Kurzweil says that a machine will pass the Turing test by 2029, and that around 2045, "the pace of change will be so astonishingly quick that we won't be able to keep up, unless we enhance our own intelligence by merging with the intelligent machines we are creating".

Kurzweil states that humans will be a hybrid of biological and non-biological intelligence that becomes increasingly dominated by its non-biological component. He stresses that "AI is not an intelligent invasion from Mars. These are brain extenders that we have created to expand our own mental reach. They are part of our civilization. They are part of who we are. So over the next few decades our human-machine civilization will become increasingly dominated by its non-biological component." In Transcendent Man,[66] Kurzweil states, "We humans are going to start linking with each other and become a metaconnection; we will all be connected and all be omnipresent, plugged into this global network that is connected to billions of people, and filled with data."[67] Kurzweil stated in a press conference that we are the only species that goes beyond its limitations: "we didn't stay in the caves, we didn't stay on the planet, and we're not going to stay with the limitations of our biology". In his Singularity documentary he is quoted as saying, "I think people are fooling themselves when they say they have accepted death".

In 2008, Kurzweil said on an expert panel at the National Academy of Engineering that solar power would scale up to produce all the energy needed by Earth's people within 20 years. According to Kurzweil, we only need to capture 1 part in 10,000 of the energy from the Sun that hits Earth's surface to meet all of humanity's energy needs.[68]
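
A rough back-of-the-envelope check (both inputs below are outside order-of-magnitude estimates, not figures from the article) suggests that the "1 part in 10,000" figure is at least the right order of magnitude:

    # Back-of-the-envelope check of the "1 part in 10,000" figure above.
    # Both inputs are rough outside estimates, not numbers from the article.
    solar_power_intercepted_w = 1.7e17  # ~sunlight power intercepted by Earth (W)
    human_energy_demand_w = 1.8e13      # ~18 TW average global consumption (W)

    fraction = human_energy_demand_w / solar_power_intercepted_w
    print(f"fraction needed: {fraction:.1e}")  # ~1.1e-04, roughly 1 part in 10,000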

Reception

Praise

Kurzweil was referred to as "the ultimate thinking machine" by Forbes[9] and as a "restless genius"[8] by The Wall Street Journal. PBS included Kurzweil as one of 16 "revolutionaries who made America"[10] along with other inventors of the past two centuries. Inc. magazine ranked him #8 among the "most fascinating" entrepreneurs in the United States and called him "Edison's rightful heir".[11]

Criticism

Although the idea of a technological singularity is a popular concept in science fiction, some authors such as Neal Stephenson[69] and Bruce Sterling have voiced skepticism about its real-world plausibility. Sterling expressed his views on the singularity scenario in a talk at the Long Now Foundation entitled The Singularity: Your Future as a Black Hole.[70][71] Other prominent AI thinkers and computer scientists, such as Daniel Dennett,[72] Rodney Brooks,[73] David Gelernter[74] and Paul Allen,[75] have also criticized Kurzweil's projections.

In the cover article of the December 2010 issue of IEEE Spectrum, John Rennie criticizes Kurzweil for several predictions that failed to become manifest by the originally predicted date. "Therein lie the frustrations of Kurzweil's brand of tech punditry. On close examination, his clearest and most successful predictions often lack originality or profundity. And most of his predictions come with so many loopholes that they border on the unfalsifiable."[76]

Bill Joy, cofounder of Sun Microsystems, agrees with Kurzweil's timeline of future progress, but thinks that technologies such as AI, nanotechnology and advanced biotechnology will create a dystopian world.[77] Mitch Kapor, the founder of Lotus Development Corporation, has called the notion of a technological singularity "intelligent design for the IQ 140 people... This proposition that we're heading to this point at which everything is going to be just unimaginably different—it's fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can't obscure that fact for me."[28]

Some critics have argued more strongly against Kurzweil and his ideas. Cognitive scientist Douglas Hofstadter has said of Kurzweil's and Hans Moravec's books: "It's an intimate mixture of rubbish and good ideas, and it's very hard to disentangle the two, because these are smart people; they're not stupid."[78] Biologist P. Z. Myers has criticized Kurzweil's predictions as being based on "New Age spiritualism" rather than science and says that Kurzweil does not understand basic biology.[79][80] VR pioneer Jaron Lanier has even described Kurzweil's ideas as "cybernetic totalism" and has outlined his views on the culture surrounding Kurzweil's predictions in an essay for Edge.org entitled One Half of a Manifesto.[48][81]

British philosopher John Gray argues that contemporary science is what magic was for ancient civilizations: it gives a sense of hope to those who are willing to do almost anything to achieve eternal life. He cites Kurzweil's Singularity as another example of a trend that has almost always been present in the history of mankind.[82]

The Brain Makers, a history of artificial intelligence written in 1994 by HP Newquist, noted that "Born with the same gift for self-promotion that was a character trait of people like P.T. Barnum and Ed Feigenbaum, Kurzweil had no problems talking up his technical prowess ... Ray Kurzweil was not noted for his understatement."[83]

In a 2015 paper, William D. Nordhaus of Yale University takes an economic look at the impacts of an impending technological singularity. He comments: "There is remarkably little writing on Singularity in the modern macroeconomic literature."[84] Nordhaus supposes that the Singularity could arise from either the demand or the supply side of a market economy, but that for information technology to advance at the pace Kurzweil suggests, there would have to be significant productivity trade-offs: in order to devote more resources to producing supercomputers, we must decrease our production of non-information-technology goods. Using a variety of econometric methods, Nordhaus runs six supply-side tests and one demand-side test to track the macroeconomic viability of such steep rises in information technology output. Of the seven tests, only two indicated that a Singularity was economically possible, and both of those predicted it would take at least 100 years to occur.

Awards and honors

  • First place in the 1965 International Science Fair[21] for inventing the classical music synthesizing computer.
  • The 1978 Grace Murray Hopper Award from the Association for Computing Machinery. The award is given annually to one "outstanding young computer professional" and is accompanied by a $35,000 prize.[85] Kurzweil won it for his invention of the Kurzweil Reading Machine.[86]
  • In 1986, Kurzweil was named Honorary Chairman for Innovation of the White House Conference on Small Business by President Reagan.
  • In 1987, Kurzweil received an Honorary Doctorate of Music from Berklee College of Music.[87]
  • In 1988, Kurzweil was named Inventor of the Year by MIT and the Boston Museum of Science.[88]
  • In 1990, Kurzweil was voted Engineer of the Year by the over one million readers of Design News Magazine and received their third annual Technology Achievement Award.[88][89]
  • The 1994 Dickson Prize in Science. One is awarded every year by Carnegie Mellon University to individuals who have "notably advanced the field of science." Both a medal and a $50,000 prize are presented to winners.[90]
  • The 1998 "Inventor of the Year" award from the Massachusetts Institute of Technology.[91]
  • The 1999 National Medal of Technology.[92] This is the highest award the President of the United States can bestow upon individuals and groups for pioneering new technologies, and the President dispenses the award at his discretion.[93] Bill Clinton presented Kurzweil with the National Medal of Technology during a White House ceremony in recognition of Kurzweil's development of computer-based technologies to help the disabled.
  • The 2000 Telluride Tech Festival Award of Technology.[94] Two other individuals also received the same honor that year. The award is presented yearly to people who "exemplify the life, times and standard of contribution of Tesla, Westinghouse and Nunn."
  • The 2001 Lemelson-MIT Prize for a lifetime of developing technologies to help the disabled and to enrich the arts.[95] Only one is awarded each year – it is given to highly successful, mid-career inventors. A $500,000 award accompanies the prize.[96]
  • Kurzweil was inducted into the National Inventors Hall of Fame in 2002 for inventing the Kurzweil Reading Machine.[97] The organization "honors the women and men responsible for the great technological advances that make human, social and economic progress possible."[98] Fifteen other people were inducted into the Hall of Fame the same year.[99]
  • The Arthur C. Clarke Lifetime Achievement Award on April 20, 2009 for lifetime achievement as an inventor and futurist in computer-based technologies.[100]
  • In 2011, Kurzweil was named a Senior Fellow of the Design Futures Council.[101]
  • In 2013, Kurzweil was honored as a Silicon Valley Visionary Award winner on June 26 by SVForum.[102]
  • In 2014, Kurzweil was honored with the American Visionary Art Museum’s Grand Visionary Award on January 30.[103][104][105]
  • Kurzweil has received 20 honorary doctorates in science, engineering, music and humane letters from Rensselaer Polytechnic Institute, Hofstra University and other leading colleges and universities, as well as honors from three U.S. presidents – Clinton, Reagan and Johnson.[5][106]
  • Kurzweil has received seven national and international film awards including the CINE Golden Eagle Award and the Gold Medal for Science Education from the International Film and TV Festival of New York.[88]

Entropy (information theory)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Entropy_(information_theory)

In info...