
Wednesday, September 6, 2023

SETI@home


From Wikipedia, the free encyclopedia
 
SETI@home screensaver with custom background

Developer(s): University of California, Berkeley
Initial release: May 17, 1999
Stable release: SETI@home v8: 8.00 / December 30, 2015
    SETI@home v8 for NVIDIA and AMD/ATI GPU cards: 8.12 / May 19, 2016
    AstroPulse v7: 7.00 / October 7, 2014
    AstroPulse v7 for NVIDIA and AMD/ATI GPU cards: 7.10 / April 23, 2015
Development status: In hibernation
Project goal(s): Discovery of radio evidence of extraterrestrial life
Funding: Public funding and private donations
Operating system: Microsoft Windows, Linux, Android, macOS, Solaris, IBM AIX, FreeBSD, DragonFly BSD, OpenBSD, NetBSD, HP-UX, IRIX, Tru64 UNIX, OS/2 Warp, eComStation
Platform: Cross-platform
Type: Volunteer computing
License: GPL
Active users: 91,454 (March 2020)
Total users: 1,803,163 (March 2020)
Active hosts: 144,779 (March 2020)
Total hosts: 165,178 (March 2020)
Website: setiathome.berkeley.edu

SETI@home ("SETI at home") is a project of the Berkeley SETI Research Center to analyze radio signals with the aim of searching for signs of extraterrestrial intelligence. Until March 2020, it was run as an Internet-based public volunteer computing project that employed the BOINC software platform. It is hosted by the Space Sciences Laboratory at the University of California, Berkeley, and is one of many activities undertaken as part of the worldwide SETI effort.

SETI@home software was released to the public on May 17, 1999, making it the third large-scale use of volunteer computing over the Internet for research purposes, after the Great Internet Mersenne Prime Search (GIMPS), launched in 1996, and distributed.net, launched in 1997. Along with MilkyWay@home and Einstein@home, it is the third major computing project of this type that has the investigation of phenomena in interstellar space as its primary purpose.

In March 2020, the project stopped sending out new work to SETI@home users, bringing the crowdsourced computing aspect of the project to a stop. At the time, the team intended to shift focus onto the analysis and interpretation of the 20 years' worth of accumulated data. However, the team left open the possibility of eventually resuming volunteer computing using data from other radio telescopes, such as MeerKAT and FAST.

As of November 2021, the science team has analyzed the data, removing noisy signals (radio frequency interference) using the Nebula tool they developed, and will choose the top-scoring 100 or so multiplets to be observed using the Five-hundred-meter Aperture Spherical Telescope (FAST), on which they have been granted 24 hours of observation time.

Scientific research

The two original goals of SETI@home were:

  • to do useful scientific work by supporting an observational analysis to detect intelligent life outside Earth
  • to prove the viability and practicality of the "volunteer computing" concept

The second of these goals is considered to have succeeded completely. The current BOINC environment, a development of the original SETI@home, is providing support for many computationally intensive projects in a wide range of disciplines.

The first of these goals has to date yielded no conclusive results: no evidence for ETI signals has been shown via SETI@home. However, the ongoing continuation is predicated on the assumption that the observational analysis is not "ill-posed." The remainder of this article deals specifically with the original SETI@home observations/analysis. The vast majority of the sky (over 98%) has yet to be surveyed, and each point in the sky must be surveyed many times to exclude even a subset of possibilities.

Procedure details

SETI@home searches for possible evidence of radio transmissions from extraterrestrial intelligence using observational data from the Arecibo radio telescope and the Green Bank Telescope. The data is taken "piggyback" or "passively" while the telescope is used for other scientific programs. The data is digitized, stored, and sent to the SETI@home facility, where it is parsed into small chunks in frequency and time and analyzed, using software, to search for any signals: that is, variations which cannot be ascribed to noise and hence contain information. Using volunteer computing, SETI@home sends the millions of chunks of data to be analyzed off-site by home computers, and then has those computers report the results. Thus what appears to be a difficult problem in data analysis is reduced to a reasonable one with the aid of a large, Internet-based community of borrowed computer resources.

The software searches for five types of signals, distinguishing them from noise:

  • Spikes in power spectra
  • Gaussian rises and falls in transmission power, possibly representing the telescope beam's main lobe passing over a radio source
  • Triplets – three power spikes in a row
  • Pulsing signals that possibly represent a narrowband digital-style transmission
  • Autocorrelation detects signal waveforms.

There are many variations in how an ETI signal may be affected by the interstellar medium and by the relative motion of its origin with respect to Earth. The potential "signal" is therefore processed in many ways (although not every detection method or scenario is tested) to maximize the likelihood of distinguishing it from the scintillating noise already present in all directions of outer space. For instance, another planet is very likely to be moving at some speed and acceleration with respect to Earth, which will shift the frequency of a potential "signal" over time. Checking for this through processing is done, to an extent, in the SETI@home software.

The process is somewhat like tuning a radio to various channels, and looking at the signal strength meter. If the strength of the signal goes up, that gets attention. More technically, it involves a lot of digital signal processing, mostly discrete Fourier transforms at various chirp rates and durations.
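
To illustrate the kind of processing involved, here is a minimal Python sketch (not the actual SETI@home client; the sample rate, chirp rates, test tone, and threshold are invented for the example) that de-chirps a block of complex samples at a trial drift rate, computes the power spectrum with an FFT, and flags bins that stand well above the noise floor:

```python
import numpy as np

def dechirp_and_search(samples, sample_rate, chirp_rate, threshold=20.0):
    """De-chirp complex baseband samples at a trial chirp rate (Hz/s),
    compute the power spectrum, and return bins exceeding `threshold`
    times the mean power (candidate "spikes")."""
    t = np.arange(len(samples)) / sample_rate
    # Remove a linear frequency drift of `chirp_rate` Hz/s (e.g. Doppler
    # drift from relative acceleration of transmitter and receiver).
    dechirped = samples * np.exp(-1j * np.pi * chirp_rate * t**2)
    spectrum = np.abs(np.fft.fft(dechirped))**2
    mean_power = spectrum.mean()
    candidate_bins = np.flatnonzero(spectrum > threshold * mean_power)
    freqs = np.fft.fftfreq(len(samples), d=1.0 / sample_rate)
    return [(freqs[i], spectrum[i] / mean_power) for i in candidate_bins]

# Synthetic test: noise plus a weak drifting tone.
rng = np.random.default_rng(0)
fs = 9765.625                      # Hz, an illustrative sub-band width
n = 2**17
t = np.arange(n) / fs
drift = 0.3                        # Hz/s
tone = 0.05 * np.exp(2j * np.pi * (1000 * t + 0.5 * drift * t**2))
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
data = tone + noise

for rate in (0.0, 0.3):            # trial chirp rates, Hz/s
    print(rate, dechirp_and_search(data, fs, rate))
```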

Results

To date, the project has not confirmed the detection of any ETI signals. However, it has identified several candidate targets (sky positions), where spikes in intensity are not easily explained as noise, for further analysis. The most significant candidate signal to date was announced on September 1, 2004, named radio source SHGb02+14a.

While the project has not reached the stated primary goal of finding extraterrestrial intelligence, it has proved to the scientific community that volunteer computing projects using Internet-connected computers can succeed as a viable analysis tool, and even beat the largest supercomputers. However, it has not been demonstrated that the order of magnitude excess in computers used, many outside the home (the original intent was to use 50,000–100,000 "home" computers), has benefited the project scientifically. (For more on this, see § Challenges below.)

Astronomer Seth Shostak stated in 2004 that, based on the Drake equation, he expected a conclusive signal and proof of alien contact between 2020 and 2025. This implies that a prolonged effort may benefit SETI@home, despite its (present) twenty-year run without success in ETI detection.

Technology

Anybody with an at least intermittently Internet-connected computer was able to participate in SETI@home by running a free program that downloaded and analyzed radio telescope data.

Observational data were recorded on 2-terabyte SATA hard disk drives fed from the Arecibo Telescope in Puerto Rico, each holding about 2.5 days of observations. Because Arecibo did not have a broadband Internet connection, the drives were sent by postal mail to Berkeley. Once there, the data was divided in both the time and frequency domains into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units were then sent from the SETI@home server over the Internet to personal computers around the world to analyze.
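
As a rough back-of-the-envelope check using only the figures quoted above (and ignoring the overlap between work units and any per-unit metadata), a sketch of the arithmetic:

```python
# Rough arithmetic from the figures quoted above (illustrative only; it
# ignores the time overlap between work units and per-unit headers).
drive_bytes = 2 * 10**12          # one 2-terabyte recording drive
work_unit_bytes = 350_000         # ~0.35 MB per work unit
work_unit_seconds = 107           # duration of data covered by one unit
drive_days = 2.5                  # observations held by one drive

units_per_drive = drive_bytes / work_unit_bytes
print(f"~{units_per_drive:,.0f} work units per drive")           # ~5,714,286

# Because the band is also split in frequency, far more than
# (2.5 days / 107 s) work units come out of each drive.
time_slices = drive_days * 86_400 / work_unit_seconds
print(f"~{time_slices:,.0f} non-overlapping 107-second slices")  # ~2,019
```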

Data was merged into a database using SETI@home computers in Berkeley. Interference was rejected, and various pattern-detection algorithms were applied to search for the most interesting signals.

The project used CUDA for GPU processing starting in 2015.

In 2016 SETI@home began processing data from the Breakthrough Listen project.

Software

The BOINC Manager working on the SETI@home project (v 7.6.22)
Screenshot of SETI@home Classic Screensaver (v3.07)

The SETI@home volunteer computing software ran either as a screensaver or continuously while a user worked, making use of processor time that would otherwise be unused.

The initial software platform, now referred to as "SETI@home Classic", ran from May 17, 1999, to December 15, 2005. This program was only capable of running SETI@home; it was replaced by Berkeley Open Infrastructure for Network Computing (BOINC), which also allows users to contribute to other volunteer computing projects at the same time as running SETI@home. The BOINC platform also allowed testing for more types of signals.

The discontinuation of the SETI@home Classic platform rendered older Macintosh computers running the classic Mac OS (pre-December 2001) unsuitable for participating in the project.

SETI@home was available for the Sony PlayStation 3 console.

On May 3, 2006, new work units for a new version of SETI@home called "SETI@home Enhanced" started distribution. Since computers had the power for more computationally intensive work than when the project began, this new version was twice as sensitive to Gaussian signals, and to some kinds of pulsed signals, as the original SETI@home (BOINC) software. The new application had also been optimized to the point where it ran faster on some work units than earlier versions. However, some work units (the best work units, scientifically speaking) took significantly longer.

In addition, some distributions of the SETI@home applications were optimized for a particular type of CPU. They were referred to as "optimized executables", and had been found to run faster on systems specific for that CPU. As of 2007, most of these applications were optimized for Intel processors and their corresponding instruction sets.

The results of the data processing were normally automatically transmitted when the computer was next connected to the Internet; it could also be instructed to connect to the Internet as needed.

Statistics

With over 5.2 million participants worldwide, the project was the volunteer computing project with the most participants to date. The original intent of SETI@home was to utilize 50,000–100,000 home computers. Since its launch on May 17, 1999, the project has logged over two million years of aggregate computing time. On September 26, 2001, SETI@home had performed a total of 10^21 floating point operations. It was acknowledged by the 2008 edition of the Guinness World Records as the largest computation in history. With over 145,000 active computers in the system (1.4 million total) in 233 countries, as of 23 June 2013, SETI@home had the ability to compute over 668 teraFLOPS. For comparison, the Tianhe-2 computer, which as of 23 June 2013 was the world's fastest supercomputer, was able to compute 33.86 petaFLOPS (approximately 50 times greater).

Project future

There were plans to get data from the Parkes Observatory in Australia to analyze the southern hemisphere. However, as of 3 June 2018, these plans were not mentioned in the project's website. Other plans include a Multi-Beam Data Recorder, a Near Time Persistency Checker and Astropulse (an application that uses coherent dedispersion to search for pulsed signals). Astropulse will team with the original SETI@home to detect other sources, such as rapidly rotating pulsars, exploding primordial black holes, or as-yet unknown astrophysical phenomena. Beta testing of the final public release version of Astropulse was completed in July 2008, and the distribution of work units to higher spec machines capable of processing the more CPU intensive work units started in mid-July 2008.

On March 31, 2020, UC Berkeley stopped sending out new data for SETI@home clients to process, ending the effort for the time being. The project stated that it had reached a point of "diminishing returns" with the volunteer processing and needed to put the effort into hibernation while the accumulated results were analyzed.

Competitive aspect

SETI@home users quickly started to compete with one another to process the maximum number of work units. Teams were formed to combine the efforts of individual users. The competition continued and grew larger with the introduction of BOINC.

As with any competition, attempts have been made to "cheat" the system and claim credit for work that has not been performed. To combat cheats, the SETI@home system sends every work unit to multiple computers, a value known as "initial replication" (currently 2). Credit is only granted for each returned work unit once a minimum number of results have been returned and the results agree, a value known as "minimum quorum" (currently 2). If, due to computation errors or cheating by submitting false data, not enough results agree, more identical work units are sent out until the minimum quorum can be reached. The final credit granted to all machines which returned the correct result is the same and is the lowest of the values claimed by each machine.
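
A minimal sketch of this quorum-and-credit logic (a simplified illustration, not BOINC's actual validator; the result payloads and credit values are invented):

```python
from dataclasses import dataclass

INITIAL_REPLICATION = 2   # copies of each work unit sent out initially
MINIMUM_QUORUM = 2        # matching results required before credit is granted

@dataclass
class Result:
    host_id: int
    claimed_credit: float
    payload: bytes        # stand-in for the returned scientific result

def validate(results):
    """Group returned results by payload; if any group reaches the minimum
    quorum, grant every host in that group the lowest claimed credit.
    Otherwise signal that another copy of the work unit must be sent."""
    groups = {}
    for r in results:
        groups.setdefault(r.payload, []).append(r)
    for payload, group in groups.items():
        if len(group) >= MINIMUM_QUORUM:
            granted = min(r.claimed_credit for r in group)
            return {r.host_id: granted for r in group}
    return None  # no quorum yet: issue an additional replica

# Two hosts agree, a third (cheating or faulty) does not.
print(validate([Result(1, 31.4, b"signal-set-A"),
                Result(2, 29.8, b"signal-set-A"),
                Result(3, 99.9, b"signal-set-B")]))
# -> {1: 29.8, 2: 29.8}; host 3 receives nothing.
```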

Some users have installed and run SETI@home on computers at their workplaces, an act known as "Borging", after the assimilation-driven Borg of Star Trek. In some cases, SETI@home users have misused company resources to gain work-unit results, with at least two individuals being fired for running SETI@home on an enterprise production system. A thread in the newsgroup alt.sci.seti titled "Anyone fired for SETI screensaver" ran as early as September 14, 1999.

Other users collect large quantities of equipment at home to create "SETI farms", which typically consist of a number of machines built from only a motherboard, CPU, RAM, and power supply, arranged on shelves as diskless workstations running either Linux or old versions of Microsoft Windows "headless" (without a monitor).

Challenges

Closure of Arecibo Observatory

Until 2020, SETI@home procured its data from the Arecibo Observatory facility that was operated by the National Astronomy and Ionosphere Center and administered by SRI International.

The decreasing operating budget for the observatory created a shortfall of funds which was not made up from other sources such as private donors, NASA, foreign research institutions, or private non-profit organizations such as SETI@home.

However, in the overall long-term views held by many involved with the SETI project, any usable radio telescope could take over from Arecibo (which completely collapsed in December 2020), as all the SETI systems are portable and relocatable.

More restrictive computer use policies in businesses

In one documented case, an individual was fired for explicitly importing and using the SETI@home software on computers operated for the U.S. state of Ohio. In another incident, a school IT director resigned after his installation allegedly cost his school district $1 million in removal costs; however, other reasons for his departure included a lack of communication with his superiors, failure to install firewall software, and alleged theft of computer equipment, leading a ZDNet editor to comment that "the volunteer computing nonsense was simply the best and most obvious excuse the district had to terminate his contract with cause".

As of 16 October 2005, approximately one-third of the processing for the non-BOINC version of the software was performed on work or school based machines. As many of these computers will give reduced privileges to ordinary users, it is possible that much of this has been done by network administrators.

To some extent, this may be offset by better connectivity to home machines and increasing performance of home computers, especially those with GPUs, which have also benefited other volunteer computing projects such as Folding@Home. The spread of mobile computing devices provides another large resource for volunteer computing. For example, in 2012, Piotr Luszczek (a former doctoral student of Jack Dongarra) presented results showing that an iPad 2 matched the historical performance of a Cray-2 (the fastest computer in the world in 1985) on an embedded LINPACK benchmark.

Funding

There is currently no government funding for SETI research, and private funding is always limited. Berkeley Space Science Lab has found ways of working with small budgets, and the project has received donations allowing it to go well beyond its original planned duration, but it still has to compete for limited funds with other SETI projects and other space sciences projects.

In a December 16, 2007 plea for donations, SETI@home described its modest financial state and urged donations of the $476,000 needed to continue into 2008.

Unofficial clients

A number of individuals and companies made unofficial changes to the distributed part of the software to try to produce faster results, but this compromised the integrity of all the results. As a result, the software had to be updated to make it easier to detect such changes, and discover unreliable clients. BOINC will run on unofficial clients; however, clients that return different and therefore incorrect data are not allowed, so corrupting the result database is avoided. BOINC relies on cross-checking to validate data but unreliable clients need to be identified, to avoid situations when two of these report the same invalid data and therefore corrupt the database. A very popular unofficial client (lunatic) allows users to take advantage of the special features provided by their processor(s) such as SSE, SSE2, SSE3, SSSE3, SSE4.1, and AVX to allow for faster processing.

Hardware and database failures

SETI@home is a test bed for further development not only of BOINC but of other hardware and software (database) technology. Under SETI@home processing loads, these experimental technologies can be more challenging than expected, as SETI databases do not contain typical accounting or business data or conventional relational structures. These non-traditional database uses often incur greater processing overhead and a higher risk of database corruption and outright failure. Hardware, software, and database failures can (and do) cause dips in project participation.

The project has had to shut down several times to change over to new databases capable of handling more massive datasets. Hardware failure has proven to be a substantial source of project shutdowns, as hardware failure is often coupled with database corruption.

Active SETI

From Wikipedia, the free encyclopedia
A representation of the 1679-bit Arecibo message.

Active SETI (Active Search for Extra-Terrestrial Intelligence) is the attempt to send messages to intelligent extraterrestrial life. Active SETI messages are predominantly sent in the form of radio signals. Physical messages like that of the Pioneer plaque may also be considered an active SETI message. Active SETI is also known as METI (Messaging to Extra-Terrestrial Intelligence). 

History

'Active SETI' was in use as a term by 2005, some decades after the term SETI itself. The term METI was coined in 2006 by Russian scientist Alexander Zaitsev, who proposed a subtle distinction between Active SETI and METI:

The science known as SETI deals with searching for messages from aliens. METI deals with the creation and transmission of messages to aliens. Thus, SETI and METI proponents have quite different perspectives. SETI scientists are in a position to address only the local question “does Active SETI make sense?” In other words, would it be reasonable, for SETI success, to transmit with the object of attracting ETI's attention? In contrast to Active SETI, METI pursues not a local, but a more global purpose – to overcome the Great Silence in the Universe, bringing to our extraterrestrial neighbors the long-expected annunciation “You are not alone!”

Concern over METI was raised by the science journal Nature in an editorial in October 2006, which commented on a recent meeting of the International Academy of Astronautics SETI study group. The editor said, "It is not obvious that all extraterrestrial civilizations will be benign, or that contact with even a benign one would not have serious repercussions". In the same year, astronomer and science fiction author David Brin expressed similar concerns.

In 2010, Douglas A. Vakoch of the SETI Institute addressed concerns about the validity of Active SETI alone as an experimental science by proposing the integration of Active SETI and passive SETI programs to engage in a clearly articulated, ongoing, and evolving set of experiments to test various versions of the Zoo Hypothesis, including specific dates at which a first response to messages sent to particular stars could be expected.

On 13 February 2015, scientists including Douglas Vakoch, David Grinspoon, Seth Shostak, and David Brin at an annual meeting of the American Association for the Advancement of Science discussed Active SETI, and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea. That same week, a statement was released, signed by many in the SETI community including Berkeley SETI Research Center director Andrew Siemion, advocating that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent".

In July 2015, the Breakthrough Message program was announced. This was an open competition to design a digital message that could be transmitted from Earth to an extraterrestrial civilization, with a US$1,000,000 prize pool. The message was to be "representative of humanity and planet Earth". The program pledged "not to transmit any message until there has been a wide-ranging debate at high levels of science and politics on the risks and rewards of contacting advanced civilizations".

Rationale for METI

In the paper Rationale for METI, transmission of information into the Cosmos is treated as one of the pressing needs of an advanced civilization. This view is not universally accepted; in particular, it is rejected by those who oppose the transmission of interstellar radio messages but are not against SETI searches. This duality is called the SETI Paradox.

Radio message construction

The lack of an established communications protocol is a challenge for METI. When an Interstellar Radio Message (IRM) is synthesized and transmitted, the receiving extraterrestrials (ETs) will first encounter a physical phenomenon and only after that perceive the information. Initially, a receiving system will detect the radio signal; then the issues of extracting the received information and comprehending the obtained message will arise. Therefore, above all, the constructor of an IRM should be concerned with the ease of signal determination. In other words, the signal should have maximum openness, which is understood here as an antonym of the term security. This branch of signal synthesis is termed anticryptography.

To this end, in 2010, Michael W. Busch created a general-purpose binary language, later used in the Lone Signal project to transmit crowdsourced messages to extraterrestrial intelligence. Busch developed the coding scheme and provided Rachel M. Reddick with a test message, in a blind test of decryption. Reddick decoded the entire message after approximately twelve hours of work. This was followed by an attempt to extend the syntax used in the Lone Signal hailing message to communicate in a way that, while neither mathematical nor strictly logical, was nonetheless understandable given the prior definition of terms and concepts in the hailing message.

In addition, characteristics of the radio signal, such as wavelength, type of polarization, and modulation are considered. Over galactic distances, the interstellar medium induces some scintillation effects and artificial modulation of electromagnetic signals. This modulation is higher at lower frequencies and is a function of the sky direction. Over large distances, the depth of the modulation can exceed 100%, making any METI signal very difficult to decode.

Error correction

In METI research, any message must have some redundancy, although the exact amount of redundancy and the message formats are still in great dispute. Using ideograms instead of a binary sequence already offers some improvement in noise resistance. In fax-like transmissions, ideograms are spread over many lines, which increases their resistance to short bursts of noise such as radio frequency interference or interstellar scintillation, as the sketch below illustrates.
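
The benefit of spreading ideograms over many lines can be shown with a small sketch (the rows and burst length are invented for illustration): interleaving the transmission means a burst of consecutive corrupted bits damages each row only slightly instead of wiping out one row entirely.

```python
def interleave(rows):
    """Transmit column by column instead of row by row."""
    return [rows[r][c] for c in range(len(rows[0])) for r in range(len(rows))]

def deinterleave(stream, n_rows):
    n_cols = len(stream) // n_rows
    return [[stream[c * n_rows + r] for c in range(n_cols)] for r in range(n_rows)]

rows = [[1] * 16, [0] * 16, [1, 0] * 8]   # three 16-bit "lines" of an ideogram
stream = interleave(rows)

# A burst of radio-frequency interference flips 6 consecutive transmitted bits.
corrupted = stream[:]
for i in range(10, 16):
    corrupted[i] ^= 1

damaged = deinterleave(corrupted, n_rows=3)
for original, received in zip(rows, damaged):
    errors = sum(a != b for a, b in zip(original, received))
    print(f"{errors} errors in this row")   # 2 errors per row, not 6 in one row
```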

One format approach proposed for interstellar messages was to use the product of two prime numbers to construct an image. Unfortunately, this method works only if all the bits are present. As an example, the message sent by Frank Drake from the Arecibo Observatory in 1974 did not have any feature to support mechanisms to cope with the inevitable noise degradation of the interstellar medium.
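
The idea behind the prime-product format is easy to sketch: 1,679 factors only as 23 × 73, so a receiver who counts the bits is nudged toward arranging them in a 23-wide (or 73-wide) grid. The toy fragment below (random bits, not the real Arecibo bitstream) shows the arrangement step and why a single lost bit breaks it:

```python
import random

def arrange(bits, width):
    """Wrap a bit sequence into rows of `width` bits."""
    return [bits[i:i + width] for i in range(0, len(bits), width)]

# A toy 1,679-bit message: 1,679 = 23 x 73, the semiprime trick used by
# the 1974 Arecibo message (these bits are random, not the real message).
random.seed(1)
bits = [random.randint(0, 1) for _ in range(1679)]

grid = arrange(bits, width=23)
print(len(grid), "rows of", len(grid[0]), "bits")        # 73 rows of 23 bits

# Drop a single bit in transit: the count no longer factors as 23 x 73,
# and every row after the gap is shifted, scrambling the picture.
damaged = bits[:1000] + bits[1001:]
print(len(damaged), "bits;", len(damaged) % 23, "bits left over in the last row")
```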

Error correction tolerance rates for previous METI messages are 9% (one page) for the 1974 Arecibo Message, 44% (23 separate pages) for the 1999 Evpatoria Message, and 46% (one page, estimated) for the 2003 Evpatoria Message.

Examples

The 1999 Cosmic Call transmission was far from optimal (from a terrestrial viewpoint), as it was essentially a monochromatic signal spiced with supplementary information. Additionally, the message had a very small modulation index overall, a condition not viewed as optimal for interstellar communication. Of the 370,967 bits (46,371 bytes) sent, some 314,239 were "0" and 56,768 were "1", about 5.54 times as many 0's as 1's. Since a frequency-shift keying modulation scheme was used, the signal spent most of the time on the "0" frequency. In addition, "0" tended to be sent in long stretches, which appeared as white lines in the message.

Realized projects

The projects below have targeted stars between 17 and 69 light-years from Earth. The exception is the Arecibo message, which targeted the globular cluster M13, approximately 24,000 light-years away. The first interstellar message to reach its destination was the Altair (Morimoto-Hirabayashi) Message, which likely reached its target in 1999.

Transmissions

Below is a table of messages sent and target/destination stars, ordered chronologically by date of sending:

Name | Designation | Constellation | Date sent (YYYY-MM-DD) | Arrival date (YYYY-MM-DD) | Message
Messier 13 | NGC 6205 | Hercules | 1974-11-16 | ~27,000 | Arecibo Message
Altair | Alpha Aql | Aquila | 1983-08-15 | 2017 | Altair (Morimoto-Hirabayashi) Message
 | | Libra | 1995 | | NASDA Cosmic-College
 | | | 1996 | | NASDA Cosmic-College
Spica | Alpha Vir | Virgo | 1997-08 | 2247 | NASDA Cosmic-College
 | | | 1998 | | NASDA Cosmic-College
16 Cyg A | HD 186408 | Cygnus | 1999-05-24 | 2069-11 | Cosmic Call 1
15 Sge | HD 190406 | Sagitta | 1999-06-30 | 2057-02 | Cosmic Call 1
 | HD 178428 | | | 2067-10 | Cosmic Call 1
Gl 777 | HD 190360 | Cygnus | 1999-07-01 | 2051-04 | Cosmic Call 1
 | HD 197076 | Delphinus | 2001-08-29 | 2070-02 | Teen Age Message
47 UMa | HD 95128 | Ursa Major | 2001-09-03 | 2047-07 | Teen Age Message
37 Gem | HD 50692 | Gemini | | 2057-12 | Teen Age Message
 | HD 126053 | Virgo | 2001-09 | 2059-01 | Teen Age Message
 | HD 76151 | Hydra | 2001-09-04 | 2057-05 | Teen Age Message
 | HD 193664 | Draco | | 2059-01 | Teen Age Message
 | HIP 4872 | Cassiopeia | 2003-07-06 | 2036-04 | Cosmic Call 2
 | HD 245409 | Orion | | 2040-08 | Cosmic Call 2
55 Cnc | HD 75732 | Cancer | | 2044-05 | Cosmic Call 2
 | HD 10307 | Andromeda | | 2044-09 | Cosmic Call 2
47 UMa | HD 95128 | Ursa Major | | 2049-05 | Cosmic Call 2
Polaris | HIP 11767 | Ursa Minor | 2008-02-04 | 2439 | Across the Universe
Gliese 581 | HIP 74995 | Libra | 2008-10-09 | 2029 | A Message From Earth
Gliese 581 | HIP 74995 | Libra | 2009-08-28 | 2030 | Hello From Earth
GJ 83.1 | GJ 83.1 | Aries | 2009-11-07 | 2024 | RuBisCo Stars
Teegarden's Star | SO J025300.5+165258 | | | 2022 | RuBisCo Stars
Kappa1 Ceti | GJ 137 | Cetus | | 2039 | RuBisCo Stars
 | HIP 34511 | Gemini | 2012-08-15 | 2163 | Wow! Reply
37 Gem | HD 50692 | | | 2069 | Wow! Reply
55 Cnc | HD 75732 | Cancer | | 2053 | Wow! Reply
GJ 526 | HD 119850 | Boötes | 2013-07-10 | 2031 | Lone Signal
55 Cnc | HD 75732 | Cancer | 2013-09-22 | 2053 | JAXA Space Camp (UDSC-1)
55 Cnc | HD 75732 | Cancer | 2014-08-23 | 2054 | JAXA Space Camp (UDSC-2)
Polaris | HIP 11767 | Ursa Minor | 2016-10-10 | 2450 | A Simple Response to an Elemental Message
Luyten's Star | HIP 36208 | Canis Minor | 2017-10-16 | 2030-03 | Sónar Calling GJ273b

Controversy

Whether or not to conduct Active SETI, as well as the tone of any message, is a highly controversial topic. Active SETI has primarily been criticized due to the perceived risk of revealing the location of Earth to alien civilizations without some process of prior international consultation. That is, Active SETI does not meet the criteria for informed consent in a mass experiment involving human subjects and, potentially, nonhuman sentient subjects.

Active SETI is discussed in terms of the ethics of space policy. Issues include whether to send belligerent versus defensive messages, cosmopolitanism, communicative burden, consensus, messaging content, proscriptions on premature messaging, responsibility, and shared values, with concerns that even if successful, humanity could be reduced to a cargo cult. David Brin has urged extensive international consultation before any METI activities and has challenged key rationalizations for Active SETI (METI), such as the "barn door" argument (unintentional "leaked" signals are millions of times weaker than intentional METI signals), the dismissal of the precautionary principle (which requires extreme precaution, as in the handling of extraterrestrial samples, even without any known example of the risk), and the treatment of METI as prayer-like, which disregards the issue of informed consent from other people. Notable among METI's critics was Stephen Hawking, who in his book A Brief History of Time suggested that "alerting" extraterrestrial intelligences to our existence is foolhardy, citing humankind's history of treating its own kind harshly in meetings of civilizations with a significant technology gap, such as the extermination of the Tasmanian Aborigines. He suggested, in view of this history, that we "lay low". Similarly, Liu Cixin's The Three-Body Problem trilogy of novels highlights the potential dangers of METI.

However, some scientists consider these fears about the dangers of METI to be panic and irrational superstition; Russian and Soviet radio engineer and astronomer Alexander L. Zaitsev has argued against these concerns. Zaitsev argues that we should also consider the risks of not attempting to contact extraterrestrial civilizations, since the knowledge and wisdom an ETI could impart might save us from humanity's self-destructive tendencies. Similarly, in a March 2015 essay, astronomer Seth Shostak considered the risk and concluded by stressing that any danger was hypothetical and that humanity would be better off risking contact than "endlessly tremble at the sight of the stars".

Astronomer Jill Tarter also disagrees with Hawking, arguing that aliens developed and long-lived enough to communicate and travel across interstellar distances would have evolved a cooperative and less violent intelligence. However, she thinks it is too soon for humans to attempt Active SETI, and that humans should first become more advanced technologically while continuing to listen in the meantime.

Example of a high-resolution pictorial message to potential ETI at Proxima Centauri. Such messages usually contain information about the location of the Solar System in the Milky Way.

To lend a quantitative basis to discussions of the risks of transmitting deliberate messages from Earth, the SETI Permanent Study Group of the International Academy of Astronautics adopted in 2007 a new analytical tool, the San Marino Scale. Developed by Prof. Ivan Almar and Prof. H. Paul Shuch, the San Marino Scale evaluates the significance of transmissions from Earth as a function of signal intensity and information content. Its adoption suggests that not all such transmissions are created equal, thus each must be evaluated separately before establishing a blanket international policy regarding Active SETI.

In 2012, Jacob Haqq-Misra, Michael Busch, Sanjoy Som, and Seth Baum argued that while the benefits of radio communication on Earth likely outweigh the potential harms of detection by extraterrestrial watchers, the uncertainty regarding the outcome of contact with extraterrestrial beings creates difficulty in assessing whether or not to engage in long-term and large-scale METI.

In 2015, in the context of the Zoo Hypothesis, biologist João Pedro de Magalhães proposed transmitting an invitation message to any extraterrestrial intelligences watching us already and inviting them to respond, arguing this would not put us in any more danger than we are already if the Zoo Hypothesis is correct.

Douglas Vakoch, president of METI, argues that passive SETI itself is already an endorsement of active SETI, since "If we detect a signal from aliens through a SETI program, there's no way to prevent a cacophony of responses from Earth."

In the context of potentially detected extraterrestrial activity on Earth, physicist Mark Buchanan argued that humanity needs to determine whether it would be safe or wise to attempt to communicate with extraterrestrials and work on ways to handle such attempts in an organized manner.

Beacon proposals

One proposal for a 10 billion watt interstellar SETI beacon was dismissed by Robert A. Freitas Jr. as being infeasible for a pre-Type I civilization, such as humanity, on the Kardashev scale. However, this 1980s technical argument assumes omni-directional beacons, which may not be the best way to proceed on many technical grounds. Advances in consumer electronics have made possible transmitters that simultaneously transmit many narrow beams, covering the million or so nearest stars but not the spaces between. This multibeam approach can reduce the power and cost to levels that are reasonable with early 21st century Earth technology.

Once civilizations have discovered each other's locations, the energy requirements for maintaining contact and exchanging information can be significantly reduced through the use of highly directional transmission technologies. To this end, a 2018 study estimated a 1-2 megawatt infrared laser focused through a 30-45 meter telescope could be seen from about 20,000 light years away.
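
The scale of that estimate can be checked with a rough, diffraction-limited link budget. The sketch below is a back-of-the-envelope calculation with assumed values (2 MW, 1 µm wavelength, a 45 m transmitting telescope, and a hypothetical 10 m receiving dish), not the 2018 study's actual model:

```python
import math

# Assumed values for a rough check (not taken from the study itself).
power_w = 2e6                 # 2 MW laser
wavelength_m = 1e-6           # near-infrared
aperture_m = 45.0             # transmitting telescope diameter
distance_ly = 20_000
distance_m = distance_ly * 9.461e15

# Diffraction-limited half-angle beam divergence and spot radius at the target.
theta = 1.22 * wavelength_m / aperture_m
spot_radius_m = theta * distance_m
flux = power_w / (math.pi * spot_radius_m**2)           # W per m^2 at the target

photon_energy = 6.626e-34 * 3e8 / wavelength_m          # ~2e-19 J per photon
collector_area = math.pi * (10 / 2)**2                  # a 10 m receiving dish

print(f"flux ~ {flux:.1e} W/m^2")
print(f"~ {flux * collector_area / photon_energy:.1f} photons/s on a 10 m dish")
```

The result is on the order of a handful of photons per second concentrated in a very narrow spectral line, which is roughly the regime such beacon proposals rely on for detectability against the host star.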

Ufology

From Wikipedia, the free encyclopedia

Ufology (/juːˈfɒlədʒi/ yoo-FOL-ə-jee) is the investigation of unidentified flying objects (UFOs) by people who believe that they may be of extraordinary origins (most frequently of extraterrestrial alien visitors). While there are instances of government, private, and fringe science investigations of UFOs, ufology is generally regarded by skeptics and science educators as a canonical example of pseudoscience.

Etymology

Ufology is a neologism derived by appending the suffix -logy (from the Ancient Greek -λογία (-logia)) to the acronym UFO (a term apparently coined by Edward J. Ruppelt). Early uses of ufology include an article in Fantastic Universe (1957) and a 1958 presentation for the UFO "research organization" The Planetary Center.

Historical background

A Swedish Air Force officer searches for a "ghost rocket" in Lake Kölmjärv, Norrland, Sweden, in July 1946.

The roots of ufology include the "mystery airships" of the late 1890s, the "foo fighters" reported by Allied airmen during World War II, the "ghost fliers" of Europe and North America during the 1930s, the "ghost rockets" of Scandinavia (mostly Sweden) in 1946, and the Kenneth Arnold "flying saucer" sighting of 1947. Media attention to the Arnold sighting helped publicize the concept of flying saucers.

Publicity of UFOs increased after World War II, coinciding with the escalation of the Cold War and strategic concerns related to the development and detection (e.g., the Ground Observer Corps) of advanced Soviet aircraft. Official, government-sponsored activities in the United States related to ufology ended in the late 1960s following the Condon Committee report and the termination of Project Blue Book. Government-sponsored, UFO-related activities in other countries, including the United Kingdom, Canada, Denmark, Italy, and Sweden also ended. An exception to this trend is France, which maintains the GEIPAN program, formerly known as GEPAN (1977–1988) and SEPRA (1988–2004), operated by the French Space Agency CNES.

As a field

Status as a pseudoscience

Despite investigations sponsored by governments and private entities, ufology is not embraced by academia as a scientific field of study and is instead generally considered a pseudoscience by skeptics and science educators, often appearing on lists of topics characterized as partly or wholly pseudoscientific. Pseudoscience is a term for arguments that are claimed to exemplify the methods and principles of science but that do not in fact adhere to an appropriate scientific method, lack supporting evidence or plausibility, are not falsifiable, or otherwise lack scientific status.

Some writers have identified social factors that contribute to the status of ufology as a pseudoscience, with one study suggesting that "any science doubt surrounding unidentified flying objects and aliens was not primarily due to the ignorance of ufologists about science, but rather a product of the respective research practices of and relations between ufology, the sciences, and government investigative bodies". One study suggests that "the rudimentary standard of science communication attending to the extraterrestrial intelligence (ETI) hypothesis for UFOs inhibits public understanding of science, dissuades academic inquiry within the physical and social sciences, and undermines progressive space policy initiatives".

Current interest

In 2021, astronomer Avi Loeb launched The Galileo Project which intends to collect and report scientific evidence of extraterrestrials or extraterrestrial technology on or near Earth via telescopic observations.

In Germany, the University of Würzburg is developing intelligent sensors that can help detect and analyze aerial objects in hopes of applying such technology to UAP.

A 2021 Gallup poll found that belief among Americans in some UFOs being extraterrestrial spacecraft grew between 2019 and 2021 from 33% to 41%. Gallup cited increased coverage in mainstream news and scrutiny from government authorities as a factor in changing attitudes towards UFOs.

In 2022, NASA announced a nine-month study starting in fall to help establish a road map for investigating UAP – or for reconnaissance of the publicly available data it might use for such research.

In 2023, the RAND Corporation published a study reviewing 101,151 public reports of UAP sightings in the United States from 1998 to 2022. The models used to conduct the analysis showed that reports of UAP sightings were less likely within 30 km of weather stations, within 60 km of civilian airports, and in more densely populated areas, while rural areas tended to have a higher rate of UAP reports. The most consistent and statistically significant finding was that reports of UAP sightings were more likely to occur within 30 km of military operations areas, where routine military training occurs.

Methodological issues

Although some ufologists (e.g., Peter A. Sturrock) have proposed explicit methodological activities for investigation of UFOs, scientific UFO research is challenged by the facts that the phenomena are spatially and temporally unpredictable, are not reproducible, and lack tangible physicality. That most UFO sightings have mundane explanations limits interpretive power of "interesting," extraordinary UFO-related events, with the astronomer Carl Sagan writing: "The reliable cases are uninteresting and the interesting cases are unreliable. Unfortunately there are no cases that are both reliable and interesting."

Josef Allen Hynek (left) and Jacques Vallée

The ufologists J. Allen Hynek and Jacques Vallée have each developed descriptive systems for characterizing UFO sightings, and by extension for organizing ufology investigations.

Phenomena linked to ufology

In addition to UFO sightings, certain supposedly related phenomena are of interest to some ufologists, including crop circles, cattle mutilations, anomalous materials, alien abductions and implants.

Some ufologists have also promoted UFO conspiracy theories, including those surrounding the Roswell incident of 1947 and the Majestic 12 documents, and claims of government cover-ups advanced by UFO disclosure advocates.

Skeptic Robert Sheaffer has accused ufology of having a "credulity explosion," writing that, "the kind of stories generating excitement and attention in any given year would have been rejected by mainstream ufologists a few years earlier for being too outlandish." The physicist James E. McDonald also identified "cultism" and "extreme...subgroups" as negatively impacting ufology.

In Posadism

During the Cold War, ufology was synthesized with the ideas of a Trotskyist movement in South America known as Posadism. Posadism's main theorist, Juan Posadas, believed the human race must "appeal to the beings on other planets...to intervene and collaborate with Earth's inhabitants in suppressing poverty;" i.e., Posadas wished to collaborate with extraterrestrials in order to create a socialist system on Earth. The adoption of this belief among Posadists, who had previously been a significant political force in South America, has been noted as a contributing factor in their decline.

Governmental and private ufology studies

Starting in the 1940s, investigations, studies, and conferences related to ufology were sponsored by governmental agencies and private groups. Typically motivated by visual UFO sightings, the goals of these studies included critical evaluation of the observational evidence, attempts to resolve and identify the observed events, and the development of policy recommendations. These studies include Project Sign, Project Magnet, Project Blue Book, the Robertson Panel, and the Condon Committee in the United States, the Flying Saucer Working Party and Project Condign in Britain, GEIPAN in France, and Project Hessdalen in Norway. Private studies of UFO phenomena include those produced by the RAND Corporation in 1968, Harvey Rutledge of the University of Missouri from 1973 to 1980, and the National Press Club's Disclosure Project in 2001. Additionally, the United Nations from 1977 to 1979 sponsored meetings and hearings concerning UFO sightings. In August 2020, the United States Department of Defense established the Unidentified Aerial Phenomena Task Force to detect, analyze and catalog unidentified aerial phenomena that could potentially pose a threat to U.S. national security.

UFO organizations and events

A large number of private organizations dedicated to the study, discussion, and publicity of ufology and other UFO-related topics exist throughout the world, including the United States, the United Kingdom, Australia, and Switzerland. Along with such "pro-UFO" groups are skeptic organizations that emphasize the pseudoscientific nature of ufology.

During the annual World UFO Day (July 2), ufologists and associated organizations raise public awareness of ufology, in an effort to "tell the truth about earthly visits from outer space aliens." The day's events include group gatherings to search for and observe UFOs.

Chatbot

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Chatbot
A virtual assistant chatbot
The 1966 ELIZA chatbot

A chatbot (originally chatterbot) is a software application or web interface that aims to mimic human conversation through text or voice interactions. Modern chatbots are typically online and use artificial intelligence (AI) systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such technologies often utilize aspects of deep learning and natural language processing, but more simplistic chatbots have been around for decades prior.

Recently, this field has gained widespread attention due to the popularity of OpenAI's ChatGPT (using GPT-3 or GPT-4), released in 2022, followed by alternatives such as Microsoft's Bing Chat (which uses OpenAI's GPT-4) and Google's Bard. Such examples reflect the recent practice of such products being built based upon broad foundational large language models that get fine-tuned so as to target specific tasks or applications (i.e. simulating human conversation, in the case of chatbots). Chatbots can also be designed or customized to further target even more specific situations and/or particular subject-matter domains.

A major area where chatbots have long been used is in customer service and support, such as with various sorts of virtual assistants. Companies spanning various industries have begun using the latest generative artificial intelligence technologies to power more advanced developments in such areas.

Background

In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published, which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge to the extent that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise:

In artificial intelligence, machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained, its magic crumbles away; it stands revealed as a mere collection of procedures. The observer says to himself "I could have written that". With that thought, he moves the program in question from the shelf marked "intelligent", to that reserved for curios. The object of this paper is to cause just such a re-evaluation of the program about to be "explained". Few programs ever needed it more.

ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of the corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY'). Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
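
A minimal sketch of this keyword-and-template technique, written in the spirit of ELIZA (the rules below are invented for illustration and are not Weizenbaum's original DOCTOR script):

```python
import re
import random

# Keyword patterns paired with canned response templates; captured text from
# the user's input is substituted into the reply.
RULES = [
    (re.compile(r"\bmother\b|\bfather\b|\bfamily\b", re.I),
     ["TELL ME MORE ABOUT YOUR FAMILY."]),
    (re.compile(r"\bI feel (.+)", re.I),
     ["WHY DO YOU FEEL {0}?", "DO YOU OFTEN FEEL {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["HOW LONG HAVE YOU BEEN {0}?"]),
]
FALLBACKS = ["PLEASE GO ON.", "CAN YOU ELABORATE ON THAT?"]

def reply(user_input: str) -> str:
    """Return the first matching template, filling in any captured text."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            template = random.choice(templates)
            return template.format(*match.groups())
    return random.choice(FALLBACKS)

print(reply("I feel anxious about my mother"))
# The "family" rule fires first: TELL ME MORE ABOUT YOUR FAMILY.
```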

Interface designers have come to appreciate that humans' readiness to interpret computer output as genuinely conversational—even when it is actually based on rather simple pattern-matching—can be exploited for useful purposes. Most people prefer to engage with programs that are human-like, and this gives chatbot-style techniques a potentially useful role in interactive systems that need to elicit information from users, as long as that information is relatively straightforward and falls into predictable categories. Thus, for example, online help systems can usefully employ chatbot techniques to identify the area of help that users require, potentially providing a "friendlier" interface than a more formal search or menu system. This sort of usage holds the prospect of moving chatbot technology from Weizenbaum's "shelf ... reserved for curios" to that marked "genuinely useful computational methods".

Development

Among the most notable early chatbots are ELIZA (1966) and PARRY (1972). More recent notable programs include A.L.I.C.E., Jabberwacky and D.U.D.E (Agence Nationale de la Recherche and CNRS 2006). While ELIZA and PARRY were used exclusively to simulate typed conversation, many chatbots now include other functional features, such as games and web searching abilities. In 1984, a book called The Policeman's Beard is Half Constructed was published, allegedly written by the chatbot Racter (though the program as released would not have been capable of doing so).

From 1978 to some time after 1983, the CYRUS project led by Janet Kolodner constructed a chatbot simulating Cyrus Vance (57th United States Secretary of State). It used case-based reasoning and updated its database daily by parsing wire news from United Press International. The program was unable to process news items following the surprise resignation of Cyrus Vance in April 1980, so the team constructed another chatbot simulating his successor, Edmund Muskie.

One pertinent field of AI research is natural-language processing. Usually, weak AI fields employ specialized software or programming languages created specifically for the narrow function required. For example, A.L.I.C.E. uses a markup language called AIML, which is specific to its function as a conversational agent and has since been adopted by various other developers of so-called Alicebots. Nevertheless, A.L.I.C.E. is still based purely on pattern-matching techniques without any reasoning capabilities, the same technique ELIZA was using back in 1966. This is not strong AI, which would require sapience and logical reasoning abilities.

Jabberwacky learns new responses and context based on real-time user interactions, rather than being driven from a static database. Some more recent chatbots also combine real-time learning with evolutionary algorithms that optimize their ability to communicate based on each conversation held. Still, there is currently no general purpose conversational artificial intelligence, and some software developers focus on the practical aspect, information retrieval.

Chatbot competitions focus on the Turing test or more specific goals. Two such annual contests are the Loebner Prize and The Chatterbox Challenge (the latter has been offline since 2015, however, materials can still be found from web archives).

Chatbots may use neural networks as a language model. For example, generative pre-trained transformers (GPT), which use the transformer architecture, have become common to build sophisticated chatbots. The "pre-training" in its name refers to the initial training process on a large text corpus, which provides a solid foundation for the model to perform well on downstream tasks with limited amounts of task-specific data. An example of a GPT chatbot is ChatGPT. Despite criticism of its accuracy, ChatGPT has gained attention for its detailed responses and historical knowledge. Another example is BioGPT, developed by Microsoft, which focuses on answering biomedical questions.
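
As a sketch of how such a model can be driven in code, the fragment below uses the Hugging Face transformers library's text-generation pipeline with a small openly available GPT-style model; the model choice, prompt format, and generation settings are assumptions made for illustration, and a production chatbot would add an instruction-tuned model, conversation history, and safety filtering on top:

```python
# pip install transformers torch
from transformers import pipeline

# A small GPT-style causal language model, used here purely for illustration.
generator = pipeline("text-generation", model="distilgpt2")

prompt = ("The following is a conversation with a helpful assistant.\n"
          "User: What does SETI@home do?\n"
          "Assistant:")

result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```

A raw base model like this will not produce ChatGPT-quality replies; fine-tuning on conversational data and careful management of the dialogue context are what turn such a pipeline into a usable chatbot.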

DBpedia created a chatbot during the GSoC of 2017. It can communicate through Facebook Messenger.

Application

Messaging apps

Many companies' chatbots run on messaging apps or simply via SMS. They are used for B2C customer service, sales and marketing.

In 2016, Facebook Messenger allowed developers to place chatbots on their platform. There were 30,000 bots created for Messenger in the first six months, rising to 100,000 by September 2017.

Since September 2017, this has also been available as part of a pilot program on WhatsApp. The airlines KLM and Aeroméxico both announced their participation in the testing; both had previously launched customer services on the Facebook Messenger platform.

The bots usually appear as one of the user's contacts, but can sometimes act as participants in a group chat.

Many banks, insurers, media companies, e-commerce companies, airlines, hotel chains, retailers, health care providers, government entities and restaurant chains have used chatbots to answer simple questions, increase customer engagement, for promotion, and to offer additional ways to order from them.

A 2017 study showed 4% of companies used chatbots. According to a 2016 study, 80% of businesses said they intended to have one by 2020.

As part of company apps and websites

Previous generations of chatbots were present on company websites, e.g. Ask Jenn from Alaska Airlines which debuted in 2008 or Expedia's virtual customer service agent which launched in 2011. The newer generation of chatbots includes IBM Watson-powered "Rocky", introduced in February 2017 by the New York City-based e-commerce company Rare Carat to provide information to prospective diamond buyers.

Chatbot sequences

Chatbot sequences are used by marketers to script series of messages, very similar to an autoresponder sequence. Such sequences can be triggered by user opt-in or by the use of keywords within user interactions. After a trigger occurs, a sequence of messages is delivered until the next anticipated user response. Each user response is used in the decision tree to help the chatbot navigate the response sequences and deliver the correct response message.
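
A minimal sketch of such a scripted sequence (the triggers, messages, and branching below are invented for illustration):

```python
# Each node sends a message, then routes on the user's next reply.
# This mirrors the decision-tree style of marketing chatbot sequences.
SEQUENCE = {
    "start":   {"message": "Hi! Want to hear about our newsletter? (yes/no)",
                "routes": {"yes": "pitch", "no": "goodbye"}},
    "pitch":   {"message": "Great! Reply SUBSCRIBE to join, or STOP to opt out.",
                "routes": {"subscribe": "confirm", "stop": "goodbye"}},
    "confirm": {"message": "You're subscribed. Welcome aboard!", "routes": {}},
    "goodbye": {"message": "No problem, have a nice day.", "routes": {}},
}

def run_sequence(replies):
    """Walk the tree, consuming one user reply per step; unrecognized
    replies simply repeat the current node's message."""
    node = "start"
    transcript = [SEQUENCE[node]["message"]]
    for reply in replies:
        routes = SEQUENCE[node]["routes"]
        node = routes.get(reply.strip().lower(), node)
        transcript.append(SEQUENCE[node]["message"])
        if not SEQUENCE[node]["routes"]:
            break
    return transcript

print(run_sequence(["yes", "subscribe"]))
```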

Company internal platforms

Other companies explore ways they can use chatbots internally, for example for Customer Support, Human Resources, or even in Internet-of-Things (IoT) projects. Overstock.com, for one, has reportedly launched a chatbot named Mila to automate certain simple yet time-consuming processes when requesting sick leave. Other large companies such as Lloyds Banking Group, Royal Bank of Scotland, Renault and Citroën are now using automated online assistants instead of call centres with humans to provide a first point of contact. A SaaS chatbot business ecosystem has been steadily growing since the F8 Conference when Facebook's Mark Zuckerberg unveiled that Messenger would allow chatbots into the app. In large companies, like in hospitals and aviation organizations, IT architects are designing reference architectures for Intelligent Chatbots that are used to unlock and share knowledge and experience in the organization more efficiently, and reduce the errors in answers from expert service desks significantly. These Intelligent Chatbots make use of all kinds of artificial intelligence like image moderation and natural-language understanding (NLU), natural-language generation (NLG), machine learning and deep learning.

Customer service

Many high-tech banking organizations are looking to integrate automated AI-based solutions such as chatbots into their customer service in order to provide faster and cheaper assistance to their clients who are becoming increasingly comfortable with technology. In particular, chatbots can efficiently conduct a dialogue, usually replacing other communication tools such as email, phone, or SMS. In banking, their major application is related to quick customer service answering common requests, as well as transactional support.

Several studies report significant reductions in the cost of customer service, expected to lead to billions of dollars in economic savings over the next ten years. In 2019, Gartner predicted that by 2021, 15% of all customer service interactions globally would be handled completely by AI. A 2019 study by Juniper Research estimated that retail sales resulting from chatbot-based interactions would reach $112 billion by 2023.

Since 2016, when Facebook allowed businesses to deliver automated customer support, e-commerce guidance, content, and interactive experiences through chatbots, a large variety of chatbots were developed for the Facebook Messenger platform.

In 2016, Russia-based Tochka Bank launched the world's first Facebook bot for a range of financial services, including a possibility of making payments.

In July 2016, Barclays Africa also launched a Facebook chatbot, making it the first bank to do so in Africa.

Société Générale, France's third-largest bank by total assets, launched its chatbot SoBot in March 2018. While 80% of SoBot's users expressed satisfaction after having tested it, Société Générale deputy director Bertrand Cozzarolo stated that it would never replace the expertise provided by a human advisor.

The advantages of using chatbots for customer interactions in banking include cost reduction, financial advice, and 24/7 support.

Healthcare

Chatbots are also appearing in the healthcare industry. A study suggested that physicians in the United States believed that chatbots would be most beneficial for scheduling doctor appointments, locating health clinics, or providing medication information.

WhatsApp has teamed up with the World Health Organization (WHO) to make a chatbot service that answers users' questions on COVID-19.

In 2020, the Indian government launched a chatbot called MyGov Corona Helpdesk, which worked through WhatsApp and helped people access information about the COVID-19 pandemic.

Certain patient groups are still reluctant to use chatbots. A mixed-methods study showed that people are still hesitant to use chatbots for their healthcare due to poor understanding of the technological complexity, the lack of empathy, and concerns about cyber-security. The analysis showed that while 6% had heard of a health chatbot and 3% had experience using one, 67% perceived themselves as likely to use one within 12 months. The majority of participants would use a health chatbot for seeking general health information (78%), booking a medical appointment (78%), and looking for local health services (80%). However, a health chatbot was perceived as less suitable for seeking the results of medical tests or for specialist advice such as sexual health advice. The analysis of attitudinal variables showed that most participants reported a preference for discussing their health with doctors (73%) and for having access to reliable and accurate health information (93%). While 80% were curious about new technologies that could improve their health, 66% reported only seeking a doctor when experiencing a health problem and 65% thought that a chatbot was a good idea. 30% reported a dislike of talking to computers, 41% felt it would be strange to discuss health matters with a chatbot, and about half were unsure whether they could trust the advice given by a chatbot. Perceived trustworthiness, individual attitudes towards bots, and dislike of talking to computers therefore appear to be the main barriers to health chatbots.

Politics

In New Zealand, the chatbot SAM – short for Semantic Analysis Machine (made by Nick Gerritsen of Touchtech) – has been developed. It is designed to share its political thoughts on topics such as climate change, healthcare and education. It talks to people through Facebook Messenger.

In 2022, the chatbot "Leader Lars" ("Leder Lars" in Danish), built by the artist collective Computer Lars, was nominated by The Synthetic Party to run in the Danish parliamentary election. Leader Lars differed from earlier virtual politicians by leading a political party and by not pretending to be an objective candidate. The chatbot engaged in critical discussions on politics with users from around the world.

In India, the state government of Maharashtra has launched a chatbot for its Aaple Sarkar platform, which provides conversational access to information about the public services it manages.

Toys

Chatbots have also been incorporated into devices not primarily meant for computing, such as toys.

Hello Barbie is an Internet-connected version of the doll that uses a chatbot provided by the company ToyTalk, which previously used the chatbot for a range of smartphone-based characters for children. These characters' behaviors are constrained by a set of rules that in effect emulate a particular character and produce a storyline.

My Friend Cayla was marketed as a line of 18-inch (46 cm) dolls that use speech recognition technology in conjunction with an Android or iOS mobile app to recognize the child's speech and hold a conversation. Like the Hello Barbie doll, it attracted controversy due to vulnerabilities in the doll's Bluetooth stack and its use of data collected from the child's speech.

IBM's Watson computer has been used as the basis for chatbot-based educational toys by companies such as CogniToys, which are intended to interact with children for educational purposes.

Malicious use

Malicious chatbots are frequently used to fill chat rooms with spam and advertisements by mimicking human behavior and conversations, or to entice people into revealing personal information such as bank account numbers. They were commonly found on Yahoo! Messenger, Windows Live Messenger, AOL Instant Messenger and other instant messaging protocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website.

Tay, an AI chatbot designed to learn from previous interactions, caused major controversy after it was targeted by internet trolls on Twitter. The bot was exploited and, after 16 hours, began to send extremely offensive tweets to users. This suggests that although the bot learned effectively from experience, adequate protection was not put in place to prevent misuse.

If a text-sending algorithm can pass itself off as a human instead of a chatbot, its message will seem more credible. Human-seeming chatbots with well-crafted online identities could therefore start spreading fake news that seems plausible, for instance making false claims during an election. With enough chatbots, it might even be possible to achieve artificial social proof.

Limitations of chatbots

The creation and implementation of chatbots is still a developing area, heavily tied to artificial intelligence and machine learning, so while existing solutions offer obvious advantages, they also have important limitations in terms of functionality and use cases. However, this is changing over time.

The most common limitations are listed below:

  • Because the input/output database is fixed and limited, chatbots can fail when dealing with a query they have not been trained to handle (see the sketch after this list).
  • A chatbot's efficiency depends heavily on language processing and is limited by irregularities such as accents and spelling mistakes.
  • Chatbots are unable to deal with multiple questions at the same time and so conversation opportunities are limited.
  • Chatbots require a large amount of conversational data to train. Generative models, which are based on deep learning algorithms to generate new responses word by word based on user input, are usually trained on a large dataset of natural-language phrases.
  • Chatbots have difficulty managing non-linear conversations that must go back and forth on a topic with a user.
  • As often happens with technology-led changes to existing services, some consumers, more often than not from older generations, are uncomfortable with chatbots because of their limited understanding, which makes it obvious that requests are being handled by a machine.
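The first limitation above can be made concrete with a small example. The sketch below shows a retrieval-style bot with a fixed response table; the table entries, queries, and contact address are hypothetical and only illustrate why a query outside the stored patterns forces a fallback.

```python
# Minimal sketch of the "unsaved query" limitation: a bot with a fixed
# response table can only answer questions it already stores.
# The table, queries, and contact address are hypothetical examples.

RESPONSES = {
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "reset password": "Use the 'Forgot password' link on the login page.",
    "contact support": "You can reach support at support@example.com.",
}

FALLBACK = "Sorry, I don't know the answer to that. Let me transfer you to a human agent."


def answer(query: str) -> str:
    """Return the stored response whose key appears in the query, else the fallback."""
    text = query.lower()
    for key, response in RESPONSES.items():
        if key in text:
            return response
    return FALLBACK  # unsaved query: the bot cannot improvise an answer


print(answer("What are your opening hours?"))    # matched: stored reply
print(answer("Can I pay my invoice in euros?"))  # unmatched: falls back to a human
```

Generative models sidestep the fixed table by producing responses word by word, but, as noted in the list above, they need large conversational datasets to train and bring their own failure modes.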

Chatbots and jobs

Chatbots are increasingly present in businesses and are often used to automate tasks that do not require specialized skills. With customer service taking place via messaging apps as well as phone calls, there is a growing number of use cases where chatbot deployment gives organizations a clear return on investment. Call center workers may be particularly at risk from AI-driven chatbots.

Chatbot jobs

Chatbot developers create, debug, and maintain applications that automate customer services or other communication processes. Their duties include reviewing and simplifying code when needed. They may also help companies implement bots in their operations.

A study by Forrester (June 2017) predicted that 25% of all jobs would be impacted by AI technologies by 2019.
