
Wednesday, December 13, 2023

Free software

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Free_software

GNU Guix, an example of a GNU FSDG-compliant free-software operating system, running representative applications: the GNOME desktop environment, the GNU Emacs text editor, the GIMP image editor, and the VLC media player.

Free software, libre software, or libreware is computer software distributed under terms that allow users to run the software for any purpose as well as to study, change, and distribute it and any adapted versions. Free software is a matter of liberty, not price; all users are legally free to do what they want with their copies of free software (including profiting from them) regardless of how much is paid to obtain the program. Computer programs are deemed "free" if they give end-users (not just the developer) ultimate control over the software and, consequently, over their devices.

The right to study and modify a computer program entails that the source code—the preferred format for making changes—be made available to users of that program. While this is often called "access to source code" or "public availability", the Free Software Foundation (FSF) recommends against thinking in those terms, because it might give the impression that users have an obligation (as opposed to a right) to give non-users a copy of the program.

Although the term "free software" had already been used loosely in the past and other permissive software like the Berkeley Software Distribution released in 1978 existed, Richard Stallman is credited with tying it to the sense under discussion and starting the free software movement in 1983, when he launched the GNU Project: a collaborative effort to create a freedom-respecting operating system, and to revive the spirit of cooperation once prevalent among hackers during the early days of computing.

Context

Free software thus differs from proprietary software and from software that is merely available at no charge, as described below.

For software under the purview of copyright to be free, it must carry a software license whereby the author grants users the aforementioned rights. Software that is not covered by copyright law, such as software in the public domain, is free as long as the source code is also in the public domain, or otherwise available without restrictions.

Proprietary software uses restrictive software licences or EULAs and usually does not provide users with the source code. Users are thus legally or technically prevented from changing the software, and this results in reliance on the publisher to provide updates, help, and support. (See also vendor lock-in and abandonware). Users often may not reverse engineer, modify, or redistribute proprietary software. Beyond copyright law, contracts and a lack of source code, there can exist additional obstacles keeping users from exercising freedom over a piece of software, such as software patents and digital rights management (more specifically, tivoization).

Free software can be a for-profit, commercial activity or not. Some free software is developed by volunteer computer programmers, while other free software is developed by corporations, or by a combination of the two.

Naming and differences with open source

Although both definitions refer to almost equivalent corpora of programs, the Free Software Foundation recommends using the term "free software" rather than "open-source software" (an alternative, yet similar, concept coined in 1998), because the goals and messaging are quite dissimilar. According to the Free Software Foundation, "Open source" and its associated campaign mostly focus on the technicalities of the public development model and marketing free software to businesses, while taking the ethical issue of user rights very lightly or even antagonistically. Stallman has also stated that considering the practical advantages of free software is like considering the practical advantages of not being handcuffed, in that it is not necessary for an individual to consider practical reasons in order to realize that being handcuffed is undesirable in itself.

The FSF also notes that "Open Source" has exactly one specific meaning in common English, namely that "you can look at the source code." It states that while the term "Free Software" can lead to two different interpretations, at least one of them is consistent with the intended meaning unlike the term "Open Source". The loan adjective "libre" is often used to avoid the ambiguity of the word "free" in the English language, and the ambiguity with the older usage of "free software" as public-domain software.

Definition and the Four Essential Freedoms of Free Software

Diagram of free and nonfree software, as defined by the Free Software Foundation. Left: free software, right: proprietary software, encircled: gratis software

The first formal definition of free software was published by the FSF in February 1986. That definition, written by Richard Stallman, is still maintained today and states that software is free software if people who receive a copy of the software have the following four freedoms. The numbering begins with zero, not only as a spoof on the common usage of zero-based numbering in programming languages, but also because "Freedom 0" was not initially included in the list; it was later added at the head of the list because it was considered very important.

  • Freedom 0: The freedom to use the program for any purpose.
  • Freedom 1: The freedom to study how the program works, and change it to make it do what you wish.
  • Freedom 2: The freedom to redistribute and make copies so you can help your neighbor.
  • Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.

Freedoms 1 and 3 require source code to be available because studying and modifying software without its source code can range from highly impractical to nearly impossible.

Thus, free software means that computer users have the freedom to cooperate with whom they choose, and to control the software they use. To summarize this into a remark distinguishing libre (freedom) software from gratis (zero price) software, the Free Software Foundation says: "Free software is a matter of liberty, not price. To understand the concept, you should think of 'free' as in 'free speech', not as in 'free beer'".

In the late 1990s, other groups published their own definitions that describe an almost identical set of software. The most notable are the Debian Free Software Guidelines, published in 1997, and the Open Source Definition, published in 1998.

The BSD-based operating systems, such as FreeBSD, OpenBSD, and NetBSD, do not have their own formal definitions of free software. Users of these systems generally find the same set of software to be acceptable, but sometimes see copyleft as restrictive. They generally advocate permissive free software licenses, which allow others to use the software as they wish, without being legally forced to provide the source code. Their view is that this permissive approach is more free. The Kerberos, X11, and Apache software licenses are substantially similar in intent and implementation.

Examples

There are thousands of free applications and many operating systems available on the Internet. Users can easily download and install those applications via a package manager that comes included with most Linux distributions.
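As a concrete illustration of the package-manager workflow just described, the following Python sketch shells out to the system package manager to install a free application. It assumes a Debian-style system where apt-get is available and the script runs with sufficient privileges; the package name "gimp" is only an example.

    import subprocess
    import sys

    def install_package(name: str) -> None:
        """Install a package with the system package manager (assumes apt-get)."""
        # Refresh the package index, then install non-interactively.
        subprocess.run(["apt-get", "update"], check=True)
        subprocess.run(["apt-get", "install", "-y", name], check=True)

    if __name__ == "__main__":
        try:
            install_package(sys.argv[1] if len(sys.argv) > 1 else "gimp")
        except subprocess.CalledProcessError as err:
            print(f"installation failed: {err}", file=sys.stderr)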

The Free Software Directory maintains a large database of free-software packages. Some of the best-known examples include the Linux-libre kernel and Linux-based operating systems; the GNU Compiler Collection and C library; the MySQL relational database; the Apache web server; and the Sendmail mail transport agent. Other influential examples include the Emacs text editor; the GIMP raster drawing and image editor; the X Window System graphical-display system; the LibreOffice office suite; and the TeX and LaTeX typesetting systems.

History

From the 1950s up until the early 1970s, it was normal for computer users to have the software freedoms associated with free software, which was typically public-domain software. Software was commonly shared by individuals who used computers and by hardware manufacturers, who welcomed the fact that people were making software that made their hardware useful. Organizations of users and suppliers, such as SHARE, were formed to facilitate the exchange of software. As software was often written in an interpreted language such as BASIC, the source code was distributed in order to use these programs. Software was also shared and distributed as printed source code (type-in programs) in computer magazines (such as Creative Computing, SoftSide, Compute!, and Byte) and in books, like the bestseller BASIC Computer Games.

By the early 1970s, the picture had changed: software costs were dramatically increasing, a growing software industry was competing with the hardware manufacturers' bundled software products (free in that the cost was included in the hardware cost), leased machines required software support while providing no revenue for software, and some customers, able to better meet their own needs, did not want the costs of "free" software bundled with hardware product costs. In United States v. IBM, filed January 17, 1969, the government charged that bundled software was anti-competitive. While some software might always be free, there would henceforth be a growing amount of software produced primarily for sale.

In the 1970s and early 1980s, the software industry began using technical measures (such as distributing only binary copies of computer programs) to prevent computer users from being able to study or adapt the software applications as they saw fit. In 1980, copyright law was extended to computer programs.

In 1983, Richard Stallman, one of the original authors of the popular Emacs program and a longtime member of the hacker community at the MIT Artificial Intelligence Laboratory, announced the GNU Project, the purpose of which was to produce a completely non-proprietary Unix-compatible operating system, saying that he had become frustrated with the shift in climate surrounding the computer world and its users. In his initial declaration of the project and its purpose, he specifically cited as a motivation his opposition to being asked to agree to non-disclosure agreements and restrictive licenses which prohibited the free sharing of potentially profitable in-development software, a prohibition directly contrary to the traditional hacker ethic. Software development for the GNU operating system began in January 1984, and the Free Software Foundation (FSF) was founded in October 1985. Stallman developed a free software definition and the concept of "copyleft", designed to ensure software freedom for all.

Some non-software industries are beginning to use techniques similar to those used in free software development for their research and development processes; scientists, for example, are looking towards more open development processes, and hardware such as microchips is beginning to be developed with specifications released under copyleft licenses (see the OpenCores project, for instance). Creative Commons and the free-culture movement have also been largely influenced by the free software movement.

1980s: Foundation of the GNU Project

In 1983, Richard Stallman, longtime member of the hacker community at the MIT Artificial Intelligence Laboratory, announced the GNU Project, saying that he had become frustrated with the effects of the change in culture of the computer industry and its users. Software development for the GNU operating system began in January 1984, and the Free Software Foundation (FSF) was founded in October 1985. An article outlining the project and its goals was published in March 1985 titled the GNU Manifesto. The manifesto included significant explanation of the GNU philosophy, Free Software Definition and "copyleft" ideas.

1990s: Release of the Linux kernel

The Linux kernel, started by Linus Torvalds, was released as freely modifiable source code in 1991. The first licence was a proprietary software licence. However, with version 0.12 in February 1992, he relicensed the project under the GNU General Public License. Much like Unix, Torvalds' kernel attracted the attention of volunteer programmers. FreeBSD and NetBSD (both derived from 386BSD) were released as free software when the USL v. BSDi lawsuit was settled out of court in 1993. OpenBSD forked from NetBSD in 1995. Also in 1995, The Apache HTTP Server, commonly referred to as Apache, was released under the Apache License 1.0.

Licensing

Copyleft, a novel use of copyright law to ensure that works remain unrestricted, originates in the world of free software.

All free-software licenses must grant users all the freedoms discussed above. However, unless the applications' licenses are compatible, combining programs by mixing source code or directly linking binaries is problematic, because of license technicalities. Programs indirectly connected together may avoid this problem.

The majority of free software falls under a small set of licenses, the most popular being the GNU General Public License and permissive licenses such as the MIT, BSD, and Apache licenses.

The Free Software Foundation and the Open Source Initiative both publish lists of licenses that they find to comply with their own definitions of free software and open-source software, respectively.

The FSF list is not prescriptive: free-software licenses can exist that the FSF has not heard about, or considered important enough to write about. So it is possible for a license to be free and not in the FSF list. The OSI list only lists licenses that have been submitted, considered and approved. All open-source licenses must meet the Open Source Definition in order to be officially recognized as open source software. Free software, on the other hand, is a more informal classification that does not rely on official recognition. Nevertheless, software licensed under licenses that do not meet the Free Software Definition cannot rightly be considered free software.

Apart from these two organizations, the Debian project is seen by some to provide useful advice on whether particular licenses comply with their Debian Free Software Guidelines. Debian does not publish a list of approved licenses, so its judgments have to be tracked by checking what software they have allowed into their software archives. That is summarized at the Debian web site.

It is rare that a license announced as being in compliance with the FSF guidelines does not also meet the Open Source Definition, although the reverse is not necessarily true (for example, the NASA Open Source Agreement is an OSI-approved license, but non-free according to the FSF).
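To make the relationship between the two lists concrete, here is a small Python sketch that classifies a license by whether it appears in an FSF-style list, an OSI-style list, both, or neither. The license sets are illustrative stand-ins, not the organizations' actual lists; the identifiers are SPDX-style shorthand, with NASA-1.3 standing for the NASA Open Source Agreement mentioned above.

    # Illustrative subsets only; the real FSF and OSI lists are much longer
    # and are maintained by the organizations themselves.
    FSF_FREE = {"GPL-3.0", "LGPL-3.0", "Apache-2.0", "MIT", "BSD-3-Clause"}
    OSI_APPROVED = {"GPL-3.0", "LGPL-3.0", "Apache-2.0", "MIT", "BSD-3-Clause", "NASA-1.3"}

    def classify(license_id: str) -> str:
        """Report how a license relates to the two (illustrative) lists."""
        fsf = license_id in FSF_FREE
        osi = license_id in OSI_APPROVED
        if fsf and osi:
            return "free software and OSI-approved"
        if osi:
            return "OSI-approved but not listed as free by the FSF"
        if fsf:
            return "listed as free by the FSF but not OSI-approved"
        return "on neither list (may still be free; the FSF list is not exhaustive)"

    print(classify("NASA-1.3"))   # OSI-approved but not listed as free by the FSF
    print(classify("MIT"))        # free software and OSI-approved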

There are different categories of free software.

  • Public-domain software: the copyright has expired, the work was not copyrighted (released without copyright notice before 1988), or the author has released the software onto the public domain with a waiver statement (in countries where this is possible). Since public-domain software lacks copyright protection, it may be freely incorporated into any work, whether proprietary or free. The FSF recommends the CC0 public domain dedication for this purpose.
  • Permissive licenses, also called BSD-style because they are applied to much of the software distributed with the BSD operating systems: many of these licenses are also known as copyfree as they have no restrictions on distribution. The author retains copyright solely to disclaim warranty and require proper attribution of modified works, and permits redistribution and any modification, even closed-source ones. In this sense, a permissive license provides an incentive to create non-free software, by reducing the cost of developing restricted software. Since this is incompatible with the spirit of software freedom, many people consider permissive licenses to be less free than copyleft licenses.
  • Copyleft licenses, with the GNU General Public License being the most prominent: the author retains copyright and permits redistribution under the restriction that all such redistribution is licensed under the same license. Additions and modifications by others must also be licensed under the same "copyleft" license whenever they are distributed with part of the original licensed product. This is also known as a viral, protective, or reciprocal license. Due to the restriction on distribution not everyone considers this type of license to be free.
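The practical difference between these three categories shows up when code under different licenses is combined into one distributed program. The toy Python sketch below captures only the crude rule of thumb implied above: copyleft terms carry over to the combined work, permissive and public-domain code can be absorbed, and two different copyleft licenses generally cannot be mixed. Real license compatibility has many more special cases, the license identifiers are SPDX-style shorthand, and nothing here is legal guidance.

    # Category map is a simplified illustration using well-known SPDX identifiers.
    CATEGORY = {
        "CC0-1.0": "public-domain",
        "MIT": "permissive",
        "BSD-3-Clause": "permissive",
        "Apache-2.0": "permissive",
        "GPL-3.0-only": "copyleft",
        "AGPL-3.0-only": "copyleft",
    }

    def combined_license(components: list[str]) -> str:
        """Very rough rule of thumb for the license of a combined, distributed work."""
        copylefts = {c for c in components if CATEGORY.get(c) == "copyleft"}
        if len(copylefts) > 1:
            return "incompatible: more than one copyleft license in the mix"
        if copylefts:
            return copylefts.pop()   # copyleft terms extend to the whole work
        return "any license chosen by the distributor (all parts permissive or public domain)"

    print(combined_license(["MIT", "GPL-3.0-only", "CC0-1.0"]))  # GPL-3.0-only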

Security and reliability

Although nearly all computer viruses only affect Microsoft Windows, antivirus software such as ClamTk (shown here) is still provided for Linux and other Unix-based systems, so that users can detect malware that might infect Windows hosts.

There is debate over the security of free software in comparison to proprietary software, with a major issue being security through obscurity. A popular quantitative test in computer security is to use relative counting of known unpatched security flaws. Generally, users of this method advise avoiding products that lack fixes for known security flaws, at least until a fix is available.
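As a minimal illustration of this kind of relative counting, the following Python sketch tallies unpatched advisories per product from a made-up dataset; the product names and advisory IDs are placeholders, not real measurements.

    from collections import Counter

    # Hypothetical advisories: (product, advisory_id, patched?)
    advisories = [
        ("ProductA", "ADV-001", False),
        ("ProductA", "ADV-002", True),
        ("ProductB", "ADV-003", False),
        ("ProductB", "ADV-004", False),
    ]

    unpatched = Counter(product for product, _, patched in advisories if not patched)
    for product, count in unpatched.most_common():
        print(f"{product}: {count} known unpatched flaw(s)")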

Free software advocates strongly believe that this methodology is biased, because more vulnerabilities are counted for free software systems: their source code is accessible and their communities are more forthcoming about what problems exist (this is called "security through disclosure"), whereas proprietary software systems can have undisclosed societal drawbacks, such as disenfranchising less fortunate would-be users of free programs. As users can analyse and trace the source code, many more people with no commercial constraints can inspect the code and find bugs and loopholes than a corporation would find practicable. According to Richard Stallman, user access to the source code makes deploying free software with undesirable hidden spyware functionality far more difficult than for proprietary software.

Some quantitative studies have been done on the subject.

Binary blobs and other proprietary software

In 2006, OpenBSD started the first campaign against the use of binary blobs in kernels. Blobs are usually freely distributable device drivers for hardware from vendors that do not reveal driver source code to users or developers. This effectively restricts users' freedom to modify the software and distribute modified versions. Also, since the blobs are undocumented and may have bugs, they pose a security risk to any operating system whose kernel includes them. The proclaimed aim of the campaign against blobs is to collect hardware documentation that allows developers to write free software drivers for that hardware, ultimately enabling all free operating systems to become or remain blob-free.

The issue of binary blobs in the Linux kernel and other device drivers motivated some developers in Ireland to launch gNewSense, a Linux-based distribution with all the binary blobs removed. The project received support from the Free Software Foundation and stimulated the creation, headed by the Free Software Foundation Latin America, of the Linux-libre kernel. As of October 2012, Trisquel is the most popular FSF-endorsed Linux distribution, as ranked by DistroWatch (over 12 months). While Debian is not endorsed by the FSF and does not use Linux-libre, it is also a popular distribution that has been available without kernel blobs by default since 2011.

The Linux community uses the term "blob" to refer to all nonfree firmware in a kernel, whereas OpenBSD uses the term to refer to device drivers. The FSF does not consider OpenBSD to be blob-free under the Linux community's definition of blob.

Business model

Selling software under any free-software licence is permissible, as is commercial use. This is true for licenses with or without copyleft.

Since free software may be freely redistributed, it is generally available at little or no fee. Free software business models are usually based on adding value such as customization, accompanying hardware, support, training, integration, or certification. Exceptions exist however, where the user is charged to obtain a copy of the free application itself.

Fees are usually charged for distribution on compact discs and bootable USB drives, or for services of installing or maintaining the operation of free software. Development of large, commercially used free software is often funded by a combination of user donations, crowdfunding, corporate contributions, and tax money. The SELinux project at the United States National Security Agency is an example of a federally funded free-software project.

Proprietary software, on the other hand, tends to use a different business model, where a customer of the proprietary application pays a fee for a license to legally access and use it. This license may grant the customer the ability to configure some or no parts of the software themselves. Often some level of support is included in the purchase of proprietary software, but additional support services (especially for enterprise applications) are usually available for an additional fee. Some proprietary software vendors will also customize software for a fee.

The Free Software Foundation encourages selling free software. As the Foundation has written, "distributing free software is an opportunity to raise funds for development. Don't waste it!". For example, the FSF's own recommended license (the GNU GPL) states that "[you] may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee."

Microsoft CEO Steve Ballmer stated in 2001 that "open source is not available to commercial companies. The way the license is written, if you use any open-source software, you have to make the rest of your software open source." This misunderstanding is based on a requirement of copyleft licenses (like the GPL) that if one distributes modified versions of software, they must release the source and use the same license. This requirement does not extend to other software from the same developer. The claim of incompatibility between commercial companies and free software is also a misunderstanding. There are several large companies, e.g. Red Hat and IBM (IBM acquired Red Hat in 2019), which do substantial commercial business in the development of free software.

Economic aspects and adoption

Free software played a significant part in the development of the Internet, the World Wide Web and the infrastructure of dot-com companies. Free software allows users to cooperate in enhancing and refining the programs they use; free software is a pure public good rather than a private good. Companies that contribute to free software increase commercial innovation.

"We migrated key functions from Windows to Linux because we needed an operating system that was stable and reliable – one that would give us in-house control. So if we needed to patch, adjust, or adapt, we could."

Official statement of the United Space Alliance, which manages the computer systems for the International Space Station (ISS), regarding their May 2013 decision to migrate ISS computer systems from Windows to Linux

The economic viability of free software has been recognized by large corporations such as IBM, Red Hat, and Sun Microsystems. Many companies whose core business is not in the IT sector choose free software for their Internet information and sales sites, due to the lower initial capital investment and ability to freely customize the application packages. Most companies in the software business include free software in their commercial products if the licenses allow that.

Free software is generally available at no cost and can result in permanently lower TCO (total cost of ownership) compared to proprietary software. With free software, businesses can fit software to their specific needs by changing the software themselves or by hiring programmers to modify it for them. Free software often has no warranty, and more importantly, generally does not assign legal liability to anyone. However, warranties are permitted between any two parties upon the condition of the software and its usage. Such an agreement is made separately from the free software license.

A report by Standish Group estimates that adoption of free software has caused a drop in revenue to the proprietary software industry by about $60 billion per year. Eric S. Raymond argued that the term free software is too ambiguous and intimidating for the business community. Raymond promoted the term open-source software as a friendlier alternative for the business and corporate world.

Open Knowledge Foundation

From Wikipedia, the free encyclopedia
Abbreviation: OKF
Formation: 20 May 2004
Founder: Rufus Pollock
Type: Nonprofit organisation
Registration no.: 05133759
Focus: Open knowledge broadly, including open access, open content, open science and open data
Location: 86-90 Paul Street, London, EC2A 4NE, United Kingdom
Area served: International
Key people: Rufus Pollock, Renata Ávila Pinto (CEO)
Website: okfn.org

Open Knowledge Foundation (OKF) is a global, non-profit network that promotes and shares information at no charge, including both content and data. It was founded by Rufus Pollock on 20 May 2004 in Cambridge, UK. It is incorporated in England and Wales as a private company limited by guarantee. Between May 2016 and May 2019 the organisation was named Open Knowledge International, but decided in May 2019 to return to Open Knowledge Foundation.

Aims

The aims of Open Knowledge Foundation are:

  • Promoting the idea of open knowledge, both what it is, and why it is a good idea.
  • Running open knowledge events, such as OKCon.
  • Working on open knowledge projects, such as Open Economics or Open Shakespeare.
  • Providing infrastructure, and potentially a home, for open knowledge projects, communities and resources. For example, the KnowledgeForge service and CKAN.
  • Acting at UK, European and international levels on open knowledge issues.

People

Renata Ávila Pinto joined as the new Chief Executive Officer of the Open Knowledge Foundation in October 2021. From February 2019 to August 2020, Catherine Stihler served as CEO; she left the Open Knowledge Foundation to become the CEO of Creative Commons. Between 2015 and 2017, Pavel Richter held the role of CEO of Open Knowledge Foundation; he was formerly Executive Director of Wikimedia Deutschland.

As of 2015, the Open Knowledge Foundation Advisory Council included people from the areas of open access, open data, open content, open science, data visualization and digital rights.

Network

As of 2018, Open Knowledge Foundation has 11 official chapters and 38 groups in different countries. In November 2022, the Open Knowledge Network was relaunched with two new projects.

It also supports 19 working groups.

Operations

Many of Open Knowledge Foundation's projects are technical in nature. Its most prominent project, CKAN, is used by many of the world's governments to host open catalogues of data that their countries possess.

The organisation tends to support its aims by hosting infrastructure for semi-independent projects to develop. This approach was evident from its earliest days: one of its first projects was a project-management service called KnowledgeForge, which runs on the KForge platform and gives sectoral working groups space to manage projects related to open knowledge. More widely, the project infrastructure includes both technical and face-to-face aspects. The organisation hosts several dozen mailing lists for virtual discussion, utilises IRC for real-time communications and also hosts events.

Advocacy

Open Knowledge Foundation is an active partner with organisations working in similar areas, such as open educational resources.

Open Knowledge Foundation has produced the Open Knowledge Definition, an attempt to clarify some of the ambiguity surrounding the terminology of openness, as well as the Open Software Service Definition. It also supported the development of the Open Database License (ODbL).

Outside of technology, Open Knowledge Foundation plays a role in advocating for openness broadly. This includes supporting the drafting of reports, facilitating consultation and producing guides.

Rufus Pollock, one of Open Knowledge Foundation's founders and its current board secretary, sits on the UK government's Public Sector Transparency Board.

Technical

Banner for the Geodata project in Spanish
OpenGLAM logo

The foundation places a strong interest in the use of open source technologies. Its software projects are hosted on GitHub, which utilises the Git version control software. Some of the projects are listed below:

  • CKAN, a tool that provides a store for metadata, enabling governments to quickly and cheaply provide a catalogue of their data (see the API sketch after this list).
  • Datahub, a community-run catalogue of useful sets of data on the Internet. Depending on the type of data (and its conditions of use), Datahub may also be able to store a copy of the data or host it in a database, and provide some basic visualisation tools.
  • Frictionless Data, a collection of standards and tools for publishing data.
  • Open bibliography, broadly construed as efforts to catalogue and build tools for working with and publishing bibliographic resources, with particular emphasis on works that are in the public domain, and on public-domain calculators. Examples include the Bibliographica, Public Domain Works, Open Shakespeare, Open Text Book, and The Public Domain Review projects.
  • OpenGLAM, an initiative that promotes free and open access to digital cultural heritage, held by GLAMs: Galleries, Libraries, Archives and Museums. OpenGLAM is co-funded by the European Commission as part of the DM2E (Digitised Manuscripts to Europeana) project.
  • Open Economics
  • Open Knowledge Forums
  • Information Accessibility Initiative
  • Open geodata
  • Guide to open data licensing
  • "Get the Data" — a web-site for questions and answer on how to get data sets.
  • POD - Product Open Data
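As an illustration of the first item in the list above, CKAN exposes its catalogue through an HTTP "Action API". The Python sketch below queries a catalogue's package_search action and prints matching dataset titles; the base URL is a placeholder for whichever CKAN instance is being queried, and the fields available depend on that instance.

    import json
    import urllib.parse
    import urllib.request

    BASE_URL = "https://demo.ckan.org"   # placeholder: any CKAN instance

    def search_datasets(query: str, rows: int = 5) -> list[str]:
        """Return the titles of datasets matching `query` via CKAN's Action API."""
        params = urllib.parse.urlencode({"q": query, "rows": rows})
        url = f"{BASE_URL}/api/3/action/package_search?{params}"
        with urllib.request.urlopen(url) as response:
            payload = json.load(response)
        if not payload.get("success"):
            raise RuntimeError("CKAN API call failed")
        return [pkg["title"] for pkg in payload["result"]["results"]]

    if __name__ == "__main__":
        for title in search_datasets("transport"):
            print(title)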

Events

Much of the collaboration with other related organisations occurs via events that the foundation hosts. Its premier event is the Open Knowledge Conference (OKCon), which has been held occasionally since 2007. Other events have been organised within the areas of data visualisation and free information network infrastructure.

Annually, Open Knowledge Foundation supports International Open Data Day.

Panton Principles and Fellowships (Open data in Science)

The Panton Principles for open data in science, published in 2010, had large contributions from Open Knowledge people, and in 2011 Jonathan Gray and Peter Murray-Rust successfully obtained funding from OSF for two fellowships, held by Sophie Kershaw and Ross Mounce. In 2013, OKF obtained sponsorship from CCIA for three fellowships, which were awarded to Rosemarie Graves, Sam Moore, and Peter Kraker.

Other

D-CENT logo

Open Knowledge Foundation also supports Apps for Europe, and D-CENT, a European project created to share and organise data from seven countries, which ran from October 2013 to May 2016.

The Rosetta Foundation

From Wikipedia, the free encyclopedia
 
Focus: Humanitarian
Area served: Worldwide
Website: http://www.therosettafoundation.org/

The Rosetta Foundation is a nonprofit organization that promotes social localization, the process of making social services information available to individuals around the world in their native languages.

The Rosetta Foundation was registered as a charitable organization in Ireland. It was an offshoot of the Localization Research Centre (LRC) at the University of Limerick, Ireland, and of the Centre for Next Generation Localization (CNGL), a research initiative supported by the Irish government.

The Rosetta Foundation developed the Service-Oriented Localization Architecture Solution (SOLAS), whereby volunteer translators and not-for-profit organizations contribute to the translation and distribution of materials for language localization. The first preview of Translation Exchange, now called SOLAS Match, was given on 17 May 2011; the first pilot project using SOLAS Match was launched on 20 October 2012. The Rosetta Foundation launched the Translation Commons (or "Trommons") on 18 May 2013.

On 15 June 2017, the Rosetta Foundation merged with Translators without Borders (TWB). The two now operate jointly under the TWB name. This merger was announced at a Localization World conference in Barcelona.

Origin of name

The foundation was named after the Rosetta Stone.

Goals and aims

The Rosetta Foundation aims to provide infrastructure for translation and localization, and to use this to remove language barriers and provide access to information. Its goal is that this access to information will relieve poverty, support healthcare and develop education.

The Rosetta Foundation aimed to provide information to as many people as possible in their languages. The core concept was outlined in a paper published by organization founder Reinhard Schäler: Information Sharing across Languages.

History

European launch

The European launch occurred at the AGIS '09 conference in Limerick, Ireland, from 21 to 23 September 2009. The president of the University of Limerick, Don Barry, announced the launch of the Rosetta Foundation on 21 September 2009 during his welcoming address to the AGIS '09 delegates. AGIS, Action for Global Information Sharing, provided an opportunity for volunteer translators, localization specialists, and NGOs to come together to learn, network and celebrate their work.

North American launch

The North American launch took place at the Localization World conference in Santa Clara, California, on 20 October 2009. This pre-conference workshop provided an overview of the organizational structure, the aims and objectives, and the strategic plan of the Rosetta Foundation. Participants were introduced to the foundation's translation and localization technology platform, GlobalSight.

International No Language Barrier Day

In 2012, The Rosetta Foundation declared 19 April the international "No Language Barrier Day". The day is meant to raise international awareness that it is not languages that represent barriers: rather, lack of access to translation services is the barrier preventing communities from accessing and sharing information across languages. The annual celebration of this day aims to raise awareness about and grow global community translation efforts. Examples include the BBB Volunteer Interpretation Service, which facilitates communication in Korea, and Interpreters Without Borders from Babelverse.

Translation Commons (Trommons)

On 18 May 2013, the Rosetta Foundation launched the Translation Commons, or Trommons. Trommons was an open, non-profit space for those offering free community language services, powered by the Service-Oriented Localization Architecture Solution (SOLAS). When the Rosetta Foundation switched Trommons over to production, it attracted language communities from 44 countries within hours.

Social Localisation

The concept of "Social Localization" was introduced by Reinhard Schaler, director of the Localization Research Centre at the University of Limerick, at a particular Localization World Silicon Valley session on 10 October 2011. The main objective of social localization is to promote a demand rather than a supply-driven approach to localization. Social localization supports user-driven and needs-based localization scenarios. The Rosetta Foundation launched its initiative at a special event in Dublin on 27 October 2011 with volunteers, partner organizations, and funders.

Areas of activity

The Rosetta Foundation supports not-for-profit activities of the localization and translation communities. It works with those who want equal access to information across languages, independent of economic or market considerations, including localization and translation companies, technology developers, and not-for-profit and non-governmental organizations. The objective is to cater to translation requirements beyond the services offered by mainstream translation service providers and user communities on the ground.

Technology platform

The Rosetta Foundation is actively involved in developing GlobalSight and CrowdSight. Both are open-source systems originally developed by Transware and then moved into the open-source space by their new owner, Welocalize, in early 2009. Sponsored by Welocalize, GlobalSight is an open-source Globalization Management System (GMS) that helps automate tasks associated with the creation, translation, review, storage, and management of global content. CrowdSight is another open-source application fully integrated with GlobalSight. It is used specifically to engage the right community to deliver a quick-turn translation for on-demand content. The GlobalSight community has over 1,500 members.

The first preview of Translation eXchange (now SOLAS Match), a significant component developed as part of The Rosetta Foundation technology platform in collaboration with the Centre for Next Generation Localization (CNGL), was given in a webinar by Reinhard Schäler and Eoin O Conchúir on 17 May 2011. SOLAS Match was developed at the University of Limerick and is based on ideas developed at the Rosetta Foundation Design Fest in San Francisco, 5–6 February 2012, by around 25 localization experts. SOLAS Match matches translation projects with volunteers' expertise and interests.
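The matching step SOLAS Match performs can be illustrated with a toy Python sketch: projects declare a language pair and a topic, volunteers declare the pairs and interests they work with, and each project is paired with suitable volunteers. This only illustrates the idea; it is not taken from the SOLAS code base, and the data structures are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Project:
        title: str
        language_pair: tuple[str, str]   # (source, target)
        topic: str

    @dataclass
    class Volunteer:
        name: str
        language_pairs: set[tuple[str, str]]
        interests: set[str]

    def match(projects: list[Project], volunteers: list[Volunteer]) -> dict[str, list[str]]:
        """Map each project title to volunteers whose languages and interests fit."""
        result: dict[str, list[str]] = {}
        for p in projects:
            result[p.title] = [
                v.name for v in volunteers
                if p.language_pair in v.language_pairs and p.topic in v.interests
            ]
        return result

    projects = [Project("Clinic leaflet", ("en", "es"), "healthcare")]
    volunteers = [Volunteer("Ana", {("en", "es")}, {"healthcare", "education"})]
    print(match(projects, volunteers))   # {'Clinic leaflet': ['Ana']}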

SOLAS based on ORM design principles

Service-Oriented Localization Architecture Solution (SOLAS) Design is based on the ORM design principles: O-pen (easy to join and to participate), R-ight (serve the right task to the right volunteer), and M-inimalistic (crisp, clear, uncluttered). SOLAS consists of SOLAS Match (matching projects and volunteers) and SOLAS Productivity (a suite of translation productivity tools and technologies). SOLAS was originally developed as part of the Next Generation Localization research track of the CNGL at the University of Limerick. SOLAS Match has been released under an open-source GPL license and can be downloaded from the SOLAS web page. SOLAS Productivity currently consists of six components, all sharing an XLIFF-based common data layer:

  • Workflow Recommender (workflow optimization)
  • Localization Knowledge Repository (source language checking)
  • XLIFF Phoenix (re-use of metadata)
  • MT-Mapper (identification of suitable MT engine)
  • LocConnect (orchestration of components)
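The components above share an XLIFF-based common data layer. As a rough illustration of what such a layer carries, the following Python sketch parses the translation units out of a minimal XLIFF 1.2 document; the document is a generic example rather than SOLAS data, and real files carry far more metadata.

    import xml.etree.ElementTree as ET

    XLIFF = """<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
      <file source-language="en" target-language="es" datatype="plaintext" original="demo.txt">
        <body>
          <trans-unit id="1">
            <source>Hello world</source>
            <target>Hola mundo</target>
          </trans-unit>
        </body>
      </file>
    </xliff>"""

    NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}
    root = ET.fromstring(XLIFF)
    for unit in root.iterfind(".//x:trans-unit", NS):
        source = unit.findtext("x:source", namespaces=NS)
        target = unit.findtext("x:target", namespaces=NS)
        print(f"{unit.get('id')}: {source!r} -> {target!r}")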

International advisory committee

Committee members and their affiliations:

  • Reinhard Schäler (Localisation Research Centre)
  • Alan Barret (Independent)
  • Brian Kelly (Breakout Interactive Ltd)
  • Mahesh Kulkarni (Centre for the Development of Advanced Computing)
  • John Papaioannou (Bentley Systems)
  • Stephen Roantree (Roantree Consulting)
  • Páraic Sheridan (Centre for Next Generation Localisation)
  • Michael Smith (iStockphoto)
  • Francis Tsang (Adobe Systems Inc.)
  • Smith Yewell (Welocalize)

Board of directors

Board members and their affiliations:

  • Reinhard Schäler (Localisation Research Centre)
  • Alan Barret (Independent)
  • Gerry McNally (McNally O'Brien & Co.)

The Non-profit Technology Enterprise Network

In March 2010, The Rosetta Foundation became a member of the Non-profit Technology Enterprise Network (NTEN), a membership organization made up of individuals, non-profit and for-profit organizations which seeks to support non-profit organizations in their use of technology to fulfil their missions.

Artificial consciousness

From Wikipedia, the free encyclopedia

Artificial consciousness (AC), also known as machine consciousness (MC), synthetic consciousness or digital consciousness, is the consciousness hypothesized to be possible in artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience. The same terminology can be used with the term "sentience" instead of "consciousness" when specifically designating phenomenal consciousness (the ability to feel qualia).

Some scholars believe that consciousness is generated by the interoperation of various parts of the brain; these mechanisms are labeled the neural correlates of consciousness or NCC. Some further believe that constructing a system (e.g., a computer system) that can emulate this NCC interoperation would result in a system that is conscious.

Philosophical views

As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of "raw feels", "what it is like" or qualia.

Plausibility debate

Type-identity theorists and other skeptics hold the view that consciousness can only be realized in particular physical systems because consciousness has properties that necessarily depend on physical constitution.

In his article "Artificial Consciousness: Utopia or Real Possibility," Giorgio Buttazzo says that a common objection to artificial consciousness is that "Working in a fully automated mode, they [the computers] cannot exhibit creativity, unreprogrammation (which means can no longer be reprogrammed, from rethinking), emotions, or free will. A computer, like a washing machine, is a slave operated by its components."

For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness.

Computational Foundation argument

One of the most explicit arguments for the plausibility of artificial sentience comes from David Chalmers. His proposal is roughly that the right kinds of computations are sufficient for the possession of a conscious mind. Chalmers proposes that a system implements a computation if "the causal structure of the system mirrors the formal structure of the computation", and that any system that implements certain computations is sentient.

The most controversial part of Chalmers' proposal is that mental properties are "organizationally invariant". Mental properties are of two kinds, psychological and phenomenological. Psychological properties, such as belief and perception, are those that are "characterized by their causal role". Aided by previous work, he says that "[s]ystems with the same causal topology…will share their psychological properties".

Phenomenological properties, unlike psychological properties, are not definable in terms of their causal roles. Establishing that phenomenological properties are a consequence of a causal topology, therefore, requires argument. Chalmers provides his Dancing Qualia argument for this purpose.

Chalmers begins by assuming that his principle of organizational invariance is false: that agents with identical causal organizations could have different experiences. He then asks us to conceive of changing one agent into the other by the replacement of parts (neural parts replaced by silicon, say) while preserving its causal organization. The experience of the agent under transformation would change (as the parts were replaced), but there would be no change in causal topology and therefore no means whereby the agent could "notice" the shift in experience; Chalmers considers this state of affairs an implausible reductio ad absurdum establishing that his principle of organizational invariance must almost certainly be true.

Critics of artificial sentience object that Chalmers begs the question in assuming that all mental properties and external connections are sufficiently captured by abstract causal organization.

Controversies

In 2022, Google engineer Blake Lemoine made a viral claim that Google's LaMDA chatbot was sentient. Lemoine supplied as evidence the chatbot's humanlike answers to many of his questions; however, the chatbot's behavior was judged by the scientific community as likely a consequence of mimicry, rather than machine sentience. Lemoine's claim was widely derided as ridiculous. Philosopher Nick Bostrom said that he thinks LaMDA probably is not conscious, but asked "what grounds would a person have for being sure about it?" One would have to have access to unpublished information about LaMDA's architecture, would have to understand how consciousness works, and would then have to figure out how to map the philosophy onto the machine: "(In the absence of these steps), it seems like one should be maybe a little bit uncertain... there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria."

Testing

The most well-known method for testing machine intelligence is the Turing test. But when interpreted as only observational, this test contradicts the philosophy-of-science principle of the theory-dependence of observation. It has also been suggested that Alan Turing's recommendation of imitating not a human adult consciousness, but a human child consciousness, should be taken seriously.

Qualia, or phenomenological consciousness, is an inherently first-person phenomenon. Although various systems may display various signs of behavior correlated with functional consciousness, there is no conceivable way in which third-person tests can have access to first-person phenomenological features. Because of that, and because there is no empirical definition of sentience, a test of presence of sentience in AC may be impossible.

In 2014, Victor Argonov suggested a non-Turing test for machine sentience based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, not to refute, the existence of consciousness. A positive result proves that the machine is conscious, but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of intellect in the machine, not by an absence of consciousness.

Ethics

If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g. what rights it would have under law). For example, a conscious computer that was owned and used as a tool or as the central computer of a building or larger machine presents a particular ambiguity. Should laws be made for such a case? Consciousness would also require a legal definition in this particular case. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though it has often been a theme in fiction (see below).

In 2021, German philosopher Thomas Metzinger argued for a global moratorium on synthetic phenomenology until 2050. Metzinger asserts that humans have a duty of care towards any sentient AIs they create, and that proceeding too fast risks creating an "explosion of artificial suffering".

Research and implementation proposals

Aspects of consciousness

Bernard Baars and others argue there are various aspects of consciousness necessary for a machine to be artificially conscious. The functions of consciousness suggested by Bernard Baars are Definition and Context Setting, Adaptation and Learning, Editing, Flagging and Debugging, Recruiting and Control, Prioritizing and Access-Control, Decision-making or Executive Function, Analogy-forming Function, Metacognitive and Self-monitoring Function, and Autoprogramming and Self-maintenance Function. Igor Aleksander suggested 12 principles for artificial consciousness and these are: The Brain is a State Machine, Inner Neuron Partitioning, Conscious and Unconscious States, Perceptual Learning and Memory, Prediction, The Awareness of Self, Representation of Meaning, Learning Utterances, Learning Language, Will, Instinct, and Emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive; there are many others not covered.

Awareness

Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of the experiments of neuroscanning on monkeys suggest that a process, not only a state or object, activates neurons. Awareness includes creating and testing alternative models of each process based on the information received through the senses or imagined, and is also useful for making predictions. Such modeling needs a lot of flexibility. Creating such a model includes modeling of the physical world, modeling of one's own internal states and processes, and modeling of other conscious entities.

There are at least three types of awareness: agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness, you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.

Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.

Memory

Conscious events interact with memory systems in learning, rehearsal, and retrieval. The IDA model elucidates the role of consciousness in the updating of perceptual memory, transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system. In IDA, these two memories are implemented computationally using a modified version of Kanerva's Sparse Distributed Memory architecture.
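A toy numpy sketch of Kanerva's sparse distributed memory can make the idea concrete: addresses and data are binary vectors, a write updates counters at every hard location within a Hamming radius of the address, and a read sums those counters and thresholds the result. This is a textbook-style miniature, not IDA's implementation, and the dimensions and radius are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)

    DIM, LOCATIONS, RADIUS = 256, 2000, 111   # toy sizes, not tuned

    hard_addresses = rng.integers(0, 2, size=(LOCATIONS, DIM), dtype=np.int8)
    counters = np.zeros((LOCATIONS, DIM), dtype=np.int32)

    def _active(address: np.ndarray) -> np.ndarray:
        """Hard locations within Hamming distance RADIUS of `address`."""
        distances = np.count_nonzero(hard_addresses != address, axis=1)
        return distances <= RADIUS

    def write(address: np.ndarray, data: np.ndarray) -> None:
        """Add +1 where the data bit is 1 and -1 where it is 0, at active locations."""
        counters[_active(address)] += np.where(data == 1, 1, -1).astype(np.int32)

    def read(address: np.ndarray) -> np.ndarray:
        """Sum counters over active locations and threshold at zero."""
        sums = counters[_active(address)].sum(axis=0)
        return (sums > 0).astype(np.int8)

    pattern = rng.integers(0, 2, size=DIM, dtype=np.int8)
    write(pattern, pattern)            # auto-associative storage
    noisy = pattern.copy()
    noisy[:20] ^= 1                    # corrupt 20 bits of the cue
    recalled = read(noisy)
    print("bits recovered:", int((recalled == pattern).sum()), "of", DIM)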

Learning

Learning is also considered necessary for artificial consciousness. Per Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events. Per Axel Cleeremans and Luis Jiménez, learning is defined as "a set of phylogenetically advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".

Anticipation

The ability to predict (or anticipate) foreseeable events is considered important for artificial intelligence by Igor Aleksander. The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.

Relationships between real world states are mirrored in the state structure of a conscious organism enabling the organism to predict events. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur or to take preemptive action to avert anticipated events. The implication here is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future and not only in the past. In order to do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chess board, but also for novel environments that may change, to be executed only when appropriate to simulate and control the real world.

Subjective experience

Subjective experiences or qualia are widely considered to be the hard problem of consciousness. Indeed, it is held to pose a challenge to physicalism, let alone computationalism.

Role of cognitive architectures

The term "cognitive architecture" may refer to a theory about the structure of the human mind, or any portion or function thereof, including consciousness. In another context, a cognitive architecture uses computers to instantiate the abstract structure. An example is QuBIC: Quantum and Bio-inspired Cognitive Architecture for Machine Consciousness. One of the main goals of a cognitive architecture is to summarize the various results of cognitive psychology in a comprehensive computer model. However, the results need to be in a formalized form so they can be the basis of a computer program. Also, the role of cognitive architecture is for the A.I. to clearly structure, build, and implement its thought process.

Symbolic or hybrid proposals

Franklin's Intelligent Distribution Agent

Stan Franklin (1995, 2003) defines an autonomous agent as possessing functional consciousness when it is capable of several of the functions of consciousness as identified by Bernard Baars' Global Workspace Theory. His brainchild IDA (Intelligent Distribution Agent) is a software implementation of GWT, which makes it functionally conscious by definition. IDA's task is to negotiate new assignments for sailors in the US Navy after they end a tour of duty, by matching each individual's skills and preferences with the Navy's needs. IDA interacts with Navy databases and communicates with the sailors via natural language e-mail dialog while obeying a large set of Navy policies. The IDA computational model was developed during 1996–2001 at Stan Franklin's "Conscious" Software Research Group at the University of Memphis. It "consists of approximately a quarter-million lines of Java code, and almost completely consumes the resources of a 2001 high-end workstation." It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." In IDA's top-down architecture, high-level cognitive functions are explicitly modeled. While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to his own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task." IDA has been extended to LIDA (Learning Intelligent Distribution Agent).
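The codelet idea can be sketched in a few lines of Python: several small, independent workers run in their own threads, each posts a proposal with an activation level to a shared workspace, and a simple loop "broadcasts" the most active proposal. This is a toy illustration of the global-workspace/codelet style, not IDA's quarter-million lines of Java, and the codelet names and proposals are invented.

    import queue
    import threading

    workspace: "queue.Queue[tuple[float, str, str]]" = queue.Queue()

    def codelet(name: str, proposal: str, activation: float) -> None:
        """A tiny independent worker that posts one proposal to the workspace."""
        workspace.put((activation, name, proposal))

    codelets = [
        threading.Thread(target=codelet, args=("skills-matcher", "offer billet A", 0.7)),
        threading.Thread(target=codelet, args=("policy-checker", "offer billet B", 0.4)),
        threading.Thread(target=codelet, args=("preference-reader", "offer billet C", 0.9)),
    ]
    for t in codelets:
        t.start()
    for t in codelets:
        t.join()

    # The "conscious broadcast": pick the most active proposal and announce it.
    proposals = []
    while not workspace.empty():
        proposals.append(workspace.get())
    activation, name, proposal = max(proposals)
    print(f"broadcast: {proposal!r} (from {name}, activation {activation})")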

Ron Sun's CLARION cognitive architecture

The CLARION cognitive architecture posits a two-level representation that explains the distinction between conscious and unconscious mental processes.

CLARION has been successful in accounting for a variety of psychological data. A number of well-known skill learning tasks have been simulated using CLARION that span the spectrum ranging from simple reactive skills to complex cognitive skills. The tasks include serial reaction time (SRT) tasks, artificial grammar learning (AGL) tasks, process control (PC) tasks, the categorical inference (CI) task, the alphabetical arithmetic (AA) task, and the Tower of Hanoi (TOH) task. Among them, SRT, AGL, and PC are typical implicit learning tasks, very much relevant to the issue of consciousness as they operationalized the notion of consciousness in the context of psychological experiments.

Ben Goertzel's OpenCog

Ben Goertzel made an embodied AI through the open-source OpenCog project. The code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, done at the Hong Kong Polytechnic University.

Connectionist proposals

Haikonen's cognitive architecture

Pentti Haikonen considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection." Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of complexity; these are shared by many. A low-complexity implementation of the architecture proposed by Haikonen was reportedly not capable of AC, but did exhibit emotions as expected. Haikonen later updated and summarized his architecture.

Shanahan's cognitive architecture

Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination").

Takeno's self-awareness research

Self-awareness in robots is being investigated by Junichi Takeno at Meiji University in Japan. Takeno asserts that he has developed a robot capable of discriminating between its own image in a mirror and any other robot having an identical image. Takeno asserts that he first contrived the computational module called a MoNAD, which has a self-aware function, and that he then constructed the artificial consciousness system by formulating the relationships between emotions, feelings and reason by connecting the modules in a hierarchy (Igarashi, Takeno 2007). Takeno completed a mirror-image cognition experiment using a robot equipped with the MoNAD system. Takeno proposed the Self-Body Theory, stating that "humans feel that their own mirror image is closer to themselves than an actual part of themselves." He holds that the most important point in developing artificial consciousness, or in clarifying human consciousness, is the development of a function of self-awareness, and he claims that he has demonstrated physical and mathematical evidence for this in his thesis. He also demonstrated that robots can study episodes in memory where the emotions were stimulated and use this experience to take predictive actions to prevent the recurrence of unpleasant emotions (Torigoe, Takeno 2009).

Aleksander's impossible mind

Igor Aleksander, emeritus professor of Neural Systems Engineering at Imperial College, has extensively researched artificial neural networks and wrote in his 1996 book Impossible Minds: My Neurons, My Consciousness that the principles for creating a conscious machine already exist but that it would take forty years to train such a machine to understand language. Whether this is true remains to be demonstrated and the basic principle stated in Impossible Minds—that the brain is a neural state machine—is open to doubt.

Thaler's Creativity Machine Paradigm

Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI), or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies. He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain invent dubious significance to overall cortical activity. Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to stream of consciousness.
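The mechanism Thaler describes, perturbing a trained network so that it "confabulates" candidate patterns while a critic filters them, can be caricatured with numpy. The sketch below stores a few bipolar patterns in a one-layer autoassociator, injects Gaussian noise into the weights, and keeps perturbed outputs that differ mildly from every stored pattern; it illustrates only the noise-plus-critic loop, not Thaler's patented architecture, and all sizes and thresholds are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)
    DIM, N_PATTERNS = 32, 3

    # "Memories": a few random bipolar (+1/-1) patterns stored Hopfield-style.
    patterns = rng.choice([-1, 1], size=(N_PATTERNS, DIM))
    weights = (patterns.T @ patterns).astype(float)
    np.fill_diagonal(weights, 0.0)

    def recall(w: np.ndarray, cue: np.ndarray) -> np.ndarray:
        """One synchronous update step of the autoassociator."""
        return np.where(w @ cue >= 0, 1, -1)

    def novelty(candidate: np.ndarray) -> int:
        """Critic: distance (in flipped bits) to the nearest stored pattern."""
        return int(min(np.count_nonzero(candidate != p) for p in patterns))

    cue = patterns[0]
    ideas = []
    for _ in range(200):
        noisy_w = weights + rng.normal(0.0, 2.0, size=weights.shape)  # synaptic noise
        candidate = recall(noisy_w, cue)
        score = novelty(candidate)
        if 0 < score <= 8:   # keep mild confabulations, discard exact copies and junk
            ideas.append((score, candidate))

    print(f"{len(ideas)} candidate 'ideas' generated from 200 noisy passes")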

Michael Graziano's attention schema

In 2011, Michael Graziano and Sabine Kastner published a paper titled "Human consciousness and its relationship to social neuroscience: A novel hypothesis", proposing a theory of consciousness as an attention schema. Graziano went on to publish an expanded discussion of this theory in his book "Consciousness and the Social Brain". This Attention Schema Theory of Consciousness, as he named it, proposes that the brain tracks attention to various sensory inputs by way of an attention schema, analogous to the well-studied body schema that tracks the spatial place of a person's body. This relates to artificial consciousness by proposing a specific mechanism of information handling that produces what we allegedly experience and describe as consciousness, and which should be able to be duplicated by a machine using current technology. When the brain finds that person X is aware of thing Y, it is in effect modeling the state in which person X is applying an attentional enhancement to Y. In the attention schema theory, the same process can be applied to oneself. The brain tracks attention to various sensory inputs, and one's own awareness is a schematized model of one's attention. Graziano proposes specific locations in the brain for this process, and suggests that such awareness is a computed feature constructed by an expert system in the brain.

"Self-modeling"

Hod Lipson defines "self-modeling" as a necessary component of self-awareness or consciousness in robots. "Self-modeling" consists of a robot running an internal model or simulation of itself.

In fiction

Characters with artificial consciousness (or at least with personalities that imply they have consciousness) appear in many works of fiction.

Delayed-choice quantum eraser

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser A delayed-cho...